16:19:04 Started by upstream project "policy-docker-master-merge-java" build number 338
16:19:04 originally caused by:
16:19:04 Triggered by Gerrit: https://gerrit.onap.org/r/c/policy/docker/+/137338
16:19:04 Running as SYSTEM
16:19:04 [EnvInject] - Loading node environment variables.
16:19:04 Building remotely on prd-ubuntu1804-docker-8c-8g-7437 (ubuntu1804-docker-8c-8g) in workspace /w/workspace/policy-pap-master-project-csit-pap
16:19:04 [ssh-agent] Looking for ssh-agent implementation...
16:19:04 [ssh-agent] Exec ssh-agent (binary ssh-agent on a remote machine)
16:19:04 $ ssh-agent
16:19:04 SSH_AUTH_SOCK=/tmp/ssh-0MCKSVfhV3YM/agent.2074
16:19:04 SSH_AGENT_PID=2076
16:19:04 [ssh-agent] Started.
16:19:04 Running ssh-add (command line suppressed)
16:19:04 Identity added: /w/workspace/policy-pap-master-project-csit-pap@tmp/private_key_17498827250916214510.key (/w/workspace/policy-pap-master-project-csit-pap@tmp/private_key_17498827250916214510.key)
16:19:04 [ssh-agent] Using credentials onap-jobbuiler (Gerrit user)
16:19:04 The recommended git tool is: NONE
16:19:06 using credential onap-jenkins-ssh
16:19:06 Wiping out workspace first.
16:19:06 Cloning the remote Git repository
16:19:06 Cloning repository git://cloud.onap.org/mirror/policy/docker.git
16:19:06 > git init /w/workspace/policy-pap-master-project-csit-pap # timeout=10
16:19:06 Fetching upstream changes from git://cloud.onap.org/mirror/policy/docker.git
16:19:06 > git --version # timeout=10
16:19:06 > git --version # 'git version 2.17.1'
16:19:06 using GIT_SSH to set credentials Gerrit user
16:19:06 Verifying host key using manually-configured host key entries
16:19:06 > git fetch --tags --progress -- git://cloud.onap.org/mirror/policy/docker.git +refs/heads/*:refs/remotes/origin/* # timeout=30
16:19:07 > git config remote.origin.url git://cloud.onap.org/mirror/policy/docker.git # timeout=10
16:19:07 > git config --add remote.origin.fetch +refs/heads/*:refs/remotes/origin/* # timeout=10
16:19:07 Avoid second fetch
16:19:07 > git rev-parse refs/remotes/origin/master^{commit} # timeout=10
16:19:07 Checking out Revision 8361cb0e3663a610a46bc5ea8a0cc783ade26f89 (refs/remotes/origin/master)
16:19:07 > git config core.sparsecheckout # timeout=10
16:19:07 > git checkout -f 8361cb0e3663a610a46bc5ea8a0cc783ade26f89 # timeout=30
16:19:08 Commit message: "Fix config files removing hibernate deprecated properties and changing robot deprecated commands in test files"
16:19:08 > git rev-list --no-walk dd836dc2d2bd379fba19b395c912d32f1bc7ee38 # timeout=10
16:19:08 provisioning config files...
16:19:08 copy managed file [npmrc] to file:/home/jenkins/.npmrc
16:19:08 copy managed file [pipconf] to file:/home/jenkins/.config/pip/pip.conf
16:19:08 [policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins5285218547975818768.sh
16:19:08 ---> python-tools-install.sh
16:19:08 Setup pyenv:
16:19:08 * system (set by /opt/pyenv/version)
16:19:08 * 3.8.13 (set by /opt/pyenv/version)
16:19:08 * 3.9.13 (set by /opt/pyenv/version)
16:19:08 * 3.10.6 (set by /opt/pyenv/version)
16:19:14 lf-activate-venv(): INFO: Creating python3 venv at /tmp/venv-X2wi
16:19:14 lf-activate-venv(): INFO: Save venv in file: /tmp/.os_lf_venv
16:19:17 lf-activate-venv(): INFO: Installing: lftools
16:19:58 lf-activate-venv(): INFO: Adding /tmp/venv-X2wi/bin to PATH
16:19:58 Generating Requirements File
16:20:36 ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
16:20:36 lftools 0.37.9 requires openstacksdk>=2.1.0, but you have openstacksdk 0.62.0 which is incompatible.
16:20:36 Python 3.10.6
16:20:37 pip 24.0 from /tmp/venv-X2wi/lib/python3.10/site-packages/pip (python 3.10)
16:20:37 appdirs==1.4.4
16:20:37 argcomplete==3.2.2
16:20:37 aspy.yaml==1.3.0
16:20:37 attrs==23.2.0
16:20:37 autopage==0.5.2
16:20:37 beautifulsoup4==4.12.3
16:20:37 boto3==1.34.46
16:20:37 botocore==1.34.46
16:20:37 bs4==0.0.2
16:20:37 cachetools==5.3.2
16:20:37 certifi==2024.2.2
16:20:37 cffi==1.16.0
16:20:37 cfgv==3.4.0
16:20:37 chardet==5.2.0
16:20:37 charset-normalizer==3.3.2
16:20:37 click==8.1.7
16:20:37 cliff==4.5.0
16:20:37 cmd2==2.4.3
16:20:37 cryptography==3.3.2
16:20:37 debtcollector==2.5.0
16:20:37 decorator==5.1.1
16:20:37 defusedxml==0.7.1
16:20:37 Deprecated==1.2.14
16:20:37 distlib==0.3.8
16:20:37 dnspython==2.6.1
16:20:37 docker==4.2.2
16:20:37 dogpile.cache==1.3.1
16:20:37 email-validator==2.1.0.post1
16:20:37 filelock==3.13.1
16:20:37 future==1.0.0
16:20:37 gitdb==4.0.11
16:20:37 GitPython==3.1.42
16:20:37 google-auth==2.28.0
16:20:37 httplib2==0.22.0
16:20:37 identify==2.5.35
16:20:37 idna==3.6
16:20:37 importlib-resources==1.5.0
16:20:37 iso8601==2.1.0
16:20:37 Jinja2==3.1.3
16:20:37 jmespath==1.0.1
16:20:37 jsonpatch==1.33
16:20:37 jsonpointer==2.4
16:20:37 jsonschema==4.21.1
16:20:37 jsonschema-specifications==2023.12.1
16:20:37 keystoneauth1==5.5.0
16:20:37 kubernetes==29.0.0
16:20:37 lftools==0.37.9
16:20:37 lxml==5.1.0
16:20:37 MarkupSafe==2.1.5
16:20:37 msgpack==1.0.7
16:20:37 multi_key_dict==2.0.3
16:20:37 munch==4.0.0
16:20:37 netaddr==1.2.1
16:20:37 netifaces==0.11.0
16:20:37 niet==1.4.2
16:20:37 nodeenv==1.8.0
16:20:37 oauth2client==4.1.3
16:20:37 oauthlib==3.2.2
16:20:37 openstacksdk==0.62.0
16:20:37 os-client-config==2.1.0
16:20:37 os-service-types==1.7.0
16:20:37 osc-lib==3.0.0
16:20:37 oslo.config==9.3.0
16:20:37 oslo.context==5.3.0
16:20:37 oslo.i18n==6.2.0
16:20:37 oslo.log==5.4.0
16:20:37 oslo.serialization==5.3.0
16:20:37 oslo.utils==7.0.0
16:20:37 packaging==23.2
16:20:37 pbr==6.0.0
16:20:37 platformdirs==4.2.0
16:20:37 prettytable==3.10.0
16:20:37 pyasn1==0.5.1
16:20:37 pyasn1-modules==0.3.0
16:20:37 pycparser==2.21
16:20:37 pygerrit2==2.0.15
16:20:37 PyGithub==2.2.0
16:20:37 pyinotify==0.9.6
16:20:37 PyJWT==2.8.0
16:20:37 PyNaCl==1.5.0
16:20:37 pyparsing==2.4.7
16:20:37 pyperclip==1.8.2
16:20:37 pyrsistent==0.20.0
16:20:37 python-cinderclient==9.4.0
16:20:37 python-dateutil==2.8.2
16:20:37 python-heatclient==3.4.0
16:20:37 python-jenkins==1.8.2
16:20:37 python-keystoneclient==5.3.0
16:20:37 python-magnumclient==4.3.0
16:20:37 python-novaclient==18.4.0
16:20:37 python-openstackclient==6.0.1
16:20:37 python-swiftclient==4.4.0
16:20:37 pytz==2024.1
16:20:37 PyYAML==6.0.1
16:20:37 referencing==0.33.0
16:20:37 requests==2.31.0
16:20:37 requests-oauthlib==1.3.1
16:20:37 requestsexceptions==1.4.0
16:20:37 rfc3986==2.0.0
16:20:37 rpds-py==0.18.0
16:20:37 rsa==4.9
16:20:37 ruamel.yaml==0.18.6
16:20:37 ruamel.yaml.clib==0.2.8
16:20:37 s3transfer==0.10.0
16:20:37 simplejson==3.19.2
16:20:37 six==1.16.0
16:20:37 smmap==5.0.1
16:20:37 soupsieve==2.5
16:20:37 stevedore==5.1.0
16:20:37 tabulate==0.9.0
16:20:37 toml==0.10.2
16:20:37 tomlkit==0.12.3
16:20:37 tqdm==4.66.2
16:20:37 typing_extensions==4.9.0
16:20:37 tzdata==2024.1
16:20:37 urllib3==1.26.18
16:20:37 virtualenv==20.25.0
16:20:37 wcwidth==0.2.13
16:20:37 websocket-client==1.7.0
16:20:37 wrapt==1.16.0
16:20:37 xdg==6.0.0
16:20:37 xmltodict==0.13.0
16:20:37 yq==3.2.3
16:20:37 [EnvInject] - Injecting environment variables from a build step.
16:20:37 [EnvInject] - Injecting as environment variables the properties content
16:20:37 SET_JDK_VERSION=openjdk17
16:20:37 GIT_URL="git://cloud.onap.org/mirror"
16:20:37 
16:20:37 [EnvInject] - Variables injected successfully.
16:20:37 [policy-pap-master-project-csit-pap] $ /bin/sh /tmp/jenkins8008838258436189935.sh
16:20:37 ---> update-java-alternatives.sh
16:20:37 ---> Updating Java version
16:20:37 ---> Ubuntu/Debian system detected
16:20:37 update-alternatives: using /usr/lib/jvm/java-17-openjdk-amd64/bin/java to provide /usr/bin/java (java) in manual mode
16:20:37 update-alternatives: using /usr/lib/jvm/java-17-openjdk-amd64/bin/javac to provide /usr/bin/javac (javac) in manual mode
16:20:37 update-alternatives: using /usr/lib/jvm/java-17-openjdk-amd64 to provide /usr/lib/jvm/java-openjdk (java_sdk_openjdk) in manual mode
16:20:38 openjdk version "17.0.4" 2022-07-19
16:20:38 OpenJDK Runtime Environment (build 17.0.4+8-Ubuntu-118.04)
16:20:38 OpenJDK 64-Bit Server VM (build 17.0.4+8-Ubuntu-118.04, mixed mode, sharing)
16:20:38 JAVA_HOME=/usr/lib/jvm/java-17-openjdk-amd64
16:20:38 [EnvInject] - Injecting environment variables from a build step.
16:20:38 [EnvInject] - Injecting as environment variables the properties file path '/tmp/java.env'
16:20:38 [EnvInject] - Variables injected successfully.
16:20:38 [policy-pap-master-project-csit-pap] $ /bin/sh -xe /tmp/jenkins11891529965455517326.sh
16:20:38 + /w/workspace/policy-pap-master-project-csit-pap/csit/run-project-csit.sh pap
16:20:38 + set +u
16:20:38 + save_set
16:20:38 + RUN_CSIT_SAVE_SET=ehxB
16:20:38 + RUN_CSIT_SHELLOPTS=braceexpand:errexit:hashall:interactive-comments:pipefail:xtrace
16:20:38 + '[' 1 -eq 0 ']'
16:20:38 + '[' -z /w/workspace/policy-pap-master-project-csit-pap ']'
16:20:38 + export PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin
16:20:38 + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin
16:20:38 + export SCRIPTS=/w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts
16:20:38 + SCRIPTS=/w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts
16:20:38 + export ROBOT_VARIABLES=
16:20:38 + ROBOT_VARIABLES=
16:20:38 + export PROJECT=pap
16:20:38 + PROJECT=pap
16:20:38 + cd /w/workspace/policy-pap-master-project-csit-pap
16:20:38 + rm -rf /w/workspace/policy-pap-master-project-csit-pap/csit/archives/pap
16:20:38 + mkdir -p /w/workspace/policy-pap-master-project-csit-pap/csit/archives/pap
16:20:38 + source_safely /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/prepare-robot-env.sh
16:20:38 + '[' -z /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/prepare-robot-env.sh ']'
16:20:38 + relax_set
16:20:38 + set +e
16:20:38 + set +o pipefail
16:20:38 + . /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/prepare-robot-env.sh
16:20:38 ++ '[' -z /w/workspace/policy-pap-master-project-csit-pap ']'
16:20:38 +++ mktemp -d
16:20:38 ++ ROBOT_VENV=/tmp/tmp.L1UVsev23Q
16:20:38 ++ echo ROBOT_VENV=/tmp/tmp.L1UVsev23Q
16:20:38 +++ python3 --version
16:20:38 ++ echo 'Python version is: Python 3.6.9'
16:20:38 Python version is: Python 3.6.9
16:20:38 ++ python3 -m venv --clear /tmp/tmp.L1UVsev23Q
16:20:39 ++ source /tmp/tmp.L1UVsev23Q/bin/activate
16:20:39 +++ deactivate nondestructive
16:20:39 +++ '[' -n '' ']'
16:20:39 +++ '[' -n '' ']'
16:20:39 +++ '[' -n /bin/bash -o -n '' ']'
16:20:39 +++ hash -r
16:20:39 +++ '[' -n '' ']'
16:20:39 +++ unset VIRTUAL_ENV
16:20:39 +++ '[' '!' nondestructive = nondestructive ']'
16:20:39 +++ VIRTUAL_ENV=/tmp/tmp.L1UVsev23Q
16:20:39 +++ export VIRTUAL_ENV
16:20:39 +++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin
16:20:39 +++ PATH=/tmp/tmp.L1UVsev23Q/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin
16:20:39 +++ export PATH
16:20:39 +++ '[' -n '' ']'
16:20:39 +++ '[' -z '' ']'
16:20:39 +++ _OLD_VIRTUAL_PS1=
16:20:39 +++ '[' 'x(tmp.L1UVsev23Q) ' '!=' x ']'
16:20:39 +++ PS1='(tmp.L1UVsev23Q) '
16:20:39 +++ export PS1
16:20:39 +++ '[' -n /bin/bash -o -n '' ']'
16:20:39 +++ hash -r
16:20:39 ++ set -exu
16:20:39 ++ python3 -m pip install -qq --upgrade 'pip<=23.0' 'setuptools<=66.1.1'
16:20:43 ++ echo 'Installing Python Requirements'
16:20:43 Installing Python Requirements
16:20:43 ++ python3 -m pip install -qq -r /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/pylibs.txt
16:21:06 ++ python3 -m pip -qq freeze
16:21:06 bcrypt==4.0.1
16:21:06 beautifulsoup4==4.12.3
16:21:06 bitarray==2.9.2
16:21:06 certifi==2024.2.2
16:21:06 cffi==1.15.1
16:21:06 charset-normalizer==2.0.12
16:21:06 cryptography==40.0.2
16:21:06 decorator==5.1.1
16:21:06 elasticsearch==7.17.9
16:21:06 elasticsearch-dsl==7.4.1
16:21:06 enum34==1.1.10
16:21:06 idna==3.6
16:21:06 importlib-resources==5.4.0
16:21:06 ipaddr==2.2.0
16:21:06 isodate==0.6.1
16:21:06 jmespath==0.10.0
16:21:06 jsonpatch==1.32
16:21:06 jsonpath-rw==1.4.0
16:21:06 jsonpointer==2.3
16:21:06 lxml==5.1.0
16:21:06 netaddr==0.8.0
16:21:06 netifaces==0.11.0
16:21:06 odltools==0.1.28
16:21:06 paramiko==3.4.0
16:21:06 pkg_resources==0.0.0
16:21:06 ply==3.11
16:21:06 pyang==2.6.0
16:21:06 pyangbind==0.8.1
16:21:06 pycparser==2.21
16:21:06 pyhocon==0.3.60
16:21:06 PyNaCl==1.5.0
16:21:06 pyparsing==3.1.1
16:21:06 python-dateutil==2.8.2
16:21:06 regex==2023.8.8
16:21:06 requests==2.27.1
16:21:06 robotframework==6.1.1
16:21:06 robotframework-httplibrary==0.4.2
16:21:06 robotframework-pythonlibcore==3.0.0
16:21:06 robotframework-requests==0.9.4
16:21:06 robotframework-selenium2library==3.0.0
16:21:06 robotframework-seleniumlibrary==5.1.3
16:21:06 robotframework-sshlibrary==3.8.0
16:21:06 scapy==2.5.0
16:21:06 scp==0.14.5
16:21:06 selenium==3.141.0
16:21:06 six==1.16.0
16:21:06 soupsieve==2.3.2.post1
16:21:06 urllib3==1.26.18
16:21:06 waitress==2.0.0
16:21:06 WebOb==1.8.7
16:21:06 WebTest==3.0.0
16:21:06 zipp==3.6.0
16:21:06 ++ mkdir -p /tmp/tmp.L1UVsev23Q/src/onap
16:21:06 ++ rm -rf /tmp/tmp.L1UVsev23Q/src/onap/testsuite
16:21:06 ++ python3 -m pip install -qq --upgrade --extra-index-url=https://nexus3.onap.org/repository/PyPi.staging/simple 'robotframework-onap==0.6.0.*' --pre
16:21:12 ++ echo 'Installing python confluent-kafka library'
16:21:12 Installing python confluent-kafka library
16:21:12 ++ python3 -m pip install -qq confluent-kafka
16:21:13 ++ echo 'Uninstall docker-py and reinstall docker.'
16:21:13 Uninstall docker-py and reinstall docker.
16:21:13 ++ python3 -m pip uninstall -y -qq docker
16:21:14 ++ python3 -m pip install -U -qq docker
16:21:15 ++ python3 -m pip -qq freeze
16:21:16 bcrypt==4.0.1
16:21:16 beautifulsoup4==4.12.3
16:21:16 bitarray==2.9.2
16:21:16 certifi==2024.2.2
16:21:16 cffi==1.15.1
16:21:16 charset-normalizer==2.0.12
16:21:16 confluent-kafka==2.3.0
16:21:16 cryptography==40.0.2
16:21:16 decorator==5.1.1
16:21:16 deepdiff==5.7.0
16:21:16 dnspython==2.2.1
16:21:16 docker==5.0.3
16:21:16 elasticsearch==7.17.9
16:21:16 elasticsearch-dsl==7.4.1
16:21:16 enum34==1.1.10
16:21:16 future==1.0.0
16:21:16 idna==3.6
16:21:16 importlib-resources==5.4.0
16:21:16 ipaddr==2.2.0
16:21:16 isodate==0.6.1
16:21:16 Jinja2==3.0.3
16:21:16 jmespath==0.10.0
16:21:16 jsonpatch==1.32
16:21:16 jsonpath-rw==1.4.0
16:21:16 jsonpointer==2.3
16:21:16 kafka-python==2.0.2
16:21:16 lxml==5.1.0
16:21:16 MarkupSafe==2.0.1
16:21:16 more-itertools==5.0.0
16:21:16 netaddr==0.8.0
16:21:16 netifaces==0.11.0
16:21:16 odltools==0.1.28
16:21:16 ordered-set==4.0.2
16:21:16 paramiko==3.4.0
16:21:16 pbr==6.0.0
16:21:16 pkg_resources==0.0.0
16:21:16 ply==3.11
16:21:16 protobuf==3.19.6
16:21:16 pyang==2.6.0
16:21:16 pyangbind==0.8.1
16:21:16 pycparser==2.21
16:21:16 pyhocon==0.3.60
16:21:16 PyNaCl==1.5.0
16:21:16 pyparsing==3.1.1
16:21:16 python-dateutil==2.8.2
16:21:16 PyYAML==6.0.1
16:21:16 regex==2023.8.8
16:21:16 requests==2.27.1
16:21:16 robotframework==6.1.1
16:21:16 robotframework-httplibrary==0.4.2
16:21:16 robotframework-onap==0.6.0.dev105
16:21:16 robotframework-pythonlibcore==3.0.0
16:21:16 robotframework-requests==0.9.4
16:21:16 robotframework-selenium2library==3.0.0
16:21:16 robotframework-seleniumlibrary==5.1.3
16:21:16 robotframework-sshlibrary==3.8.0
16:21:16 robotlibcore-temp==1.0.2
16:21:16 scapy==2.5.0
16:21:16 scp==0.14.5
16:21:16 selenium==3.141.0
16:21:16 six==1.16.0
16:21:16 soupsieve==2.3.2.post1
16:21:16 urllib3==1.26.18
16:21:16 waitress==2.0.0
16:21:16 WebOb==1.8.7
16:21:16 websocket-client==1.3.1
16:21:16 WebTest==3.0.0
16:21:16 zipp==3.6.0
16:21:16 ++ uname
16:21:16 ++ grep -q Linux
16:21:16 ++ sudo apt-get -y -qq install libxml2-utils
16:21:16 + load_set
16:21:16 + _setopts=ehuxB
16:21:16 ++ echo braceexpand:hashall:interactive-comments:nounset:xtrace
16:21:16 ++ tr : ' '
16:21:16 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
16:21:16 + set +o braceexpand
16:21:16 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
16:21:16 + set +o hashall
16:21:16 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
16:21:16 + set +o interactive-comments
16:21:16 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
16:21:16 + set +o nounset
16:21:16 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
16:21:16 + set +o xtrace
16:21:16 ++ echo ehuxB
16:21:16 ++ sed 's/./& /g'
16:21:16 + for i in $(echo "$_setopts" | sed 's/./& /g')
16:21:16 + set +e
16:21:16 + for i in $(echo "$_setopts" | sed 's/./& /g')
16:21:16 + set +h
16:21:16 + for i in $(echo "$_setopts" | sed 's/./& /g')
16:21:16 + set +u
16:21:16 + for i in $(echo "$_setopts" | sed 's/./& /g')
16:21:16 + set +x
16:21:16 + source_safely /tmp/tmp.L1UVsev23Q/bin/activate
16:21:16 + '[' -z /tmp/tmp.L1UVsev23Q/bin/activate ']'
16:21:16 + relax_set
16:21:16 + set +e
16:21:16 + set +o pipefail
16:21:16 + . /tmp/tmp.L1UVsev23Q/bin/activate
16:21:16 ++ deactivate nondestructive
16:21:16 ++ '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin ']'
16:21:16 ++ PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin
16:21:16 ++ export PATH
16:21:16 ++ unset _OLD_VIRTUAL_PATH
16:21:16 ++ '[' -n '' ']'
16:21:16 ++ '[' -n /bin/bash -o -n '' ']'
16:21:16 ++ hash -r
16:21:16 ++ '[' -n '' ']'
16:21:16 ++ unset VIRTUAL_ENV
16:21:16 ++ '[' '!' nondestructive = nondestructive ']'
16:21:16 ++ VIRTUAL_ENV=/tmp/tmp.L1UVsev23Q
16:21:16 ++ export VIRTUAL_ENV
16:21:16 ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin
16:21:16 ++ PATH=/tmp/tmp.L1UVsev23Q/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin
16:21:16 ++ export PATH
16:21:16 ++ '[' -n '' ']'
16:21:16 ++ '[' -z '' ']'
16:21:16 ++ _OLD_VIRTUAL_PS1='(tmp.L1UVsev23Q) '
16:21:16 ++ '[' 'x(tmp.L1UVsev23Q) ' '!=' x ']'
16:21:16 ++ PS1='(tmp.L1UVsev23Q) (tmp.L1UVsev23Q) '
16:21:16 ++ export PS1
16:21:16 ++ '[' -n /bin/bash -o -n '' ']'
16:21:16 ++ hash -r
16:21:16 + load_set
16:21:16 + _setopts=hxB
16:21:16 ++ echo braceexpand:hashall:interactive-comments:xtrace
16:21:16 ++ tr : ' '
16:21:16 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
16:21:16 + set +o braceexpand
16:21:16 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
16:21:16 + set +o hashall
16:21:16 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
16:21:16 + set +o interactive-comments
16:21:16 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
16:21:16 + set +o xtrace
16:21:16 ++ echo hxB
16:21:16 ++ sed 's/./& /g'
16:21:16 + for i in $(echo "$_setopts" | sed 's/./& /g')
16:21:16 + set +h
16:21:16 + for i in $(echo "$_setopts" | sed 's/./& /g')
16:21:16 + set +x
16:21:16 + export TEST_PLAN_DIR=/w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests
16:21:16 + TEST_PLAN_DIR=/w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests
16:21:16 + export TEST_OPTIONS=
16:21:16 + TEST_OPTIONS=
16:21:16 ++ mktemp -d
16:21:16 + WORKDIR=/tmp/tmp.aKPhtjj3Wq
16:21:16 + cd /tmp/tmp.aKPhtjj3Wq
16:21:16 + docker login -u docker -p docker nexus3.onap.org:10001
16:21:17 WARNING! Using --password via the CLI is insecure. Use --password-stdin.
16:21:17 WARNING! Your password will be stored unencrypted in /home/jenkins/.docker/config.json.
16:21:17 Configure a credential helper to remove this warning. See
16:21:17 https://docs.docker.com/engine/reference/commandline/login/#credentials-store
16:21:17 
16:21:17 Login Succeeded
16:21:17 + SETUP=/w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/setup-pap.sh
16:21:17 + '[' -f /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/setup-pap.sh ']'
16:21:17 + echo 'Running setup script /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/setup-pap.sh'
16:21:17 Running setup script /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/setup-pap.sh
16:21:17 + source_safely /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/setup-pap.sh
16:21:17 + '[' -z /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/setup-pap.sh ']'
16:21:17 + relax_set
16:21:17 + set +e
16:21:17 + set +o pipefail
16:21:17 + . /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/setup-pap.sh
16:21:17 ++ source /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/node-templates.sh
16:21:17 +++ '[' -z /w/workspace/policy-pap-master-project-csit-pap ']'
16:21:17 ++++ awk -F= '$1 == "defaultbranch" { print $2 }' /w/workspace/policy-pap-master-project-csit-pap/.gitreview
16:21:17 +++ GERRIT_BRANCH=master
16:21:17 +++ echo GERRIT_BRANCH=master
16:21:17 GERRIT_BRANCH=master
16:21:17 +++ rm -rf /w/workspace/policy-pap-master-project-csit-pap/models
16:21:17 +++ mkdir /w/workspace/policy-pap-master-project-csit-pap/models
16:21:17 +++ git clone -b master --single-branch https://github.com/onap/policy-models.git /w/workspace/policy-pap-master-project-csit-pap/models
16:21:17 Cloning into '/w/workspace/policy-pap-master-project-csit-pap/models'...
16:21:18 +++ export DATA=/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies
16:21:18 +++ DATA=/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies
16:21:18 +++ export NODETEMPLATES=/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/nodetemplates
16:21:18 +++ NODETEMPLATES=/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/nodetemplates
16:21:18 +++ sed -e 's!Measurement_vGMUX!ADifferentValue!' /w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies/vCPE.policy.monitoring.input.tosca.json
16:21:18 +++ sed -e 's!"version": "1.0.0"!"version": "2.0.0"!' -e 's!"policy-version": 1!"policy-version": 2!' /w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies/vCPE.policy.monitoring.input.tosca.json
16:21:18 ++ source /w/workspace/policy-pap-master-project-csit-pap/compose/start-compose.sh apex-pdp --grafana
16:21:18 +++ '[' -z /w/workspace/policy-pap-master-project-csit-pap ']'
16:21:18 +++ COMPOSE_FOLDER=/w/workspace/policy-pap-master-project-csit-pap/compose
16:21:18 +++ grafana=false
16:21:18 +++ gui=false
16:21:18 +++ [[ 2 -gt 0 ]]
16:21:18 +++ key=apex-pdp
16:21:18 +++ case $key in
16:21:18 +++ echo apex-pdp
16:21:18 apex-pdp
16:21:18 +++ component=apex-pdp
16:21:18 +++ shift
16:21:18 +++ [[ 1 -gt 0 ]]
16:21:18 +++ key=--grafana
16:21:18 +++ case $key in
16:21:18 +++ grafana=true
16:21:18 +++ shift
16:21:18 +++ [[ 0 -gt 0 ]]
16:21:18 +++ cd /w/workspace/policy-pap-master-project-csit-pap/compose
16:21:18 +++ echo 'Configuring docker compose...'
16:21:18 Configuring docker compose...
16:21:18 +++ source export-ports.sh
16:21:18 +++ source get-versions.sh
16:21:20 +++ '[' -z pap ']'
16:21:20 +++ '[' -n apex-pdp ']'
16:21:20 +++ '[' apex-pdp == logs ']'
16:21:20 +++ '[' true = true ']'
16:21:20 +++ echo 'Starting apex-pdp application with Grafana'
16:21:20 Starting apex-pdp application with Grafana
16:21:20 +++ docker-compose up -d apex-pdp grafana
16:21:21 Creating network "compose_default" with the default driver
16:21:22 Pulling prometheus (nexus3.onap.org:10001/prom/prometheus:latest)...
16:21:22 latest: Pulling from prom/prometheus
16:21:26 Digest: sha256:beb5e30ffba08d9ae8a7961b9a2145fc8af6296ff2a4f463df7cd722fcbfc789
16:21:26 Status: Downloaded newer image for nexus3.onap.org:10001/prom/prometheus:latest
16:21:26 Pulling grafana (nexus3.onap.org:10001/grafana/grafana:latest)...
16:21:26 latest: Pulling from grafana/grafana
16:21:32 Digest: sha256:8640e5038e83ca4554ed56b9d76375158bcd51580238c6f5d8adaf3f20dd5379
16:21:32 Status: Downloaded newer image for nexus3.onap.org:10001/grafana/grafana:latest
16:21:32 Pulling mariadb (nexus3.onap.org:10001/mariadb:10.10.2)...
16:21:34 10.10.2: Pulling from mariadb
16:21:39 Digest: sha256:bfc25a68e113de43d0d112f5a7126df8e278579c3224e3923359e1c1d8d5ce6e
16:21:39 Status: Downloaded newer image for nexus3.onap.org:10001/mariadb:10.10.2
16:21:39 Pulling simulator (nexus3.onap.org:10001/onap/policy-models-simulator:3.1.2-SNAPSHOT)...
16:21:39 3.1.2-SNAPSHOT: Pulling from onap/policy-models-simulator
16:21:44 Digest: sha256:5772a5c551b30d73f901debb8dc38f305559b920e248a9ccb1dba3b880278a13
16:21:44 Status: Downloaded newer image for nexus3.onap.org:10001/onap/policy-models-simulator:3.1.2-SNAPSHOT
16:21:44 Pulling zookeeper (confluentinc/cp-zookeeper:latest)...
16:21:45 latest: Pulling from confluentinc/cp-zookeeper
16:22:20 Digest: sha256:9babd1c0beaf93189982bdbb9fe4bf194a2730298b640c057817746c19838866
16:22:21 Status: Downloaded newer image for confluentinc/cp-zookeeper:latest
16:22:22 Pulling kafka (confluentinc/cp-kafka:latest)...
16:22:29 latest: Pulling from confluentinc/cp-kafka
16:22:37 Digest: sha256:24cdd3a7fa89d2bed150560ebea81ff1943badfa61e51d66bb541a6b0d7fb047
16:22:37 Status: Downloaded newer image for confluentinc/cp-kafka:latest
16:22:37 Pulling policy-db-migrator (nexus3.onap.org:10001/onap/policy-db-migrator:3.1.2-SNAPSHOT)...
16:22:37 3.1.2-SNAPSHOT: Pulling from onap/policy-db-migrator
16:22:42 Digest: sha256:59b5cc74cb5bbcb86c2e85d974415cfa4a6270c5728a7a489a5c6eece42f2b45
16:22:42 Status: Downloaded newer image for nexus3.onap.org:10001/onap/policy-db-migrator:3.1.2-SNAPSHOT
16:22:42 Pulling api (nexus3.onap.org:10001/onap/policy-api:3.1.2-SNAPSHOT)...
16:22:42 3.1.2-SNAPSHOT: Pulling from onap/policy-api
16:22:44 Digest: sha256:71cc3c3555fddbd324c5ddec27e24db340b82732d2f6ce50eddcfdf6715a7ab2
16:22:44 Status: Downloaded newer image for nexus3.onap.org:10001/onap/policy-api:3.1.2-SNAPSHOT
16:22:44 Pulling pap (nexus3.onap.org:10001/onap/policy-pap:3.1.2-SNAPSHOT)...
16:22:44 3.1.2-SNAPSHOT: Pulling from onap/policy-pap
16:22:46 Digest: sha256:448850bc9066413f6555e9c62d97da12eaa2c454a1304262987462aae46f4676
16:22:46 Status: Downloaded newer image for nexus3.onap.org:10001/onap/policy-pap:3.1.2-SNAPSHOT
16:22:46 Pulling apex-pdp (nexus3.onap.org:10001/onap/policy-apex-pdp:3.1.2-SNAPSHOT)...
16:22:47 3.1.2-SNAPSHOT: Pulling from onap/policy-apex-pdp
16:22:54 Digest: sha256:8670bcaff746ebc196cef9125561eb167e1e65c7e2f8d374c0d8834d57564da4
16:22:54 Status: Downloaded newer image for nexus3.onap.org:10001/onap/policy-apex-pdp:3.1.2-SNAPSHOT
16:22:54 Creating mariadb ...
16:22:54 Creating simulator ...
16:22:54 Creating prometheus ...
16:22:54 Creating compose_zookeeper_1 ...
16:23:09 Creating compose_zookeeper_1 ... done
16:23:09 Creating kafka ...
16:23:10 Creating kafka ... done
16:23:11 Creating simulator ... done
16:23:12 Creating prometheus ... done
16:23:12 Creating grafana ...
16:23:13 Creating grafana ... done
16:23:15 Creating mariadb ... done
16:23:15 Creating policy-db-migrator ...
16:23:15 Creating policy-db-migrator ... done
16:23:15 Creating policy-api ...
16:23:16 Creating policy-api ... done
16:23:16 Creating policy-pap ...
16:23:17 Creating policy-pap ... done
16:23:17 Creating policy-apex-pdp ...
16:23:18 Creating policy-apex-pdp ... done
16:23:18 +++ echo 'Prometheus server: http://localhost:30259'
16:23:18 Prometheus server: http://localhost:30259
16:23:18 +++ echo 'Grafana server: http://localhost:30269'
16:23:18 Grafana server: http://localhost:30269
16:23:18 +++ cd /w/workspace/policy-pap-master-project-csit-pap
16:23:18 ++ sleep 10
16:23:28 ++ unset http_proxy https_proxy
16:23:28 ++ bash /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/wait_for_rest.sh localhost 30003
16:23:28 Waiting for REST to come up on localhost port 30003...
16:23:29 NAMES                 STATUS
16:23:29 policy-apex-pdp       Up 10 seconds
16:23:29 policy-pap            Up 11 seconds
16:23:29 policy-api            Up 12 seconds
16:23:29 policy-db-migrator    Up 13 seconds
16:23:29 grafana               Up 15 seconds
16:23:29 kafka                 Up 18 seconds
16:23:29 compose_zookeeper_1   Up 19 seconds
16:23:29 simulator             Up 17 seconds
16:23:29 prometheus            Up 16 seconds
16:23:29 mariadb               Up 14 seconds
16:23:34 NAMES                 STATUS
16:23:34 policy-apex-pdp       Up 15 seconds
16:23:34 policy-pap            Up 16 seconds
16:23:34 policy-api            Up 17 seconds
16:23:34 grafana               Up 20 seconds
16:23:34 kafka                 Up 23 seconds
16:23:34 compose_zookeeper_1   Up 24 seconds
16:23:34 simulator             Up 22 seconds
16:23:34 prometheus            Up 21 seconds
16:23:34 mariadb               Up 19 seconds
16:23:39 NAMES                 STATUS
16:23:39 policy-apex-pdp       Up 20 seconds
16:23:39 policy-pap            Up 21 seconds
16:23:39 policy-api            Up 22 seconds
16:23:39 grafana               Up 25 seconds
16:23:39 kafka                 Up 28 seconds
16:23:39 compose_zookeeper_1   Up 29 seconds
16:23:39 simulator             Up 27 seconds
16:23:39 prometheus            Up 26 seconds
16:23:39 mariadb               Up 24 seconds
16:23:44 NAMES                 STATUS
16:23:44 policy-apex-pdp       Up 25 seconds
16:23:44 policy-pap            Up 26 seconds
16:23:44 policy-api            Up 27 seconds
16:23:44 grafana               Up 30 seconds
16:23:44 kafka                 Up 33 seconds
16:23:44 compose_zookeeper_1   Up 34 seconds
16:23:44 simulator             Up 32 seconds
16:23:44 prometheus            Up 31 seconds
16:23:44 mariadb               Up 29 seconds
16:23:49 NAMES                 STATUS
16:23:49 policy-apex-pdp       Up 30 seconds
16:23:49 policy-pap            Up 31 seconds
16:23:49 policy-api            Up 32 seconds
16:23:49 grafana               Up 35 seconds
16:23:49 kafka                 Up 38 seconds
16:23:49 compose_zookeeper_1   Up 39 seconds
16:23:49 simulator             Up 37 seconds
16:23:49 prometheus            Up 36 seconds
16:23:49 mariadb               Up 34 seconds
16:23:54 NAMES                 STATUS
16:23:54 policy-apex-pdp       Up 35 seconds
16:23:54 policy-pap            Up 36 seconds
16:23:54 policy-api            Up 37 seconds
16:23:54 grafana               Up 40 seconds
16:23:54 kafka                 Up 43 seconds
16:23:54 compose_zookeeper_1   Up 44 seconds
16:23:54 simulator             Up 42 seconds
16:23:54 prometheus            Up 41 seconds
16:23:54 mariadb               Up 39 seconds
16:23:59 NAMES                 STATUS
16:23:59 policy-apex-pdp       Up 40 seconds
16:23:59 policy-pap            Up 41 seconds
16:23:59 policy-api            Up 42 seconds
16:23:59 grafana               Up 45 seconds
16:23:59 kafka                 Up 48 seconds
16:23:59 compose_zookeeper_1   Up 49 seconds
16:23:59 simulator             Up 47 seconds
16:23:59 prometheus            Up 46 seconds
16:23:59 mariadb               Up 44 seconds
16:23:59 ++ export 'SUITES=pap-test.robot
16:23:59 pap-slas.robot'
16:23:59 ++ SUITES='pap-test.robot
16:23:59 pap-slas.robot'
16:23:59 ++ ROBOT_VARIABLES='-v POLICY_PAP_IP:localhost:30003 -v POLICY_API_IP:localhost:30002
16:23:59 -v PROMETHEUS_IP:localhost:30259 -v DATA:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies -v NODETEMPLATES:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/nodetemplates'
16:23:59 + load_set
16:23:59 + _setopts=hxB
16:23:59 ++ echo braceexpand:hashall:interactive-comments:xtrace
16:23:59 ++ tr : ' '
16:23:59 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
16:23:59 + set +o braceexpand
16:23:59 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
16:23:59 + set +o hashall
16:23:59 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
16:23:59 + set +o interactive-comments
16:23:59 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
16:23:59 + set +o xtrace
16:23:59 ++ echo hxB
16:23:59 ++ sed 's/./& /g'
16:23:59 + for i in $(echo "$_setopts" | sed 's/./& /g')
16:23:59 + set +h
16:23:59 + for i in $(echo "$_setopts" | sed 's/./& /g')
16:23:59 + set +x
16:23:59 + docker_stats
16:23:59 + tee /w/workspace/policy-pap-master-project-csit-pap/csit/archives/pap/_sysinfo-1-after-setup.txt
16:23:59 ++ uname -s
16:23:59 + '[' Linux == Darwin ']'
16:23:59 + sh -c 'top -bn1 | head -3'
16:23:59 top - 16:23:59 up 5 min, 0 users, load average: 3.39, 1.69, 0.71
16:23:59 Tasks: 204 total, 1 running, 130 sleeping, 0 stopped, 0 zombie
16:23:59 %Cpu(s): 11.1 us, 2.3 sy, 0.0 ni, 79.0 id, 7.5 wa, 0.0 hi, 0.0 si, 0.1 st
16:23:59 + echo
16:23:59 + sh -c 'free -h'
16:23:59 
16:23:59               total        used        free      shared  buff/cache   available
16:23:59 Mem:            31G        2.8G         22G        1.3M        6.2G         28G
16:23:59 Swap:          1.0G          0B        1.0G
16:23:59 + echo
16:23:59 
16:23:59 + docker ps --format 'table {{ .Names }}\t{{ .Status }}'
16:23:59 NAMES                 STATUS
16:23:59 policy-apex-pdp       Up 40 seconds
16:23:59 policy-pap            Up 41 seconds
16:23:59 policy-api            Up 42 seconds
16:23:59 grafana               Up 45 seconds
16:23:59 kafka                 Up 48 seconds
16:23:59 compose_zookeeper_1   Up 49 seconds
16:23:59 simulator             Up 47 seconds
16:23:59 prometheus            Up 46 seconds
16:23:59 mariadb               Up 44 seconds
16:23:59 + echo
16:23:59 
16:23:59 + docker stats --no-stream
16:24:02 CONTAINER ID   NAME                  CPU %     MEM USAGE / LIMIT     MEM %     NET I/O           BLOCK I/O       PIDS
16:24:02 11115781713d   policy-apex-pdp       2.03%     181.6MiB / 31.41GiB   0.56%     9.57kB / 8.98kB   0B / 0B         48
16:24:02 6570ad387f23   policy-pap            2.16%     559.1MiB / 31.41GiB   1.74%     32.6kB / 33.7kB   0B / 153MB      65
16:24:02 587031420d2e   policy-api            0.14%     533MiB / 31.41GiB     1.66%     1MB / 711kB       0B / 0B         56
16:24:02 2fc7360130ba   grafana               0.02%     53.16MiB / 31.41GiB   0.17%     19.1kB / 3.38kB   0B / 24MB       17
16:24:02 f744b0809568   kafka                 8.41%     383.6MiB / 31.41GiB   1.19%     71.3kB / 75.2kB   0B / 479kB      85
16:24:02 a19483e1dacd   compose_zookeeper_1   0.22%     95.43MiB / 31.41GiB   0.30%     53.6kB / 46.8kB   4.1kB / 434kB   60
16:24:02 a72de0c2037a   simulator             0.10%     119.6MiB / 31.41GiB   0.37%     1.36kB / 0B       225kB / 0B      76
16:24:02 2cdf50104aef   prometheus            0.34%     19.61MiB / 31.41GiB   0.06%     1.64kB / 474B     0B / 0B         11
16:24:02 ee40002f230d mariadb 0.03% 101.6MiB / 31.41GiB 0.32% 997kB / 1.19MB 11MB / 66.9MB 40 16:24:02 + echo 16:24:02 16:24:02 + cd /tmp/tmp.aKPhtjj3Wq 16:24:02 + echo 'Reading the testplan:' 16:24:02 Reading the testplan: 16:24:02 + echo 'pap-test.robot 16:24:02 pap-slas.robot' 16:24:02 + egrep -v '(^[[:space:]]*#|^[[:space:]]*$)' 16:24:02 + sed 's|^|/w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/|' 16:24:02 + cat testplan.txt 16:24:02 /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-test.robot 16:24:02 /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-slas.robot 16:24:02 ++ xargs 16:24:02 + SUITES='/w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-test.robot /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-slas.robot' 16:24:02 + echo 'ROBOT_VARIABLES=-v POLICY_PAP_IP:localhost:30003 -v POLICY_API_IP:localhost:30002 16:24:02 -v PROMETHEUS_IP:localhost:30259 -v DATA:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies -v NODETEMPLATES:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/nodetemplates' 16:24:02 ROBOT_VARIABLES=-v POLICY_PAP_IP:localhost:30003 -v POLICY_API_IP:localhost:30002 16:24:02 -v PROMETHEUS_IP:localhost:30259 -v DATA:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies -v NODETEMPLATES:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/nodetemplates 16:24:02 + echo 'Starting Robot test suites /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-test.robot /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-slas.robot ...' 
16:24:02 Starting Robot test suites /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-test.robot /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-slas.robot ...
16:24:02 + relax_set
16:24:02 + set +e
16:24:02 + set +o pipefail
16:24:02 + python3 -m robot.run -N pap -v WORKSPACE:/tmp -v POLICY_PAP_IP:localhost:30003 -v POLICY_API_IP:localhost:30002 -v PROMETHEUS_IP:localhost:30259 -v DATA:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies -v NODETEMPLATES:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/nodetemplates /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-test.robot /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-slas.robot
16:24:02 ==============================================================================
16:24:02 pap
16:24:02 ==============================================================================
16:24:02 pap.Pap-Test
16:24:02 ==============================================================================
16:24:03 LoadPolicy :: Create a policy named 'onap.restart.tca' and version... | PASS |
16:24:03 ------------------------------------------------------------------------------
16:24:03 LoadPolicyWithMetadataSet :: Create a policy named 'operational.ap... | PASS |
16:24:03 ------------------------------------------------------------------------------
16:24:04 LoadNodeTemplates :: Create node templates in database using speci... | PASS |
16:24:04 ------------------------------------------------------------------------------
16:24:04 Healthcheck :: Verify policy pap health check | PASS |
16:24:04 ------------------------------------------------------------------------------
16:24:25 Consolidated Healthcheck :: Verify policy consolidated health check | PASS |
16:24:25 ------------------------------------------------------------------------------
16:24:25 Metrics :: Verify policy pap is exporting prometheus metrics | PASS |
16:24:25 ------------------------------------------------------------------------------
16:24:26 AddPdpGroup :: Add a new PdpGroup named 'testGroup' in the policy ... | PASS |
16:24:26 ------------------------------------------------------------------------------
16:24:26 QueryPdpGroupsBeforeActivation :: Verify PdpGroups before activation | PASS |
16:24:26 ------------------------------------------------------------------------------
16:24:26 ActivatePdpGroup :: Change the state of PdpGroup named 'testGroup'... | PASS |
16:24:26 ------------------------------------------------------------------------------
16:24:26 QueryPdpGroupsAfterActivation :: Verify PdpGroups after activation | PASS |
16:24:26 ------------------------------------------------------------------------------
16:24:27 DeployPdpGroups :: Deploy policies in PdpGroups | PASS |
16:24:27 ------------------------------------------------------------------------------
16:24:27 QueryPdpGroupsAfterDeploy :: Verify PdpGroups after undeploy | PASS |
16:24:27 ------------------------------------------------------------------------------
16:24:27 QueryPolicyAuditAfterDeploy :: Verify policy audit record after de... | PASS |
16:24:27 ------------------------------------------------------------------------------
16:24:27 QueryPolicyAuditWithMetadataSetAfterDeploy :: Verify policy audit ... | PASS |
16:24:27 ------------------------------------------------------------------------------
16:24:27 UndeployPolicy :: Undeploy a policy named 'onap.restart.tca' from ... | PASS |
16:24:27 ------------------------------------------------------------------------------
16:24:28 UndeployPolicyWithMetadataSet :: Undeploy a policy named 'operatio... | PASS |
16:24:28 ------------------------------------------------------------------------------
16:24:28 QueryPdpGroupsAfterUndeploy :: Verify PdpGroups after undeploy | PASS |
16:24:28 ------------------------------------------------------------------------------
16:24:48 QueryPolicyAuditAfterUnDeploy :: Verify policy audit record after ... | FAIL |
16:24:48 pdpTypeC != pdpTypeA
16:24:48 ------------------------------------------------------------------------------
16:24:48 QueryPolicyAuditWithMetadataSetAfterUnDeploy :: Verify policy audi... | PASS |
16:24:48 ------------------------------------------------------------------------------
16:24:48 DeactivatePdpGroup :: Change the state of PdpGroup named 'testGrou... | PASS |
16:24:48 ------------------------------------------------------------------------------
16:24:49 DeletePdpGroups :: Delete the PdpGroup named 'testGroup' from poli... | PASS |
16:24:49 ------------------------------------------------------------------------------
16:24:49 QueryPdpGroupsAfterDelete :: Verify PdpGroups after delete | PASS |
16:24:49 ------------------------------------------------------------------------------
16:24:49 pap.Pap-Test | FAIL |
16:24:49 22 tests, 21 passed, 1 failed
16:24:49 ==============================================================================
16:24:49 pap.Pap-Slas
16:24:49 ==============================================================================
16:25:49 WaitForPrometheusServer :: Wait for Prometheus server to gather al... | PASS |
16:25:49 ------------------------------------------------------------------------------
16:25:49 ValidateResponseTimeForHealthcheck :: Validate component healthche... | PASS |
16:25:49 ------------------------------------------------------------------------------
16:25:49 ValidateResponseTimeForSystemHealthcheck :: Validate if system hea... | PASS |
16:25:49 ------------------------------------------------------------------------------
16:25:49 ValidateResponseTimeQueryPolicyAudit :: Validate query audits resp... | PASS |
16:25:49 ------------------------------------------------------------------------------
16:25:49 ValidateResponseTimeUpdateGroup :: Validate pdps/group response time | PASS |
16:25:49 ------------------------------------------------------------------------------
16:25:49 ValidatePolicyDeploymentTime :: Check if deployment of policy is u... | PASS |
16:25:49 ------------------------------------------------------------------------------
16:25:49 ValidateResponseTimeDeletePolicy :: Check if undeployment of polic... | PASS |
16:25:49 ------------------------------------------------------------------------------
16:25:49 ValidateResponseTimeDeleteGroup :: Validate delete group response ... | PASS |
16:25:49 ------------------------------------------------------------------------------
16:25:49 pap.Pap-Slas | PASS |
16:25:49 8 tests, 8 passed, 0 failed
16:25:49 ==============================================================================
16:25:49 pap | FAIL |
16:25:49 30 tests, 29 passed, 1 failed
16:25:49 ==============================================================================
16:25:49 Output: /tmp/tmp.aKPhtjj3Wq/output.xml
16:25:49 Log: /tmp/tmp.aKPhtjj3Wq/log.html
16:25:49 Report: /tmp/tmp.aKPhtjj3Wq/report.html
16:25:49 + RESULT=1
16:25:49 + load_set
16:25:49 + _setopts=hxB
16:25:49 ++ echo braceexpand:hashall:interactive-comments:xtrace
16:25:49 ++ tr : ' '
16:25:49 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
16:25:49 + set +o braceexpand
16:25:49 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
16:25:49 + set +o hashall
16:25:49 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
16:25:49 + set +o interactive-comments
16:25:49 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
16:25:49 + set +o xtrace
16:25:49 ++ echo hxB
16:25:49 ++ sed 's/./& /g'
16:25:49 + for i in $(echo "$_setopts" | sed 's/./& /g')
16:25:49 + set +h
16:25:49 + for i in $(echo "$_setopts" | sed 's/./& /g')
16:25:49 + set +x
16:25:49 + echo 'RESULT: 1'
16:25:49 RESULT: 1
16:25:49 + exit 1
16:25:49 + on_exit
16:25:49 + rc=1
16:25:49 + [[ -n /w/workspace/policy-pap-master-project-csit-pap ]]
16:25:49 + docker ps --format 'table {{ .Names }}\t{{ .Status }}'
16:25:49 NAMES STATUS
16:25:49 policy-apex-pdp Up 2 minutes
16:25:49 policy-pap Up 2 minutes
16:25:49 policy-api Up 2 minutes
16:25:49 grafana Up 2 minutes
16:25:49 kafka Up 2 minutes
16:25:49 compose_zookeeper_1 Up 2 minutes
16:25:49 simulator Up 2 minutes
16:25:49 prometheus Up 2 minutes
16:25:49 mariadb Up 2 minutes
16:25:49 + docker_stats
16:25:49 ++ uname -s
16:25:49 + '[' Linux == Darwin ']'
16:25:49 + sh -c 'top -bn1 | head -3'
16:25:49 top - 16:25:49 up 7 min, 0 users, load average: 0.84, 1.33, 0.70
16:25:49 Tasks: 195 total, 1 running, 128 sleeping, 0 stopped, 0 zombie
16:25:49 %Cpu(s): 9.4 us, 1.8 sy, 0.0 ni, 83.0 id, 5.7 wa, 0.0 hi, 0.0 si, 0.0 st
16:25:49 + echo
16:25:49 
16:25:49 + sh -c 'free -h'
16:25:49 total used free shared buff/cache available
16:25:49 Mem: 31G 3.0G 22G 1.3M 6.2G 27G
16:25:49 Swap: 1.0G 0B 1.0G
16:25:49 + echo
16:25:49 
16:25:49 + docker ps --format 'table {{ .Names }}\t{{ .Status }}'
16:25:49 NAMES STATUS
16:25:49 policy-apex-pdp Up 2 minutes
16:25:49 policy-pap Up 2 minutes
16:25:49 policy-api Up 2 minutes
16:25:49 grafana Up 2 minutes
16:25:49 kafka Up 2 minutes
16:25:49 compose_zookeeper_1 Up 2 minutes
16:25:49 simulator Up 2 minutes
16:25:49 prometheus Up 2 minutes
16:25:49 mariadb Up 2 minutes
16:25:49 + echo
16:25:49 
16:25:49 + docker stats --no-stream
16:25:52 CONTAINER ID NAME CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O PIDS
16:25:52 11115781713d policy-apex-pdp 0.48% 193.6MiB / 31.41GiB 0.60% 57.4kB / 81.9kB 0B / 0B 52
16:25:52 6570ad387f23 policy-pap 0.38% 654.2MiB / 31.41GiB 2.03% 2.34MB / 818kB 0B / 153MB 69
16:25:52 587031420d2e policy-api 0.10% 604.9MiB / 31.41GiB 1.88% 2.49MB / 1.27MB 0B / 0B 58
16:25:52 2fc7360130ba grafana 0.04% 63.07MiB / 31.41GiB 0.20% 19.8kB / 4.33kB 0B / 24MB 17
16:25:52 f744b0809568 kafka 3.54% 390.3MiB / 31.41GiB 1.21% 241kB / 216kB 0B / 586kB 85
16:25:52 a19483e1dacd compose_zookeeper_1 0.11% 96.52MiB / 31.41GiB 0.30% 56.5kB / 48.4kB 4.1kB / 434kB 60
16:25:52 a72de0c2037a simulator 0.07% 119.8MiB / 31.41GiB 0.37% 1.58kB / 0B 225kB / 0B 78
16:25:52 2cdf50104aef prometheus 0.00% 25.44MiB / 31.41GiB 0.08% 179kB / 10.2kB 0B / 0B 11
16:25:52 ee40002f230d mariadb 0.02% 103MiB / 31.41GiB 0.32% 1.95MB / 4.77MB 11MB / 67.2MB 28
16:25:52 + echo
16:25:52 
16:25:52 + source_safely /w/workspace/policy-pap-master-project-csit-pap/compose/stop-compose.sh
16:25:52 + '[' -z /w/workspace/policy-pap-master-project-csit-pap/compose/stop-compose.sh ']'
16:25:52 + relax_set
16:25:52 + set +e
16:25:52 + set +o pipefail
16:25:52 + . /w/workspace/policy-pap-master-project-csit-pap/compose/stop-compose.sh
16:25:52 ++ echo 'Shut down started!'
16:25:52 Shut down started!
16:25:52 ++ '[' -z /w/workspace/policy-pap-master-project-csit-pap ']'
16:25:52 ++ COMPOSE_FOLDER=/w/workspace/policy-pap-master-project-csit-pap/compose
16:25:52 ++ cd /w/workspace/policy-pap-master-project-csit-pap/compose
16:25:52 ++ source export-ports.sh
16:25:52 ++ source get-versions.sh
16:25:55 ++ echo 'Collecting logs from docker compose containers...'
16:25:55 Collecting logs from docker compose containers...
16:25:55 ++ docker-compose logs
16:25:57 ++ cat docker_compose.log
16:25:57 Attaching to policy-apex-pdp, policy-pap, policy-api, policy-db-migrator, grafana, kafka, compose_zookeeper_1, simulator, prometheus, mariadb
16:25:57 zookeeper_1 | ===> User
16:25:57 zookeeper_1 | uid=1000(appuser) gid=1000(appuser) groups=1000(appuser)
16:25:57 zookeeper_1 | ===> Configuring ...
16:25:57 zookeeper_1 | ===> Running preflight checks ...
16:25:57 zookeeper_1 | ===> Check if /var/lib/zookeeper/data is writable ...
16:25:57 zookeeper_1 | ===> Check if /var/lib/zookeeper/log is writable ...
16:25:57 zookeeper_1 | ===> Launching ...
16:25:57 zookeeper_1 | ===> Launching zookeeper ...
16:25:57 zookeeper_1 | [2024-02-21 16:23:13,410] INFO Reading configuration from: /etc/kafka/zookeeper.properties (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
16:25:57 zookeeper_1 | [2024-02-21 16:23:13,418] INFO clientPortAddress is 0.0.0.0:2181 (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
16:25:57 zookeeper_1 | [2024-02-21 16:23:13,418] INFO secureClientPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
16:25:57 zookeeper_1 | [2024-02-21 16:23:13,418] INFO observerMasterPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
16:25:57 zookeeper_1 | [2024-02-21 16:23:13,418] INFO metricsProvider.className is org.apache.zookeeper.metrics.impl.DefaultMetricsProvider (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
16:25:57 zookeeper_1 | [2024-02-21 16:23:13,420] INFO autopurge.snapRetainCount set to 3 (org.apache.zookeeper.server.DatadirCleanupManager)
16:25:57 zookeeper_1 | [2024-02-21 16:23:13,420] INFO autopurge.purgeInterval set to 0 (org.apache.zookeeper.server.DatadirCleanupManager)
16:25:57 zookeeper_1 | [2024-02-21 16:23:13,420] INFO Purge task is not scheduled. (org.apache.zookeeper.server.DatadirCleanupManager)
16:25:57 zookeeper_1 | [2024-02-21 16:23:13,420] WARN Either no config or no quorum defined in config, running in standalone mode (org.apache.zookeeper.server.quorum.QuorumPeerMain)
16:25:57 zookeeper_1 | [2024-02-21 16:23:13,421] INFO Log4j 1.2 jmx support not found; jmx disabled. (org.apache.zookeeper.jmx.ManagedUtil)
16:25:57 zookeeper_1 | [2024-02-21 16:23:13,421] INFO Reading configuration from: /etc/kafka/zookeeper.properties (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
16:25:57 zookeeper_1 | [2024-02-21 16:23:13,422] INFO clientPortAddress is 0.0.0.0:2181 (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
16:25:57 zookeeper_1 | [2024-02-21 16:23:13,422] INFO secureClientPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
16:25:57 zookeeper_1 | [2024-02-21 16:23:13,422] INFO observerMasterPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
16:25:57 zookeeper_1 | [2024-02-21 16:23:13,422] INFO metricsProvider.className is org.apache.zookeeper.metrics.impl.DefaultMetricsProvider (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
16:25:57 zookeeper_1 | [2024-02-21 16:23:13,422] INFO Starting server (org.apache.zookeeper.server.ZooKeeperServerMain)
16:25:57 zookeeper_1 | [2024-02-21 16:23:13,436] INFO ServerMetrics initialized with provider org.apache.zookeeper.metrics.impl.DefaultMetricsProvider@26275bef (org.apache.zookeeper.server.ServerMetrics)
16:25:57 zookeeper_1 | [2024-02-21 16:23:13,439] INFO ACL digest algorithm is: SHA1 (org.apache.zookeeper.server.auth.DigestAuthenticationProvider)
16:25:57 zookeeper_1 | [2024-02-21 16:23:13,439] INFO zookeeper.DigestAuthenticationProvider.enabled = true (org.apache.zookeeper.server.auth.DigestAuthenticationProvider)
16:25:57 zookeeper_1 | [2024-02-21 16:23:13,443] INFO zookeeper.snapshot.trust.empty : false (org.apache.zookeeper.server.persistence.FileTxnSnapLog)
16:25:57 zookeeper_1 | [2024-02-21 16:23:13,454] INFO (org.apache.zookeeper.server.ZooKeeperServer)
16:25:57 zookeeper_1 | [2024-02-21 16:23:13,454] INFO ______ _ (org.apache.zookeeper.server.ZooKeeperServer)
16:25:57 zookeeper_1 | [2024-02-21 16:23:13,454] INFO |___ / | | (org.apache.zookeeper.server.ZooKeeperServer)
16:25:57 zookeeper_1 | [2024-02-21 16:23:13,454] INFO / / ___ ___ | | __ ___ ___ _ __ ___ _ __ (org.apache.zookeeper.server.ZooKeeperServer)
16:25:57 zookeeper_1 | [2024-02-21 16:23:13,454] INFO / / / _ \ / _ \ | |/ / / _ \ / _ \ | '_ \ / _ \ | '__| (org.apache.zookeeper.server.ZooKeeperServer)
16:25:57 zookeeper_1 | [2024-02-21 16:23:13,454] INFO / /__ | (_) | | (_) | | < | __/ | __/ | |_) | | __/ | | (org.apache.zookeeper.server.ZooKeeperServer)
16:25:57 zookeeper_1 | [2024-02-21 16:23:13,454] INFO /_____| \___/ \___/ |_|\_\ \___| \___| | .__/ \___| |_| (org.apache.zookeeper.server.ZooKeeperServer)
16:25:57 grafana | logger=settings t=2024-02-21T16:23:14.095123062Z level=info msg="Starting Grafana" version=10.3.3 commit=252761264e22ece57204b327f9130d3b44592c01 branch=HEAD compiled=2024-02-21T16:23:14Z
16:25:57 grafana | logger=settings t=2024-02-21T16:23:14.095552165Z level=info msg="Config loaded from" file=/usr/share/grafana/conf/defaults.ini
16:25:57 grafana | logger=settings t=2024-02-21T16:23:14.095567685Z level=info msg="Config loaded from" file=/etc/grafana/grafana.ini
16:25:57 grafana | logger=settings t=2024-02-21T16:23:14.095571365Z level=info msg="Config overridden from command line" arg="default.paths.data=/var/lib/grafana"
16:25:57 grafana | logger=settings t=2024-02-21T16:23:14.095574335Z level=info msg="Config overridden from command line" arg="default.paths.logs=/var/log/grafana"
16:25:57 grafana | logger=settings t=2024-02-21T16:23:14.095577005Z level=info msg="Config overridden from command line" arg="default.paths.plugins=/var/lib/grafana/plugins"
16:25:57 grafana | logger=settings t=2024-02-21T16:23:14.095579615Z level=info msg="Config overridden from command line" arg="default.paths.provisioning=/etc/grafana/provisioning"
16:25:57 grafana | logger=settings t=2024-02-21T16:23:14.095582545Z level=info msg="Config overridden from command line" arg="default.log.mode=console"
16:25:57 grafana | logger=settings t=2024-02-21T16:23:14.095585475Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_DATA=/var/lib/grafana"
16:25:57 grafana | logger=settings t=2024-02-21T16:23:14.095588345Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_LOGS=/var/log/grafana"
16:25:57 grafana | logger=settings t=2024-02-21T16:23:14.095591295Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_PLUGINS=/var/lib/grafana/plugins"
16:25:57 grafana | logger=settings t=2024-02-21T16:23:14.095594265Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_PROVISIONING=/etc/grafana/provisioning"
16:25:57 grafana | logger=settings t=2024-02-21T16:23:14.095596945Z level=info msg=Target target=[all]
16:25:57 grafana | logger=settings t=2024-02-21T16:23:14.095714176Z level=info msg="Path Home" path=/usr/share/grafana
16:25:57 grafana | logger=settings t=2024-02-21T16:23:14.095728356Z level=info msg="Path Data" path=/var/lib/grafana
16:25:57 grafana | logger=settings t=2024-02-21T16:23:14.095731506Z level=info msg="Path Logs" path=/var/log/grafana
16:25:57 grafana | logger=settings t=2024-02-21T16:23:14.095733956Z level=info msg="Path Plugins" path=/var/lib/grafana/plugins
16:25:57 grafana | logger=settings t=2024-02-21T16:23:14.095740576Z level=info msg="Path Provisioning" path=/etc/grafana/provisioning
16:25:57 grafana | logger=settings t=2024-02-21T16:23:14.095744396Z level=info msg="App mode production"
16:25:57 grafana | logger=sqlstore t=2024-02-21T16:23:14.096147829Z level=info msg="Connecting to DB" dbtype=sqlite3
16:25:57 grafana | logger=sqlstore t=2024-02-21T16:23:14.096168419Z level=info msg="Creating SQLite database file" path=/var/lib/grafana/grafana.db
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.097476046Z level=info msg="Starting DB migrations"
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.098462803Z level=info msg="Executing migration" id="create migration_log table"
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.099295778Z level=info msg="Migration successfully executed" id="create migration_log table" duration=832.375µs
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.103577384Z level=info msg="Executing migration" id="create user table"
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.104702482Z level=info msg="Migration successfully executed" id="create user table" duration=1.122718ms
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.109776983Z level=info msg="Executing migration" id="add unique index user.login"
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.110678878Z level=info msg="Migration successfully executed" id="add unique index user.login" duration=902.225µs
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.114114819Z level=info msg="Executing migration" id="add unique index user.email"
16:25:57 zookeeper_1 | [2024-02-21 16:23:13,454] INFO | | (org.apache.zookeeper.server.ZooKeeperServer)
16:25:57 zookeeper_1 | [2024-02-21 16:23:13,454] INFO |_| (org.apache.zookeeper.server.ZooKeeperServer)
16:25:57 zookeeper_1 | [2024-02-21 16:23:13,454] INFO (org.apache.zookeeper.server.ZooKeeperServer)
16:25:57 zookeeper_1 | [2024-02-21 16:23:13,455] INFO Server environment:zookeeper.version=3.8.3-6ad6d364c7c0bcf0de452d54ebefa3058098ab56, built on 2023-10-05 10:34 UTC (org.apache.zookeeper.server.ZooKeeperServer)
16:25:57 zookeeper_1 | [2024-02-21 16:23:13,455] INFO Server environment:host.name=a19483e1dacd (org.apache.zookeeper.server.ZooKeeperServer)
16:25:57 zookeeper_1 | [2024-02-21 16:23:13,455] INFO Server environment:java.version=11.0.21 (org.apache.zookeeper.server.ZooKeeperServer)
16:25:57 zookeeper_1 | [2024-02-21 16:23:13,455] INFO Server environment:java.vendor=Azul Systems, Inc. (org.apache.zookeeper.server.ZooKeeperServer)
16:25:57 zookeeper_1 | [2024-02-21 16:23:13,455] INFO Server environment:java.home=/usr/lib/jvm/java-11-zulu-openjdk-ca (org.apache.zookeeper.server.ZooKeeperServer)
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.114958714Z level=info msg="Migration successfully executed" id="add unique index user.email" duration=843.695µs
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.118213384Z level=info msg="Executing migration" id="drop index UQE_user_login - v1"
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.119024349Z level=info msg="Migration successfully executed" id="drop index UQE_user_login - v1" duration=810.845µs
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.124166901Z level=info msg="Executing migration" id="drop index UQE_user_email - v1"
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.125012636Z level=info msg="Migration successfully executed" id="drop index UQE_user_email - v1" duration=846.035µs
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.128218976Z level=info msg="Executing migration" id="Rename table user to user_v1 - v1"
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.131288685Z level=info msg="Migration successfully executed" id="Rename table user to user_v1 - v1" duration=3.068649ms
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.134339193Z level=info msg="Executing migration" id="create user table v2"
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.135172379Z level=info msg="Migration successfully executed" id="create user table v2" duration=832.386µs
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.138438099Z level=info msg="Executing migration" id="create index UQE_user_login - v2"
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.139369984Z level=info msg="Migration successfully executed" id="create index UQE_user_login - v2" duration=931.215µs
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.144164365Z level=info msg="Executing migration" id="create index UQE_user_email - v2"
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.145058149Z level=info msg="Migration successfully executed" id="create index UQE_user_email - v2" duration=892.144µs
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.14833499Z level=info msg="Executing migration" id="copy data_source v1 to v2"
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.148847683Z level=info msg="Migration successfully executed" id="copy data_source v1 to v2" duration=511.973µs
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.152281174Z level=info msg="Executing migration" id="Drop old table user_v1"
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.152932148Z level=info msg="Migration successfully executed" id="Drop old table user_v1" duration=650.254µs
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.157775158Z level=info msg="Executing migration" id="Add column help_flags1 to user table"
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.159638259Z level=info msg="Migration successfully executed" id="Add column help_flags1 to user table" duration=1.862431ms
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.163409353Z level=info msg="Executing migration" id="Update user table charset"
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.163452483Z level=info msg="Migration successfully executed" id="Update user table charset" duration=44.91µs
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.167405657Z level=info msg="Executing migration" id="Add last_seen_at column to user"
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.168758185Z level=info msg="Migration successfully executed" id="Add last_seen_at column to user" duration=1.351818ms
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.171857775Z level=info msg="Executing migration" id="Add missing user data"
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.172273217Z level=info msg="Migration successfully executed" id="Add missing user data" duration=416.912µs
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.177334029Z level=info msg="Executing migration" id="Add is_disabled column to user"
16:25:57 zookeeper_1 | [2024-02-21 16:23:13,455] INFO Server environment:java.class.path=/usr/bin/../share/java/kafka/scala-logging_2.13-3.9.4.jar:/usr/bin/../share/java/kafka/jersey-common-2.39.1.jar:/usr/bin/../share/java/kafka/swagger-annotations-2.2.8.jar:/usr/bin/../share/java/kafka/jose4j-0.9.3.jar:/usr/bin/../share/java/kafka/commons-validator-1.7.jar:/usr/bin/../share/java/kafka/javax.servlet-api-3.1.0.jar:/usr/bin/../share/java/kafka/aopalliance-repackaged-2.6.1.jar:/usr/bin/../share/java/kafka/netty-transport-4.1.100.Final.jar:/usr/bin/../share/java/kafka/rocksdbjni-7.9.2.jar:/usr/bin/../share/java/kafka/jackson-annotations-2.13.5.jar:/usr/bin/../share/java/kafka/commons-io-2.11.0.jar:/usr/bin/../share/java/kafka/javax.activation-api-1.2.0.jar:/usr/bin/../share/java/kafka/connect-mirror-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/commons-cli-1.4.jar:/usr/bin/../share/java/kafka/slf4j-reload4j-1.7.36.jar:/usr/bin/../share/java/kafka/scala-reflect-2.13.11.jar:/usr/bin/../share/java/kafka/netty-buffer-4.1.100.Final.jar:/usr/bin/../share/java/kafka/jline-3.22.0.jar:/usr/bin/../share/java/kafka/jakarta.ws.rs-api-2.1.6.jar:/usr/bin/../share/java/kafka/jakarta.annotation-api-1.3.5.jar:/usr/bin/../share/java/kafka/scala-java8-compat_2.13-1.0.2.jar:/usr/bin/../share/java/kafka/javax.ws.rs-api-2.1.1.jar:/usr/bin/../share/java/kafka/kafka-server-common-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-tools-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-log4j-appender-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/connect-json-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/netty-resolver-4.1.100.Final.jar:/usr/bin/../share/java/kafka/hk2-api-2.6.1.jar:/usr/bin/../share/java/kafka/zstd-jni-1.5.5-1.jar:/usr/bin/../share/java/kafka/jetty-http-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/jackson-dataformat-csv-2.13.5.jar:/usr/bin/../share/java/kafka/kafka.jar:/usr/bin/../share/java/kafka/jetty-servlets-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/scala-library-2.13.11.jar:/usr/bin/../share/java/kafka/jakarta.inject-2.6.1.jar:/usr/bin/../share/java/kafka/jetty-continuation-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/jakarta.xml.bind-api-2.3.3.jar:/usr/bin/../share/java/kafka/connect-basic-auth-extension-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/hk2-locator-2.6.1.jar:/usr/bin/../share/java/kafka/reflections-0.10.2.jar:/usr/bin/../share/java/kafka/slf4j-api-1.7.36.jar:/usr/bin/../share/java/kafka/jetty-server-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/paranamer-2.8.jar:/usr/bin/../share/java/kafka/commons-beanutils-1.9.4.jar:/usr/bin/../share/java/kafka/jaxb-api-2.3.1.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-2.39.1.jar:/usr/bin/../share/java/kafka/hk2-utils-2.6.1.jar:/usr/bin/../share/java/kafka/jackson-module-scala_2.13-2.13.5.jar:/usr/bin/../share/java/kafka/kafka-streams-examples-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/reload4j-1.2.25.jar:/usr/bin/../share/java/kafka/connect-transforms-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/jackson-core-2.13.5.jar:/usr/bin/../share/java/kafka/jersey-hk2-2.39.1.jar:/usr/bin/../share/java/kafka/jackson-databind-2.13.5.jar:/usr/bin/../share/java/kafka/jersey-client-2.39.1.jar:/usr/bin/../share/java/kafka/jetty-io-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/osgi-resource-locator-1.0.3.jar:/usr/bin/../share/java/kafka/connect-api-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/commons-digester-2.1.jar:/usr/bin/../share/java/kafka/netty-transport-native-epoll-4.1.100.Final.jar:/usr/bin/../share/java/kafka/argparse4j-0.7.0.jar:/usr/bin/../share/java/kafka/jackson-datatype-jdk8-2.13.5.jar:/usr/bin/../share/java/kafka/kafka-raft-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-streams-scala_2.13-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-group-coordinator-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/audience-annotations-0.12.0.jar:/usr/bin/../share/java/kafka/jackson-module-jaxb-annotations-2.13.5.jar:/usr/bin/../share/java/kafka/javax.annotation-api-1.3.2.jar:/usr/bin/../share/java/kafka/kafka-storage-api-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/maven-artifact-3.8.8.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-json-provider-2.13.5.jar:/usr/bin/../share/java/kafka/kafka-metadata-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/jakarta.validation-api-2.0.2.jar:/usr/bin/../share/java/kafka/jersey-server-2.39.1.jar:/usr/bin/../share/java/kafka/commons-lang3-3.8.1.jar:/usr/bin/../share/java/kafka/kafka-streams-test-utils-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/jopt-simple-5.0.4.jar:/usr/bin/../share/java/kafka/error_prone_annotations-2.10.0.jar:/usr/bin/../share/java/kafka/lz4-java-1.8.0.jar:/usr/bin/../share/java/kafka/jakarta.activation-api-1.2.2.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-core-2.39.1.jar:/usr/bin/../share/java/kafka/checker-qual-3.19.0.jar:/usr/bin/../share/java/kafka/connect-mirror-client-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/pcollections-4.0.1.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-base-2.13.5.jar:/usr/bin/../share/java/kafka/jetty-client-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/jetty-util-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/kafka-tools-api-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-streams-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka_2.13-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-clients-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/commons-logging-1.2.jar:/usr/bin/../share/java/kafka/jsr305-3.0.2.jar:/usr/bin/../share/java/kafka/trogdor-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-shell-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/netty-codec-4.1.100.Final.jar:/usr/bin/../share/java/kafka/netty-transport-native-unix-common-4.1.100.Final.jar:/usr/bin/../share/java/kafka/scala
-collection-compat_2.13-2.10.0.jar:/usr/bin/../share/java/kafka/metrics-core-2.2.0.jar:/usr/bin/../share/java/kafka/zookeeper-jute-3.8.3.jar:/usr/bin/../share/java/kafka/connect-runtime-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-storage-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/netty-handler-4.1.100.Final.jar:/usr/bin/../share/java/kafka/commons-collections-3.2.2.jar:/usr/bin/../share/java/kafka/javassist-3.29.2-GA.jar:/usr/bin/../share/java/kafka/caffeine-2.9.3.jar:/usr/bin/../share/java/kafka/plexus-utils-3.3.1.jar:/usr/bin/../share/java/kafka/zookeeper-3.8.3.jar:/usr/bin/../share/java/kafka/activation-1.1.1.jar:/usr/bin/../share/java/kafka/netty-common-4.1.100.Final.jar:/usr/bin/../share/java/kafka/metrics-core-4.1.12.1.jar:/usr/bin/../share/java/kafka/jetty-servlet-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/jetty-util-ajax-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/snappy-java-1.1.10.5.jar:/usr/bin/../share/java/kafka/netty-transport-classes-epoll-4.1.100.Final.jar:/usr/bin/../share/java/kafka/jetty-security-9.4.53.v20231009.jar:/usr/bin/../share/java/confluent-telemetry/* (org.apache.zookeeper.server.ZooKeeperServer) 16:25:57 zookeeper_1 | [2024-02-21 16:23:13,455] INFO Server environment:java.library.path=/usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.server.ZooKeeperServer) 16:25:57 zookeeper_1 | [2024-02-21 16:23:13,455] INFO Server environment:java.io.tmpdir=/tmp (org.apache.zookeeper.server.ZooKeeperServer) 16:25:57 zookeeper_1 | [2024-02-21 16:23:13,455] INFO Server environment:java.compiler= (org.apache.zookeeper.server.ZooKeeperServer) 16:25:57 zookeeper_1 | [2024-02-21 16:23:13,455] INFO Server environment:os.name=Linux (org.apache.zookeeper.server.ZooKeeperServer) 16:25:57 zookeeper_1 | [2024-02-21 16:23:13,456] INFO Server environment:os.arch=amd64 (org.apache.zookeeper.server.ZooKeeperServer) 16:25:57 zookeeper_1 | [2024-02-21 16:23:13,456] INFO Server environment:os.version=4.15.0-192-generic 
(org.apache.zookeeper.server.ZooKeeperServer) 16:25:57 zookeeper_1 | [2024-02-21 16:23:13,456] INFO Server environment:user.name=appuser (org.apache.zookeeper.server.ZooKeeperServer) 16:25:57 zookeeper_1 | [2024-02-21 16:23:13,456] INFO Server environment:user.home=/home/appuser (org.apache.zookeeper.server.ZooKeeperServer) 16:25:57 zookeeper_1 | [2024-02-21 16:23:13,456] INFO Server environment:user.dir=/home/appuser (org.apache.zookeeper.server.ZooKeeperServer) 16:25:57 zookeeper_1 | [2024-02-21 16:23:13,456] INFO Server environment:os.memory.free=490MB (org.apache.zookeeper.server.ZooKeeperServer) 16:25:57 zookeeper_1 | [2024-02-21 16:23:13,456] INFO Server environment:os.memory.max=512MB (org.apache.zookeeper.server.ZooKeeperServer) 16:25:57 zookeeper_1 | [2024-02-21 16:23:13,456] INFO Server environment:os.memory.total=512MB (org.apache.zookeeper.server.ZooKeeperServer) 16:25:57 zookeeper_1 | [2024-02-21 16:23:13,456] INFO zookeeper.enableEagerACLCheck = false (org.apache.zookeeper.server.ZooKeeperServer) 16:25:57 zookeeper_1 | [2024-02-21 16:23:13,456] INFO zookeeper.digest.enabled = true (org.apache.zookeeper.server.ZooKeeperServer) 16:25:57 zookeeper_1 | [2024-02-21 16:23:13,456] INFO zookeeper.closeSessionTxn.enabled = true (org.apache.zookeeper.server.ZooKeeperServer) 16:25:57 zookeeper_1 | [2024-02-21 16:23:13,456] INFO zookeeper.flushDelay = 0 ms (org.apache.zookeeper.server.ZooKeeperServer) 16:25:57 zookeeper_1 | [2024-02-21 16:23:13,456] INFO zookeeper.maxWriteQueuePollTime = 0 ms (org.apache.zookeeper.server.ZooKeeperServer) 16:25:57 zookeeper_1 | [2024-02-21 16:23:13,456] INFO zookeeper.maxBatchSize=1000 (org.apache.zookeeper.server.ZooKeeperServer) 16:25:57 zookeeper_1 | [2024-02-21 16:23:13,456] INFO zookeeper.intBufferStartingSizeBytes = 1024 (org.apache.zookeeper.server.ZooKeeperServer) 16:25:57 zookeeper_1 | [2024-02-21 16:23:13,457] INFO Weighed connection throttling is disabled (org.apache.zookeeper.server.BlueThrottle) 16:25:57 zookeeper_1 | 
[2024-02-21 16:23:13,458] INFO minSessionTimeout set to 4000 ms (org.apache.zookeeper.server.ZooKeeperServer) 16:25:57 zookeeper_1 | [2024-02-21 16:23:13,458] INFO maxSessionTimeout set to 40000 ms (org.apache.zookeeper.server.ZooKeeperServer) 16:25:57 zookeeper_1 | [2024-02-21 16:23:13,459] INFO getData response cache size is initialized with value 400. (org.apache.zookeeper.server.ResponseCache) 16:25:57 zookeeper_1 | [2024-02-21 16:23:13,459] INFO getChildren response cache size is initialized with value 400. (org.apache.zookeeper.server.ResponseCache) 16:25:57 zookeeper_1 | [2024-02-21 16:23:13,460] INFO zookeeper.pathStats.slotCapacity = 60 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) 16:25:57 zookeeper_1 | [2024-02-21 16:23:13,460] INFO zookeeper.pathStats.slotDuration = 15 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) 16:25:57 zookeeper_1 | [2024-02-21 16:23:13,460] INFO zookeeper.pathStats.maxDepth = 6 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) 16:25:57 zookeeper_1 | [2024-02-21 16:23:13,460] INFO zookeeper.pathStats.initialDelay = 5 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) 16:25:57 zookeeper_1 | [2024-02-21 16:23:13,460] INFO zookeeper.pathStats.delay = 5 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) 16:25:57 zookeeper_1 | [2024-02-21 16:23:13,460] INFO zookeeper.pathStats.enabled = false (org.apache.zookeeper.server.util.RequestPathMetricsCollector) 16:25:57 zookeeper_1 | [2024-02-21 16:23:13,462] INFO The max bytes for all large requests are set to 104857600 (org.apache.zookeeper.server.ZooKeeperServer) 16:25:57 zookeeper_1 | [2024-02-21 16:23:13,462] INFO The large request threshold is set to -1 (org.apache.zookeeper.server.ZooKeeperServer) 16:25:57 zookeeper_1 | [2024-02-21 16:23:13,463] INFO zookeeper.enforce.auth.enabled = false (org.apache.zookeeper.server.AuthenticationHelper) 16:25:57 zookeeper_1 | [2024-02-21 16:23:13,463] INFO 
zookeeper.enforce.auth.schemes = [] (org.apache.zookeeper.server.AuthenticationHelper) 16:25:57 zookeeper_1 | [2024-02-21 16:23:13,463] INFO Created server with tickTime 2000 ms minSessionTimeout 4000 ms maxSessionTimeout 40000 ms clientPortListenBacklog -1 datadir /var/lib/zookeeper/log/version-2 snapdir /var/lib/zookeeper/data/version-2 (org.apache.zookeeper.server.ZooKeeperServer) 16:25:57 zookeeper_1 | [2024-02-21 16:23:13,483] INFO Logging initialized @606ms to org.eclipse.jetty.util.log.Slf4jLog (org.eclipse.jetty.util.log) 16:25:57 zookeeper_1 | [2024-02-21 16:23:13,582] WARN o.e.j.s.ServletContextHandler@5be1d0a4{/,null,STOPPED} contextPath ends with /* (org.eclipse.jetty.server.handler.ContextHandler) 16:25:57 zookeeper_1 | [2024-02-21 16:23:13,582] WARN Empty contextPath (org.eclipse.jetty.server.handler.ContextHandler) 16:25:57 zookeeper_1 | [2024-02-21 16:23:13,603] INFO jetty-9.4.53.v20231009; built: 2023-10-09T12:29:09.265Z; git: 27bde00a0b95a1d5bbee0eae7984f891d2d0f8c9; jvm 11.0.21+9-LTS (org.eclipse.jetty.server.Server) 16:25:57 zookeeper_1 | [2024-02-21 16:23:13,634] INFO DefaultSessionIdManager workerName=node0 (org.eclipse.jetty.server.session) 16:25:57 zookeeper_1 | [2024-02-21 16:23:13,635] INFO No SessionScavenger set, using defaults (org.eclipse.jetty.server.session) 16:25:57 zookeeper_1 | [2024-02-21 16:23:13,636] INFO node0 Scavenging every 600000ms (org.eclipse.jetty.server.session) 16:25:57 zookeeper_1 | [2024-02-21 16:23:13,639] WARN ServletContext@o.e.j.s.ServletContextHandler@5be1d0a4{/,null,STARTING} has uncovered http methods for path: /* (org.eclipse.jetty.security.SecurityHandler) 16:25:57 zookeeper_1 | [2024-02-21 16:23:13,650] INFO Started o.e.j.s.ServletContextHandler@5be1d0a4{/,null,AVAILABLE} (org.eclipse.jetty.server.handler.ContextHandler) 16:25:57 zookeeper_1 | [2024-02-21 16:23:13,661] INFO Started ServerConnector@4f32a3ad{HTTP/1.1, (http/1.1)}{0.0.0.0:8080} (org.eclipse.jetty.server.AbstractConnector) 16:25:57 zookeeper_1 
| [2024-02-21 16:23:13,662] INFO Started @784ms (org.eclipse.jetty.server.Server) 16:25:57 zookeeper_1 | [2024-02-21 16:23:13,662] INFO Started AdminServer on address 0.0.0.0, port 8080 and command URL /commands (org.apache.zookeeper.server.admin.JettyAdminServer) 16:25:57 zookeeper_1 | [2024-02-21 16:23:13,666] INFO Using org.apache.zookeeper.server.NIOServerCnxnFactory as server connection factory (org.apache.zookeeper.server.ServerCnxnFactory) 16:25:57 zookeeper_1 | [2024-02-21 16:23:13,667] WARN maxCnxns is not configured, using default value 0. (org.apache.zookeeper.server.ServerCnxnFactory) 16:25:57 zookeeper_1 | [2024-02-21 16:23:13,668] INFO Configuring NIO connection handler with 10s sessionless connection timeout, 2 selector thread(s), 16 worker threads, and 64 kB direct buffers. (org.apache.zookeeper.server.NIOServerCnxnFactory) 16:25:57 zookeeper_1 | [2024-02-21 16:23:13,670] INFO binding to port 0.0.0.0/0.0.0.0:2181 (org.apache.zookeeper.server.NIOServerCnxnFactory) 16:25:57 zookeeper_1 | [2024-02-21 16:23:13,686] INFO Using org.apache.zookeeper.server.watch.WatchManager as watch manager (org.apache.zookeeper.server.watch.WatchManagerFactory) 16:25:57 zookeeper_1 | [2024-02-21 16:23:13,687] INFO Using org.apache.zookeeper.server.watch.WatchManager as watch manager (org.apache.zookeeper.server.watch.WatchManagerFactory) 16:25:57 zookeeper_1 | [2024-02-21 16:23:13,688] INFO zookeeper.snapshotSizeFactor = 0.33 (org.apache.zookeeper.server.ZKDatabase) 16:25:57 zookeeper_1 | [2024-02-21 16:23:13,688] INFO zookeeper.commitLogCount=500 (org.apache.zookeeper.server.ZKDatabase) 16:25:57 zookeeper_1 | [2024-02-21 16:23:13,693] INFO zookeeper.snapshot.compression.method = CHECKED (org.apache.zookeeper.server.persistence.SnapStream) 16:25:57 zookeeper_1 | [2024-02-21 16:23:13,693] INFO Snapshotting: 0x0 to /var/lib/zookeeper/data/version-2/snapshot.0 (org.apache.zookeeper.server.persistence.FileTxnSnapLog) 16:25:57 zookeeper_1 | [2024-02-21 16:23:13,696] INFO 
Snapshot loaded in 8 ms, highest zxid is 0x0, digest is 1371985504 (org.apache.zookeeper.server.ZKDatabase) 16:25:57 zookeeper_1 | [2024-02-21 16:23:13,697] INFO Snapshotting: 0x0 to /var/lib/zookeeper/data/version-2/snapshot.0 (org.apache.zookeeper.server.persistence.FileTxnSnapLog) 16:25:57 zookeeper_1 | [2024-02-21 16:23:13,698] INFO Snapshot taken in 1 ms (org.apache.zookeeper.server.ZooKeeperServer) 16:25:57 zookeeper_1 | [2024-02-21 16:23:13,714] INFO zookeeper.request_throttler.shutdownTimeout = 10000 ms (org.apache.zookeeper.server.RequestThrottler) 16:25:57 zookeeper_1 | [2024-02-21 16:23:13,716] INFO PrepRequestProcessor (sid:0) started, reconfigEnabled=false (org.apache.zookeeper.server.PrepRequestProcessor) 16:25:57 zookeeper_1 | [2024-02-21 16:23:13,727] INFO Using checkIntervalMs=60000 maxPerMinute=10000 maxNeverUsedIntervalMs=0 (org.apache.zookeeper.server.ContainerManager) 16:25:57 zookeeper_1 | [2024-02-21 16:23:13,729] INFO ZooKeeper audit is disabled. (org.apache.zookeeper.audit.ZKAuditProvider) 16:25:57 zookeeper_1 | [2024-02-21 16:23:14,770] INFO Creating new log file: log.1 (org.apache.zookeeper.server.persistence.FileTxnLog) 16:25:57 kafka | ===> User 16:25:57 kafka | uid=1000(appuser) gid=1000(appuser) groups=1000(appuser) 16:25:57 kafka | ===> Configuring ... 16:25:57 kafka | Running in Zookeeper mode... 16:25:57 kafka | ===> Running preflight checks ... 16:25:57 kafka | ===> Check if /var/lib/kafka/data is writable ... 16:25:57 kafka | ===> Check if Zookeeper is healthy ... 16:25:57 kafka | SLF4J: Class path contains multiple SLF4J bindings. 16:25:57 kafka | SLF4J: Found binding in [jar:file:/usr/share/java/kafka/slf4j-reload4j-1.7.36.jar!/org/slf4j/impl/StaticLoggerBinder.class] 16:25:57 kafka | SLF4J: Found binding in [jar:file:/usr/share/java/cp-base-new/slf4j-reload4j-1.7.36.jar!/org/slf4j/impl/StaticLoggerBinder.class] 16:25:57 kafka | SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation. 
16:25:57 kafka | SLF4J: Actual binding is of type [org.slf4j.impl.Reload4jLoggerFactory]
16:25:57 kafka | [2024-02-21 16:23:14,714] INFO Client environment:zookeeper.version=3.8.3-6ad6d364c7c0bcf0de452d54ebefa3058098ab56, built on 2023-10-05 10:34 UTC (org.apache.zookeeper.ZooKeeper)
16:25:57 kafka | [2024-02-21 16:23:14,715] INFO Client environment:host.name=f744b0809568 (org.apache.zookeeper.ZooKeeper)
16:25:57 kafka | [2024-02-21 16:23:14,715] INFO Client environment:java.version=11.0.21 (org.apache.zookeeper.ZooKeeper)
16:25:57 kafka | [2024-02-21 16:23:14,715] INFO Client environment:java.vendor=Azul Systems, Inc. (org.apache.zookeeper.ZooKeeper)
16:25:57 kafka | [2024-02-21 16:23:14,715] INFO Client environment:java.home=/usr/lib/jvm/java-11-zulu-openjdk-ca (org.apache.zookeeper.ZooKeeper)
16:25:57 kafka | [2024-02-21 16:23:14,715] INFO Client environment:java.class.path=/usr/share/java/kafka/scala-logging_2.13-3.9.4.jar:/usr/share/java/kafka/jersey-common-2.39.1.jar:/usr/share/java/kafka/swagger-annotations-2.2.8.jar:/usr/share/java/kafka/jose4j-0.9.3.jar:/usr/share/java/kafka/commons-validator-1.7.jar:/usr/share/java/kafka/javax.servlet-api-3.1.0.jar:/usr/share/java/kafka/aopalliance-repackaged-2.6.1.jar:/usr/share/java/kafka/netty-transport-4.1.100.Final.jar:/usr/share/java/kafka/rocksdbjni-7.9.2.jar:/usr/share/java/kafka/jackson-annotations-2.13.5.jar:/usr/share/java/kafka/commons-io-2.11.0.jar:/usr/share/java/kafka/javax.activation-api-1.2.0.jar:/usr/share/java/kafka/connect-mirror-7.6.0-ccs.jar:/usr/share/java/kafka/commons-cli-1.4.jar:/usr/share/java/kafka/slf4j-reload4j-1.7.36.jar:/usr/share/java/kafka/scala-reflect-2.13.11.jar:/usr/share/java/kafka/netty-buffer-4.1.100.Final.jar:/usr/share/java/kafka/jline-3.22.0.jar:/usr/share/java/kafka/jakarta.ws.rs-api-2.1.6.jar:/usr/share/java/kafka/jakarta.annotation-api-1.3.5.jar:/usr/share/java/kafka/scala-java8-compat_2.13-1.0.2.jar:/usr/share/java/kafka/javax.ws.rs-api-2.1.1.jar:/usr/share/java/kafka/kafka-server-common-7.6.0-ccs.jar:/usr/share/java/kafka/kafka-tools-7.6.0-ccs.jar:/usr/share/java/kafka/kafka-log4j-appender-7.6.0-ccs.jar:/usr/share/java/kafka/connect-json-7.6.0-ccs.jar:/usr/share/java/kafka/netty-resolver-4.1.100.Final.jar:/usr/share/java/kafka/hk2-api-2.6.1.jar:/usr/share/java/kafka/zstd-jni-1.5.5-1.jar:/usr/share/java/kafka/jetty-http-9.4.53.v20231009.jar:/usr/share/java/kafka/jackson-dataformat-csv-2.13.5.jar:/usr/share/java/kafka/kafka.jar:/usr/share/java/kafka/jetty-servlets-9.4.53.v20231009.jar:/usr/share/java/kafka/scala-library-2.13.11.jar:/usr/share/java/kafka/jakarta.inject-2.6.1.jar:/usr/share/java/kafka/jetty-continuation-9.4.53.v20231009.jar:/usr/share/java/kafka/jakarta.xml.bind-api-2.3.3.jar:/usr/share/java/kafka/connect-basic-auth-extension-7.6.0-ccs.jar:/usr/share/java/kafka/hk2-locator-2.6.1.jar:/usr/share/java/kafka/reflections-0.10.2.jar:/usr/share/java/kafka/slf4j-api-1.7.36.jar:/usr/share/java/kafka/jetty-server-9.4.53.v20231009.jar:/usr/share/java/kafka/paranamer-2.8.jar:/usr/share/java/kafka/commons-beanutils-1.9.4.jar:/usr/share/java/kafka/jaxb-api-2.3.1.jar:/usr/share/java/kafka/jersey-container-servlet-2.39.1.jar:/usr/share/java/kafka/hk2-utils-2.6.1.jar:/usr/share/java/kafka/jackson-module-scala_2.13-2.13.5.jar:/usr/share/java/kafka/kafka-streams-examples-7.6.0-ccs.jar:/usr/share/java/kafka/reload4j-1.2.25.jar:/usr/share/java/kafka/connect-transforms-7.6.0-ccs.jar:/usr/share/java/kafka/jackson-core-2.13.5.jar:/usr/share/java/kafka/jersey-hk2-2.39.1.jar:/usr/share/java/kafka/jackson-databind-2.13.5.jar:/usr/share/java/kafka/jersey-client-2.39.1.jar:/usr/share/java/kafka/jetty-io-9.4.53.v20231009.jar:/usr/share/java/kafka/osgi-resource-locator-1.0.3.jar:/usr/share/java/kafka/connect-api-7.6.0-ccs.jar:/usr/share/java/kafka/commons-digester-2.1.jar:/usr/share/java/kafka/netty-transport-native-epoll-4.1.100.Final.jar:/usr/share/java/kafka/argparse4j-0.7.0.jar:/usr/share/java/kafka/jackson-datatype-jdk8-2.13.5.jar:/usr/share/java/kafka/kafka-raft-7.6.0-ccs.jar:/usr/share/java/kafka/kafka-streams-scala_2.13-7.6.0-ccs.jar:/usr/share/java/kafka/kafka-group-coordinator-7.6.0-ccs.jar:/usr/share/java/kafka/audience-annotations-0.12.0.jar:/usr/share/java/kafka/jackson-module-jaxb-annotations-2.13.5.jar:/usr/share/java/kafka/javax.annotation-api-1.3.2.jar:/usr/share/java/kafka/kafka-storage-api-7.6.0-ccs.jar:/usr/share/java/kafka/maven-artifact-3.8.8.jar:/usr/share/java/kafka/jackson-jaxrs-json-provider-2.13.5.jar:/usr/share/java/kafka/kafka-metadata-7.6.0-ccs.jar:/usr/share/java/kafka/jakarta.validation-api-2.0.2.jar:/usr/share/java/kafka/jersey-server-2.39.1.jar:/usr/share/java/kafka/commons-lang3-3.8.1.jar:/usr/share/java/kafka/kafka-streams-test-utils-7.6.0-ccs.jar:/usr/share/java/kafka/jopt-simple-5.0.4.jar:/usr/share/java/kafka/error_prone_annotations-2.10.0.jar:/usr/share/java/kafka/lz4-java-1.8.0.jar:/usr/share/java/kafka/jakarta.activation-api-1.2.2.jar:/usr/share/java/kafka/jersey-container-servlet-core-2.39.1.jar:/usr/share/java/kafka/checker-qual-3.19.0.jar:/usr/share/java/kafka/connect-mirror-client-7.6.0-ccs.jar:/usr/share/java/kafka/pcollections-4.0.1.jar:/usr/share/java/kafka/jackson-jaxrs-base-2.13.5.jar:/usr/share/java/kafka/jetty-client-9.4.53.v20231009.jar:/usr/share/java/kafka/jetty-util-9.4.53.v20231009.jar:/usr/share/java/kafka/kafka-tools-api-7.6.0-ccs.jar:/usr/share/java/kafka/kafka-streams-7.6.0-ccs.jar:/usr/share/java/kafka/kafka_2.13-7.6.0-ccs.jar:/usr/share/java/kafka/kafka-clients-7.6.0-ccs.jar:/usr/share/java/kafka/commons-logging-1.2.jar:/usr/share/java/kafka/jsr305-3.0.2.jar:/usr/share/java/kafka/trogdor-7.6.0-ccs.jar:/usr/share/java/kafka/kafka-shell-7.6.0-ccs.jar:/usr/share/java/kafka/netty-codec-4.1.100.Final.jar:/usr/share/java/kafka/netty-transport-native-unix-common-4.1.100.Final.jar:/usr/share/java/kafka/scala-collection-compat_2.13-2.10.0.jar:/usr/share/java/kafka/metrics-core-2.2.0.jar:/usr/share/java/kafka/zookeeper-jute-3.8.3.jar:/usr/share/java/kafka/connect-runtime-7.6.0-ccs.jar:/usr/share/java/kafka/kafka-storage-7.6.0-ccs.jar:/usr/share/java/kafka/netty-handler-4.1.100.Final.jar:/usr/share/java/kafka/commons-collections-3.2.2.jar:/usr/share/java/kafka/javassist-3.29.2-GA.jar:/usr/share/java/kafka/caffeine-2.9.3.jar:/usr/share/java/kafka/plexus-utils-3.3.1.jar:/usr/share/java/kafka/zookeeper-3.8.3.jar:/usr/share/java/kafka/activation-1.1.1.jar:/usr/share/java/kafka/netty-common-4.1.100.Final.jar:/usr/share/java/kafka/metrics-core-4.1.12.1.jar:/usr/share/java/kafka/jetty-servlet-9.4.53.v20231009.jar:/usr/share/java/kafka/jetty-util-ajax-9.4.53.v20231009.jar:/usr/share/java/kafka/snappy-java-1.1.10.5.jar:/usr/share/java/kafka/netty-transport-classes-epoll-4.1.100.Final.jar:/usr/share/java/kafka/jetty-security-9.4.53.v20231009.jar:/usr/share/java/cp-base-new/scala-logging_2.13-3.9.4.jar:/usr/share/java/cp-base-new/jackson-datatype-jdk8-2.14.2.jar:/usr/share/java/cp-base-new/jose4j-0.9.3.jar:/usr/share/java/cp-base-new/commons-validator-1.7.jar:/usr/share/java/cp-base-new/logredactor-1.0.12.jar:/usr/share/java/cp-base-new/jolokia-core-1.7.1.jar:/usr/share/java/cp-base-new/re2j-1.6.jar:/usr/share/java/cp-base-new/scala-library-2.13.10.jar:/usr/share/java/cp-base-new/commons-io-2.11.0.jar:/usr/share/java/cp-base-new/commons-cli-1.4.jar:/usr/share/java/cp-base-new/slf4j-reload4j-1.7.36.jar:/usr/share/java/cp-base-new/jackson-annotations-2.14.2.jar:/usr/share/java/cp-base-new/json-simple-1.1.1.jar:/usr/share/java/cp-base-new/jackson-module-scala_2.13-2.14.2.jar:/usr/share/java/cp-base-new/scala-java8-compat_2.13-1.0.2.jar:/usr/share/java/cp-base-new/kafka-server-common-7.6.0-ccs.jar:/usr/share/java/cp-base-new/minimal-json-0.9.5.jar:/usr/share/java/cp-base-new/zstd-jni-1.5.5-1.jar:/usr/share/java/cp-base-new/jackson-dataformat-yaml-2.14.2.jar:/usr/share/java/cp-base-new/slf4j-api-1.7.36.jar:/usr/share/java/cp-base-new/paranamer-2.8.jar:/usr/share/java/cp-base-new/commons-beanutils-1.9.4.jar:/usr/share/java/cp-base-new/jmx_prometheus_javaagent-0.18.0.jar:/usr/share/java/cp-base-new/reload4j-1.2.25.jar:/usr/share/java/cp-base-new/jackson-core-2.14.2.jar:/usr/share/java/cp-base-new/common-utils-7.6.0.jar:/usr/share/java/cp-base-new/commons-digester-2.1.jar:/usr/share/java/cp-base-new/argparse4j-0.7.0.jar:/usr/share/java/cp-base-new/kafka-raft-7.6.0-ccs.jar:/usr/share/java/cp-base-new/kafka-group-coordinator-7.6.0-ccs.jar:/usr/share/java/cp-base-new/audience-annotations-0.12.0.jar:/usr/share/java/cp-base-new/gson-2.9.0.jar:/usr/share/java/cp-base-new/snakeyaml-2.0.jar:/usr/share/java/cp-base-new/kafka-storage-api-7.6.0-ccs.jar:/usr/share/java/cp-base-new/kafka-metadata-7.6.0-ccs.jar:/usr/share/java/cp-base-new/disk-usage-agent-7.6.0.jar:/usr/share/java/cp-base-new/jopt-simple-5.0.4.jar:/usr/share/java/cp-base-new/error_prone_annotations-2.10.0.jar:/usr/share/java/cp-base-new/lz4-java-1.8.0.jar:/usr/share/java/cp-base-new/logredactor-metrics-1.0.12.jar:/usr/share/java/cp-base-new/checker-qual-3.19.0.jar:/usr/share/java/cp-base-new/pcollections-4.0.1.jar:/usr/share/java/cp-base-new/kafka-tools-api-7.6.0-ccs.jar:/usr/share/java/cp-base-new/kafka_2.13-7.6.0-ccs.jar:/usr/share/java/cp-base-new/kafka-clients-7.6.0-ccs.jar:/usr/share/java/cp-base-new/commons-logging-1.2.jar:/usr/share/java/cp-base-new/scala-reflect-2.13.10.jar:/usr/share/java/cp-base-new/scala-collection-compat_2.13-2.10.0.jar:/usr/share/java/cp-base-new/metrics-core-2.2.0.jar:/usr/share/java/cp-base-new/zookeeper-jute-3.8.3.jar:/usr/share/java/cp-base-new/jackson-dataformat-csv-2.14.2.jar:/usr/share/java/cp-base-new/utility-belt-7.6.0.jar:/usr/share/java/cp-base-new/kafka-storage-7.6.0-ccs.jar:/usr/share/java/cp-base-new/commons-collections-3.2.2.jar:/usr/share/java/cp-base-new/caffeine-2.9.3.jar:/usr/share/java/cp-base-new/jolokia-jvm-1.7.1.jar:/usr/share/java/cp-base-new/zookeeper-3.8.3.jar:/usr/share/java/cp-base-new/metrics-core-4.1.12.1.jar:/usr/share/java/cp-base-new/jackson-databind-2.14.2.jar:/usr/share/java/cp-base-new/snappy-java-1.1.10.5.jar (org.apache.zookeeper.ZooKeeper)
16:25:57 kafka | [2024-02-21 16:23:14,715] INFO Client environment:java.library.path=/usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.ZooKeeper)
16:25:57 kafka | [2024-02-21 16:23:14,715] INFO Client environment:java.io.tmpdir=/tmp (org.apache.zookeeper.ZooKeeper)
16:25:57 kafka | [2024-02-21 16:23:14,715] INFO Client environment:java.compiler=<NA> (org.apache.zookeeper.ZooKeeper)
16:25:57 kafka | [2024-02-21 16:23:14,715] INFO Client environment:os.name=Linux (org.apache.zookeeper.ZooKeeper)
16:25:57 kafka | [2024-02-21 16:23:14,715] INFO Client environment:os.arch=amd64 (org.apache.zookeeper.ZooKeeper)
16:25:57 kafka | [2024-02-21 16:23:14,715] INFO Client environment:os.version=4.15.0-192-generic (org.apache.zookeeper.ZooKeeper)
16:25:57 kafka | [2024-02-21 16:23:14,715] INFO Client environment:user.name=appuser (org.apache.zookeeper.ZooKeeper)
16:25:57 kafka | [2024-02-21 16:23:14,715] INFO Client environment:user.home=/home/appuser (org.apache.zookeeper.ZooKeeper)
16:25:57 kafka | [2024-02-21 16:23:14,715] INFO Client environment:user.dir=/home/appuser (org.apache.zookeeper.ZooKeeper)
16:25:57 kafka | [2024-02-21 16:23:14,715] INFO Client environment:os.memory.free=487MB (org.apache.zookeeper.ZooKeeper)
16:25:57 kafka | [2024-02-21 16:23:14,715] INFO Client environment:os.memory.max=8042MB (org.apache.zookeeper.ZooKeeper)
16:25:57 kafka | [2024-02-21 16:23:14,715] INFO Client environment:os.memory.total=504MB (org.apache.zookeeper.ZooKeeper)
16:25:57 kafka | [2024-02-21 16:23:14,718] INFO Initiating client connection, connectString=zookeeper:2181 sessionTimeout=40000 watcher=io.confluent.admin.utils.ZookeeperConnectionWatcher@184cf7cf (org.apache.zookeeper.ZooKeeper)
16:25:57 kafka | [2024-02-21 16:23:14,722] INFO Setting -D jdk.tls.rejectClientInitiatedRenegotiation=true to disable client-initiated TLS renegotiation (org.apache.zookeeper.common.X509Util)
16:25:57 kafka | [2024-02-21 16:23:14,726] INFO jute.maxbuffer value is 1048575 Bytes (org.apache.zookeeper.ClientCnxnSocket)
16:25:57 kafka | [2024-02-21 16:23:14,733] INFO zookeeper.request.timeout value is 0. feature enabled=false (org.apache.zookeeper.ClientCnxn)
16:25:57 kafka | [2024-02-21 16:23:14,747] INFO Opening socket connection to server zookeeper/172.17.0.3:2181. (org.apache.zookeeper.ClientCnxn)
16:25:57 kafka | [2024-02-21 16:23:14,747] INFO SASL config status: Will not attempt to authenticate using SASL (unknown error) (org.apache.zookeeper.ClientCnxn)
16:25:57 kafka | [2024-02-21 16:23:14,754] INFO Socket connection established, initiating session, client: /172.17.0.6:55252, server: zookeeper/172.17.0.3:2181 (org.apache.zookeeper.ClientCnxn)
16:25:57 kafka | [2024-02-21 16:23:14,791] INFO Session establishment complete on server zookeeper/172.17.0.3:2181, session id = 0x100000493b70000, negotiated timeout = 40000 (org.apache.zookeeper.ClientCnxn)
16:25:57 kafka | [2024-02-21 16:23:14,912] INFO Session: 0x100000493b70000 closed (org.apache.zookeeper.ZooKeeper)
16:25:57 kafka | [2024-02-21 16:23:14,912] INFO EventThread shut down for session: 0x100000493b70000 (org.apache.zookeeper.ClientCnxn)
16:25:57 kafka | Using log4j config /etc/kafka/log4j.properties
16:25:57 kafka | ===> Launching ...
16:25:57 kafka | ===> Launching kafka ...
16:25:57 kafka | [2024-02-21 16:23:15,596] INFO Registered kafka:type=kafka.Log4jController MBean (kafka.utils.Log4jControllerRegistration$)
16:25:57 kafka | [2024-02-21 16:23:15,950] INFO Setting -D jdk.tls.rejectClientInitiatedRenegotiation=true to disable client-initiated TLS renegotiation (org.apache.zookeeper.common.X509Util)
16:25:57 kafka | [2024-02-21 16:23:16,065] INFO Registered signal handlers for TERM, INT, HUP (org.apache.kafka.common.utils.LoggingSignalHandler)
16:25:57 kafka | [2024-02-21 16:23:16,066] INFO starting (kafka.server.KafkaServer)
16:25:57 kafka | [2024-02-21 16:23:16,066] INFO Connecting to zookeeper on zookeeper:2181 (kafka.server.KafkaServer)
16:25:57 kafka | [2024-02-21 16:23:16,080] INFO [ZooKeeperClient Kafka server] Initializing a new session to zookeeper:2181. (kafka.zookeeper.ZooKeeperClient)
16:25:57 kafka | [2024-02-21 16:23:16,085] INFO Client environment:zookeeper.version=3.8.3-6ad6d364c7c0bcf0de452d54ebefa3058098ab56, built on 2023-10-05 10:34 UTC (org.apache.zookeeper.ZooKeeper)
16:25:57 kafka | [2024-02-21 16:23:16,085] INFO Client environment:host.name=f744b0809568 (org.apache.zookeeper.ZooKeeper)
16:25:57 kafka | [2024-02-21 16:23:16,085] INFO Client environment:java.version=11.0.21 (org.apache.zookeeper.ZooKeeper)
16:25:57 kafka | [2024-02-21 16:23:16,085] INFO Client environment:java.vendor=Azul Systems, Inc. (org.apache.zookeeper.ZooKeeper)
16:25:57 kafka | [2024-02-21 16:23:16,085] INFO Client environment:java.home=/usr/lib/jvm/java-11-zulu-openjdk-ca (org.apache.zookeeper.ZooKeeper)
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.178634096Z level=info msg="Migration successfully executed" id="Add is_disabled column to user" duration=1.299377ms
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.181991687Z level=info msg="Executing migration" id="Add index user.login/user.email"
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.183013413Z level=info msg="Migration successfully executed" id="Add index user.login/user.email" duration=1.021906ms
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.186315743Z level=info msg="Executing migration" id="Add is_service_account column to user"
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.187676932Z level=info msg="Migration successfully executed" id="Add is_service_account column to user" duration=1.358209ms
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.190805371Z level=info msg="Executing migration" id="Update is_service_account column to nullable"
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.201120214Z level=info msg="Migration successfully executed" id="Update is_service_account column to nullable" duration=10.309203ms
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.205894193Z level=info msg="Executing migration" id="create temp user table v1-7"
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.206565468Z level=info msg="Migration successfully executed" id="create temp user table v1-7" duration=671.505µs
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.209303834Z level=info msg="Executing migration" id="create index IDX_temp_user_email - v1-7"
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.209936729Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_email - v1-7" duration=633.175µs
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.212724656Z level=info msg="Executing migration" id="create index IDX_temp_user_org_id - v1-7"
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.213597272Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_org_id - v1-7" duration=872.616µs
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.217738007Z level=info msg="Executing migration" id="create index IDX_temp_user_code - v1-7"
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.218609202Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_code - v1-7" duration=869.366µs
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.221724371Z level=info msg="Executing migration" id="create index IDX_temp_user_status - v1-7"
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.222600036Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_status - v1-7" duration=875.705µs
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.225379523Z level=info msg="Executing migration" id="Update temp_user table charset"
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.225409324Z level=info msg="Migration successfully executed" id="Update temp_user table charset" duration=30.931µs
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.230135663Z level=info msg="Executing migration" id="drop index IDX_temp_user_email - v1"
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.231043308Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_email - v1" duration=908.625µs
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.233956556Z level=info msg="Executing migration" id="drop index IDX_temp_user_org_id - v1"
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.234751231Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_org_id - v1" duration=794.905µs
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.237571469Z level=info msg="Executing migration" id="drop index IDX_temp_user_code - v1"
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.238323173Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_code - v1" duration=751.424µs
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.242829741Z level=info msg="Executing migration" id="drop index IDX_temp_user_status - v1"
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.243623976Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_status - v1" duration=794.265µs
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.246595394Z level=info msg="Executing migration" id="Rename table temp_user to temp_user_tmp_qwerty - v1"
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.250510109Z level=info msg="Migration successfully executed" id="Rename table temp_user to temp_user_tmp_qwerty - v1" duration=3.913635ms
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.253442136Z level=info msg="Executing migration" id="create temp_user v2"
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.254349962Z level=info msg="Migration successfully executed" id="create temp_user v2" duration=907.826µs
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.258932801Z level=info msg="Executing migration" id="create index IDX_temp_user_email - v2"
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.259816785Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_email - v2" duration=883.914µs
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.262816794Z level=info msg="Executing migration" id="create index IDX_temp_user_org_id - v2"
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.2637096Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_org_id - v2" duration=892.796µs
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.266532547Z level=info msg="Executing migration" id="create index IDX_temp_user_code - v2"
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.267435142Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_code - v2" duration=902.585µs
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.272043541Z level=info msg="Executing migration" id="create index IDX_temp_user_status - v2"
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.272925957Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_status - v2" duration=882.366µs
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.275826794Z level=info msg="Executing migration" id="copy temp_user v1 to v2"
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.276365037Z level=info msg="Migration successfully executed" id="copy temp_user v1 to v2" duration=538.743µs
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.279359226Z level=info msg="Executing migration" id="drop temp_user_tmp_qwerty"
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.28006179Z level=info msg="Migration successfully executed" id="drop temp_user_tmp_qwerty" duration=702.134µs
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.283388511Z level=info msg="Executing migration" id="Set created for temp users that will otherwise prematurely expire"
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.283851484Z level=info msg="Migration successfully executed" id="Set created for temp users that will otherwise prematurely expire" duration=462.423µs
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.288355521Z level=info msg="Executing migration" id="create star table"
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.289093666Z level=info msg="Migration successfully executed" id="create star table" duration=738.145µs
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.292039494Z level=info msg="Executing migration" id="add unique index star.user_id_dashboard_id"
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.292922649Z level=info msg="Migration successfully executed" id="add unique index star.user_id_dashboard_id" duration=880.445µs 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.296152379Z level=info msg="Executing migration" id="create org table v1" 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.296983374Z level=info msg="Migration successfully executed" id="create org table v1" duration=830.495µs 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.303069882Z level=info msg="Executing migration" id="create index UQE_org_name - v1" 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.30439796Z level=info msg="Migration successfully executed" id="create index UQE_org_name - v1" duration=1.327468ms 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.30770767Z level=info msg="Executing migration" id="create org_user table v1" 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.309062319Z level=info msg="Migration successfully executed" id="create org_user table v1" duration=1.353729ms 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.312154567Z level=info msg="Executing migration" id="create index IDX_org_user_org_id - v1" 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.313090793Z level=info msg="Migration successfully executed" id="create index IDX_org_user_org_id - v1" duration=936.076µs 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.316167992Z level=info msg="Executing migration" id="create index UQE_org_user_org_id_user_id - v1" 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.317132959Z level=info msg="Migration successfully executed" id="create index UQE_org_user_org_id_user_id - v1" duration=964.537µs 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.322615652Z level=info msg="Executing migration" id="create index IDX_org_user_user_id - v1" 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.323562797Z level=info 
msg="Migration successfully executed" id="create index IDX_org_user_user_id - v1" duration=947.355µs 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.326833848Z level=info msg="Executing migration" id="Update org table charset" 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.326860428Z level=info msg="Migration successfully executed" id="Update org table charset" duration=27.34µs 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.329972437Z level=info msg="Executing migration" id="Update org_user table charset" 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.329998477Z level=info msg="Migration successfully executed" id="Update org_user table charset" duration=27.08µs 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.332520943Z level=info msg="Executing migration" id="Migrate all Read Only Viewers to Viewers" 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.332694044Z level=info msg="Migration successfully executed" id="Migrate all Read Only Viewers to Viewers" duration=173.021µs 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.33681772Z level=info msg="Executing migration" id="create dashboard table" 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.337540353Z level=info msg="Migration successfully executed" id="create dashboard table" duration=722.323µs 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.340623833Z level=info msg="Executing migration" id="add index dashboard.account_id" 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.341420048Z level=info msg="Migration successfully executed" id="add index dashboard.account_id" duration=795.475µs 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.344734098Z level=info msg="Executing migration" id="add unique index dashboard_account_id_slug" 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.345598794Z level=info msg="Migration successfully executed" id="add unique index dashboard_account_id_slug" duration=864.126µs 16:25:57 
grafana | logger=migrator t=2024-02-21T16:23:14.348715572Z level=info msg="Executing migration" id="create dashboard_tag table" 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.349406707Z level=info msg="Migration successfully executed" id="create dashboard_tag table" duration=690.705µs 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.353769984Z level=info msg="Executing migration" id="add unique index dashboard_tag.dasboard_id_term" 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.354588908Z level=info msg="Migration successfully executed" id="add unique index dashboard_tag.dasboard_id_term" duration=818.444µs 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.357809588Z level=info msg="Executing migration" id="drop index UQE_dashboard_tag_dashboard_id_term - v1" 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.358914666Z level=info msg="Migration successfully executed" id="drop index UQE_dashboard_tag_dashboard_id_term - v1" duration=1.104138ms 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.362417087Z level=info msg="Executing migration" id="Rename table dashboard to dashboard_v1 - v1" 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.370674828Z level=info msg="Migration successfully executed" id="Rename table dashboard to dashboard_v1 - v1" duration=8.257501ms 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.375428157Z level=info msg="Executing migration" id="create dashboard v2" 16:25:57 mariadb | 2024-02-21 16:23:15+00:00 [Note] [Entrypoint]: Entrypoint script for MariaDB Server 1:10.10.2+maria~ubu2204 started. 16:25:57 mariadb | 2024-02-21 16:23:15+00:00 [Note] [Entrypoint]: Switching to dedicated user 'mysql' 16:25:57 mariadb | 2024-02-21 16:23:15+00:00 [Note] [Entrypoint]: Entrypoint script for MariaDB Server 1:10.10.2+maria~ubu2204 started. 
16:25:57 mariadb | 2024-02-21 16:23:15+00:00 [Note] [Entrypoint]: Initializing database files 16:25:57 mariadb | 2024-02-21 16:23:15 0 [Warning] mariadbd: io_uring_queue_init() failed with ENOSYS: check seccomp filters, and the kernel version (newer than 5.1 required) 16:25:57 mariadb | 2024-02-21 16:23:15 0 [Warning] InnoDB: liburing disabled: falling back to innodb_use_native_aio=OFF 16:25:57 mariadb | 2024-02-21 16:23:15 0 [Warning] 'default-authentication-plugin' is MySQL 5.6 / 5.7 compatible option. To be implemented in later versions. 16:25:57 mariadb | 16:25:57 mariadb | 16:25:57 mariadb | PLEASE REMEMBER TO SET A PASSWORD FOR THE MariaDB root USER ! 16:25:57 mariadb | To do so, start the server, then issue the following command: 16:25:57 mariadb | 16:25:57 mariadb | '/usr/bin/mysql_secure_installation' 16:25:57 mariadb | 16:25:57 mariadb | which will also give you the option of removing the test 16:25:57 mariadb | databases and anonymous user created by default. This is 16:25:57 mariadb | strongly recommended for production servers. 16:25:57 mariadb | 16:25:57 mariadb | See the MariaDB Knowledgebase at https://mariadb.com/kb 16:25:57 mariadb | 16:25:57 mariadb | Please report any problems at https://mariadb.org/jira 16:25:57 mariadb | 16:25:57 mariadb | The latest information about MariaDB is available at https://mariadb.org/. 16:25:57 mariadb | 16:25:57 mariadb | Consider joining MariaDB's strong and vibrant community: 16:25:57 mariadb | https://mariadb.org/get-involved/ 16:25:57 mariadb | 16:25:57 mariadb | 2024-02-21 16:23:16+00:00 [Note] [Entrypoint]: Database files initialized 16:25:57 mariadb | 2024-02-21 16:23:16+00:00 [Note] [Entrypoint]: Starting temporary server 16:25:57 mariadb | 2024-02-21 16:23:16+00:00 [Note] [Entrypoint]: Waiting for server startup 16:25:57 mariadb | 2024-02-21 16:23:16 0 [Note] mariadbd (server 10.10.2-MariaDB-1:10.10.2+maria~ubu2204) starting as process 96 ... 
16:25:57 mariadb | 2024-02-21 16:23:16 0 [Note] InnoDB: Compressed tables use zlib 1.2.11 16:25:57 mariadb | 2024-02-21 16:23:16 0 [Note] InnoDB: Number of transaction pools: 1 16:25:57 mariadb | 2024-02-21 16:23:16 0 [Note] InnoDB: Using crc32 + pclmulqdq instructions 16:25:57 mariadb | 2024-02-21 16:23:16 0 [Note] mariadbd: O_TMPFILE is not supported on /tmp (disabling future attempts) 16:25:57 mariadb | 2024-02-21 16:23:16 0 [Warning] mariadbd: io_uring_queue_init() failed with ENOSYS: check seccomp filters, and the kernel version (newer than 5.1 required) 16:25:57 mariadb | 2024-02-21 16:23:16 0 [Warning] InnoDB: liburing disabled: falling back to innodb_use_native_aio=OFF 16:25:57 mariadb | 2024-02-21 16:23:16 0 [Note] InnoDB: Initializing buffer pool, total size = 128.000MiB, chunk size = 2.000MiB 16:25:57 mariadb | 2024-02-21 16:23:16 0 [Note] InnoDB: Completed initialization of buffer pool 16:25:57 mariadb | 2024-02-21 16:23:16 0 [Note] InnoDB: File system buffers for log disabled (block size=512 bytes) 16:25:57 mariadb | 2024-02-21 16:23:16 0 [Note] InnoDB: 128 rollback segments are active. 16:25:57 mariadb | 2024-02-21 16:23:16 0 [Note] InnoDB: Setting file './ibtmp1' size to 12.000MiB. Physically writing the file full; Please wait ... 16:25:57 mariadb | 2024-02-21 16:23:16 0 [Note] InnoDB: File './ibtmp1' size is now 12.000MiB. 16:25:57 mariadb | 2024-02-21 16:23:16 0 [Note] InnoDB: log sequence number 46590; transaction id 14 16:25:57 mariadb | 2024-02-21 16:23:16 0 [Note] Plugin 'FEEDBACK' is disabled. 16:25:57 mariadb | 2024-02-21 16:23:16 0 [Warning] 'default-authentication-plugin' is MySQL 5.6 / 5.7 compatible option. To be implemented in later versions. 16:25:57 mariadb | 2024-02-21 16:23:16 0 [Warning] 'user' entry 'root@mariadb' ignored in --skip-name-resolve mode. 16:25:57 mariadb | 2024-02-21 16:23:16 0 [Warning] 'proxies_priv' entry '@% root@mariadb' ignored in --skip-name-resolve mode. 
16:25:57 mariadb | 2024-02-21 16:23:16 0 [Note] mariadbd: ready for connections. 16:25:57 mariadb | Version: '10.10.2-MariaDB-1:10.10.2+maria~ubu2204' socket: '/run/mysqld/mysqld.sock' port: 0 mariadb.org binary distribution 16:25:57 mariadb | 2024-02-21 16:23:17+00:00 [Note] [Entrypoint]: Temporary server started. 16:25:57 mariadb | 2024-02-21 16:23:19+00:00 [Note] [Entrypoint]: Creating user policy_user 16:25:57 mariadb | 2024-02-21 16:23:19+00:00 [Note] [Entrypoint]: Securing system users (equivalent to running mysql_secure_installation) 16:25:57 mariadb | 16:25:57 mariadb | 2024-02-21 16:23:19+00:00 [Warn] [Entrypoint]: /usr/local/bin/docker-entrypoint.sh: ignoring /docker-entrypoint-initdb.d/db.conf 16:25:57 mariadb | 16:25:57 mariadb | 2024-02-21 16:23:19+00:00 [Note] [Entrypoint]: /usr/local/bin/docker-entrypoint.sh: running /docker-entrypoint-initdb.d/db.sh 16:25:57 mariadb | #!/bin/bash -xv 16:25:57 mariadb | # Copyright 2019,2021 AT&T Intellectual Property. All rights reserved 16:25:57 mariadb | # Modifications Copyright (c) 2022 Nordix Foundation. 16:25:57 mariadb | # 16:25:57 mariadb | # Licensed under the Apache License, Version 2.0 (the "License"); 16:25:57 mariadb | # you may not use this file except in compliance with the License. 16:25:57 mariadb | # You may obtain a copy of the License at 16:25:57 mariadb | # 16:25:57 mariadb | # http://www.apache.org/licenses/LICENSE-2.0 16:25:57 mariadb | # 16:25:57 mariadb | # Unless required by applicable law or agreed to in writing, software 16:25:57 mariadb | # distributed under the License is distributed on an "AS IS" BASIS, 16:25:57 mariadb | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 16:25:57 mariadb | # See the License for the specific language governing permissions and 16:25:57 mariadb | # limitations under the License. 
16:25:57 mariadb | 16:25:57 mariadb | for db in migration pooling policyadmin operationshistory clampacm policyclamp 16:25:57 mariadb | do 16:25:57 mariadb | mysql -uroot -p"${MYSQL_ROOT_PASSWORD}" --execute "CREATE DATABASE IF NOT EXISTS ${db};" 16:25:57 mariadb | mysql -uroot -p"${MYSQL_ROOT_PASSWORD}" --execute "GRANT ALL PRIVILEGES ON \`${db}\`.* TO '${MYSQL_USER}'@'%' ;" 16:25:57 mariadb | done 16:25:57 mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp 16:25:57 mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS migration;' 16:25:57 mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `migration`.* TO '\''policy_user'\''@'\''%'\'' ;' 16:25:57 mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp 16:25:57 mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS pooling;' 16:25:57 mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `pooling`.* TO '\''policy_user'\''@'\''%'\'' ;' 16:25:57 mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp 16:25:57 mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS policyadmin;' 16:25:57 mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `policyadmin`.* TO '\''policy_user'\''@'\''%'\'' ;' 16:25:57 mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp 16:25:57 mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS operationshistory;' 16:25:57 mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `operationshistory`.* TO '\''policy_user'\''@'\''%'\'' ;' 16:25:57 mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp 16:25:57 mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS clampacm;' 16:25:57 mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON 
`clampacm`.* TO '\''policy_user'\''@'\''%'\'' ;' 16:25:57 mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp 16:25:57 mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS policyclamp;' 16:25:57 mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `policyclamp`.* TO '\''policy_user'\''@'\''%'\'' ;' 16:25:57 mariadb | 16:25:57 policy-apex-pdp | Waiting for mariadb port 3306... 16:25:57 policy-apex-pdp | mariadb (172.17.0.2:3306) open 16:25:57 policy-apex-pdp | Waiting for kafka port 9092... 16:25:57 policy-apex-pdp | kafka (172.17.0.6:9092) open 16:25:57 policy-apex-pdp | Waiting for pap port 6969... 16:25:57 policy-apex-pdp | pap (172.17.0.10:6969) open 16:25:57 policy-apex-pdp | apexApps.sh: running application 'onappf' with command 'java -Dlogback.configurationFile=/opt/app/policy/apex-pdp/etc/logback.xml -cp /opt/app/policy/apex-pdp/etc:/opt/app/policy/apex-pdp/etc/hazelcast:/opt/app/policy/apex-pdp/etc/infinispan:/opt/app/policy/apex-pdp/lib/* -Djavax.net.ssl.keyStore=/opt/app/policy/apex-pdp/etc/ssl/policy-keystore -Djavax.net.ssl.keyStorePassword=Pol1cy_0nap -Djavax.net.ssl.trustStore=/opt/app/policy/apex-pdp/etc/ssl/policy-truststore -Djavax.net.ssl.trustStorePassword=Pol1cy_0nap -Dlogback.configurationFile=/opt/app/policy/apex-pdp/etc/logback.xml -Dhazelcast.config=/opt/app/policy/apex-pdp/etc/hazelcast.xml -Dhazelcast.mancenter.enabled=false org.onap.policy.apex.services.onappf.ApexStarterMain -c /opt/app/policy/apex-pdp/etc/onappf/config/OnapPfConfig.json' 16:25:57 policy-apex-pdp | [2024-02-21T16:23:55.776+00:00|INFO|ApexStarterMain|main] In ApexStarter with parameters [-c, /opt/app/policy/apex-pdp/etc/onappf/config/OnapPfConfig.json] 16:25:57 policy-apex-pdp | [2024-02-21T16:23:55.971+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: 16:25:57 policy-apex-pdp | allow.auto.create.topics = true 16:25:57 policy-apex-pdp | auto.commit.interval.ms = 5000 16:25:57 policy-apex-pdp 
| auto.include.jmx.reporter = true 16:25:57 policy-apex-pdp | auto.offset.reset = latest 16:25:57 policy-apex-pdp | bootstrap.servers = [kafka:9092] 16:25:57 policy-apex-pdp | check.crcs = true 16:25:57 policy-apex-pdp | client.dns.lookup = use_all_dns_ips 16:25:57 policy-apex-pdp | client.id = consumer-8aee6ac5-f217-4030-aeed-72326ff1d45e-1 16:25:57 policy-apex-pdp | client.rack = 16:25:57 policy-apex-pdp | connections.max.idle.ms = 540000 16:25:57 policy-apex-pdp | default.api.timeout.ms = 60000 16:25:57 policy-apex-pdp | enable.auto.commit = true 16:25:57 policy-apex-pdp | exclude.internal.topics = true 16:25:57 policy-apex-pdp | fetch.max.bytes = 52428800 16:25:57 policy-apex-pdp | fetch.max.wait.ms = 500 16:25:57 policy-apex-pdp | fetch.min.bytes = 1 16:25:57 policy-apex-pdp | group.id = 8aee6ac5-f217-4030-aeed-72326ff1d45e 16:25:57 policy-apex-pdp | group.instance.id = null 16:25:57 policy-apex-pdp | heartbeat.interval.ms = 3000 16:25:57 mariadb | mysql -uroot -p"${MYSQL_ROOT_PASSWORD}" --execute "FLUSH PRIVILEGES;" 16:25:57 mariadb | + mysql -uroot -psecret --execute 'FLUSH PRIVILEGES;' 16:25:57 mariadb | mysql -u"${MYSQL_USER}" -p"${MYSQL_PASSWORD}" -f policyclamp < /tmp/policy-clamp-create-tables.sql 16:25:57 mariadb | + mysql -upolicy_user -ppolicy_user -f policyclamp 16:25:57 mariadb | 16:25:57 mariadb | 2024-02-21 16:23:20+00:00 [Note] [Entrypoint]: Stopping temporary server 16:25:57 mariadb | 2024-02-21 16:23:20 0 [Note] mariadbd (initiated by: unknown): Normal shutdown 16:25:57 mariadb | 2024-02-21 16:23:20 0 [Note] InnoDB: FTS optimize thread exiting. 16:25:57 mariadb | 2024-02-21 16:23:20 0 [Note] InnoDB: Starting shutdown... 
16:25:57 mariadb | 2024-02-21 16:23:20 0 [Note] InnoDB: Dumping buffer pool(s) to /var/lib/mysql/ib_buffer_pool 16:25:57 mariadb | 2024-02-21 16:23:20 0 [Note] InnoDB: Buffer pool(s) dump completed at 240221 16:23:20 16:25:57 mariadb | 2024-02-21 16:23:20 0 [Note] InnoDB: Removed temporary tablespace data file: "./ibtmp1" 16:25:57 mariadb | 2024-02-21 16:23:20 0 [Note] InnoDB: Shutdown completed; log sequence number 330185; transaction id 298 16:25:57 mariadb | 2024-02-21 16:23:20 0 [Note] mariadbd: Shutdown complete 16:25:57 mariadb | 16:25:57 mariadb | 2024-02-21 16:23:20+00:00 [Note] [Entrypoint]: Temporary server stopped 16:25:57 mariadb | 16:25:57 mariadb | 2024-02-21 16:23:20+00:00 [Note] [Entrypoint]: MariaDB init process done. Ready for start up. 16:25:57 mariadb | 16:25:57 mariadb | 2024-02-21 16:23:20 0 [Note] mariadbd (server 10.10.2-MariaDB-1:10.10.2+maria~ubu2204) starting as process 1 ... 16:25:57 mariadb | 2024-02-21 16:23:20 0 [Note] InnoDB: Compressed tables use zlib 1.2.11 16:25:57 mariadb | 2024-02-21 16:23:20 0 [Note] InnoDB: Number of transaction pools: 1 16:25:57 mariadb | 2024-02-21 16:23:20 0 [Note] InnoDB: Using crc32 + pclmulqdq instructions 16:25:57 mariadb | 2024-02-21 16:23:20 0 [Note] mariadbd: O_TMPFILE is not supported on /tmp (disabling future attempts) 16:25:57 mariadb | 2024-02-21 16:23:20 0 [Warning] mariadbd: io_uring_queue_init() failed with ENOSYS: check seccomp filters, and the kernel version (newer than 5.1 required) 16:25:57 mariadb | 2024-02-21 16:23:20 0 [Warning] InnoDB: liburing disabled: falling back to innodb_use_native_aio=OFF 16:25:57 mariadb | 2024-02-21 16:23:20 0 [Note] InnoDB: Initializing buffer pool, total size = 128.000MiB, chunk size = 2.000MiB 16:25:57 mariadb | 2024-02-21 16:23:20 0 [Note] InnoDB: Completed initialization of buffer pool 16:25:57 mariadb | 2024-02-21 16:23:20 0 [Note] InnoDB: File system buffers for log disabled (block size=512 bytes) 16:25:57 mariadb | 2024-02-21 16:23:21 0 [Note] InnoDB: 
128 rollback segments are active. 16:25:57 mariadb | 2024-02-21 16:23:21 0 [Note] InnoDB: Setting file './ibtmp1' size to 12.000MiB. Physically writing the file full; Please wait ... 16:25:57 mariadb | 2024-02-21 16:23:21 0 [Note] InnoDB: File './ibtmp1' size is now 12.000MiB. 16:25:57 mariadb | 2024-02-21 16:23:21 0 [Note] InnoDB: log sequence number 330185; transaction id 299 16:25:57 mariadb | 2024-02-21 16:23:21 0 [Note] InnoDB: Loading buffer pool(s) from /var/lib/mysql/ib_buffer_pool 16:25:57 mariadb | 2024-02-21 16:23:21 0 [Note] Plugin 'FEEDBACK' is disabled. 16:25:57 mariadb | 2024-02-21 16:23:21 0 [Warning] 'default-authentication-plugin' is MySQL 5.6 / 5.7 compatible option. To be implemented in later versions. 16:25:57 mariadb | 2024-02-21 16:23:21 0 [Warning] You need to use --log-bin to make --expire-logs-days or --binlog-expire-logs-seconds work. 16:25:57 mariadb | 2024-02-21 16:23:21 0 [Note] Server socket created on IP: '0.0.0.0'. 16:25:57 mariadb | 2024-02-21 16:23:21 0 [Note] Server socket created on IP: '::'. 16:25:57 mariadb | 2024-02-21 16:23:21 0 [Note] mariadbd: ready for connections. 
16:25:57 mariadb | Version: '10.10.2-MariaDB-1:10.10.2+maria~ubu2204' socket: '/run/mysqld/mysqld.sock' port: 3306 mariadb.org binary distribution 16:25:57 mariadb | 2024-02-21 16:23:21 0 [Note] InnoDB: Buffer pool(s) load completed at 240221 16:23:21 16:25:57 mariadb | 2024-02-21 16:23:21 3 [Warning] Aborted connection 3 to db: 'unconnected' user: 'unauthenticated' host: '172.17.0.11' (This connection closed normally without authentication) 16:25:57 mariadb | 2024-02-21 16:23:21 4 [Warning] Aborted connection 4 to db: 'unconnected' user: 'unauthenticated' host: '172.17.0.10' (This connection closed normally without authentication) 16:25:57 mariadb | 2024-02-21 16:23:21 5 [Warning] Aborted connection 5 to db: 'unconnected' user: 'unauthenticated' host: '172.17.0.9' (This connection closed normally without authentication) 16:25:57 mariadb | 2024-02-21 16:23:21 6 [Warning] Aborted connection 6 to db: 'unconnected' user: 'unauthenticated' host: '172.17.0.8' (This connection closed normally without authentication) 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.376264502Z level=info msg="Migration successfully executed" id="create dashboard v2" duration=832.165µs 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.379747373Z level=info msg="Executing migration" id="create index IDX_dashboard_org_id - v2" 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.380565859Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_org_id - v2" duration=816.986µs 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.383809569Z level=info msg="Executing migration" id="create index UQE_dashboard_org_id_slug - v2" 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.384605724Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_org_id_slug - v2" duration=795.945µs 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.38893131Z level=info msg="Executing migration" id="copy dashboard v1 to v2" 16:25:57 
grafana | logger=migrator t=2024-02-21T16:23:14.389265362Z level=info msg="Migration successfully executed" id="copy dashboard v1 to v2" duration=332.372µs 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.392403702Z level=info msg="Executing migration" id="drop table dashboard_v1" 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.393218217Z level=info msg="Migration successfully executed" id="drop table dashboard_v1" duration=814.325µs 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.39713736Z level=info msg="Executing migration" id="alter dashboard.data to mediumtext v1" 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.397221131Z level=info msg="Migration successfully executed" id="alter dashboard.data to mediumtext v1" duration=84.041µs 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.400539382Z level=info msg="Executing migration" id="Add column updated_by in dashboard - v2" 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.402419463Z level=info msg="Migration successfully executed" id="Add column updated_by in dashboard - v2" duration=1.876491ms 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.405492572Z level=info msg="Executing migration" id="Add column created_by in dashboard - v2" 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.407483314Z level=info msg="Migration successfully executed" id="Add column created_by in dashboard - v2" duration=1.989722ms 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.410999436Z level=info msg="Executing migration" id="Add column gnetId in dashboard" 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.412723167Z level=info msg="Migration successfully executed" id="Add column gnetId in dashboard" duration=1.723911ms 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.417071563Z level=info msg="Executing migration" id="Add index for gnetId in dashboard" 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.417866418Z level=info msg="Migration 
successfully executed" id="Add index for gnetId in dashboard" duration=794.835µs
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.421103079Z level=info msg="Executing migration" id="Add column plugin_id in dashboard"
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.423806295Z level=info msg="Migration successfully executed" id="Add column plugin_id in dashboard" duration=2.700706ms
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.427580568Z level=info msg="Executing migration" id="Add index for plugin_id in dashboard"
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.428803526Z level=info msg="Migration successfully executed" id="Add index for plugin_id in dashboard" duration=1.222088ms
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.433230063Z level=info msg="Executing migration" id="Add index for dashboard_id in dashboard_tag"
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.434103998Z level=info msg="Migration successfully executed" id="Add index for dashboard_id in dashboard_tag" duration=874.385µs
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.437292048Z level=info msg="Executing migration" id="Update dashboard table charset"
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.437319578Z level=info msg="Migration successfully executed" id="Update dashboard table charset" duration=28.29µs
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.440359957Z level=info msg="Executing migration" id="Update dashboard_tag table charset"
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.440389107Z level=info msg="Migration successfully executed" id="Update dashboard_tag table charset" duration=31.56µs
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.444343211Z level=info msg="Executing migration" id="Add column folder_id in dashboard"
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.446352204Z level=info msg="Migration successfully executed" id="Add column folder_id in dashboard" duration=2.008453ms
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.450768511Z level=info msg="Executing migration" id="Add column isFolder in dashboard"
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.452770293Z level=info msg="Migration successfully executed" id="Add column isFolder in dashboard" duration=2.001252ms
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.455779691Z level=info msg="Executing migration" id="Add column has_acl in dashboard"
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.457782153Z level=info msg="Migration successfully executed" id="Add column has_acl in dashboard" duration=2.004162ms
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.461669608Z level=info msg="Executing migration" id="Add column uid in dashboard"
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.463507679Z level=info msg="Migration successfully executed" id="Add column uid in dashboard" duration=1.838521ms
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.466414296Z level=info msg="Executing migration" id="Update uid column values in dashboard"
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.466588208Z level=info msg="Migration successfully executed" id="Update uid column values in dashboard" duration=173.902µs
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.469482365Z level=info msg="Executing migration" id="Add unique index dashboard_org_id_uid"
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.47023969Z level=info msg="Migration successfully executed" id="Add unique index dashboard_org_id_uid" duration=758.445µs
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.473286789Z level=info msg="Executing migration" id="Remove unique index org_id_slug"
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.474061914Z level=info msg="Migration successfully executed" id="Remove unique index org_id_slug" duration=775.015µs
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.477893387Z level=info msg="Executing migration" id="Update dashboard title length"
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.477914537Z level=info msg="Migration successfully executed" id="Update dashboard title length" duration=21.68µs
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.480319912Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_title_folder_id"
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.481053217Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_title_folder_id" duration=732.635µs
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.484277037Z level=info msg="Executing migration" id="create dashboard_provisioning"
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.484910901Z level=info msg="Migration successfully executed" id="create dashboard_provisioning" duration=633.764µs
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.489000106Z level=info msg="Executing migration" id="Rename table dashboard_provisioning to dashboard_provisioning_tmp_qwerty - v1"
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.49611719Z level=info msg="Migration successfully executed" id="Rename table dashboard_provisioning to dashboard_provisioning_tmp_qwerty - v1" duration=7.116274ms
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.499251779Z level=info msg="Executing migration" id="create dashboard_provisioning v2"
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.499707801Z level=info msg="Migration successfully executed" id="create dashboard_provisioning v2" duration=455.792µs
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.502608729Z level=info msg="Executing migration" id="create index IDX_dashboard_provisioning_dashboard_id - v2"
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.503120742Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_provisioning_dashboard_id - v2" duration=511.693µs
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.506943667Z level=info msg="Executing migration" id="create index IDX_dashboard_provisioning_dashboard_id_name - v2"
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.508180244Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_provisioning_dashboard_id_name - v2" duration=1.235567ms
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.511675815Z level=info msg="Executing migration" id="copy dashboard_provisioning v1 to v2"
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.512167768Z level=info msg="Migration successfully executed" id="copy dashboard_provisioning v1 to v2" duration=492.393µs
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.51563185Z level=info msg="Executing migration" id="drop dashboard_provisioning_tmp_qwerty"
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.516206074Z level=info msg="Migration successfully executed" id="drop dashboard_provisioning_tmp_qwerty" duration=574.794µs
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.520326208Z level=info msg="Executing migration" id="Add check_sum column"
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.522354491Z level=info msg="Migration successfully executed" id="Add check_sum column" duration=2.027793ms
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.525624051Z level=info msg="Executing migration" id="Add index for dashboard_title"
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.526437116Z level=info msg="Migration successfully executed" id="Add index for dashboard_title" duration=812.915µs
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.529812237Z level=info msg="Executing migration" id="delete tags for deleted dashboards"
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.529976198Z level=info msg="Migration successfully executed" id="delete tags for deleted dashboards" duration=161.931µs
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.534136043Z level=info msg="Executing migration" id="delete stars for deleted dashboards"
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.534295864Z level=info msg="Migration successfully executed" id="delete stars for deleted dashboards" duration=160.351µs
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.537449914Z level=info msg="Executing migration" id="Add index for dashboard_is_folder"
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.538242839Z level=info msg="Migration successfully executed" id="Add index for dashboard_is_folder" duration=792.695µs
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.541327928Z level=info msg="Executing migration" id="Add isPublic for dashboard"
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.543414811Z level=info msg="Migration successfully executed" id="Add isPublic for dashboard" duration=2.086723ms
16:25:57 policy-apex-pdp | interceptor.classes = []
16:25:57 policy-apex-pdp | internal.leave.group.on.close = true
16:25:57 policy-apex-pdp | internal.throw.on.fetch.stable.offset.unsupported = false
16:25:57 policy-apex-pdp | isolation.level = read_uncommitted
16:25:57 policy-apex-pdp | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
16:25:57 policy-apex-pdp | max.partition.fetch.bytes = 1048576
16:25:57 policy-apex-pdp | max.poll.interval.ms = 300000
16:25:57 policy-apex-pdp | max.poll.records = 500
16:25:57 policy-apex-pdp | metadata.max.age.ms = 300000
16:25:57 policy-apex-pdp | metric.reporters = []
16:25:57 policy-apex-pdp | metrics.num.samples = 2
16:25:57 policy-apex-pdp | metrics.recording.level = INFO
16:25:57 policy-apex-pdp | metrics.sample.window.ms = 30000
16:25:57 policy-apex-pdp | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor]
16:25:57 policy-apex-pdp | receive.buffer.bytes = 65536
16:25:57 policy-apex-pdp | reconnect.backoff.max.ms = 1000
16:25:57 policy-apex-pdp | reconnect.backoff.ms = 50
16:25:57 policy-apex-pdp | request.timeout.ms = 30000
16:25:57 policy-apex-pdp | retry.backoff.ms = 100
16:25:57 policy-apex-pdp | sasl.client.callback.handler.class = null
16:25:57 policy-apex-pdp | sasl.jaas.config = null
16:25:57 policy-apex-pdp | sasl.kerberos.kinit.cmd = /usr/bin/kinit
16:25:57 policy-apex-pdp | sasl.kerberos.min.time.before.relogin = 60000
16:25:57 policy-apex-pdp | sasl.kerberos.service.name = null
16:25:57 policy-apex-pdp | sasl.kerberos.ticket.renew.jitter = 0.05
16:25:57 policy-apex-pdp | sasl.kerberos.ticket.renew.window.factor = 0.8
16:25:57 policy-apex-pdp | sasl.login.callback.handler.class = null
16:25:57 policy-apex-pdp | sasl.login.class = null
16:25:57 policy-apex-pdp | sasl.login.connect.timeout.ms = null
16:25:57 policy-apex-pdp | sasl.login.read.timeout.ms = null
16:25:57 policy-apex-pdp | sasl.login.refresh.buffer.seconds = 300
16:25:57 policy-apex-pdp | sasl.login.refresh.min.period.seconds = 60
16:25:57 policy-apex-pdp | sasl.login.refresh.window.factor = 0.8
16:25:57 policy-apex-pdp | sasl.login.refresh.window.jitter = 0.05
16:25:57 policy-apex-pdp | sasl.login.retry.backoff.max.ms = 10000
16:25:57 policy-apex-pdp | sasl.login.retry.backoff.ms = 100
16:25:57 policy-apex-pdp | sasl.mechanism = GSSAPI
16:25:57 policy-apex-pdp | sasl.oauthbearer.clock.skew.seconds = 30
16:25:57 policy-apex-pdp | sasl.oauthbearer.expected.audience = null
16:25:57 policy-apex-pdp | sasl.oauthbearer.expected.issuer = null
16:25:57 prometheus | ts=2024-02-21T16:23:13.022Z caller=main.go:544 level=info msg="No time or size retention was set so using the default time retention" duration=15d
16:25:57 prometheus | ts=2024-02-21T16:23:13.022Z caller=main.go:588 level=info msg="Starting Prometheus Server" mode=server version="(version=2.49.1, branch=HEAD, revision=43e14844a33b65e2a396e3944272af8b3a494071)"
16:25:57 prometheus | ts=2024-02-21T16:23:13.022Z caller=main.go:593 level=info build_context="(go=go1.21.6, platform=linux/amd64, user=root@6d5f4c649d25, date=20240115-16:58:43, tags=netgo,builtinassets,stringlabels)"
16:25:57 prometheus | ts=2024-02-21T16:23:13.022Z caller=main.go:594 level=info host_details="(Linux 4.15.0-192-generic #203-Ubuntu SMP Wed Aug 10 17:40:03 UTC 2022 x86_64 prometheus (none))"
16:25:57 prometheus | ts=2024-02-21T16:23:13.022Z caller=main.go:595 level=info fd_limits="(soft=1048576, hard=1048576)"
16:25:57 prometheus | ts=2024-02-21T16:23:13.022Z caller=main.go:596 level=info vm_limits="(soft=unlimited, hard=unlimited)"
16:25:57 prometheus | ts=2024-02-21T16:23:13.024Z caller=web.go:565 level=info component=web msg="Start listening for connections" address=0.0.0.0:9090
16:25:57 prometheus | ts=2024-02-21T16:23:13.024Z caller=main.go:1039 level=info msg="Starting TSDB ..."
16:25:57 prometheus | ts=2024-02-21T16:23:13.032Z caller=tls_config.go:274 level=info component=web msg="Listening on" address=[::]:9090
16:25:57 prometheus | ts=2024-02-21T16:23:13.032Z caller=tls_config.go:277 level=info component=web msg="TLS is disabled." http2=false address=[::]:9090
16:25:57 prometheus | ts=2024-02-21T16:23:13.034Z caller=head.go:606 level=info component=tsdb msg="Replaying on-disk memory mappable chunks if any"
16:25:57 prometheus | ts=2024-02-21T16:23:13.034Z caller=head.go:687 level=info component=tsdb msg="On-disk memory mappable chunks replay completed" duration=2.91µs
16:25:57 prometheus | ts=2024-02-21T16:23:13.034Z caller=head.go:695 level=info component=tsdb msg="Replaying WAL, this may take a while"
16:25:57 prometheus | ts=2024-02-21T16:23:13.036Z caller=head.go:766 level=info component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0
16:25:57 prometheus | ts=2024-02-21T16:23:13.036Z caller=head.go:803 level=info component=tsdb msg="WAL replay completed" checkpoint_replay_duration=26.3µs wal_replay_duration=1.929191ms wbl_replay_duration=300ns total_replay_duration=1.981742ms
16:25:57 prometheus | ts=2024-02-21T16:23:13.038Z caller=main.go:1060 level=info fs_type=EXT4_SUPER_MAGIC
16:25:57 prometheus | ts=2024-02-21T16:23:13.038Z caller=main.go:1063 level=info msg="TSDB started"
16:25:57 prometheus | ts=2024-02-21T16:23:13.038Z caller=main.go:1245 level=info msg="Loading configuration file" filename=/etc/prometheus/prometheus.yml
16:25:57 prometheus | ts=2024-02-21T16:23:13.039Z caller=main.go:1282 level=info msg="Completed loading of configuration file" filename=/etc/prometheus/prometheus.yml totalDuration=918.136µs db_storage=1.79µs remote_storage=2.58µs web_handler=1.21µs query_engine=1.33µs scrape=207.201µs scrape_sd=119.161µs notify=30.39µs notify_sd=13.44µs rules=2.27µs tracing=5.76µs
16:25:57 prometheus | ts=2024-02-21T16:23:13.039Z caller=main.go:1024 level=info msg="Server is ready to receive web requests."
16:25:57 prometheus | ts=2024-02-21T16:23:13.039Z caller=manager.go:146 level=info component="rule manager" msg="Starting rule manager..."
16:25:57 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
16:25:57 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
16:25:57 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
16:25:57 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.url = null
16:25:57 policy-apex-pdp | sasl.oauthbearer.scope.claim.name = scope
16:25:57 policy-apex-pdp | sasl.oauthbearer.sub.claim.name = sub
16:25:57 policy-apex-pdp | sasl.oauthbearer.token.endpoint.url = null
16:25:57 policy-apex-pdp | security.protocol = PLAINTEXT
16:25:57 policy-apex-pdp | security.providers = null
16:25:57 policy-apex-pdp | send.buffer.bytes = 131072
16:25:57 policy-apex-pdp | session.timeout.ms = 45000
16:25:57 policy-apex-pdp | socket.connection.setup.timeout.max.ms = 30000
16:25:57 policy-apex-pdp | socket.connection.setup.timeout.ms = 10000
16:25:57 policy-apex-pdp | ssl.cipher.suites = null
16:25:57 policy-apex-pdp | ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
16:25:57 policy-apex-pdp | ssl.endpoint.identification.algorithm = https
16:25:57 policy-apex-pdp | ssl.engine.factory.class = null
16:25:57 policy-apex-pdp | ssl.key.password = null
16:25:57 policy-apex-pdp | ssl.keymanager.algorithm = SunX509
16:25:57 policy-apex-pdp | ssl.keystore.certificate.chain = null
16:25:57 policy-apex-pdp | ssl.keystore.key = null
16:25:57 policy-apex-pdp | ssl.keystore.location = null
16:25:57 policy-apex-pdp | ssl.keystore.password = null
16:25:57 policy-apex-pdp | ssl.keystore.type = JKS
16:25:57 policy-apex-pdp | ssl.protocol = TLSv1.3
16:25:57 policy-apex-pdp | ssl.provider = null
16:25:57 policy-apex-pdp | ssl.secure.random.implementation = null
16:25:57 policy-apex-pdp | ssl.trustmanager.algorithm = PKIX
16:25:57 policy-apex-pdp | ssl.truststore.certificates = null
16:25:57 policy-apex-pdp | ssl.truststore.location = null
16:25:57 policy-apex-pdp | ssl.truststore.password = null
16:25:57 policy-apex-pdp | ssl.truststore.type = JKS
16:25:57 policy-apex-pdp | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
16:25:57 policy-apex-pdp |
16:25:57 policy-apex-pdp | [2024-02-21T16:23:56.123+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1
16:25:57 policy-apex-pdp | [2024-02-21T16:23:56.123+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5
16:25:57 policy-apex-pdp | [2024-02-21T16:23:56.123+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1708532636121
16:25:57 policy-apex-pdp | [2024-02-21T16:23:56.125+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-8aee6ac5-f217-4030-aeed-72326ff1d45e-1, groupId=8aee6ac5-f217-4030-aeed-72326ff1d45e] Subscribed to topic(s): policy-pdp-pap
16:25:57 policy-apex-pdp | [2024-02-21T16:23:56.138+00:00|INFO|ServiceManager|main] service manager starting
16:25:57 policy-apex-pdp | [2024-02-21T16:23:56.138+00:00|INFO|ServiceManager|main] service manager starting topics
16:25:57 policy-apex-pdp | [2024-02-21T16:23:56.142+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=8aee6ac5-f217-4030-aeed-72326ff1d45e, consumerInstance=policy-apex-pdp, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: starting
16:25:57 policy-apex-pdp | [2024-02-21T16:23:56.161+00:00|INFO|ConsumerConfig|main] ConsumerConfig values:
16:25:57 policy-apex-pdp | allow.auto.create.topics = true
16:25:57 policy-apex-pdp | auto.commit.interval.ms = 5000
16:25:57 policy-apex-pdp | auto.include.jmx.reporter = true
16:25:57 policy-apex-pdp | auto.offset.reset = latest
16:25:57 policy-apex-pdp | bootstrap.servers = [kafka:9092]
16:25:57 policy-apex-pdp | check.crcs = true
16:25:57 policy-apex-pdp | client.dns.lookup = use_all_dns_ips
16:25:57 policy-apex-pdp | client.id = consumer-8aee6ac5-f217-4030-aeed-72326ff1d45e-2
16:25:57 policy-apex-pdp | client.rack =
16:25:57 policy-api | Waiting for mariadb port 3306...
16:25:57 policy-api | mariadb (172.17.0.2:3306) open
16:25:57 policy-api | Waiting for policy-db-migrator port 6824...
16:25:57 policy-api | policy-db-migrator (172.17.0.8:6824) open
16:25:57 policy-api | Policy api config file: /opt/app/policy/api/etc/apiParameters.yaml
16:25:57 policy-api |
16:25:57 policy-api | . ____ _ __ _ _
16:25:57 policy-api | /\\ / ___'_ __ _ _(_)_ __ __ _ \ \ \ \
16:25:57 policy-api | ( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \
16:25:57 policy-api | \\/ ___)| |_)| | | | | || (_| | ) ) ) )
16:25:57 policy-api | ' |____| .__|_| |_|_| |_\__, | / / / /
16:25:57 policy-api | =========|_|==============|___/=/_/_/_/
16:25:57 policy-api | :: Spring Boot :: (v3.1.8)
16:25:57 policy-api |
16:25:57 policy-api | [2024-02-21T16:23:30.446+00:00|INFO|PolicyApiApplication|main] Starting PolicyApiApplication using Java 17.0.10 with PID 22 (/app/api.jar started by policy in /opt/app/policy/api/bin)
16:25:57 policy-api | [2024-02-21T16:23:30.448+00:00|INFO|PolicyApiApplication|main] No active profile set, falling back to 1 default profile: "default"
16:25:57 policy-api | [2024-02-21T16:23:32.173+00:00|INFO|RepositoryConfigurationDelegate|main] Bootstrapping Spring Data JPA repositories in DEFAULT mode.
16:25:57 policy-api | [2024-02-21T16:23:32.280+00:00|INFO|RepositoryConfigurationDelegate|main] Finished Spring Data repository scanning in 93 ms. Found 6 JPA repository interfaces.
16:25:57 policy-api | [2024-02-21T16:23:32.751+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.api.main.exception.ServiceExceptionHandler
16:25:57 policy-api | [2024-02-21T16:23:32.752+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.api.main.exception.ServiceExceptionHandler
16:25:57 policy-api | [2024-02-21T16:23:33.462+00:00|INFO|TomcatWebServer|main] Tomcat initialized with port(s): 6969 (http)
16:25:57 policy-api | [2024-02-21T16:23:33.473+00:00|INFO|Http11NioProtocol|main] Initializing ProtocolHandler ["http-nio-6969"]
16:25:57 policy-api | [2024-02-21T16:23:33.476+00:00|INFO|StandardService|main] Starting service [Tomcat]
16:25:57 policy-api | [2024-02-21T16:23:33.476+00:00|INFO|StandardEngine|main] Starting Servlet engine: [Apache Tomcat/10.1.18]
16:25:57 policy-api | [2024-02-21T16:23:33.573+00:00|INFO|[/policy/api/v1]|main] Initializing Spring embedded WebApplicationContext
16:25:57 policy-api | [2024-02-21T16:23:33.573+00:00|INFO|ServletWebServerApplicationContext|main] Root WebApplicationContext: initialization completed in 3052 ms
16:25:57 policy-api | [2024-02-21T16:23:34.083+00:00|INFO|LogHelper|main] HHH000204: Processing PersistenceUnitInfo [name: default]
16:25:57 policy-api | [2024-02-21T16:23:34.163+00:00|INFO|Version|main] HHH000412: Hibernate ORM core version 6.3.0.CR1
16:25:57 policy-api | [2024-02-21T16:23:34.167+00:00|INFO|Environment|main] HHH000406: Using bytecode reflection optimizer
16:25:57 policy-api | [2024-02-21T16:23:34.212+00:00|INFO|RegionFactoryInitiator|main] HHH000026: Second-level cache disabled
16:25:57 policy-apex-pdp | connections.max.idle.ms = 540000
16:25:57 policy-apex-pdp | default.api.timeout.ms = 60000
16:25:57 policy-apex-pdp | enable.auto.commit = true
16:25:57 policy-apex-pdp | exclude.internal.topics = true
16:25:57 policy-apex-pdp | fetch.max.bytes = 52428800
16:25:57 policy-apex-pdp | fetch.max.wait.ms = 500
16:25:57 policy-apex-pdp | fetch.min.bytes = 1
16:25:57 policy-apex-pdp | group.id = 8aee6ac5-f217-4030-aeed-72326ff1d45e
16:25:57 policy-apex-pdp | group.instance.id = null
16:25:57 policy-apex-pdp | heartbeat.interval.ms = 3000
16:25:57 policy-apex-pdp | interceptor.classes = []
16:25:57 policy-apex-pdp | internal.leave.group.on.close = true
16:25:57 policy-apex-pdp | internal.throw.on.fetch.stable.offset.unsupported = false
16:25:57 policy-apex-pdp | isolation.level = read_uncommitted
16:25:57 policy-apex-pdp | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
16:25:57 policy-apex-pdp | max.partition.fetch.bytes = 1048576
16:25:57 policy-apex-pdp | max.poll.interval.ms = 300000
16:25:57 policy-apex-pdp | max.poll.records = 500
16:25:57 policy-apex-pdp | metadata.max.age.ms = 300000
16:25:57 policy-apex-pdp | metric.reporters = []
16:25:57 policy-apex-pdp | metrics.num.samples = 2
16:25:57 policy-apex-pdp | metrics.recording.level = INFO
16:25:57 policy-apex-pdp | metrics.sample.window.ms = 30000
16:25:57 policy-apex-pdp | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor]
16:25:57 policy-apex-pdp | receive.buffer.bytes = 65536
16:25:57 policy-apex-pdp | reconnect.backoff.max.ms = 1000
16:25:57 policy-apex-pdp | reconnect.backoff.ms = 50
16:25:57 policy-apex-pdp | request.timeout.ms = 30000
16:25:57 policy-apex-pdp | retry.backoff.ms = 100
16:25:57 policy-apex-pdp | sasl.client.callback.handler.class = null
16:25:57 policy-apex-pdp | sasl.jaas.config = null
16:25:57 policy-apex-pdp | sasl.kerberos.kinit.cmd = /usr/bin/kinit
16:25:57 policy-apex-pdp | sasl.kerberos.min.time.before.relogin = 60000
16:25:57 policy-apex-pdp | sasl.kerberos.service.name = null
16:25:57 policy-apex-pdp | sasl.kerberos.ticket.renew.jitter = 0.05
16:25:57 policy-apex-pdp | sasl.kerberos.ticket.renew.window.factor = 0.8
16:25:57 policy-apex-pdp | sasl.login.callback.handler.class = null
16:25:57 policy-apex-pdp | sasl.login.class = null
16:25:57 policy-apex-pdp | sasl.login.connect.timeout.ms = null
16:25:57 policy-apex-pdp | sasl.login.read.timeout.ms = null
16:25:57 policy-apex-pdp | sasl.login.refresh.buffer.seconds = 300
16:25:57 policy-apex-pdp | sasl.login.refresh.min.period.seconds = 60
16:25:57 policy-apex-pdp | sasl.login.refresh.window.factor = 0.8
16:25:57 policy-apex-pdp | sasl.login.refresh.window.jitter = 0.05
16:25:57 policy-apex-pdp | sasl.login.retry.backoff.max.ms = 10000
16:25:57 policy-apex-pdp | sasl.login.retry.backoff.ms = 100
16:25:57 policy-apex-pdp | sasl.mechanism = GSSAPI
16:25:57 policy-apex-pdp | sasl.oauthbearer.clock.skew.seconds = 30
16:25:57 policy-apex-pdp | sasl.oauthbearer.expected.audience = null
16:25:57 policy-apex-pdp | sasl.oauthbearer.expected.issuer = null
16:25:57 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
16:25:57 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
16:25:57 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
16:25:57 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.url = null
16:25:57 policy-apex-pdp | sasl.oauthbearer.scope.claim.name = scope
16:25:57 policy-apex-pdp | sasl.oauthbearer.sub.claim.name = sub
16:25:57 policy-apex-pdp | sasl.oauthbearer.token.endpoint.url = null
16:25:57 policy-apex-pdp | security.protocol = PLAINTEXT
16:25:57 policy-apex-pdp | security.providers = null
16:25:57 policy-apex-pdp | send.buffer.bytes = 131072
16:25:57 policy-apex-pdp | session.timeout.ms = 45000
16:25:57 policy-apex-pdp | socket.connection.setup.timeout.max.ms = 30000
16:25:57 policy-apex-pdp | socket.connection.setup.timeout.ms = 10000
16:25:57 policy-apex-pdp | ssl.cipher.suites = null
16:25:57 policy-apex-pdp | ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
16:25:57 policy-apex-pdp | ssl.endpoint.identification.algorithm = https
16:25:57 policy-apex-pdp | ssl.engine.factory.class = null
16:25:57 policy-apex-pdp | ssl.key.password = null
16:25:57 policy-apex-pdp | ssl.keymanager.algorithm = SunX509
16:25:57 policy-apex-pdp | ssl.keystore.certificate.chain = null
16:25:57 policy-apex-pdp | ssl.keystore.key = null
16:25:57 policy-db-migrator | Waiting for mariadb port 3306...
16:25:57 policy-db-migrator | nc: connect to mariadb (172.17.0.2) port 3306 (tcp) failed: Connection refused
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.54647705Z level=info msg="Executing migration" id="create data_source table"
16:25:57 policy-apex-pdp | ssl.keystore.location = null
16:25:57 simulator | Policy simulator config file: /opt/app/policy/simulators/etc/mounted/simParameters.json
16:25:57 policy-pap | Waiting for mariadb port 3306...
16:25:57 policy-api | [2024-02-21T16:23:34.583+00:00|INFO|SpringPersistenceUnitInfo|main] No LoadTimeWeaver setup: ignoring JPA class transformer
16:25:57 kafka | [2024-02-21 16:23:16,085] INFO Client environment:java.class.path=/usr/bin/../share/java/kafka/scala-logging_2.13-3.9.4.jar:/usr/bin/../share/java/kafka/jersey-common-2.39.1.jar:/usr/bin/../share/java/kafka/swagger-annotations-2.2.8.jar:/usr/bin/../share/java/kafka/jose4j-0.9.3.jar:/usr/bin/../share/java/kafka/commons-validator-1.7.jar:/usr/bin/../share/java/kafka/javax.servlet-api-3.1.0.jar:/usr/bin/../share/java/kafka/aopalliance-repackaged-2.6.1.jar:/usr/bin/../share/java/kafka/netty-transport-4.1.100.Final.jar:/usr/bin/../share/java/kafka/rocksdbjni-7.9.2.jar:/usr/bin/../share/java/kafka/jackson-annotations-2.13.5.jar:/usr/bin/../share/java/kafka/commons-io-2.11.0.jar:/usr/bin/../share/java/kafka/javax.activation-api-1.2.0.jar:/usr/bin/../share/java/kafka/connect-mirror-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/commons-cli-1.4.jar:/usr/bin/../share/java/kafka/slf4j-reload4j-1.7.36.jar:/usr/bin/../share/java/kafka/scala-reflect-2.13.11.jar:/usr/bin/../share/java/kafka/netty-buffer-4.1.100.Final.jar:/usr/bin/../share/java/kafka/jline-3.22.0.jar:/usr/bin/../share/java/kafka/jakarta.ws.rs-api-2.1.6.jar:/usr/bin/../share/java/kafka/jakarta.annotation-api-1.3.5.jar:/usr/bin/../share/java/kafka/scala-java8-compat_2.13-1.0.2.jar:/usr/bin/../share/java/kafka/javax.ws.rs-api-2.1.1.jar:/usr/bin/../share/java/kafka/kafka-server-common-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-tools-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-log4j-appender-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/connect-json-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/netty-resolver-4.1.100.Final.jar:/usr/bin/../share/java/kafka/hk2-api-2.6.1.jar:/usr/bin/../share/java/kafka/zstd-jni-1.5.5-1.jar:/usr/bin/../share/java/kafka/jetty-http-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/jackson-dataformat-csv-2.13.5.jar:/usr/bin/../share/java/kafka/kafka.jar:/usr/bin/../share/java/kafka/jetty-servlets-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/scala-library-2.13.11.jar:/usr/bin/../share/java/kafka/jakarta.inject-2.6.1.jar:/usr/bin/../share/java/kafka/jetty-continuation-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/jakarta.xml.bind-api-2.3.3.jar:/usr/bin/../share/java/kafka/connect-basic-auth-extension-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/hk2-locator-2.6.1.jar:/usr/bin/../share/java/kafka/reflections-0.10.2.jar:/usr/bin/../share/java/kafka/slf4j-api-1.7.36.jar:/usr/bin/../share/java/kafka/jetty-server-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/paranamer-2.8.jar:/usr/bin/../share/java/kafka/commons-beanutils-1.9.4.jar:/usr/bin/../share/java/kafka/jaxb-api-2.3.1.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-2.39.1.jar:/usr/bin/../share/java/kafka/hk2-utils-2.6.1.jar:/usr/bin/../share/java/kafka/jackson-module-scala_2.13-2.13.5.jar:/usr/bin/../share/java/kafka/kafka-streams-examples-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/reload4j-1.2.25.jar:/usr/bin/../share/java/kafka/connect-transforms-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/jackson-core-2.13.5.jar:/usr/bin/../share/java/kafka/jersey-hk2-2.39.1.jar:/usr/bin/../share/java/kafka/jackson-databind-2.13.5.jar:/usr/bin/../share/java/kafka/jersey-client-2.39.1.jar:/usr/bin/../share/java/kafka/jetty-io-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/osgi-resource-locator-1.0.3.jar:/usr/bin/../share/java/kafka/connect-api-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/commons-digester-2.1.jar:/usr/bin/../share/java/kafka/netty-transport-native-epoll-4.1.100.Final.jar:/usr/bin/../share/java/kafka/argparse4j-0.7.0.jar:/usr/bin/../share/java/kafka/jackson-datatype-jdk8-2.13.5.jar:/usr/bin/../share/java/kafka/kafka-raft-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-streams-scala_2.13-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-group-coordinator-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/audience-annotations-0.12.0.jar:/usr/bin/../share/java/kafka/jackson-module-jaxb-annotations-2.13.5.jar:/usr/bin/../share/java/kafka/javax.annotation-api-1.3.2.jar:/usr/bin/../share/java/kafka/kafka-storage-api-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/maven-artifact-3.8.8.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-json-provider-2.13.5.jar:/usr/bin/../share/java/kafka/kafka-metadata-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/jakarta.validation-api-2.0.2.jar:/usr/bin/../share/java/kafka/jersey-server-2.39.1.jar:/usr/bin/../share/java/kafka/commons-lang3-3.8.1.jar:/usr/bin/../share/java/kafka/kafka-streams-test-utils-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/jopt-simple-5.0.4.jar:/usr/bin/../share/java/kafka/error_prone_annotations-2.10.0.jar:/usr/bin/../share/java/kafka/lz4-java-1.8.0.jar:/usr/bin/../share/java/kafka/jakarta.activation-api-1.2.2.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-core-2.39.1.jar:/usr/bin/../share/java/kafka/checker-qual-3.19.0.jar:/usr/bin/../share/java/kafka/connect-mirror-client-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/pcollections-4.0.1.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-base-2.13.5.jar:/usr/bin/../share/java/kafka/jetty-client-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/jetty-util-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/kafka-tools-api-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-streams-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka_2.13-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-clients-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/commons-logging-1.2.jar:/usr/bin/../share/java/kafka/jsr305-3.0.2.jar:/usr/bin/../share/java/kafka/trogdor-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-shell-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/netty-codec-4.1.100.Final.jar:/usr/bin/../share/java/kafka/netty-transport-native-unix-common-4.1.100.Final.jar:/usr/bin/../share/java/kafka/scala-collection-compat_2.13-2.10.0.jar:/usr/bin/../share/java/kafka/metrics-core-2.2.0.jar:/usr/bin/../share/java/kafka/zookeeper-jute-3.8.3.jar:/usr/bin/../share/java/kafka/connect-runtime-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-storage-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/netty-handler-4.1.100.Final.jar:/usr/bin/../share/java/kafka/commons-collections-3.2.2.jar:/usr/bin/../share/java/kafka/javassist-3.29.2-GA.jar:/usr/bin/../share/java/kafka/caffeine-2.9.3.jar:/usr/bin/../share/java/kafka/plexus-utils-3.3.1.jar:/usr/bin/../share/java/kafka/zookeeper-3.8.3.jar:/usr/bin/../share/java/kafka/activation-1.1.1.jar:/usr/bin/../share/java/kafka/netty-common-4.1.100.Final.jar:/usr/bin/../share/java/kafka/metrics-core-4.1.12.1.jar:/usr/bin/../share/java/kafka/jetty-servlet-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/jetty-util-ajax-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/snappy-java-1.1.10.5.jar:/usr/bin/../share/java/kafka/netty-transport-classes-epoll-4.1.100.Final.jar:/usr/bin/../share/java/kafka/jetty-security-9.4.53.v20231009.jar:/usr/bin/../share/java/confluent-telemetry/* (org.apache.zookeeper.ZooKeeper)
16:25:57 policy-db-migrator | nc: connect to mariadb (172.17.0.2) port 3306 (tcp) failed: Connection refused
16:25:57 policy-apex-pdp | ssl.keystore.password = null
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.547260125Z level=info msg="Migration successfully executed" id="create data_source table" duration=783.045µs
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.551289709Z level=info msg="Executing migration" id="add index data_source.account_id"
16:25:57 policy-pap | mariadb (172.17.0.2:3306) open
16:25:57 policy-api | [2024-02-21T16:23:34.611+00:00|INFO|HikariDataSource|main] HikariPool-1 - Starting...
16:25:57 kafka | [2024-02-21 16:23:16,085] INFO Client environment:java.library.path=/usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.ZooKeeper) 16:25:57 policy-db-migrator | nc: connect to mariadb (172.17.0.2) port 3306 (tcp) failed: Connection refused 16:25:57 policy-apex-pdp | ssl.keystore.type = JKS 16:25:57 simulator | overriding logback.xml 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.552043314Z level=info msg="Migration successfully executed" id="add index data_source.account_id" duration=753.625µs 16:25:57 policy-pap | Waiting for kafka port 9092... 16:25:57 policy-api | [2024-02-21T16:23:34.741+00:00|INFO|HikariPool|main] HikariPool-1 - Added connection org.mariadb.jdbc.Connection@63b3ee82 16:25:57 kafka | [2024-02-21 16:23:16,085] INFO Client environment:java.io.tmpdir=/tmp (org.apache.zookeeper.ZooKeeper) 16:25:57 policy-db-migrator | nc: connect to mariadb (172.17.0.2) port 3306 (tcp) failed: Connection refused 16:25:57 policy-apex-pdp | ssl.protocol = TLSv1.3 16:25:57 simulator | 2024-02-21 16:23:12,415 INFO replacing 'HOST_NAME' with simulator in /opt/app/policy/simulators/etc/mounted/simParameters.json 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.555157793Z level=info msg="Executing migration" id="add unique index data_source.account_id_name" 16:25:57 policy-pap | kafka (172.17.0.6:9092) open 16:25:57 policy-api | [2024-02-21T16:23:34.745+00:00|INFO|HikariDataSource|main] HikariPool-1 - Start completed. 
16:25:57 kafka | [2024-02-21 16:23:16,085] INFO Client environment:java.compiler= (org.apache.zookeeper.ZooKeeper) 16:25:57 policy-db-migrator | nc: connect to mariadb (172.17.0.2) port 3306 (tcp) failed: Connection refused 16:25:57 policy-apex-pdp | ssl.provider = null 16:25:57 simulator | 2024-02-21 16:23:12,476 INFO org.onap.policy.models.simulators starting 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.555945007Z level=info msg="Migration successfully executed" id="add unique index data_source.account_id_name" duration=786.694µs 16:25:57 policy-pap | Waiting for api port 6969... 16:25:57 policy-api | [2024-02-21T16:23:36.915+00:00|INFO|JtaPlatformInitiator|main] HHH000489: No JTA platform available (set 'hibernate.transaction.jta.platform' to enable JTA platform integration) 16:25:57 kafka | [2024-02-21 16:23:16,085] INFO Client environment:os.name=Linux (org.apache.zookeeper.ZooKeeper) 16:25:57 policy-db-migrator | nc: connect to mariadb (172.17.0.2) port 3306 (tcp) failed: Connection refused 16:25:57 policy-apex-pdp | ssl.secure.random.implementation = null 16:25:57 simulator | 2024-02-21 16:23:12,477 INFO org.onap.policy.models.simulators starting CDS gRPC Server Properties 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.559889572Z level=info msg="Executing migration" id="drop index IDX_data_source_account_id - v1" 16:25:57 policy-pap | api (172.17.0.9:6969) open 16:25:57 policy-api | [2024-02-21T16:23:36.919+00:00|INFO|LocalContainerEntityManagerFactoryBean|main] Initialized JPA EntityManagerFactory for persistence unit 'default' 16:25:57 kafka | [2024-02-21 16:23:16,085] INFO Client environment:os.arch=amd64 (org.apache.zookeeper.ZooKeeper) 16:25:57 policy-db-migrator | Connection to mariadb (172.17.0.2) 3306 port [tcp/mysql] succeeded! 
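The repeated `policy-db-migrator` lines above ("nc: connect to mariadb (172.17.0.2) port 3306 (tcp) failed: Connection refused", eventually followed by "succeeded!") show a classic wait-for-port startup gate. A minimal sketch of that pattern is below; the function name, retry count, and delay are assumptions for illustration, not the actual migrator script.

```shell
#!/bin/sh
# Hedged sketch of the retry loop implied by the policy-db-migrator log output.
# wait_for_port, RETRIES, and DELAY are illustrative names, not taken from the job.
wait_for_port() {
  host="$1"; port="$2"; retries="${3:-30}"; delay="${4:-2}"
  i=0
  while [ "$i" -lt "$retries" ]; do
    # nc -z probes the TCP port without sending data; exit 0 means it is open
    if nc -z "$host" "$port" 2>/dev/null; then
      echo "Connection to $host ($port) port succeeded!"
      return 0
    fi
    echo "nc: connect to $host port $port (tcp) failed: Connection refused"
    i=$((i + 1))
    sleep "$delay"
  done
  return 1
}
```

In a setup like this one, the migrator would call something like `wait_for_port mariadb 3306` before applying schema upgrades, so migrations never race the database container's startup.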
16:25:57 policy-apex-pdp | ssl.trustmanager.algorithm = PKIX 16:25:57 simulator | 2024-02-21 16:23:12,668 INFO org.onap.policy.models.simulators starting org.onap.policy.simulators.AaiSimulatorJaxRs_RESOURCE_LOCATION 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.560645547Z level=info msg="Migration successfully executed" id="drop index IDX_data_source_account_id - v1" duration=792.325µs 16:25:57 policy-pap | Policy pap config file: /opt/app/policy/pap/etc/papParameters.yaml 16:25:57 policy-api | [2024-02-21T16:23:37.989+00:00|WARN|ApiDatabaseInitializer|main] Detected multi-versioned type: policytypes/onap.policies.monitoring.tcagen2.v2.yaml 16:25:57 kafka | [2024-02-21 16:23:16,085] INFO Client environment:os.version=4.15.0-192-generic (org.apache.zookeeper.ZooKeeper) 16:25:57 policy-db-migrator | 321 blocks 16:25:57 policy-apex-pdp | ssl.truststore.certificates = null 16:25:57 simulator | 2024-02-21 16:23:12,669 INFO org.onap.policy.models.simulators starting A&AI simulator 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.564651961Z level=info msg="Executing migration" id="drop index UQE_data_source_account_id_name - v1" 16:25:57 policy-pap | PDP group configuration file: /opt/app/policy/pap/etc/mounted/groups.json 16:25:57 policy-api | [2024-02-21T16:23:38.888+00:00|INFO|ApiDatabaseInitializer|main] Multi-versioned Service Template [onap.policies.Monitoring, onap.policies.monitoring.tcagen2] 16:25:57 kafka | [2024-02-21 16:23:16,085] INFO Client environment:user.name=appuser (org.apache.zookeeper.ZooKeeper) 16:25:57 policy-db-migrator | Preparing upgrade release version: 0800 16:25:57 policy-apex-pdp | ssl.truststore.location = null 16:25:57 simulator | 2024-02-21 16:23:12,781 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-5c18016b==org.glassfish.jersey.servlet.ServletContainer@266aec30{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, 
toString()=JettyServletServer(name=A&AI simulator, host=0.0.0.0, port=6666, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@45905bff{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@2a2c13a8{/,null,STOPPED}, connector=A&AI simulator@54a67a45{HTTP/1.1, (http/1.1)}{0.0.0.0:6666}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-5c18016b==org.glassfish.jersey.servlet.ServletContainer@266aec30{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: WAITED-START 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.565451996Z level=info msg="Migration successfully executed" id="drop index UQE_data_source_account_id_name - v1" duration=799.695µs 16:25:57 policy-pap | 16:25:57 policy-api | [2024-02-21T16:23:40.133+00:00|WARN|JpaBaseConfiguration$JpaWebConfiguration|main] spring.jpa.open-in-view is enabled by default. Therefore, database queries may be performed during view rendering. Explicitly configure spring.jpa.open-in-view to disable this warning 16:25:57 kafka | [2024-02-21 16:23:16,085] INFO Client environment:user.home=/home/appuser (org.apache.zookeeper.ZooKeeper) 16:25:57 policy-db-migrator | Preparing upgrade release version: 0900 16:25:57 policy-apex-pdp | ssl.truststore.password = null 16:25:57 simulator | 2024-02-21 16:23:12,792 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-5c18016b==org.glassfish.jersey.servlet.ServletContainer@266aec30{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=A&AI simulator, host=0.0.0.0, port=6666, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@45905bff{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@2a2c13a8{/,null,STOPPED}, connector=A&AI simulator@54a67a45{HTTP/1.1, (http/1.1)}{0.0.0.0:6666}, jettyThread=null, 
servlets={/*=org.glassfish.jersey.servlet.ServletContainer-5c18016b==org.glassfish.jersey.servlet.ServletContainer@266aec30{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.568849787Z level=info msg="Executing migration" id="Rename table data_source to data_source_v1 - v1" 16:25:57 policy-pap | . ____ _ __ _ _ 16:25:57 policy-api | [2024-02-21T16:23:40.342+00:00|INFO|DefaultSecurityFilterChain|main] Will secure any request with [org.springframework.security.web.session.DisableEncodeUrlFilter@78b9d614, org.springframework.security.web.context.request.async.WebAsyncManagerIntegrationFilter@63f95ac1, org.springframework.security.web.context.SecurityContextHolderFilter@4b7feb38, org.springframework.security.web.header.HeaderWriterFilter@31829b82, org.springframework.security.web.authentication.logout.LogoutFilter@7b1466e3, org.springframework.security.web.authentication.www.BasicAuthenticationFilter@31475919, org.springframework.security.web.savedrequest.RequestCacheAwareFilter@700f356b, org.springframework.security.web.servletapi.SecurityContextHolderAwareRequestFilter@712ce4f1, org.springframework.security.web.authentication.AnonymousAuthenticationFilter@14a7e16e, org.springframework.security.web.access.ExceptionTranslationFilter@3005133e, org.springframework.security.web.access.intercept.AuthorizationFilter@285bee4e] 16:25:57 kafka | [2024-02-21 16:23:16,085] INFO Client environment:user.dir=/home/appuser (org.apache.zookeeper.ZooKeeper) 16:25:57 policy-db-migrator | Preparing upgrade release version: 1000 16:25:57 policy-apex-pdp | ssl.truststore.type = JKS 16:25:57 simulator | 2024-02-21 16:23:12,794 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-5c18016b==org.glassfish.jersey.servlet.ServletContainer@266aec30{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=A&AI 
simulator, host=0.0.0.0, port=6666, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@45905bff{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@2a2c13a8{/,null,STOPPED}, connector=A&AI simulator@54a67a45{HTTP/1.1, (http/1.1)}{0.0.0.0:6666}, jettyThread=Thread[A&AI simulator-6666,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-5c18016b==org.glassfish.jersey.servlet.ServletContainer@266aec30{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.577303889Z level=info msg="Migration successfully executed" id="Rename table data_source to data_source_v1 - v1" duration=8.452732ms 16:25:57 policy-pap | /\\ / ___'_ __ _ _(_)_ __ __ _ \ \ \ \ 16:25:57 policy-api | [2024-02-21T16:23:41.192+00:00|INFO|EndpointLinksResolver|main] Exposing 3 endpoint(s) beneath base path '' 16:25:57 kafka | [2024-02-21 16:23:16,085] INFO Client environment:os.memory.free=1007MB (org.apache.zookeeper.ZooKeeper) 16:25:57 policy-db-migrator | Preparing upgrade release version: 1100 16:25:57 policy-apex-pdp | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 16:25:57 simulator | 2024-02-21 16:23:12,814 INFO jetty-11.0.20; built: 2024-01-29T21:04:22.394Z; git: 922f8dc188f7011e60d0361de585fd4ac4d63064; jvm 17.0.10+7-alpine-r0 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.581763527Z level=info msg="Executing migration" id="create data_source table v2" 16:25:57 policy-pap | ( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \ 16:25:57 policy-api | [2024-02-21T16:23:41.288+00:00|INFO|Http11NioProtocol|main] Starting ProtocolHandler ["http-nio-6969"] 16:25:57 kafka | [2024-02-21 16:23:16,085] INFO Client environment:os.memory.max=1024MB (org.apache.zookeeper.ZooKeeper) 16:25:57 policy-db-migrator | Preparing upgrade release version: 1200 16:25:57 policy-apex-pdp | 16:25:57 simulator | 2024-02-21 16:23:12,867 INFO Session 
workerName=node0 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.582623252Z level=info msg="Migration successfully executed" id="create data_source table v2" duration=859.755µs 16:25:57 policy-pap | \\/ ___)| |_)| | | | | || (_| | ) ) ) ) 16:25:57 policy-api | [2024-02-21T16:23:41.310+00:00|INFO|TomcatWebServer|main] Tomcat started on port(s): 6969 (http) with context path '/policy/api/v1' 16:25:57 kafka | [2024-02-21 16:23:16,085] INFO Client environment:os.memory.total=1024MB (org.apache.zookeeper.ZooKeeper) 16:25:57 policy-db-migrator | Preparing upgrade release version: 1300 16:25:57 policy-apex-pdp | [2024-02-21T16:23:56.170+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 16:25:57 simulator | 2024-02-21 16:23:13,502 INFO Using GSON for REST calls 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.585854602Z level=info msg="Executing migration" id="create index IDX_data_source_org_id - v2" 16:25:57 policy-pap | ' |____| .__|_| |_|_| |_\__, | / / / / 16:25:57 policy-api | [2024-02-21T16:23:41.326+00:00|INFO|PolicyApiApplication|main] Started PolicyApiApplication in 11.643 seconds (process running for 12.354) 16:25:57 kafka | [2024-02-21 16:23:16,087] INFO Initiating client connection, connectString=zookeeper:2181 sessionTimeout=18000 watcher=kafka.zookeeper.ZooKeeperClient$ZooKeeperClientWatcher$@1f6c9cd8 (org.apache.zookeeper.ZooKeeper) 16:25:57 policy-db-migrator | Done 16:25:57 policy-apex-pdp | [2024-02-21T16:23:56.171+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 16:25:57 simulator | 2024-02-21 16:23:13,575 INFO Started o.e.j.s.ServletContextHandler@2a2c13a8{/,null,AVAILABLE} 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.586716117Z level=info msg="Migration successfully executed" id="create index IDX_data_source_org_id - v2" duration=861.445µs 16:25:57 policy-pap | =========|_|==============|___/=/_/_/_/ 16:25:57 policy-api | [2024-02-21T16:24:02.640+00:00|INFO|[/policy/api/v1]|http-nio-6969-exec-2] 
Initializing Spring DispatcherServlet 'dispatcherServlet' 16:25:57 kafka | [2024-02-21 16:23:16,090] INFO jute.maxbuffer value is 4194304 Bytes (org.apache.zookeeper.ClientCnxnSocket) 16:25:57 policy-db-migrator | name version 16:25:57 policy-apex-pdp | [2024-02-21T16:23:56.171+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1708532636170 16:25:57 simulator | 2024-02-21 16:23:13,584 INFO Started A&AI simulator@54a67a45{HTTP/1.1, (http/1.1)}{0.0.0.0:6666} 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.590083818Z level=info msg="Executing migration" id="create index UQE_data_source_org_id_name - v2" 16:25:57 policy-pap | :: Spring Boot :: (v3.1.8) 16:25:57 policy-api | [2024-02-21T16:24:02.640+00:00|INFO|DispatcherServlet|http-nio-6969-exec-2] Initializing Servlet 'dispatcherServlet' 16:25:57 kafka | [2024-02-21 16:23:16,096] INFO zookeeper.request.timeout value is 0. feature enabled=false (org.apache.zookeeper.ClientCnxn) 16:25:57 policy-db-migrator | policyadmin 0 16:25:57 policy-apex-pdp | [2024-02-21T16:23:56.171+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-8aee6ac5-f217-4030-aeed-72326ff1d45e-2, groupId=8aee6ac5-f217-4030-aeed-72326ff1d45e] Subscribed to topic(s): policy-pdp-pap 16:25:57 simulator | 2024-02-21 16:23:13,590 INFO Started Server@45905bff{STARTING}[11.0.20,sto=0] @1703ms 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.590965443Z level=info msg="Migration successfully executed" id="create index UQE_data_source_org_id_name - v2" duration=880.825µs 16:25:57 policy-pap | 16:25:57 policy-api | [2024-02-21T16:24:02.642+00:00|INFO|DispatcherServlet|http-nio-6969-exec-2] Completed initialization in 2 ms 16:25:57 policy-db-migrator | policyadmin: upgrade available: 0 -> 1300 16:25:57 policy-apex-pdp | [2024-02-21T16:23:56.172+00:00|INFO|InlineBusTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=78260019-42ca-4952-996d-a0a6c2bb6a4e, alive=false, 
publisher=null]]: starting 16:25:57 kafka | [2024-02-21 16:23:16,098] INFO [ZooKeeperClient Kafka server] Waiting until connected. (kafka.zookeeper.ZooKeeperClient) 16:25:57 simulator | 2024-02-21 16:23:13,591 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-5c18016b==org.glassfish.jersey.servlet.ServletContainer@266aec30{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}}, swaggerId=null, toString()=JettyServletServer(name=A&AI simulator, host=0.0.0.0, port=6666, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@45905bff{STARTED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@2a2c13a8{/,null,AVAILABLE}, connector=A&AI simulator@54a67a45{HTTP/1.1, (http/1.1)}{0.0.0.0:6666}, jettyThread=Thread[A&AI simulator-6666,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-5c18016b==org.glassfish.jersey.servlet.ServletContainer@266aec30{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}})]: pending time is 4203 ms. 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.594783517Z level=info msg="Executing migration" id="Drop old table data_source_v1 #2" 16:25:57 policy-pap | [2024-02-21T16:23:44.141+00:00|INFO|PolicyPapApplication|main] Starting PolicyPapApplication using Java 17.0.10 with PID 35 (/app/pap.jar started by policy in /opt/app/policy/pap/bin) 16:25:57 policy-api | [2024-02-21T16:24:02.927+00:00|INFO|OrderedServiceImpl|http-nio-6969-exec-2] ***** OrderedServiceImpl implementers: 16:25:57 policy-db-migrator | upgrade: 0 -> 1300 16:25:57 policy-apex-pdp | [2024-02-21T16:23:56.185+00:00|INFO|ProducerConfig|main] ProducerConfig values: 16:25:57 kafka | [2024-02-21 16:23:16,104] INFO Opening socket connection to server zookeeper/172.17.0.3:2181. 
(org.apache.zookeeper.ClientCnxn) 16:25:57 simulator | 2024-02-21 16:23:13,605 INFO org.onap.policy.models.simulators starting SDNC simulator 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.59533409Z level=info msg="Migration successfully executed" id="Drop old table data_source_v1 #2" duration=550.313µs 16:25:57 policy-pap | [2024-02-21T16:23:44.143+00:00|INFO|PolicyPapApplication|main] No active profile set, falling back to 1 default profile: "default" 16:25:57 policy-api | [] 16:25:57 policy-db-migrator | 16:25:57 policy-apex-pdp | acks = -1 16:25:57 kafka | [2024-02-21 16:23:16,113] INFO Socket connection established, initiating session, client: /172.17.0.6:55254, server: zookeeper/172.17.0.3:2181 (org.apache.zookeeper.ClientCnxn) 16:25:57 simulator | 2024-02-21 16:23:13,608 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-7e22550a==org.glassfish.jersey.servlet.ServletContainer@615c5c2d{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SDNC simulator, host=0.0.0.0, port=6668, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@45e37a7e{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@62452cc9{/,null,STOPPED}, connector=SDNC simulator@78fbff54{HTTP/1.1, (http/1.1)}{0.0.0.0:6668}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-7e22550a==org.glassfish.jersey.servlet.ServletContainer@615c5c2d{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: WAITED-START 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.59851513Z level=info msg="Executing migration" id="Add column with_credentials" 16:25:57 policy-pap | [2024-02-21T16:23:46.112+00:00|INFO|RepositoryConfigurationDelegate|main] Bootstrapping Spring Data JPA repositories in DEFAULT mode. 
16:25:57 policy-db-migrator | > upgrade 0100-jpapdpgroup_properties.sql 16:25:57 policy-apex-pdp | auto.include.jmx.reporter = true 16:25:57 kafka | [2024-02-21 16:23:16,120] INFO Session establishment complete on server zookeeper/172.17.0.3:2181, session id = 0x100000493b70001, negotiated timeout = 18000 (org.apache.zookeeper.ClientCnxn) 16:25:57 simulator | 2024-02-21 16:23:13,608 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-7e22550a==org.glassfish.jersey.servlet.ServletContainer@615c5c2d{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SDNC simulator, host=0.0.0.0, port=6668, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@45e37a7e{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@62452cc9{/,null,STOPPED}, connector=SDNC simulator@78fbff54{HTTP/1.1, (http/1.1)}{0.0.0.0:6668}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-7e22550a==org.glassfish.jersey.servlet.ServletContainer@615c5c2d{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.602099552Z level=info msg="Migration successfully executed" id="Add column with_credentials" duration=3.582362ms 16:25:57 policy-pap | [2024-02-21T16:23:46.228+00:00|INFO|RepositoryConfigurationDelegate|main] Finished Spring Data repository scanning in 105 ms. Found 7 JPA repository interfaces. 16:25:57 policy-db-migrator | -------------- 16:25:57 policy-apex-pdp | batch.size = 16384 16:25:57 kafka | [2024-02-21 16:23:16,128] INFO [ZooKeeperClient Kafka server] Connected. 
(kafka.zookeeper.ZooKeeperClient) 16:25:57 simulator | 2024-02-21 16:23:13,612 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-7e22550a==org.glassfish.jersey.servlet.ServletContainer@615c5c2d{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SDNC simulator, host=0.0.0.0, port=6668, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@45e37a7e{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@62452cc9{/,null,STOPPED}, connector=SDNC simulator@78fbff54{HTTP/1.1, (http/1.1)}{0.0.0.0:6668}, jettyThread=Thread[SDNC simulator-6668,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-7e22550a==org.glassfish.jersey.servlet.ServletContainer@615c5c2d{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.605264441Z level=info msg="Executing migration" id="Add secure json data column" 16:25:57 policy-pap | [2024-02-21T16:23:46.662+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. 
Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.exception.ServiceExceptionHandler 16:25:57 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpgroup_properties (name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, PROPERTIES VARCHAR(255) DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) DEFAULT NULL) 16:25:57 policy-apex-pdp | bootstrap.servers = [kafka:9092] 16:25:57 kafka | [2024-02-21 16:23:16,460] INFO Cluster ID = uKz8K1qZQP67IEMis280Uw (kafka.server.KafkaServer) 16:25:57 simulator | 2024-02-21 16:23:13,612 INFO jetty-11.0.20; built: 2024-01-29T21:04:22.394Z; git: 922f8dc188f7011e60d0361de585fd4ac4d63064; jvm 17.0.10+7-alpine-r0 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.607599176Z level=info msg="Migration successfully executed" id="Add secure json data column" duration=2.334355ms 16:25:57 policy-pap | [2024-02-21T16:23:46.663+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. 
Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.exception.ServiceExceptionHandler 16:25:57 policy-db-migrator | -------------- 16:25:57 policy-apex-pdp | buffer.memory = 33554432 16:25:57 kafka | [2024-02-21 16:23:16,463] WARN No meta.properties file under dir /var/lib/kafka/data/meta.properties (kafka.server.BrokerMetadataCheckpoint) 16:25:57 simulator | 2024-02-21 16:23:13,620 INFO Session workerName=node0 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.61152553Z level=info msg="Executing migration" id="Update data_source table charset" 16:25:57 policy-pap | [2024-02-21T16:23:47.442+00:00|INFO|TomcatWebServer|main] Tomcat initialized with port(s): 6969 (http) 16:25:57 policy-db-migrator | 16:25:57 policy-apex-pdp | client.dns.lookup = use_all_dns_ips 16:25:57 kafka | [2024-02-21 16:23:16,514] INFO KafkaConfig values: 16:25:57 simulator | 2024-02-21 16:23:13,694 INFO Using GSON for REST calls 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.611551381Z level=info msg="Migration successfully executed" id="Update data_source table charset" duration=26.491µs 16:25:57 policy-pap | [2024-02-21T16:23:47.456+00:00|INFO|Http11NioProtocol|main] Initializing ProtocolHandler ["http-nio-6969"] 16:25:57 policy-db-migrator | 16:25:57 policy-apex-pdp | client.id = producer-1 16:25:57 kafka | advertised.listeners = PLAINTEXT://kafka:9092,PLAINTEXT_HOST://localhost:29092 16:25:57 simulator | 2024-02-21 16:23:13,705 INFO Started o.e.j.s.ServletContextHandler@62452cc9{/,null,AVAILABLE} 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.615082361Z level=info msg="Executing migration" id="Update initial version to 1" 16:25:57 policy-pap | [2024-02-21T16:23:47.460+00:00|INFO|StandardService|main] Starting service [Tomcat] 16:25:57 policy-db-migrator | > upgrade 0110-jpapdpstatistics_enginestats.sql 16:25:57 policy-apex-pdp | compression.type = none 16:25:57 kafka | alter.config.policy.class.name = null 
16:25:57 simulator | 2024-02-21 16:23:13,706 INFO Started SDNC simulator@78fbff54{HTTP/1.1, (http/1.1)}{0.0.0.0:6668} 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.615260843Z level=info msg="Migration successfully executed" id="Update initial version to 1" duration=170.811µs 16:25:57 policy-pap | [2024-02-21T16:23:47.460+00:00|INFO|StandardEngine|main] Starting Servlet engine: [Apache Tomcat/10.1.18] 16:25:57 policy-db-migrator | -------------- 16:25:57 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpstatistics_enginestats (AVERAGEEXECUTIONTIME DOUBLE DEFAULT NULL, ENGINEID VARCHAR(255) DEFAULT NULL, ENGINETIMESTAMP BIGINT DEFAULT NULL, ENGINEWORKERSTATE INT DEFAULT NULL, EVENTCOUNT BIGINT DEFAULT NULL, LASTENTERTIME BIGINT DEFAULT NULL, LASTEXECUTIONTIME BIGINT DEFAULT NULL, LASTSTART BIGINT DEFAULT NULL, UPTIME BIGINT DEFAULT NULL, timeStamp datetime DEFAULT NULL, name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL) 16:25:57 kafka | alter.log.dirs.replication.quota.window.num = 11 16:25:57 kafka | alter.log.dirs.replication.quota.window.size.seconds = 1 16:25:57 kafka | authorizer.class.name = 16:25:57 simulator | 2024-02-21 16:23:13,707 INFO Started Server@45e37a7e{STARTING}[11.0.20,sto=0] @1819ms 16:25:57 policy-apex-pdp | connections.max.idle.ms = 540000 16:25:57 policy-db-migrator | -------------- 16:25:57 policy-db-migrator | 16:25:57 policy-pap | [2024-02-21T16:23:47.574+00:00|INFO|[/policy/pap/v1]|main] Initializing Spring embedded WebApplicationContext 16:25:57 kafka | auto.create.topics.enable = true 16:25:57 kafka | auto.include.jmx.reporter = true 16:25:57 policy-apex-pdp | delivery.timeout.ms = 120000 16:25:57 simulator | 2024-02-21 16:23:13,707 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-7e22550a==org.glassfish.jersey.servlet.ServletContainer@615c5c2d{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}}, swaggerId=null, toString()=JettyServletServer(name=SDNC 
simulator, host=0.0.0.0, port=6668, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@45e37a7e{STARTED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@62452cc9{/,null,AVAILABLE}, connector=SDNC simulator@78fbff54{HTTP/1.1, (http/1.1)}{0.0.0.0:6668}, jettyThread=Thread[SDNC simulator-6668,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-7e22550a==org.glassfish.jersey.servlet.ServletContainer@615c5c2d{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}})]: pending time is 4905 ms. 16:25:57 policy-db-migrator | 16:25:57 policy-pap | [2024-02-21T16:23:47.575+00:00|INFO|ServletWebServerApplicationContext|main] Root WebApplicationContext: initialization completed in 3337 ms 16:25:57 policy-db-migrator | > upgrade 0120-jpapdpsubgroup_policies.sql 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.618489723Z level=info msg="Executing migration" id="Add read_only data column" 16:25:57 policy-apex-pdp | enable.idempotence = true 16:25:57 simulator | 2024-02-21 16:23:13,708 INFO org.onap.policy.models.simulators starting SO simulator 16:25:57 kafka | auto.leader.rebalance.enable = true 16:25:57 policy-pap | [2024-02-21T16:23:48.127+00:00|INFO|LogHelper|main] HHH000204: Processing PersistenceUnitInfo [name: default] 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.620788857Z level=info msg="Migration successfully executed" id="Add read_only data column" duration=2.298984ms 16:25:57 policy-apex-pdp | interceptor.classes = [] 16:25:57 simulator | 2024-02-21 16:23:13,710 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-1021f6c9==org.glassfish.jersey.servlet.ServletContainer@fe7342f2{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SO simulator, host=0.0.0.0, port=6669, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@7516e4e5{STOPPED}[11.0.20,sto=0], 
context=o.e.j.s.ServletContextHandler@488eb7f2{/,null,STOPPED}, connector=SO simulator@5a7005d{HTTP/1.1, (http/1.1)}{0.0.0.0:6669}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-1021f6c9==org.glassfish.jersey.servlet.ServletContainer@fe7342f2{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: WAITED-START 16:25:57 kafka | background.threads = 10 16:25:57 policy-pap | [2024-02-21T16:23:48.254+00:00|INFO|Version|main] HHH000412: Hibernate ORM core version 6.3.0.CR1 16:25:57 policy-db-migrator | -------------- 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.624044887Z level=info msg="Executing migration" id="Migrate logging ds to loki ds" 16:25:57 policy-apex-pdp | key.serializer = class org.apache.kafka.common.serialization.StringSerializer 16:25:57 simulator | 2024-02-21 16:23:13,711 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-1021f6c9==org.glassfish.jersey.servlet.ServletContainer@fe7342f2{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SO simulator, host=0.0.0.0, port=6669, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@7516e4e5{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@488eb7f2{/,null,STOPPED}, connector=SO simulator@5a7005d{HTTP/1.1, (http/1.1)}{0.0.0.0:6669}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-1021f6c9==org.glassfish.jersey.servlet.ServletContainer@fe7342f2{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING 16:25:57 kafka | broker.heartbeat.interval.ms = 2000 16:25:57 policy-pap | [2024-02-21T16:23:48.258+00:00|INFO|Environment|main] HHH000406: Using bytecode reflection optimizer 16:25:57 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpsubgroup_policies (name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, parentLocalName VARCHAR(120) DEFAULT NULL, 
localName VARCHAR(120) DEFAULT NULL, parentKeyVersion VARCHAR(15) DEFAULT NULL, parentKeyName VARCHAR(120) DEFAULT NULL) 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.624216518Z level=info msg="Migration successfully executed" id="Migrate logging ds to loki ds" duration=169.891µs 16:25:57 policy-apex-pdp | linger.ms = 0 16:25:57 simulator | 2024-02-21 16:23:13,712 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-1021f6c9==org.glassfish.jersey.servlet.ServletContainer@fe7342f2{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SO simulator, host=0.0.0.0, port=6669, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@7516e4e5{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@488eb7f2{/,null,STOPPED}, connector=SO simulator@5a7005d{HTTP/1.1, (http/1.1)}{0.0.0.0:6669}, jettyThread=Thread[SO simulator-6669,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-1021f6c9==org.glassfish.jersey.servlet.ServletContainer@fe7342f2{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING 16:25:57 kafka | broker.id = 1 16:25:57 policy-pap | [2024-02-21T16:23:48.311+00:00|INFO|RegionFactoryInitiator|main] HHH000026: Second-level cache disabled 16:25:57 policy-db-migrator | -------------- 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.628206883Z level=info msg="Executing migration" id="Update json_data with nulls" 16:25:57 policy-apex-pdp | max.block.ms = 60000 16:25:57 simulator | 2024-02-21 16:23:13,712 INFO jetty-11.0.20; built: 2024-01-29T21:04:22.394Z; git: 922f8dc188f7011e60d0361de585fd4ac4d63064; jvm 17.0.10+7-alpine-r0 16:25:57 kafka | broker.id.generation.enable = true 16:25:57 policy-pap | [2024-02-21T16:23:48.686+00:00|INFO|SpringPersistenceUnitInfo|main] No LoadTimeWeaver setup: ignoring JPA class transformer 16:25:57 policy-db-migrator | 16:25:57 grafana | 
logger=migrator t=2024-02-21T16:23:14.628361034Z level=info msg="Migration successfully executed" id="Update json_data with nulls" duration=152.111µs 16:25:57 policy-apex-pdp | max.in.flight.requests.per.connection = 5 16:25:57 simulator | 2024-02-21 16:23:13,720 INFO Session workerName=node0 16:25:57 kafka | broker.rack = null 16:25:57 policy-pap | [2024-02-21T16:23:48.708+00:00|INFO|HikariDataSource|main] HikariPool-1 - Starting... 16:25:57 policy-db-migrator | 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.631382952Z level=info msg="Executing migration" id="Add uid column" 16:25:57 policy-apex-pdp | max.request.size = 1048576 16:25:57 simulator | 2024-02-21 16:23:13,776 INFO Using GSON for REST calls 16:25:57 kafka | broker.session.timeout.ms = 9000 16:25:57 policy-pap | [2024-02-21T16:23:48.839+00:00|INFO|HikariPool|main] HikariPool-1 - Added connection org.mariadb.jdbc.Connection@2def7a7a 16:25:57 policy-db-migrator | > upgrade 0130-jpapdpsubgroup_properties.sql 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.633609966Z level=info msg="Migration successfully executed" id="Add uid column" duration=2.227014ms 16:25:57 policy-apex-pdp | metadata.max.age.ms = 300000 16:25:57 simulator | 2024-02-21 16:23:13,789 INFO Started o.e.j.s.ServletContextHandler@488eb7f2{/,null,AVAILABLE} 16:25:57 kafka | client.quota.callback.class = null 16:25:57 policy-pap | [2024-02-21T16:23:48.841+00:00|INFO|HikariDataSource|main] HikariPool-1 - Start completed. 
16:25:57 policy-db-migrator | -------------- 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.636823846Z level=info msg="Executing migration" id="Update uid value" 16:25:57 policy-apex-pdp | metadata.max.idle.ms = 300000 16:25:57 simulator | 2024-02-21 16:23:13,790 INFO Started SO simulator@5a7005d{HTTP/1.1, (http/1.1)}{0.0.0.0:6669} 16:25:57 kafka | compression.type = producer 16:25:57 policy-pap | [2024-02-21T16:23:50.877+00:00|INFO|JtaPlatformInitiator|main] HHH000489: No JTA platform available (set 'hibernate.transaction.jta.platform' to enable JTA platform integration) 16:25:57 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpsubgroup_properties (parentLocalName VARCHAR(120) DEFAULT NULL, localName VARCHAR(120) DEFAULT NULL, parentKeyVersion VARCHAR(15) DEFAULT NULL, parentKeyName VARCHAR(120) DEFAULT NULL, PROPERTIES VARCHAR(255) DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) DEFAULT NULL) 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.637002446Z level=info msg="Migration successfully executed" id="Update uid value" duration=178.09µs 16:25:57 policy-apex-pdp | metric.reporters = [] 16:25:57 simulator | 2024-02-21 16:23:13,790 INFO Started Server@7516e4e5{STARTING}[11.0.20,sto=0] @1903ms 16:25:57 kafka | connection.failed.authentication.delay.ms = 100 16:25:57 policy-pap | [2024-02-21T16:23:50.882+00:00|INFO|LocalContainerEntityManagerFactoryBean|main] Initialized JPA EntityManagerFactory for persistence unit 'default' 16:25:57 policy-db-migrator | -------------- 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.640979711Z level=info msg="Executing migration" id="Add unique index datasource_org_id_uid" 16:25:57 policy-apex-pdp | metrics.num.samples = 2 16:25:57 simulator | 2024-02-21 16:23:13,790 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-1021f6c9==org.glassfish.jersey.servlet.ServletContainer@fe7342f2{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}}, swaggerId=null, 
toString()=JettyServletServer(name=SO simulator, host=0.0.0.0, port=6669, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@7516e4e5{STARTED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@488eb7f2{/,null,AVAILABLE}, connector=SO simulator@5a7005d{HTTP/1.1, (http/1.1)}{0.0.0.0:6669}, jettyThread=Thread[SO simulator-6669,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-1021f6c9==org.glassfish.jersey.servlet.ServletContainer@fe7342f2{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}})]: pending time is 4921 ms. 16:25:57 kafka | connections.max.idle.ms = 600000 16:25:57 policy-pap | [2024-02-21T16:23:51.439+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.repository.PdpGroupRepository 16:25:57 policy-db-migrator | 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.641847537Z level=info msg="Migration successfully executed" id="Add unique index datasource_org_id_uid" duration=867.206µs 16:25:57 policy-apex-pdp | metrics.recording.level = INFO 16:25:57 simulator | 2024-02-21 16:23:13,791 INFO org.onap.policy.models.simulators starting VFC simulator 16:25:57 kafka | connections.max.reauth.ms = 0 16:25:57 policy-pap | [2024-02-21T16:23:51.867+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. 
Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.repository.PolicyStatusRepository 16:25:57 policy-db-migrator | 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.645003426Z level=info msg="Executing migration" id="add unique index datasource_org_id_is_default" 16:25:57 policy-apex-pdp | metrics.sample.window.ms = 30000 16:25:57 simulator | 2024-02-21 16:23:13,795 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-3e7634b9==org.glassfish.jersey.servlet.ServletContainer@ca975b37{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=VFC simulator, host=0.0.0.0, port=6670, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@6f0b0a5e{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@6035b93b{/,null,STOPPED}, connector=VFC simulator@4189d70b{HTTP/1.1, (http/1.1)}{0.0.0.0:6670}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-3e7634b9==org.glassfish.jersey.servlet.ServletContainer@ca975b37{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: WAITED-START 16:25:57 kafka | control.plane.listener.name = null 16:25:57 policy-pap | [2024-02-21T16:23:51.960+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. 
Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.repository.PolicyAuditRepository 16:25:57 policy-db-migrator | > upgrade 0140-jpapdpsubgroup_supportedpolicytypes.sql 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.645904861Z level=info msg="Migration successfully executed" id="add unique index datasource_org_id_is_default" duration=841.135µs 16:25:57 policy-apex-pdp | partitioner.adaptive.partitioning.enable = true 16:25:57 simulator | 2024-02-21 16:23:13,796 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-3e7634b9==org.glassfish.jersey.servlet.ServletContainer@ca975b37{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=VFC simulator, host=0.0.0.0, port=6670, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@6f0b0a5e{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@6035b93b{/,null,STOPPED}, connector=VFC simulator@4189d70b{HTTP/1.1, (http/1.1)}{0.0.0.0:6670}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-3e7634b9==org.glassfish.jersey.servlet.ServletContainer@ca975b37{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING 16:25:57 kafka | controlled.shutdown.enable = true 16:25:57 policy-pap | [2024-02-21T16:23:52.270+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: 16:25:57 policy-db-migrator | -------------- 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.64902022Z level=info msg="Executing migration" id="create api_key table" 16:25:57 policy-apex-pdp | partitioner.availability.timeout.ms = 0 16:25:57 simulator | 2024-02-21 16:23:13,797 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-3e7634b9==org.glassfish.jersey.servlet.ServletContainer@ca975b37{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, 
toString()=JettyServletServer(name=VFC simulator, host=0.0.0.0, port=6670, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@6f0b0a5e{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@6035b93b{/,null,STOPPED}, connector=VFC simulator@4189d70b{HTTP/1.1, (http/1.1)}{0.0.0.0:6670}, jettyThread=Thread[VFC simulator-6670,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-3e7634b9==org.glassfish.jersey.servlet.ServletContainer@ca975b37{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING 16:25:57 kafka | controlled.shutdown.max.retries = 3 16:25:57 policy-pap | allow.auto.create.topics = true 16:25:57 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpsubgroup_supportedpolicytypes (name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, parentLocalName VARCHAR(120) DEFAULT NULL, localName VARCHAR(120) DEFAULT NULL, parentKeyVersion VARCHAR(15) DEFAULT NULL, parentKeyName VARCHAR(120) DEFAULT NULL) 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.649698265Z level=info msg="Migration successfully executed" id="create api_key table" duration=675.185µs 16:25:57 policy-apex-pdp | partitioner.class = null 16:25:57 simulator | 2024-02-21 16:23:13,797 INFO jetty-11.0.20; built: 2024-01-29T21:04:22.394Z; git: 922f8dc188f7011e60d0361de585fd4ac4d63064; jvm 17.0.10+7-alpine-r0 16:25:57 kafka | controlled.shutdown.retry.backoff.ms = 5000 16:25:57 policy-pap | auto.commit.interval.ms = 5000 16:25:57 policy-db-migrator | -------------- 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.653700219Z level=info msg="Executing migration" id="add index api_key.account_id" 16:25:57 policy-apex-pdp | partitioner.ignore.keys = false 16:25:57 simulator | 2024-02-21 16:23:13,802 INFO Session workerName=node0 16:25:57 kafka | controller.listener.names = null 16:25:57 policy-pap | auto.include.jmx.reporter = true 16:25:57 policy-db-migrator | 16:25:57 grafana | logger=migrator 
t=2024-02-21T16:23:14.654510394Z level=info msg="Migration successfully executed" id="add index api_key.account_id" duration=809.815µs 16:25:57 policy-apex-pdp | receive.buffer.bytes = 32768 16:25:57 simulator | 2024-02-21 16:23:13,846 INFO Using GSON for REST calls 16:25:57 kafka | controller.quorum.append.linger.ms = 25 16:25:57 policy-pap | auto.offset.reset = latest 16:25:57 policy-db-migrator | 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.657629923Z level=info msg="Executing migration" id="add index api_key.key" 16:25:57 policy-apex-pdp | reconnect.backoff.max.ms = 1000 16:25:57 simulator | 2024-02-21 16:23:13,855 INFO Started o.e.j.s.ServletContextHandler@6035b93b{/,null,AVAILABLE} 16:25:57 kafka | controller.quorum.election.backoff.max.ms = 1000 16:25:57 policy-pap | bootstrap.servers = [kafka:9092] 16:25:57 policy-db-migrator | > upgrade 0150-jpatoscacapabilityassignment_attributes.sql 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.658458489Z level=info msg="Migration successfully executed" id="add index api_key.key" duration=828.716µs 16:25:57 policy-apex-pdp | reconnect.backoff.ms = 50 16:25:57 simulator | 2024-02-21 16:23:13,856 INFO Started VFC simulator@4189d70b{HTTP/1.1, (http/1.1)}{0.0.0.0:6670} 16:25:57 kafka | controller.quorum.election.timeout.ms = 1000 16:25:57 policy-pap | check.crcs = true 16:25:57 policy-db-migrator | -------------- 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.66186171Z level=info msg="Executing migration" id="add index api_key.account_id_name" 16:25:57 policy-apex-pdp | request.timeout.ms = 30000 16:25:57 simulator | 2024-02-21 16:23:13,856 INFO Started Server@6f0b0a5e{STARTING}[11.0.20,sto=0] @1968ms 16:25:57 kafka | controller.quorum.fetch.timeout.ms = 2000 16:25:57 policy-pap | client.dns.lookup = use_all_dns_ips 16:25:57 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilityassignment_attributes (name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, ATTRIBUTES 
LONGTEXT DEFAULT NULL, ATTRIBUTES_KEY VARCHAR(255) DEFAULT NULL) 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.662668864Z level=info msg="Migration successfully executed" id="add index api_key.account_id_name" duration=806.714µs 16:25:57 policy-apex-pdp | retries = 2147483647 16:25:57 kafka | controller.quorum.request.timeout.ms = 2000 16:25:57 simulator | 2024-02-21 16:23:13,856 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-3e7634b9==org.glassfish.jersey.servlet.ServletContainer@ca975b37{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}}, swaggerId=null, toString()=JettyServletServer(name=VFC simulator, host=0.0.0.0, port=6670, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@6f0b0a5e{STARTED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@6035b93b{/,null,AVAILABLE}, connector=VFC simulator@4189d70b{HTTP/1.1, (http/1.1)}{0.0.0.0:6670}, jettyThread=Thread[VFC simulator-6670,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-3e7634b9==org.glassfish.jersey.servlet.ServletContainer@ca975b37{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}})]: pending time is 4940 ms. 
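The `policy-apex-pdp` and `policy-pap` streams above are Kafka client config dumps in `key = value` form (`linger.ms = 0`, `retries = 2147483647`, `partitioner.class = null`, ...). When mining such logs, it helps to parse the dump back into a typed dict. A hedged sketch, assuming this simple key/value line shape; the `parse_config` helper is illustrative and not part of any ONAP component or Kafka library:

```python
def parse_config(lines):
    """Parse Kafka-style 'key = value' config-dump lines into a typed dict."""
    def coerce(value):
        # Kafka dumps booleans, nulls, and numbers as bare strings.
        if value in ("true", "false"):
            return value == "true"
        if value == "null":
            return None
        try:
            return int(value)
        except ValueError:
            pass
        try:
            return float(value)
        except ValueError:
            return value  # leave strings (class names, lists) as-is

    cfg = {}
    for line in lines:
        key, sep, value = line.partition(" = ")
        if sep:
            cfg[key.strip()] = coerce(value.strip())
    return cfg

# Sample values taken verbatim from the producer dump in this log.
sample = [
    "linger.ms = 0",
    "max.in.flight.requests.per.connection = 5",
    "retries = 2147483647",
    "partitioner.class = null",
    "partitioner.adaptive.partitioning.enable = true",
]
print(parse_config(sample))
```

Note the `retries = 2147483647` (Integer.MAX_VALUE) together with the idempotent-producer message later in the log: the apex-pdp producer is configured to retry effectively forever without risking duplicates.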
16:25:57 simulator | 2024-02-21 16:23:13,857 INFO org.onap.policy.models.simulators started 16:25:57 policy-db-migrator | -------------- 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.668753332Z level=info msg="Executing migration" id="drop index IDX_api_key_account_id - v1" 16:25:57 policy-apex-pdp | retry.backoff.ms = 100 16:25:57 kafka | controller.quorum.retry.backoff.ms = 20 16:25:57 policy-pap | client.id = consumer-66b9586c-d4bb-4933-993d-6431c832b08c-1 16:25:57 policy-db-migrator | 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.669473227Z level=info msg="Migration successfully executed" id="drop index IDX_api_key_account_id - v1" duration=719.524µs 16:25:57 policy-apex-pdp | sasl.client.callback.handler.class = null 16:25:57 kafka | controller.quorum.voters = [] 16:25:57 policy-pap | client.rack = 16:25:57 policy-pap | connections.max.idle.ms = 540000 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.672457415Z level=info msg="Executing migration" id="drop index UQE_api_key_key - v1" 16:25:57 policy-apex-pdp | sasl.jaas.config = null 16:25:57 kafka | controller.quota.window.num = 11 16:25:57 policy-pap | default.api.timeout.ms = 60000 16:25:57 policy-pap | enable.auto.commit = true 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.67315972Z level=info msg="Migration successfully executed" id="drop index UQE_api_key_key - v1" duration=701.755µs 16:25:57 policy-apex-pdp | sasl.kerberos.kinit.cmd = /usr/bin/kinit 16:25:57 kafka | controller.quota.window.size.seconds = 1 16:25:57 policy-pap | exclude.internal.topics = true 16:25:57 policy-pap | fetch.max.bytes = 52428800 16:25:57 policy-apex-pdp | sasl.kerberos.min.time.before.relogin = 60000 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.676253177Z level=info msg="Executing migration" id="drop index UQE_api_key_account_id_name - v1" 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.676950712Z level=info msg="Migration successfully executed" id="drop index 
UQE_api_key_account_id_name - v1" duration=697.105µs 16:25:57 policy-db-migrator | 16:25:57 policy-pap | fetch.max.wait.ms = 500 16:25:57 policy-apex-pdp | sasl.kerberos.service.name = null 16:25:57 kafka | controller.socket.timeout.ms = 30000 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.680989098Z level=info msg="Executing migration" id="Rename table api_key to api_key_v1 - v1" 16:25:57 policy-db-migrator | > upgrade 0160-jpatoscacapabilityassignment_metadata.sql 16:25:57 policy-pap | fetch.min.bytes = 1 16:25:57 policy-apex-pdp | sasl.kerberos.ticket.renew.jitter = 0.05 16:25:57 kafka | create.topic.policy.class.name = null 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.689735701Z level=info msg="Migration successfully executed" id="Rename table api_key to api_key_v1 - v1" duration=8.745063ms 16:25:57 policy-db-migrator | -------------- 16:25:57 policy-pap | group.id = 66b9586c-d4bb-4933-993d-6431c832b08c 16:25:57 policy-apex-pdp | sasl.kerberos.ticket.renew.window.factor = 0.8 16:25:57 kafka | default.replication.factor = 1 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.693125872Z level=info msg="Executing migration" id="create api_key table v2" 16:25:57 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilityassignment_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) 16:25:57 policy-pap | group.instance.id = null 16:25:57 policy-apex-pdp | sasl.login.callback.handler.class = null 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.693606245Z level=info msg="Migration successfully executed" id="create api_key table v2" duration=480.834µs 16:25:57 policy-pap | heartbeat.interval.ms = 3000 16:25:57 policy-apex-pdp | sasl.login.class = null 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.697527329Z level=info msg="Executing migration" id="create index IDX_api_key_org_id - v2" 16:25:57 policy-apex-pdp | sasl.login.connect.timeout.ms = null 
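The grafana `logger=migrator` entries record a per-migration duration in mixed units (`duration=828.716µs`, `duration=8.745063ms`). To compare or total them, the durations have to be normalised first. A minimal sketch under the assumption that only `µs` and `ms` appear, as in this log; the regex and scaling are illustrative, not Grafana code:

```python
import re

# Two migrator entries copied from this log (one µs, one ms).
LOG = (
    'logger=migrator t=2024-02-21T16:23:14.698133033Z level=info '
    'msg="Migration successfully executed" id="create index IDX_api_key_org_id - v2" duration=605.704µs\n'
    'logger=migrator t=2024-02-21T16:23:14.689735701Z level=info '
    'msg="Migration successfully executed" id="Rename table api_key to api_key_v1 - v1" duration=8.745063ms\n'
)

PAT = re.compile(r'id="(?P<id>[^"]+)" duration=(?P<val>[\d.]+)(?P<unit>µs|ms)')
SCALE = {"µs": 0.001, "ms": 1.0}  # normalise everything to milliseconds

durations = {m["id"]: float(m["val"]) * SCALE[m["unit"]]
             for m in PAT.finditer(LOG)}
for migration_id, ms in durations.items():
    print(f"{migration_id}: {ms:.3f} ms")
```

This makes outliers obvious: most index migrations above finish in under a millisecond, while table renames and column additions (e.g. the 8.7 ms `api_key` rename) cost an order of magnitude more.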
16:25:57 policy-pap | interceptor.classes = [] 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.698133033Z level=info msg="Migration successfully executed" id="create index IDX_api_key_org_id - v2" duration=605.704µs 16:25:57 policy-pap | internal.leave.group.on.close = true 16:25:57 policy-apex-pdp | sasl.login.read.timeout.ms = null 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.700405377Z level=info msg="Executing migration" id="create index UQE_api_key_key - v2" 16:25:57 policy-apex-pdp | sasl.login.refresh.buffer.seconds = 300 16:25:57 kafka | delegation.token.expiry.check.interval.ms = 3600000 16:25:57 policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false 16:25:57 kafka | delegation.token.expiry.time.ms = 86400000 16:25:57 policy-pap | isolation.level = read_uncommitted 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.701140752Z level=info msg="Migration successfully executed" id="create index UQE_api_key_key - v2" duration=735.155µs 16:25:57 policy-apex-pdp | sasl.login.refresh.min.period.seconds = 60 16:25:57 kafka | delegation.token.master.key = null 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.704127779Z level=info msg="Executing migration" id="create index UQE_api_key_org_id_name - v2" 16:25:57 policy-apex-pdp | sasl.login.refresh.window.factor = 0.8 16:25:57 policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.704917334Z level=info msg="Migration successfully executed" id="create index UQE_api_key_org_id_name - v2" duration=789.295µs 16:25:57 policy-pap | max.partition.fetch.bytes = 1048576 16:25:57 kafka | delegation.token.max.lifetime.ms = 604800000 16:25:57 policy-apex-pdp | sasl.login.refresh.window.jitter = 0.05 16:25:57 policy-pap | max.poll.interval.ms = 300000 16:25:57 kafka | delegation.token.secret.key = null 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.708939109Z level=info 
msg="Executing migration" id="copy api_key v1 to v2" 16:25:57 policy-pap | max.poll.records = 500 16:25:57 policy-pap | metadata.max.age.ms = 300000 16:25:57 kafka | delete.records.purgatory.purge.interval.requests = 1 16:25:57 policy-db-migrator | -------------- 16:25:57 policy-db-migrator | 16:25:57 policy-apex-pdp | sasl.login.retry.backoff.max.ms = 10000 16:25:57 policy-pap | metric.reporters = [] 16:25:57 kafka | delete.topic.enable = true 16:25:57 policy-db-migrator | 16:25:57 policy-db-migrator | > upgrade 0170-jpatoscacapabilityassignment_occurrences.sql 16:25:57 policy-apex-pdp | sasl.login.retry.backoff.ms = 100 16:25:57 policy-pap | metrics.num.samples = 2 16:25:57 kafka | early.start.listeners = null 16:25:57 policy-db-migrator | -------------- 16:25:57 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilityassignment_occurrences (name VARCHAR(120) NULL, version VARCHAR(20) NULL, OCCURRENCES INT DEFAULT NULL) 16:25:57 policy-apex-pdp | sasl.mechanism = GSSAPI 16:25:57 policy-pap | metrics.recording.level = INFO 16:25:57 kafka | fetch.max.bytes = 57671680 16:25:57 policy-db-migrator | -------------- 16:25:57 policy-db-migrator | 16:25:57 policy-apex-pdp | sasl.oauthbearer.clock.skew.seconds = 30 16:25:57 policy-pap | metrics.sample.window.ms = 30000 16:25:57 kafka | fetch.purgatory.purge.interval.requests = 1000 16:25:57 policy-db-migrator | 16:25:57 policy-db-migrator | > upgrade 0180-jpatoscacapabilityassignment_properties.sql 16:25:57 policy-apex-pdp | sasl.oauthbearer.expected.audience = null 16:25:57 policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] 16:25:57 kafka | group.consumer.assignors = [org.apache.kafka.coordinator.group.assignor.RangeAssignor] 16:25:57 policy-db-migrator | -------------- 16:25:57 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilityassignment_properties (name VARCHAR(120) NULL, version 
VARCHAR(20) NULL, PROPERTIES LONGTEXT NULL, PROPERTIES_KEY VARCHAR(255) NULL) 16:25:57 policy-apex-pdp | sasl.oauthbearer.expected.issuer = null 16:25:57 policy-pap | receive.buffer.bytes = 65536 16:25:57 kafka | group.consumer.heartbeat.interval.ms = 5000 16:25:57 policy-db-migrator | -------------- 16:25:57 policy-db-migrator | 16:25:57 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 16:25:57 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 16:25:57 kafka | group.consumer.max.heartbeat.interval.ms = 15000 16:25:57 policy-db-migrator | 16:25:57 policy-db-migrator | > upgrade 0190-jpatoscacapabilitytype_metadata.sql 16:25:57 policy-db-migrator | -------------- 16:25:57 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 16:25:57 policy-pap | reconnect.backoff.max.ms = 1000 16:25:57 kafka | group.consumer.max.session.timeout.ms = 60000 16:25:57 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilitytype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) 16:25:57 policy-db-migrator | -------------- 16:25:57 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.url = null 16:25:57 policy-pap | reconnect.backoff.ms = 50 16:25:57 kafka | group.consumer.max.size = 2147483647 16:25:57 policy-db-migrator | 16:25:57 policy-db-migrator | 16:25:57 policy-apex-pdp | sasl.oauthbearer.scope.claim.name = scope 16:25:57 policy-pap | request.timeout.ms = 30000 16:25:57 kafka | group.consumer.min.heartbeat.interval.ms = 5000 16:25:57 policy-db-migrator | > upgrade 0200-jpatoscacapabilitytype_properties.sql 16:25:57 policy-db-migrator | -------------- 16:25:57 policy-apex-pdp | sasl.oauthbearer.sub.claim.name = sub 16:25:57 policy-pap | retry.backoff.ms = 100 16:25:57 kafka | group.consumer.min.session.timeout.ms = 45000 16:25:57 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilitytype_properties (name VARCHAR(120) NULL, 
version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL) 16:25:57 policy-db-migrator | -------------- 16:25:57 policy-apex-pdp | sasl.oauthbearer.token.endpoint.url = null 16:25:57 policy-pap | sasl.client.callback.handler.class = null 16:25:57 kafka | group.consumer.session.timeout.ms = 45000 16:25:57 policy-db-migrator | 16:25:57 policy-db-migrator | 16:25:57 policy-pap | sasl.jaas.config = null 16:25:57 kafka | group.coordinator.new.enable = false 16:25:57 policy-apex-pdp | security.protocol = PLAINTEXT 16:25:57 policy-db-migrator | > upgrade 0210-jpatoscadatatype_constraints.sql 16:25:57 policy-db-migrator | -------------- 16:25:57 policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit 16:25:57 kafka | group.coordinator.threads = 1 16:25:57 policy-apex-pdp | security.providers = null 16:25:57 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscadatatype_constraints (name VARCHAR(120) NULL, version VARCHAR(20) NULL, CONSTRAINTS VARCHAR(255) NULL) 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.709302552Z level=info msg="Migration successfully executed" id="copy api_key v1 to v2" duration=363.083µs 16:25:57 policy-pap | sasl.kerberos.min.time.before.relogin = 60000 16:25:57 kafka | group.initial.rebalance.delay.ms = 3000 16:25:57 policy-apex-pdp | send.buffer.bytes = 131072 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.713379636Z level=info msg="Executing migration" id="Drop old table api_key_v1" 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.71395809Z level=info msg="Migration successfully executed" id="Drop old table api_key_v1" duration=576.474µs 16:25:57 policy-pap | sasl.kerberos.service.name = null 16:25:57 kafka | group.max.session.timeout.ms = 1800000 16:25:57 policy-apex-pdp | socket.connection.setup.timeout.max.ms = 30000 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.717827074Z level=info msg="Executing migration" id="Update api_key table charset" 16:25:57 grafana | 
logger=migrator t=2024-02-21T16:23:14.717887435Z level=info msg="Migration successfully executed" id="Update api_key table charset" duration=58.281µs 16:25:57 policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 16:25:57 kafka | group.max.size = 2147483647 16:25:57 policy-apex-pdp | socket.connection.setup.timeout.ms = 10000 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.721904789Z level=info msg="Executing migration" id="Add expires to api_key table" 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.724320864Z level=info msg="Migration successfully executed" id="Add expires to api_key table" duration=2.465845ms 16:25:57 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 16:25:57 kafka | group.min.session.timeout.ms = 6000 16:25:57 policy-apex-pdp | ssl.cipher.suites = null 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.729177924Z level=info msg="Executing migration" id="Add service account foreign key" 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.73176654Z level=info msg="Migration successfully executed" id="Add service account foreign key" duration=2.587786ms 16:25:57 policy-pap | sasl.login.callback.handler.class = null 16:25:57 kafka | initial.broker.registration.timeout.ms = 60000 16:25:57 policy-apex-pdp | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.73506417Z level=info msg="Executing migration" id="set service account foreign key to nil if 0" 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.735225821Z level=info msg="Migration successfully executed" id="set service account foreign key to nil if 0" duration=163.921µs 16:25:57 policy-pap | sasl.login.class = null 16:25:57 kafka | inter.broker.listener.name = PLAINTEXT 16:25:57 policy-apex-pdp | ssl.endpoint.identification.algorithm = https 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.738536331Z level=info msg="Executing migration" id="Add last_used_at to api_key table" 16:25:57 
policy-db-migrator | -------------- 16:25:57 policy-db-migrator | 16:25:57 kafka | inter.broker.protocol.version = 3.6-IV2 16:25:57 policy-apex-pdp | ssl.engine.factory.class = null 16:25:57 policy-pap | sasl.login.connect.timeout.ms = null 16:25:57 policy-db-migrator | 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.741101687Z level=info msg="Migration successfully executed" id="Add last_used_at to api_key table" duration=2.564566ms 16:25:57 kafka | kafka.metrics.polling.interval.secs = 10 16:25:57 policy-apex-pdp | ssl.key.password = null 16:25:57 policy-pap | sasl.login.read.timeout.ms = null 16:25:57 policy-db-migrator | > upgrade 0220-jpatoscadatatype_metadata.sql 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.745509195Z level=info msg="Executing migration" id="Add is_revoked column to api_key table" 16:25:57 kafka | kafka.metrics.reporters = [] 16:25:57 policy-apex-pdp | ssl.keymanager.algorithm = SunX509 16:25:57 policy-pap | sasl.login.refresh.buffer.seconds = 300 16:25:57 policy-db-migrator | -------------- 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.748325272Z level=info msg="Migration successfully executed" id="Add is_revoked column to api_key table" duration=2.813827ms 16:25:57 kafka | leader.imbalance.check.interval.seconds = 300 16:25:57 policy-apex-pdp | ssl.keystore.certificate.chain = null 16:25:57 policy-pap | sasl.login.refresh.min.period.seconds = 60 16:25:57 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscadatatype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.751351211Z level=info msg="Executing migration" id="create dashboard_snapshot table v4" 16:25:57 kafka | leader.imbalance.per.broker.percentage = 10 16:25:57 policy-apex-pdp | ssl.keystore.key = null 16:25:57 policy-pap | sasl.login.refresh.window.factor = 0.8 16:25:57 policy-db-migrator | -------------- 16:25:57 grafana 
| logger=migrator t=2024-02-21T16:23:14.752087184Z level=info msg="Migration successfully executed" id="create dashboard_snapshot table v4" duration=736.123µs
16:25:57 kafka | listener.security.protocol.map = PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
16:25:57 policy-apex-pdp | ssl.keystore.location = null
16:25:57 policy-pap | sasl.login.refresh.window.jitter = 0.05
16:25:57 policy-db-migrator |
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.755132193Z level=info msg="Executing migration" id="drop table dashboard_snapshot_v4 #1"
16:25:57 kafka | listeners = PLAINTEXT://0.0.0.0:9092,PLAINTEXT_HOST://0.0.0.0:29092
16:25:57 policy-apex-pdp | ssl.keystore.password = null
16:25:57 policy-pap | sasl.login.retry.backoff.max.ms = 10000
16:25:57 policy-db-migrator |
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.755728677Z level=info msg="Migration successfully executed" id="drop table dashboard_snapshot_v4 #1" duration=595.484µs
16:25:57 kafka | log.cleaner.backoff.ms = 15000
16:25:57 policy-apex-pdp | ssl.keystore.type = JKS
16:25:57 policy-pap | sasl.login.retry.backoff.ms = 100
16:25:57 policy-db-migrator | > upgrade 0230-jpatoscadatatype_properties.sql
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.759808993Z level=info msg="Executing migration" id="create dashboard_snapshot table v5 #2"
16:25:57 kafka | log.cleaner.dedupe.buffer.size = 134217728
16:25:57 policy-apex-pdp | ssl.protocol = TLSv1.3
16:25:57 policy-db-migrator | --------------
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.760523636Z level=info msg="Migration successfully executed" id="create dashboard_snapshot table v5 #2" duration=712.573µs
16:25:57 policy-pap | sasl.mechanism = GSSAPI
16:25:57 policy-pap | sasl.oauthbearer.clock.skew.seconds = 30
16:25:57 policy-apex-pdp | ssl.provider = null
16:25:57 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscadatatype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL)
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.763654906Z level=info msg="Executing migration" id="create index UQE_dashboard_snapshot_key - v5"
16:25:57 policy-pap | sasl.oauthbearer.expected.audience = null
16:25:57 policy-pap | sasl.oauthbearer.expected.issuer = null
16:25:57 policy-apex-pdp | ssl.secure.random.implementation = null
16:25:57 policy-db-migrator | --------------
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.764695243Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_snapshot_key - v5" duration=1.040187ms
16:25:57 policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
16:25:57 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
16:25:57 policy-apex-pdp | ssl.trustmanager.algorithm = PKIX
16:25:57 policy-db-migrator |
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.768011553Z level=info msg="Executing migration" id="create index UQE_dashboard_snapshot_delete_key - v5"
16:25:57 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
16:25:57 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null
16:25:57 policy-apex-pdp | ssl.truststore.certificates = null
16:25:57 policy-db-migrator |
16:25:57 policy-pap | sasl.oauthbearer.scope.claim.name = scope
16:25:57 kafka | log.cleaner.delete.retention.ms = 86400000
16:25:57 policy-apex-pdp | ssl.truststore.location = null
16:25:57 policy-db-migrator | > upgrade 0240-jpatoscanodetemplate_metadata.sql
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.771214593Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_snapshot_delete_key - v5" duration=3.20269ms
16:25:57 policy-pap | sasl.oauthbearer.sub.claim.name = sub
16:25:57 policy-pap | sasl.oauthbearer.token.endpoint.url = null
16:25:57 policy-apex-pdp | ssl.truststore.password = null
16:25:57 policy-db-migrator | --------------
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.776346664Z level=info msg="Executing migration" id="create index IDX_dashboard_snapshot_user_id - v5"
16:25:57 policy-pap | security.protocol = PLAINTEXT
16:25:57 policy-pap | security.providers = null
16:25:57 policy-apex-pdp | ssl.truststore.type = JKS
16:25:57 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscanodetemplate_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL)
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.777156519Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_snapshot_user_id - v5" duration=809.855µs
16:25:57 policy-pap | send.buffer.bytes = 131072
16:25:57 policy-pap | session.timeout.ms = 45000
16:25:57 policy-apex-pdp | transaction.timeout.ms = 60000
16:25:57 policy-db-migrator | --------------
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.78050882Z level=info msg="Executing migration" id="alter dashboard_snapshot to mediumtext v2"
16:25:57 policy-pap | socket.connection.setup.timeout.max.ms = 30000
16:25:57 policy-pap | socket.connection.setup.timeout.ms = 10000
16:25:57 policy-apex-pdp | transactional.id = null
16:25:57 policy-db-migrator |
16:25:57 policy-db-migrator |
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.78057782Z level=info msg="Migration successfully executed" id="alter dashboard_snapshot to mediumtext v2" duration=71.51µs
16:25:57 policy-pap | ssl.cipher.suites = null
16:25:57 kafka | log.cleaner.enable = true
16:25:57 policy-apex-pdp | value.serializer = class org.apache.kafka.common.serialization.StringSerializer
16:25:57 policy-db-migrator | > upgrade 0250-jpatoscanodetemplate_properties.sql
16:25:57 policy-db-migrator | --------------
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.783678849Z level=info msg="Executing migration" id="Update dashboard_snapshot table charset"
16:25:57 policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
16:25:57 kafka | log.cleaner.io.buffer.load.factor = 0.9
16:25:57 policy-apex-pdp |
16:25:57 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscanodetemplate_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGTEXT NULL, PROPERTIES_KEY VARCHAR(255) NULL)
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.783707259Z level=info msg="Migration successfully executed" id="Update dashboard_snapshot table charset" duration=29.5µs
16:25:57 policy-pap | ssl.endpoint.identification.algorithm = https
16:25:57 kafka | log.cleaner.io.buffer.size = 524288
16:25:57 policy-apex-pdp | [2024-02-21T16:23:56.194+00:00|INFO|KafkaProducer|main] [Producer clientId=producer-1] Instantiated an idempotent producer.
16:25:57 policy-db-migrator | --------------
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.78699885Z level=info msg="Executing migration" id="Add column external_delete_url to dashboard_snapshots table"
16:25:57 policy-pap | ssl.engine.factory.class = null
16:25:57 policy-pap | ssl.key.password = null
16:25:57 policy-pap | ssl.keymanager.algorithm = SunX509
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.789682676Z level=info msg="Migration successfully executed" id="Add column external_delete_url to dashboard_snapshots table" duration=2.682556ms
16:25:57 policy-apex-pdp | [2024-02-21T16:23:56.211+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1
16:25:57 policy-pap | ssl.keystore.certificate.chain = null
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.794819178Z level=info msg="Executing migration" id="Add encrypted dashboard json column"
16:25:57 policy-pap | ssl.keystore.key = null
16:25:57 policy-db-migrator |
16:25:57 kafka | log.cleaner.io.max.bytes.per.second = 1.7976931348623157E308
16:25:57 kafka | log.cleaner.max.compaction.lag.ms = 9223372036854775807
16:25:57 policy-apex-pdp | [2024-02-21T16:23:56.211+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.797613135Z level=info msg="Migration successfully executed" id="Add encrypted dashboard json column" duration=2.793847ms
16:25:57 policy-pap | ssl.keystore.location = null
16:25:57 policy-db-migrator |
16:25:57 policy-db-migrator | > upgrade 0260-jpatoscanodetype_metadata.sql
16:25:57 kafka | log.cleaner.min.cleanable.ratio = 0.5
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.800896975Z level=info msg="Executing migration" id="Change dashboard_encrypted column to MEDIUMBLOB"
16:25:57 policy-pap | ssl.keystore.password = null
16:25:57 policy-apex-pdp | [2024-02-21T16:23:56.212+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1708532636211
16:25:57 policy-apex-pdp | [2024-02-21T16:23:56.212+00:00|INFO|InlineKafkaTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=78260019-42ca-4952-996d-a0a6c2bb6a4e, alive=false, publisher=KafkaPublisherWrapper []]]: KAFKA SINK created
16:25:57 kafka | log.cleaner.min.compaction.lag.ms = 0
16:25:57 kafka | log.cleaner.threads = 1
16:25:57 kafka | log.cleanup.policy = [delete]
16:25:57 policy-pap | ssl.keystore.type = JKS
16:25:57 policy-apex-pdp | [2024-02-21T16:23:56.212+00:00|INFO|ServiceManager|main] service manager starting set alive
16:25:57 kafka | log.dir = /tmp/kafka-logs
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.800966376Z level=info msg="Migration successfully executed" id="Change dashboard_encrypted column to MEDIUMBLOB" duration=65.041µs
16:25:57 policy-db-migrator | --------------
16:25:57 policy-pap | ssl.protocol = TLSv1.3
16:25:57 policy-apex-pdp | [2024-02-21T16:23:56.213+00:00|INFO|ServiceManager|main] service manager starting register pdp status context object
16:25:57 kafka | log.dirs = /var/lib/kafka/data
16:25:57 kafka | log.flush.interval.messages = 9223372036854775807
16:25:57 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscanodetype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL)
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.805096711Z level=info msg="Executing migration" id="create quota table v1"
16:25:57 policy-db-migrator | --------------
16:25:57 kafka | log.flush.interval.ms = null
16:25:57 policy-apex-pdp | [2024-02-21T16:23:56.215+00:00|INFO|ServiceManager|main] service manager starting topic sinks
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.806027267Z level=info msg="Migration successfully executed" id="create quota table v1" duration=937.307µs
16:25:57 policy-db-migrator |
16:25:57 policy-pap | ssl.provider = null
16:25:57 kafka | log.flush.offset.checkpoint.interval.ms = 60000
16:25:57 policy-apex-pdp | [2024-02-21T16:23:56.215+00:00|INFO|ServiceManager|main] service manager starting Pdp Status publisher
16:25:57 policy-db-migrator |
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.811290259Z level=info msg="Executing migration" id="create index UQE_quota_org_id_user_id_target - v1"
16:25:57 policy-pap | ssl.secure.random.implementation = null
16:25:57 kafka | log.flush.scheduler.interval.ms = 9223372036854775807
16:25:57 policy-apex-pdp | [2024-02-21T16:23:56.218+00:00|INFO|ServiceManager|main] service manager starting Register pdp update listener
16:25:57 policy-db-migrator | > upgrade 0270-jpatoscanodetype_properties.sql
16:25:57 policy-pap | ssl.trustmanager.algorithm = PKIX
16:25:57 kafka | log.flush.start.offset.checkpoint.interval.ms = 60000
16:25:57 policy-apex-pdp | [2024-02-21T16:23:56.218+00:00|INFO|ServiceManager|main] service manager starting Register pdp state change request dispatcher
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.812184335Z level=info msg="Migration successfully executed" id="create index UQE_quota_org_id_user_id_target - v1" duration=890.726µs
16:25:57 policy-db-migrator | --------------
16:25:57 policy-pap | ssl.truststore.certificates = null
16:25:57 kafka | log.index.interval.bytes = 4096
16:25:57 policy-apex-pdp | [2024-02-21T16:23:56.218+00:00|INFO|ServiceManager|main] service manager starting Message Dispatcher
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.815263294Z level=info msg="Executing migration" id="Update quota table charset"
16:25:57 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscanodetype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL)
16:25:57 policy-pap | ssl.truststore.location = null
16:25:57 kafka | log.index.size.max.bytes = 10485760
16:25:57 policy-apex-pdp | [2024-02-21T16:23:56.219+00:00|INFO|TopicBase|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=8aee6ac5-f217-4030-aeed-72326ff1d45e, consumerInstance=policy-apex-pdp, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-pdp-pap,5,main], topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: registering org.onap.policy.common.endpoints.listeners.MessageTypeDispatcher@e077866
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.815289884Z level=info msg="Migration successfully executed" id="Update quota table charset" duration=27.58µs
16:25:57 policy-db-migrator | --------------
16:25:57 policy-pap | ssl.truststore.password = null
16:25:57 kafka | log.local.retention.bytes = -2
16:25:57 policy-apex-pdp | [2024-02-21T16:23:56.219+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=8aee6ac5-f217-4030-aeed-72326ff1d45e, consumerInstance=policy-apex-pdp, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-pdp-pap,5,main], topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: register: start not attempted
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.818380803Z level=info msg="Executing migration" id="create plugin_setting table"
16:25:57 policy-db-migrator |
16:25:57 policy-pap | ssl.truststore.type = JKS
16:25:57 kafka | log.local.retention.ms = -2
16:25:57 policy-apex-pdp | [2024-02-21T16:23:56.219+00:00|INFO|ServiceManager|main] service manager starting Create REST server
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.818887375Z level=info msg="Migration successfully executed" id="create plugin_setting table" duration=504.662µs
16:25:57 policy-db-migrator |
16:25:57 policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
16:25:57 kafka | log.message.downconversion.enable = true
16:25:57 policy-apex-pdp | [2024-02-21T16:23:56.229+00:00|INFO|OrderedServiceImpl|Timer-0] ***** OrderedServiceImpl implementers:
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.823763516Z level=info msg="Executing migration" id="create index UQE_plugin_setting_org_id_plugin_id - v1"
16:25:57 policy-db-migrator | > upgrade 0280-jpatoscapolicy_metadata.sql
16:25:57 policy-pap |
16:25:57 kafka | log.message.format.version = 3.0-IV1
16:25:57 policy-apex-pdp | []
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.82434101Z level=info msg="Migration successfully executed" id="create index UQE_plugin_setting_org_id_plugin_id - v1" duration=576.904µs
16:25:57 policy-pap | [2024-02-21T16:23:52.429+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1
16:25:57 kafka | log.message.timestamp.after.max.ms = 9223372036854775807
16:25:57 policy-db-migrator | --------------
16:25:57 policy-apex-pdp | [2024-02-21T16:23:56.231+00:00|INFO|network|Timer-0] [OUT|KAFKA|policy-pdp-pap]
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.827459498Z level=info msg="Executing migration" id="Add column plugin_version to plugin_settings"
16:25:57 policy-pap | [2024-02-21T16:23:52.429+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5
16:25:57 kafka | log.message.timestamp.before.max.ms = 9223372036854775807
16:25:57 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicy_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL)
16:25:57 policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"51957d2b-eccd-47ee-89ec-244b76467cf4","timestampMs":1708532636218,"name":"apex-ae39a0b7-8e72-45eb-a1aa-d0752cfdffdc","pdpGroup":"defaultGroup"}
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.829522702Z level=info msg="Migration successfully executed" id="Add column plugin_version to plugin_settings" duration=2.064404ms
16:25:57 policy-pap | [2024-02-21T16:23:52.429+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1708532632427
16:25:57 kafka | log.message.timestamp.difference.max.ms = 9223372036854775807
16:25:57 policy-db-migrator | --------------
16:25:57 policy-apex-pdp | [2024-02-21T16:23:56.397+00:00|INFO|ServiceManager|main] service manager starting Rest Server
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.832620801Z level=info msg="Executing migration" id="Update plugin_setting table charset"
16:25:57 policy-pap | [2024-02-21T16:23:52.431+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-66b9586c-d4bb-4933-993d-6431c832b08c-1, groupId=66b9586c-d4bb-4933-993d-6431c832b08c] Subscribed to topic(s): policy-pdp-pap
16:25:57 kafka | log.message.timestamp.type = CreateTime
16:25:57 policy-db-migrator |
16:25:57 policy-apex-pdp | [2024-02-21T16:23:56.398+00:00|INFO|ServiceManager|main] service manager starting
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.832639111Z level=info msg="Migration successfully executed" id="Update plugin_setting table charset" duration=18.78µs
16:25:57 policy-pap | [2024-02-21T16:23:52.432+00:00|INFO|ConsumerConfig|main] ConsumerConfig values:
16:25:57 kafka | log.preallocate = false
16:25:57 policy-db-migrator |
16:25:57 policy-apex-pdp | [2024-02-21T16:23:56.398+00:00|INFO|ServiceManager|main] service manager starting REST RestServerParameters
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.835801879Z level=info msg="Executing migration" id="create session table"
16:25:57 policy-pap | allow.auto.create.topics = true
16:25:57 kafka | log.retention.bytes = -1
16:25:57 policy-db-migrator | > upgrade 0290-jpatoscapolicy_properties.sql
16:25:57 policy-apex-pdp | [2024-02-21T16:23:56.398+00:00|INFO|JettyServletServer|main] JettyJerseyServer [Jerseyservlets={/metrics=io.prometheus.client.servlet.jakarta.exporter.MetricsServlet-3591009c==io.prometheus.client.servlet.jakarta.exporter.MetricsServlet@634b550e{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:null,STOPPED}, /*=org.glassfish.jersey.servlet.ServletContainer-64c2b546==org.glassfish.jersey.servlet.ServletContainer@2d1bc350{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=swagger-6969, toString()=JettyServletServer(name=RestServerParameters, host=0.0.0.0, port=6969, sniHostCheck=false, user=policyadmin, password=zb!XztG34, contextPath=/, jettyServer=Server@63f34b70{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@641856{/,null,STOPPED}, connector=RestServerParameters@5d25e6bb{HTTP/1.1, (http/1.1)}{0.0.0.0:6969}, jettyThread=null, servlets={/metrics=io.prometheus.client.servlet.jakarta.exporter.MetricsServlet-3591009c==io.prometheus.client.servlet.jakarta.exporter.MetricsServlet@634b550e{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:null,STOPPED}, /*=org.glassfish.jersey.servlet.ServletContainer-64c2b546==org.glassfish.jersey.servlet.ServletContainer@2d1bc350{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.836365373Z level=info msg="Migration successfully executed" id="create session table" duration=563.484µs
16:25:57 policy-pap | auto.commit.interval.ms = 5000
16:25:57 kafka | log.retention.check.interval.ms = 300000
16:25:57 policy-db-migrator | --------------
16:25:57 policy-apex-pdp | [2024-02-21T16:23:56.408+00:00|INFO|ServiceManager|main] service manager started
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.843119455Z level=info msg="Executing migration" id="Drop old table playlist table"
16:25:57 policy-pap | auto.include.jmx.reporter = true
16:25:57 kafka | log.retention.hours = 168
16:25:57 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicy_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGTEXT NULL, PROPERTIES_KEY VARCHAR(255) NULL)
16:25:57 policy-apex-pdp | [2024-02-21T16:23:56.408+00:00|INFO|ServiceManager|main] service manager started
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.843260306Z level=info msg="Migration successfully executed" id="Drop old table playlist table" duration=142.101µs
16:25:57 policy-pap | auto.offset.reset = latest
16:25:57 kafka | log.retention.minutes = null
16:25:57 policy-db-migrator | --------------
16:25:57 policy-apex-pdp | [2024-02-21T16:23:56.409+00:00|INFO|ApexStarterMain|main] Started policy-apex-pdp service successfully.
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.846727258Z level=info msg="Executing migration" id="Drop old table playlist_item table"
16:25:57 policy-pap | bootstrap.servers = [kafka:9092]
16:25:57 kafka | log.retention.ms = null
16:25:57 policy-db-migrator |
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.846865597Z level=info msg="Migration successfully executed" id="Drop old table playlist_item table" duration=142.059µs
16:25:57 policy-pap | check.crcs = true
16:25:57 policy-apex-pdp | [2024-02-21T16:23:56.408+00:00|INFO|JettyServletServer|RestServerParameters-6969] JettyJerseyServer [Jerseyservlets={/metrics=io.prometheus.client.servlet.jakarta.exporter.MetricsServlet-3591009c==io.prometheus.client.servlet.jakarta.exporter.MetricsServlet@634b550e{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:null,STOPPED}, /*=org.glassfish.jersey.servlet.ServletContainer-64c2b546==org.glassfish.jersey.servlet.ServletContainer@2d1bc350{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=swagger-6969, toString()=JettyServletServer(name=RestServerParameters, host=0.0.0.0, port=6969, sniHostCheck=false, user=policyadmin, password=zb!XztG34, contextPath=/, jettyServer=Server@63f34b70{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@641856{/,null,STOPPED}, connector=RestServerParameters@5d25e6bb{HTTP/1.1, (http/1.1)}{0.0.0.0:6969}, jettyThread=Thread[RestServerParameters-6969,5,main], servlets={/metrics=io.prometheus.client.servlet.jakarta.exporter.MetricsServlet-3591009c==io.prometheus.client.servlet.jakarta.exporter.MetricsServlet@634b550e{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:null,STOPPED}, /*=org.glassfish.jersey.servlet.ServletContainer-64c2b546==org.glassfish.jersey.servlet.ServletContainer@2d1bc350{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING
16:25:57 kafka | log.roll.hours = 168
16:25:57 policy-db-migrator |
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.850069177Z level=info msg="Executing migration" id="create playlist table v2"
16:25:57 policy-apex-pdp | [2024-02-21T16:23:56.545+00:00|INFO|Metadata|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-8aee6ac5-f217-4030-aeed-72326ff1d45e-2, groupId=8aee6ac5-f217-4030-aeed-72326ff1d45e] Cluster ID: uKz8K1qZQP67IEMis280Uw
16:25:57 kafka | log.roll.jitter.hours = 0
16:25:57 policy-db-migrator | > upgrade 0300-jpatoscapolicy_targets.sql
16:25:57 policy-pap | client.dns.lookup = use_all_dns_ips
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.851136744Z level=info msg="Migration successfully executed" id="create playlist table v2" duration=1.059477ms
16:25:57 policy-apex-pdp | [2024-02-21T16:23:56.545+00:00|INFO|Metadata|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Cluster ID: uKz8K1qZQP67IEMis280Uw
16:25:57 kafka | log.roll.jitter.ms = null
16:25:57 policy-db-migrator | --------------
16:25:57 policy-pap | client.id = consumer-policy-pap-2
16:25:57 policy-apex-pdp | [2024-02-21T16:23:56.546+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-8aee6ac5-f217-4030-aeed-72326ff1d45e-2, groupId=8aee6ac5-f217-4030-aeed-72326ff1d45e] Discovered group coordinator kafka:9092 (id: 2147483646 rack: null)
16:25:57 kafka | log.roll.ms = null
16:25:57 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicy_targets (name VARCHAR(120) NULL, version VARCHAR(20) NULL)
16:25:57 policy-pap | client.rack =
16:25:57 policy-db-migrator | --------------
16:25:57 policy-pap | connections.max.idle.ms = 540000
16:25:57 policy-apex-pdp | [2024-02-21T16:23:56.546+00:00|INFO|TransactionManager|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] ProducerId set to 2 with epoch 0
16:25:57 kafka | log.segment.bytes = 1073741824
16:25:57 policy-db-migrator |
16:25:57 policy-pap | default.api.timeout.ms = 60000
16:25:57 policy-apex-pdp | [2024-02-21T16:23:56.552+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-8aee6ac5-f217-4030-aeed-72326ff1d45e-2, groupId=8aee6ac5-f217-4030-aeed-72326ff1d45e] (Re-)joining group
16:25:57 kafka | log.segment.delete.delay.ms = 60000
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.857110041Z level=info msg="Executing migration" id="create playlist item table v2"
16:25:57 policy-db-migrator |
16:25:57 policy-pap | enable.auto.commit = true
16:25:57 policy-apex-pdp | [2024-02-21T16:23:56.566+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-8aee6ac5-f217-4030-aeed-72326ff1d45e-2, groupId=8aee6ac5-f217-4030-aeed-72326ff1d45e] Request joining group due to: need to re-join with the given member-id: consumer-8aee6ac5-f217-4030-aeed-72326ff1d45e-2-13007dcf-09ef-43a6-8830-f4beea4e56b6
16:25:57 kafka | max.connection.creation.rate = 2147483647
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.858822671Z level=info msg="Migration successfully executed" id="create playlist item table v2" duration=1.71275ms
16:25:57 policy-db-migrator | > upgrade 0310-jpatoscapolicytype_metadata.sql
16:25:57 policy-pap | exclude.internal.topics = true
16:25:57 policy-apex-pdp | [2024-02-21T16:23:56.566+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-8aee6ac5-f217-4030-aeed-72326ff1d45e-2, groupId=8aee6ac5-f217-4030-aeed-72326ff1d45e] Request joining group due to: rebalance failed due to 'The group member needs to have a valid member id before actually entering a consumer group.' (MemberIdRequiredException)
16:25:57 kafka | max.connections = 2147483647
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.863792602Z level=info msg="Executing migration" id="Update playlist table charset"
16:25:57 policy-db-migrator | --------------
16:25:57 policy-pap | fetch.max.bytes = 52428800
16:25:57 policy-apex-pdp | [2024-02-21T16:23:56.566+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-8aee6ac5-f217-4030-aeed-72326ff1d45e-2, groupId=8aee6ac5-f217-4030-aeed-72326ff1d45e] (Re-)joining group
16:25:57 kafka | max.connections.per.ip = 2147483647
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.863862482Z level=info msg="Migration successfully executed" id="Update playlist table charset" duration=74.74µs
16:25:57 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicytype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL)
16:25:57 policy-pap | fetch.max.wait.ms = 500
16:25:57 policy-apex-pdp | [2024-02-21T16:23:57.076+00:00|INFO|GsonMessageBodyHandler|RestServerParameters-6969] Using GSON for REST calls
16:25:57 kafka | max.connections.per.ip.overrides =
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.867227633Z level=info msg="Executing migration" id="Update playlist_item table charset"
16:25:57 policy-db-migrator | --------------
16:25:57 policy-pap | fetch.min.bytes = 1
16:25:57 policy-apex-pdp | [2024-02-21T16:23:57.077+00:00|INFO|YamlMessageBodyHandler|RestServerParameters-6969] Accepting YAML for REST calls
16:25:57 kafka | max.incremental.fetch.session.cache.slots = 1000
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.867258683Z level=info msg="Migration successfully executed" id="Update playlist_item table charset" duration=32.05µs
16:25:57 policy-db-migrator |
16:25:57 policy-pap | group.id = policy-pap
16:25:57 policy-apex-pdp | [2024-02-21T16:23:59.571+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-8aee6ac5-f217-4030-aeed-72326ff1d45e-2, groupId=8aee6ac5-f217-4030-aeed-72326ff1d45e] Successfully joined group with generation Generation{generationId=1, memberId='consumer-8aee6ac5-f217-4030-aeed-72326ff1d45e-2-13007dcf-09ef-43a6-8830-f4beea4e56b6', protocol='range'}
16:25:57 kafka | message.max.bytes = 1048588
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.872540566Z level=info msg="Executing migration" id="Add playlist column created_at"
16:25:57 policy-db-migrator |
16:25:57 policy-pap | group.instance.id = null
16:25:57 policy-apex-pdp | [2024-02-21T16:23:59.581+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-8aee6ac5-f217-4030-aeed-72326ff1d45e-2, groupId=8aee6ac5-f217-4030-aeed-72326ff1d45e] Finished assignment for group at generation 1: {consumer-8aee6ac5-f217-4030-aeed-72326ff1d45e-2-13007dcf-09ef-43a6-8830-f4beea4e56b6=Assignment(partitions=[policy-pdp-pap-0])}
16:25:57 kafka | metadata.log.dir = null
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.875595984Z level=info msg="Migration successfully executed" id="Add playlist column created_at" duration=3.055418ms
16:25:57 policy-db-migrator | > upgrade 0320-jpatoscapolicytype_properties.sql
16:25:57 policy-pap | heartbeat.interval.ms = 3000
16:25:57 policy-apex-pdp | [2024-02-21T16:23:59.589+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-8aee6ac5-f217-4030-aeed-72326ff1d45e-2, groupId=8aee6ac5-f217-4030-aeed-72326ff1d45e] Successfully synced group in generation Generation{generationId=1, memberId='consumer-8aee6ac5-f217-4030-aeed-72326ff1d45e-2-13007dcf-09ef-43a6-8830-f4beea4e56b6', protocol='range'}
16:25:57 kafka | metadata.log.max.record.bytes.between.snapshots = 20971520
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.878733864Z level=info msg="Executing migration" id="Add playlist column updated_at"
16:25:57 policy-db-migrator | --------------
16:25:57 policy-pap | interceptor.classes = []
16:25:57 policy-apex-pdp | [2024-02-21T16:23:59.589+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-8aee6ac5-f217-4030-aeed-72326ff1d45e-2, groupId=8aee6ac5-f217-4030-aeed-72326ff1d45e] Notifying assignor about the new Assignment(partitions=[policy-pdp-pap-0])
16:25:57 kafka | metadata.log.max.snapshot.interval.ms = 3600000
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.882094045Z level=info msg="Migration successfully executed" id="Add playlist column updated_at" duration=3.359911ms
16:25:57 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicytype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL)
16:25:57 policy-apex-pdp | [2024-02-21T16:23:59.591+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-8aee6ac5-f217-4030-aeed-72326ff1d45e-2, groupId=8aee6ac5-f217-4030-aeed-72326ff1d45e] Adding newly assigned partitions: policy-pdp-pap-0
16:25:57 kafka | metadata.log.segment.bytes = 1073741824
16:25:57 policy-pap | internal.leave.group.on.close = true
16:25:57 policy-db-migrator | --------------
16:25:57 policy-apex-pdp | [2024-02-21T16:23:59.599+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-8aee6ac5-f217-4030-aeed-72326ff1d45e-2, groupId=8aee6ac5-f217-4030-aeed-72326ff1d45e] Found no committed offset for partition policy-pdp-pap-0
16:25:57 kafka | metadata.log.segment.min.bytes = 8388608
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.885536306Z level=info msg="Executing migration" id="drop preferences table v2"
16:25:57 policy-db-migrator |
16:25:57 kafka | metadata.log.segment.ms = 604800000
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.885614077Z level=info msg="Migration successfully executed" id="drop preferences table v2" duration=78.301µs
16:25:57 policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false
16:25:57 policy-db-migrator |
16:25:57 kafka | metadata.max.idle.interval.ms = 500
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.888709766Z level=info msg="Executing migration" id="drop preferences table v3"
16:25:57 policy-pap | isolation.level = read_uncommitted
16:25:57 policy-apex-pdp | [2024-02-21T16:23:59.619+00:00|INFO|SubscriptionState|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-8aee6ac5-f217-4030-aeed-72326ff1d45e-2, groupId=8aee6ac5-f217-4030-aeed-72326ff1d45e] Resetting offset for partition policy-pdp-pap-0 to position FetchPosition{offset=1, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[kafka:9092 (id: 1 rack: null)], epoch=0}}.
16:25:57 policy-db-migrator | > upgrade 0330-jpatoscapolicytype_targets.sql
16:25:57 kafka | metadata.max.retention.bytes = 104857600
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.888789896Z level=info msg="Migration successfully executed" id="drop preferences table v3" duration=80.96µs
16:25:57 policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
16:25:57 policy-apex-pdp | [2024-02-21T16:24:16.218+00:00|INFO|network|Timer-0] [OUT|KAFKA|policy-pdp-pap]
16:25:57 policy-db-migrator | --------------
16:25:57 kafka | metadata.max.retention.ms = 604800000
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.894891553Z level=info msg="Executing migration" id="create preferences table v3"
16:25:57 policy-pap | max.partition.fetch.bytes = 1048576
16:25:57 policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"a37b5cb3-4e17-4754-961b-0ca37490e58f","timestampMs":1708532656218,"name":"apex-ae39a0b7-8e72-45eb-a1aa-d0752cfdffdc","pdpGroup":"defaultGroup"}
16:25:57 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicytype_targets (name VARCHAR(120) NULL, version VARCHAR(20) NULL)
16:25:57 kafka | metric.reporters = []
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.896048241Z level=info msg="Migration successfully executed" id="create preferences table v3" duration=1.156808ms
16:25:57 policy-pap | max.poll.interval.ms = 300000
16:25:57 policy-apex-pdp | [2024-02-21T16:24:16.242+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
16:25:57 policy-db-migrator | --------------
16:25:57 kafka | metrics.num.samples = 2
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.90077131Z level=info msg="Executing migration" id="Update preferences table charset"
16:25:57 policy-pap | max.poll.records = 500
16:25:57 policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"a37b5cb3-4e17-4754-961b-0ca37490e58f","timestampMs":1708532656218,"name":"apex-ae39a0b7-8e72-45eb-a1aa-d0752cfdffdc","pdpGroup":"defaultGroup"}
16:25:57 policy-db-migrator |
16:25:57 kafka | metrics.recording.level = INFO
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.90081013Z level=info msg="Migration successfully executed" id="Update preferences table charset" duration=40.16µs
16:25:57 policy-pap | metadata.max.age.ms = 300000
16:25:57 policy-apex-pdp | [2024-02-21T16:24:16.246+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS
16:25:57 policy-db-migrator |
16:25:57 kafka | metrics.sample.window.ms = 30000
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.904189331Z level=info msg="Executing migration" id="Add column team_id in preferences"
16:25:57 policy-pap | metric.reporters = []
16:25:57 policy-apex-pdp | [2024-02-21T16:24:16.415+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
16:25:57 policy-db-migrator | > upgrade 0340-jpatoscapolicytype_triggers.sql
16:25:57 kafka | min.insync.replicas = 1
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.909562344Z level=info msg="Migration successfully executed" id="Add column team_id in preferences" duration=5.371663ms
16:25:57 policy-pap | metrics.num.samples = 2
16:25:57 policy-apex-pdp | {"source":"pap-c468fa43-447b-4ce3-b7f9-e0c2bed1c584","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"messageName":"PDP_UPDATE","requestId":"f5f1c365-585c-4d0b-8cb8-9cf6cb4aaae3","timestampMs":1708532656344,"name":"apex-ae39a0b7-8e72-45eb-a1aa-d0752cfdffdc","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
16:25:57 policy-db-migrator | --------------
16:25:57 kafka | node.id = 1
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.914249282Z level=info msg="Executing migration" id="Update team_id column values in preferences"
16:25:57 policy-pap | metrics.recording.level = INFO
16:25:57 policy-apex-pdp | [2024-02-21T16:24:16.423+00:00|INFO|network|Timer-1] [OUT|KAFKA|policy-pdp-pap]
16:25:57 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicytype_triggers (name VARCHAR(120) NULL, version VARCHAR(20) NULL, TRIGGERS VARCHAR(255) NULL)
16:25:57 kafka | num.io.threads = 8
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.914419904Z level=info msg="Migration successfully executed" id="Update team_id column values in preferences" duration=167.622µs
16:25:57 policy-pap | metrics.sample.window.ms = 30000
16:25:57 policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"fbba62bc-ce73-4500-9710-efcf835e3651","timestampMs":1708532656423,"name":"apex-ae39a0b7-8e72-45eb-a1aa-d0752cfdffdc","pdpGroup":"defaultGroup"}
16:25:57 policy-db-migrator | --------------
16:25:57 kafka | num.network.threads = 3
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.923742881Z level=info msg="Executing migration" id="Add column week_start in preferences"
16:25:57 policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor]
16:25:57 policy-apex-pdp | [2024-02-21T16:24:16.423+00:00|WARN|Registry|KAFKA-source-policy-pdp-pap] replacing previously registered: object:pdp/status/publisher
16:25:57 policy-db-migrator |
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.926869021Z level=info msg="Migration successfully executed" id="Add column week_start in preferences" duration=3.12798ms
16:25:57 policy-pap | receive.buffer.bytes = 65536
16:25:57 kafka | num.partitions = 1
16:25:57 policy-apex-pdp | [2024-02-21T16:24:16.424+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [OUT|KAFKA|policy-pdp-pap]
16:25:57 policy-db-migrator |
16:25:57 policy-pap | reconnect.backoff.max.ms = 1000
16:25:57 kafka | num.recovery.threads.per.data.dir = 1
16:25:57 policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"f5f1c365-585c-4d0b-8cb8-9cf6cb4aaae3","responseStatus":"SUCCESS","responseMessage":"Pdp update successful."},"messageName":"PDP_STATUS","requestId":"0c59c656-c7d8-4254-9f3e-eca6fc98c068","timestampMs":1708532656424,"name":"apex-ae39a0b7-8e72-45eb-a1aa-d0752cfdffdc","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.930415393Z level=info msg="Executing migration" id="Add column preferences.json_data"
16:25:57 policy-pap | reconnect.backoff.ms = 50
16:25:57 kafka | num.replica.alter.log.dirs.threads = null
16:25:57 policy-apex-pdp | [2024-02-21T16:24:16.454+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
16:25:57 policy-db-migrator | > upgrade 0350-jpatoscaproperty_constraints.sql
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.933466391Z level=info msg="Migration successfully executed" id="Add column preferences.json_data" duration=3.050598ms
16:25:57 policy-pap | request.timeout.ms = 30000
16:25:57 kafka |
num.replica.fetchers = 1 16:25:57 policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"fbba62bc-ce73-4500-9710-efcf835e3651","timestampMs":1708532656423,"name":"apex-ae39a0b7-8e72-45eb-a1aa-d0752cfdffdc","pdpGroup":"defaultGroup"} 16:25:57 policy-db-migrator | -------------- 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.937118473Z level=info msg="Executing migration" id="alter preferences.json_data to mediumtext v1" 16:25:57 policy-pap | retry.backoff.ms = 100 16:25:57 kafka | offset.metadata.max.bytes = 4096 16:25:57 policy-apex-pdp | [2024-02-21T16:24:16.454+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS 16:25:57 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscaproperty_constraints (parentLocalName VARCHAR(120) NULL, localName VARCHAR(120) NULL, parentKeyVersion VARCHAR(15) NULL, parentKeyName VARCHAR(120) NULL, CONSTRAINTS VARCHAR(255) NULL) 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.937182394Z level=info msg="Migration successfully executed" id="alter preferences.json_data to mediumtext v1" duration=63.921µs 16:25:57 policy-pap | sasl.client.callback.handler.class = null 16:25:57 kafka | offsets.commit.required.acks = -1 16:25:57 policy-apex-pdp | [2024-02-21T16:24:16.461+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 16:25:57 policy-db-migrator | -------------- 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.940410274Z level=info msg="Executing migration" id="Add preferences index org_id" 16:25:57 policy-pap | sasl.jaas.config = null 16:25:57 kafka | offsets.commit.timeout.ms = 5000 16:25:57 policy-db-migrator | 16:25:57 policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp status response message for 
PdpUpdate","policies":[],"response":{"responseTo":"f5f1c365-585c-4d0b-8cb8-9cf6cb4aaae3","responseStatus":"SUCCESS","responseMessage":"Pdp update successful."},"messageName":"PDP_STATUS","requestId":"0c59c656-c7d8-4254-9f3e-eca6fc98c068","timestampMs":1708532656424,"name":"apex-ae39a0b7-8e72-45eb-a1aa-d0752cfdffdc","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.941317309Z level=info msg="Migration successfully executed" id="Add preferences index org_id" duration=907.186µs 16:25:57 policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit 16:25:57 kafka | offsets.load.buffer.size = 5242880 16:25:57 policy-db-migrator | 16:25:57 policy-apex-pdp | [2024-02-21T16:24:16.463+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.945769157Z level=info msg="Executing migration" id="Add preferences index user_id" 16:25:57 policy-pap | sasl.kerberos.min.time.before.relogin = 60000 16:25:57 kafka | offsets.retention.check.interval.ms = 600000 16:25:57 policy-db-migrator | > upgrade 0360-jpatoscaproperty_metadata.sql 16:25:57 policy-apex-pdp | [2024-02-21T16:24:16.503+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.947169856Z level=info msg="Migration successfully executed" id="Add preferences index user_id" duration=1.400679ms 16:25:57 policy-pap | sasl.kerberos.service.name = null 16:25:57 kafka | offsets.retention.minutes = 10080 16:25:57 policy-db-migrator | -------------- 16:25:57 policy-apex-pdp | {"source":"pap-c468fa43-447b-4ce3-b7f9-e0c2bed1c584","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"c6d9c8ae-9f96-4cb8-97e7-9591942eb564","timestampMs":1708532656344,"name":"apex-ae39a0b7-8e72-45eb-a1aa-d0752cfdffdc","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 16:25:57 grafana | logger=migrator 
t=2024-02-21T16:23:14.950808637Z level=info msg="Executing migration" id="create alert table v1" 16:25:57 policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 16:25:57 kafka | offsets.topic.compression.codec = 0 16:25:57 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscaproperty_metadata (parentLocalName VARCHAR(120) NULL, localName VARCHAR(120) NULL, parentKeyVersion VARCHAR(15) NULL, parentKeyName VARCHAR(120) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) 16:25:57 policy-apex-pdp | [2024-02-21T16:24:16.506+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [OUT|KAFKA|policy-pdp-pap] 16:25:57 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.952315877Z level=info msg="Migration successfully executed" id="create alert table v1" duration=1.50652ms 16:25:57 kafka | offsets.topic.num.partitions = 50 16:25:57 policy-db-migrator | -------------- 16:25:57 policy-apex-pdp | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpStateChange","policies":[],"response":{"responseTo":"c6d9c8ae-9f96-4cb8-97e7-9591942eb564","responseStatus":"SUCCESS","responseMessage":"State changed to active. 
No policies found."},"messageName":"PDP_STATUS","requestId":"d9af465b-121e-4aca-a001-a758b96d7663","timestampMs":1708532656506,"name":"apex-ae39a0b7-8e72-45eb-a1aa-d0752cfdffdc","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 16:25:57 policy-pap | sasl.login.callback.handler.class = null 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.957198407Z level=info msg="Executing migration" id="add index alert org_id & id " 16:25:57 kafka | offsets.topic.replication.factor = 1 16:25:57 policy-db-migrator | 16:25:57 policy-apex-pdp | [2024-02-21T16:24:16.517+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 16:25:57 policy-pap | sasl.login.class = null 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.958832397Z level=info msg="Migration successfully executed" id="add index alert org_id & id " duration=1.63498ms 16:25:57 kafka | offsets.topic.segment.bytes = 104857600 16:25:57 policy-db-migrator | 16:25:57 policy-apex-pdp | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpStateChange","policies":[],"response":{"responseTo":"c6d9c8ae-9f96-4cb8-97e7-9591942eb564","responseStatus":"SUCCESS","responseMessage":"State changed to active. 
No policies found."},"messageName":"PDP_STATUS","requestId":"d9af465b-121e-4aca-a001-a758b96d7663","timestampMs":1708532656506,"name":"apex-ae39a0b7-8e72-45eb-a1aa-d0752cfdffdc","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 16:25:57 policy-pap | sasl.login.connect.timeout.ms = null 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.96264175Z level=info msg="Executing migration" id="add index alert state" 16:25:57 kafka | password.encoder.cipher.algorithm = AES/CBC/PKCS5Padding 16:25:57 policy-db-migrator | > upgrade 0370-jpatoscarelationshiptype_metadata.sql 16:25:57 policy-apex-pdp | [2024-02-21T16:24:16.517+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS 16:25:57 policy-pap | sasl.login.read.timeout.ms = null 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.963445846Z level=info msg="Migration successfully executed" id="add index alert state" duration=804.056µs 16:25:57 kafka | password.encoder.iterations = 4096 16:25:57 policy-db-migrator | -------------- 16:25:57 policy-apex-pdp | [2024-02-21T16:24:16.550+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 16:25:57 policy-pap | sasl.login.refresh.buffer.seconds = 300 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.980132338Z level=info msg="Executing migration" id="add index alert dashboard_id" 16:25:57 kafka | password.encoder.key.length = 128 16:25:57 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarelationshiptype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) 16:25:57 policy-db-migrator | -------------- 16:25:57 policy-pap | sasl.login.refresh.min.period.seconds = 60 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.980749642Z level=info msg="Migration successfully executed" id="add index alert dashboard_id" duration=618.034µs 16:25:57 kafka | password.encoder.keyfactory.algorithm = null 16:25:57 policy-db-migrator | 16:25:57 
policy-pap | sasl.login.refresh.window.factor = 0.8 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.985112899Z level=info msg="Executing migration" id="Create alert_rule_tag table v1" 16:25:57 kafka | password.encoder.old.secret = null 16:25:57 policy-db-migrator | 16:25:57 policy-pap | sasl.login.refresh.window.jitter = 0.05 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.985788923Z level=info msg="Migration successfully executed" id="Create alert_rule_tag table v1" duration=676.814µs 16:25:57 kafka | password.encoder.secret = null 16:25:57 policy-db-migrator | > upgrade 0380-jpatoscarelationshiptype_properties.sql 16:25:57 policy-pap | sasl.login.retry.backoff.max.ms = 10000 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.989553657Z level=info msg="Executing migration" id="Add unique index alert_rule_tag.alert_id_tag_id" 16:25:57 kafka | principal.builder.class = class org.apache.kafka.common.security.authenticator.DefaultKafkaPrincipalBuilder 16:25:57 policy-db-migrator | -------------- 16:25:57 policy-pap | sasl.login.retry.backoff.ms = 100 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.990484942Z level=info msg="Migration successfully executed" id="Add unique index alert_rule_tag.alert_id_tag_id" duration=931.095µs 16:25:57 kafka | process.roles = [] 16:25:57 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarelationshiptype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL) 16:25:57 policy-pap | sasl.mechanism = GSSAPI 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.994074604Z level=info msg="Executing migration" id="drop index UQE_alert_rule_tag_alert_id_tag_id - v1" 16:25:57 kafka | producer.id.expiration.check.interval.ms = 600000 16:25:57 policy-db-migrator | -------------- 16:25:57 policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:14.99498348Z level=info msg="Migration 
successfully executed" id="drop index UQE_alert_rule_tag_alert_id_tag_id - v1" duration=908.976µs 16:25:57 kafka | producer.id.expiration.ms = 86400000 16:25:57 policy-db-migrator | 16:25:57 policy-pap | sasl.oauthbearer.expected.audience = null 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:15.00310344Z level=info msg="Executing migration" id="Rename table alert_rule_tag to alert_rule_tag_v1 - v1" 16:25:57 kafka | producer.purgatory.purge.interval.requests = 1000 16:25:57 policy-db-migrator | 16:25:57 policy-pap | sasl.oauthbearer.expected.issuer = null 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:15.017789766Z level=info msg="Migration successfully executed" id="Rename table alert_rule_tag to alert_rule_tag_v1 - v1" duration=14.684576ms 16:25:57 kafka | queued.max.request.bytes = -1 16:25:57 policy-db-migrator | > upgrade 0390-jpatoscarequirement_metadata.sql 16:25:57 policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:15.022856677Z level=info msg="Executing migration" id="Create alert_rule_tag table v2" 16:25:57 kafka | queued.max.requests = 500 16:25:57 policy-db-migrator | -------------- 16:25:57 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:15.023473023Z level=info msg="Migration successfully executed" id="Create alert_rule_tag table v2" duration=616.636µs 16:25:57 kafka | quota.window.num = 11 16:25:57 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarequirement_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) 16:25:57 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:15.026842336Z level=info msg="Executing migration" id="create index UQE_alert_rule_tag_alert_id_tag_id - Add unique index alert_rule_tag.alert_id_tag_id V2" 16:25:57 kafka | 
quota.window.size.seconds = 1 16:25:57 policy-db-migrator | -------------- 16:25:57 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:15.027704135Z level=info msg="Migration successfully executed" id="create index UQE_alert_rule_tag_alert_id_tag_id - Add unique index alert_rule_tag.alert_id_tag_id V2" duration=861.809µs 16:25:57 kafka | remote.log.index.file.cache.total.size.bytes = 1073741824 16:25:57 policy-db-migrator | 16:25:57 policy-pap | sasl.oauthbearer.scope.claim.name = scope 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:15.033647464Z level=info msg="Executing migration" id="copy alert_rule_tag v1 to v2" 16:25:57 kafka | remote.log.manager.task.interval.ms = 30000 16:25:57 policy-apex-pdp | {"source":"pap-c468fa43-447b-4ce3-b7f9-e0c2bed1c584","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"1fab6bfd-d361-42cb-9055-21dc8bc0ac21","timestampMs":1708532656527,"name":"apex-ae39a0b7-8e72-45eb-a1aa-d0752cfdffdc","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 16:25:57 policy-db-migrator | 16:25:57 policy-pap | sasl.oauthbearer.sub.claim.name = sub 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:15.033915907Z level=info msg="Migration successfully executed" id="copy alert_rule_tag v1 to v2" duration=268.383µs 16:25:57 kafka | remote.log.manager.task.retry.backoff.max.ms = 30000 16:25:57 policy-apex-pdp | [2024-02-21T16:24:16.552+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [OUT|KAFKA|policy-pdp-pap] 16:25:57 policy-db-migrator | > upgrade 0400-jpatoscarequirement_occurrences.sql 16:25:57 policy-pap | sasl.oauthbearer.token.endpoint.url = null 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:15.037100948Z level=info msg="Executing migration" id="drop table alert_rule_tag_v1" 16:25:57 kafka | remote.log.manager.task.retry.backoff.ms = 500 16:25:57 policy-apex-pdp | 
{"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"1fab6bfd-d361-42cb-9055-21dc8bc0ac21","responseStatus":"SUCCESS","responseMessage":"Pdp already updated"},"messageName":"PDP_STATUS","requestId":"231be759-d5b8-4dae-bd16-94f291e82d6d","timestampMs":1708532656552,"name":"apex-ae39a0b7-8e72-45eb-a1aa-d0752cfdffdc","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 16:25:57 policy-db-migrator | -------------- 16:25:57 policy-pap | security.protocol = PLAINTEXT 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:15.038084177Z level=info msg="Migration successfully executed" id="drop table alert_rule_tag_v1" duration=983.059µs 16:25:57 kafka | remote.log.manager.task.retry.jitter = 0.2 16:25:57 policy-apex-pdp | [2024-02-21T16:24:16.560+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 16:25:57 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarequirement_occurrences (name VARCHAR(120) NULL, version VARCHAR(20) NULL, OCCURRENCES INT DEFAULT NULL) 16:25:57 policy-pap | security.providers = null 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:15.041466231Z level=info msg="Executing migration" id="create alert_notification table v1" 16:25:57 kafka | remote.log.manager.thread.pool.size = 10 16:25:57 policy-apex-pdp | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"1fab6bfd-d361-42cb-9055-21dc8bc0ac21","responseStatus":"SUCCESS","responseMessage":"Pdp already updated"},"messageName":"PDP_STATUS","requestId":"231be759-d5b8-4dae-bd16-94f291e82d6d","timestampMs":1708532656552,"name":"apex-ae39a0b7-8e72-45eb-a1aa-d0752cfdffdc","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 16:25:57 policy-db-migrator | -------------- 16:25:57 policy-pap | send.buffer.bytes = 131072 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:15.042107928Z 
level=info msg="Migration successfully executed" id="create alert_notification table v1" duration=641.697µs 16:25:57 kafka | remote.log.metadata.custom.metadata.max.bytes = 128 16:25:57 policy-apex-pdp | [2024-02-21T16:24:16.561+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS 16:25:57 policy-db-migrator | 16:25:57 policy-pap | session.timeout.ms = 45000 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:15.046799044Z level=info msg="Executing migration" id="Add column is_default" 16:25:57 kafka | remote.log.metadata.manager.class.name = org.apache.kafka.server.log.remote.metadata.storage.TopicBasedRemoteLogMetadataManager 16:25:57 policy-apex-pdp | [2024-02-21T16:24:56.157+00:00|INFO|RequestLog|qtp1068445309-33] 172.17.0.4 - policyadmin [21/Feb/2024:16:24:56 +0000] "GET /metrics HTTP/1.1" 200 10650 "-" "Prometheus/2.49.1" 16:25:57 policy-db-migrator | 16:25:57 policy-pap | socket.connection.setup.timeout.max.ms = 30000 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:15.053178087Z level=info msg="Migration successfully executed" id="Add column is_default" duration=6.396474ms 16:25:57 kafka | remote.log.metadata.manager.class.path = null 16:25:57 policy-db-migrator | > upgrade 0410-jpatoscarequirement_properties.sql 16:25:57 policy-pap | socket.connection.setup.timeout.ms = 10000 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:15.057673011Z level=info msg="Executing migration" id="Add column frequency" 16:25:57 kafka | remote.log.metadata.manager.impl.prefix = rlmm.config. 
16:25:57 policy-db-migrator | -------------- 16:25:57 policy-pap | ssl.cipher.suites = null 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:15.060160286Z level=info msg="Migration successfully executed" id="Add column frequency" duration=2.488035ms 16:25:57 kafka | remote.log.metadata.manager.listener.name = null 16:25:57 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarequirement_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGTEXT NULL, PROPERTIES_KEY VARCHAR(255) NULL) 16:25:57 policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:15.065892132Z level=info msg="Executing migration" id="Add column send_reminder" 16:25:57 policy-db-migrator | -------------- 16:25:57 policy-pap | ssl.endpoint.identification.algorithm = https 16:25:57 kafka | remote.log.reader.max.pending.tasks = 100 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:15.069565509Z level=info msg="Migration successfully executed" id="Add column send_reminder" duration=3.677447ms 16:25:57 policy-db-migrator | 16:25:57 policy-pap | ssl.engine.factory.class = null 16:25:57 kafka | remote.log.reader.threads = 10 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:15.073504288Z level=info msg="Executing migration" id="Add column disable_resolve_message" 16:25:57 policy-db-migrator | 16:25:57 policy-pap | ssl.key.password = null 16:25:57 kafka | remote.log.storage.manager.class.name = null 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:15.076975252Z level=info msg="Migration successfully executed" id="Add column disable_resolve_message" duration=3.469084ms 16:25:57 policy-db-migrator | > upgrade 0420-jpatoscaservicetemplate_metadata.sql 16:25:57 policy-pap | ssl.keymanager.algorithm = SunX509 16:25:57 kafka | remote.log.storage.manager.class.path = null 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:15.08171538Z level=info msg="Executing migration" id="add index alert_notification org_id & 
name" 16:25:57 policy-db-migrator | -------------- 16:25:57 policy-pap | ssl.keystore.certificate.chain = null 16:25:57 kafka | remote.log.storage.manager.impl.prefix = rsm.config. 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:15.082667938Z level=info msg="Migration successfully executed" id="add index alert_notification org_id & name" duration=952.048µs 16:25:57 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscaservicetemplate_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) 16:25:57 policy-pap | ssl.keystore.key = null 16:25:57 kafka | remote.log.storage.system.enable = false 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:15.08583797Z level=info msg="Executing migration" id="Update alert table charset" 16:25:57 policy-db-migrator | -------------- 16:25:57 policy-pap | ssl.keystore.location = null 16:25:57 kafka | replica.fetch.backoff.ms = 1000 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:15.08586395Z level=info msg="Migration successfully executed" id="Update alert table charset" duration=27.18µs 16:25:57 policy-db-migrator | 16:25:57 policy-pap | ssl.keystore.password = null 16:25:57 kafka | replica.fetch.max.bytes = 1048576 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:15.089419646Z level=info msg="Executing migration" id="Update alert_notification table charset" 16:25:57 policy-db-migrator | 16:25:57 policy-pap | ssl.keystore.type = JKS 16:25:57 kafka | replica.fetch.min.bytes = 1 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:15.089446906Z level=info msg="Migration successfully executed" id="Update alert_notification table charset" duration=28.16µs 16:25:57 policy-db-migrator | > upgrade 0430-jpatoscatopologytemplate_inputs.sql 16:25:57 kafka | replica.fetch.response.max.bytes = 10485760 16:25:57 policy-db-migrator | -------------- 16:25:57 policy-pap | ssl.protocol = TLSv1.3 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:15.094038751Z 
level=info msg="Executing migration" id="create notification_journal table v1" 16:25:57 policy-pap | ssl.provider = null 16:25:57 kafka | replica.fetch.wait.max.ms = 500 16:25:57 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscatopologytemplate_inputs (parentLocalName VARCHAR(120) NULL, localName VARCHAR(120) NULL, parentKeyVersion VARCHAR(15) NULL, parentKeyName VARCHAR(120) NULL, INPUTS LONGBLOB DEFAULT NULL, INPUTS_KEY VARCHAR(255) NULL) 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:15.096404994Z level=info msg="Migration successfully executed" id="create notification_journal table v1" duration=2.320463ms 16:25:57 policy-pap | ssl.secure.random.implementation = null 16:25:57 kafka | replica.high.watermark.checkpoint.interval.ms = 5000 16:25:57 policy-db-migrator | -------------- 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:15.101123052Z level=info msg="Executing migration" id="add index notification_journal org_id & alert_id & notifier_id" 16:25:57 policy-pap | ssl.trustmanager.algorithm = PKIX 16:25:57 kafka | replica.lag.time.max.ms = 30000 16:25:57 policy-db-migrator | 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:15.102863868Z level=info msg="Migration successfully executed" id="add index notification_journal org_id & alert_id & notifier_id" duration=1.744056ms 16:25:57 policy-pap | ssl.truststore.certificates = null 16:25:57 kafka | replica.selector.class = null 16:25:57 policy-db-migrator | 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:15.106348723Z level=info msg="Executing migration" id="drop alert_notification_journal" 16:25:57 policy-pap | ssl.truststore.location = null 16:25:57 kafka | replica.socket.receive.buffer.bytes = 65536 16:25:57 policy-db-migrator | > upgrade 0440-pdpgroup_pdpsubgroup.sql 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:15.107652216Z level=info msg="Migration successfully executed" id="drop alert_notification_journal" duration=1.303262ms 16:25:57 policy-pap | 
ssl.truststore.password = null 16:25:57 kafka | replica.socket.timeout.ms = 30000 16:25:57 policy-db-migrator | -------------- 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:15.112818616Z level=info msg="Executing migration" id="create alert_notification_state table v1" 16:25:57 policy-pap | ssl.truststore.type = JKS 16:25:57 kafka | replication.quota.window.num = 11 16:25:57 policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpgroup_pdpsubgroup (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_PDPGROUP_PDPSUBGROUP (name, version, parentLocalName, localName, parentKeyVersion, parentKeyName)) 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:15.113588094Z level=info msg="Migration successfully executed" id="create alert_notification_state table v1" duration=768.988µs 16:25:57 policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 16:25:57 kafka | replication.quota.window.size.seconds = 1 16:25:57 policy-db-migrator | -------------- 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:15.119007918Z level=info msg="Executing migration" id="add index alert_notification_state org_id & alert_id & notifier_id" 16:25:57 policy-pap | 16:25:57 kafka | request.timeout.ms = 30000 16:25:57 policy-db-migrator | 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:15.119920457Z level=info msg="Migration successfully executed" id="add index alert_notification_state org_id & alert_id & notifier_id" duration=911.899µs 16:25:57 policy-pap | [2024-02-21T16:23:52.437+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 16:25:57 kafka | reserved.broker.max.id = 1000 16:25:57 policy-db-migrator | 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:15.123785536Z level=info msg="Executing migration" id="Add for to alert table" 16:25:57 policy-pap | 
[2024-02-21T16:23:52.437+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 16:25:57 kafka | sasl.client.callback.handler.class = null 16:25:57 policy-db-migrator | > upgrade 0450-pdpgroup.sql 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:15.135082908Z level=info msg="Migration successfully executed" id="Add for to alert table" duration=11.291762ms 16:25:57 policy-pap | [2024-02-21T16:23:52.437+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1708532632437 16:25:57 kafka | sasl.enabled.mechanisms = [GSSAPI] 16:25:57 policy-db-migrator | -------------- 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:15.139482771Z level=info msg="Executing migration" id="Add column uid in alert_notification" 16:25:57 policy-pap | [2024-02-21T16:23:52.438+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-policy-pap-2, groupId=policy-pap] Subscribed to topic(s): policy-pdp-pap 16:25:57 kafka | sasl.jaas.config = null 16:25:57 policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpgroup (`DESCRIPTION` VARCHAR(255) NULL, PDPGROUPSTATE INT DEFAULT NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_PDPGROUP (name, version)) 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:15.146611111Z level=info msg="Migration successfully executed" id="Add column uid in alert_notification" duration=7.12715ms 16:25:57 policy-pap | [2024-02-21T16:23:52.823+00:00|INFO|PapDatabaseInitializer|main] Created initial pdpGroup in DB - PdpGroups(groups=[PdpGroup(name=defaultGroup, description=The default group that registers all supported policy types and pdps., pdpGroupState=ACTIVE, properties=null, pdpSubgroups=[PdpSubGroup(pdpType=apex, supportedPolicyTypes=[onap.policies.controlloop.operational.common.Apex 1.0.0, onap.policies.native.Apex 1.0.0], policies=[], currentInstanceCount=0, desiredInstanceCount=1, properties=null, pdpInstances=null)])]) from /opt/app/policy/pap/etc/mounted/groups.json 16:25:57 kafka | sasl.kerberos.kinit.cmd = 
/usr/bin/kinit 16:25:57 policy-db-migrator | -------------- 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:15.150915864Z level=info msg="Executing migration" id="Update uid column values in alert_notification" 16:25:57 policy-pap | [2024-02-21T16:23:53.047+00:00|WARN|JpaBaseConfiguration$JpaWebConfiguration|main] spring.jpa.open-in-view is enabled by default. Therefore, database queries may be performed during view rendering. Explicitly configure spring.jpa.open-in-view to disable this warning 16:25:57 kafka | sasl.kerberos.min.time.before.relogin = 60000 16:25:57 policy-db-migrator | 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:15.151150706Z level=info msg="Migration successfully executed" id="Update uid column values in alert_notification" duration=231.482µs 16:25:57 policy-pap | [2024-02-21T16:23:53.377+00:00|INFO|DefaultSecurityFilterChain|main] Will secure any request with [org.springframework.security.web.session.DisableEncodeUrlFilter@53917c92, org.springframework.security.web.context.request.async.WebAsyncManagerIntegrationFilter@1fa796a4, org.springframework.security.web.context.SecurityContextHolderFilter@1f013047, org.springframework.security.web.header.HeaderWriterFilter@ce0bbd5, org.springframework.security.web.authentication.logout.LogoutFilter@44c2e8a8, org.springframework.security.web.authentication.www.BasicAuthenticationFilter@4fbbd98c, org.springframework.security.web.savedrequest.RequestCacheAwareFilter@51566ce0, org.springframework.security.web.servletapi.SecurityContextHolderAwareRequestFilter@17e6d07b, org.springframework.security.web.authentication.AnonymousAuthenticationFilter@68de8522, org.springframework.security.web.access.ExceptionTranslationFilter@1f7557fe, org.springframework.security.web.access.intercept.AuthorizationFilter@3879feec] 16:25:57 kafka | sasl.kerberos.principal.to.local.rules = [DEFAULT] 16:25:57 policy-db-migrator | 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:15.154199857Z level=info 
msg="Executing migration" id="Add unique index alert_notification_org_id_uid" 16:25:57 policy-pap | [2024-02-21T16:23:54.316+00:00|INFO|EndpointLinksResolver|main] Exposing 3 endpoint(s) beneath base path '' 16:25:57 kafka | sasl.kerberos.service.name = null 16:25:57 policy-db-migrator | > upgrade 0460-pdppolicystatus.sql 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:15.155267307Z level=info msg="Migration successfully executed" id="Add unique index alert_notification_org_id_uid" duration=1.06692ms 16:25:57 policy-pap | [2024-02-21T16:23:54.419+00:00|INFO|Http11NioProtocol|main] Starting ProtocolHandler ["http-nio-6969"] 16:25:57 kafka | sasl.kerberos.ticket.renew.jitter = 0.05 16:25:57 policy-db-migrator | -------------- 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:15.160538049Z level=info msg="Executing migration" id="Remove unique index org_id_name" 16:25:57 policy-pap | [2024-02-21T16:23:54.446+00:00|INFO|TomcatWebServer|main] Tomcat started on port(s): 6969 (http) with context path '/policy/pap/v1' 16:25:57 kafka | sasl.kerberos.ticket.renew.window.factor = 0.8 16:25:57 policy-db-migrator | CREATE TABLE IF NOT EXISTS pdppolicystatus (DEPLOY BOOLEAN DEFAULT 0, PDPGROUP VARCHAR(255) DEFAULT NULL, PDPTYPE VARCHAR(255) DEFAULT NULL, STATE INT DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, PRIMARY KEY PK_PDPPOLICYSTATUS (parentLocalName, localName, parentKeyVersion, parentKeyName)) 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:15.16163816Z level=info msg="Migration successfully executed" id="Remove unique index org_id_name" duration=1.100871ms 16:25:57 policy-pap | [2024-02-21T16:23:54.464+00:00|INFO|ServiceManager|main] Policy PAP starting 16:25:57 kafka | sasl.login.callback.handler.class = null 16:25:57 policy-db-migrator | -------------- 16:25:57 grafana | 
logger=migrator t=2024-02-21T16:23:15.167178575Z level=info msg="Executing migration" id="Add column secure_settings in alert_notification" 16:25:57 policy-pap | [2024-02-21T16:23:54.464+00:00|INFO|ServiceManager|main] Policy PAP starting Meter Registry 16:25:57 kafka | sasl.login.class = null 16:25:57 policy-db-migrator | 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:15.170010753Z level=info msg="Migration successfully executed" id="Add column secure_settings in alert_notification" duration=2.833618ms 16:25:57 policy-pap | [2024-02-21T16:23:54.465+00:00|INFO|ServiceManager|main] Policy PAP starting PAP parameters 16:25:57 kafka | sasl.login.connect.timeout.ms = null 16:25:57 policy-db-migrator | 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:15.173588248Z level=info msg="Executing migration" id="alter alert.settings to mediumtext" 16:25:57 policy-pap | [2024-02-21T16:23:54.465+00:00|INFO|ServiceManager|main] Policy PAP starting Pdp Heartbeat Listener 16:25:57 kafka | sasl.login.read.timeout.ms = null 16:25:57 policy-db-migrator | > upgrade 0470-pdp.sql 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:15.173654229Z level=info msg="Migration successfully executed" id="alter alert.settings to mediumtext" duration=68.031µs 16:25:57 policy-pap | [2024-02-21T16:23:54.466+00:00|INFO|ServiceManager|main] Policy PAP starting Response Request ID Dispatcher 16:25:57 policy-db-migrator | -------------- 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:15.17878448Z level=info msg="Executing migration" id="Add non-unique index alert_notification_state_alert_id" 16:25:57 kafka | sasl.login.refresh.buffer.seconds = 300 16:25:57 policy-pap | [2024-02-21T16:23:54.466+00:00|INFO|ServiceManager|main] Policy PAP starting Heartbeat Request ID Dispatcher 16:25:57 policy-db-migrator | CREATE TABLE IF NOT EXISTS pdp (HEALTHY INT DEFAULT NULL, MESSAGE VARCHAR(255) DEFAULT NULL, PDPSTATE INT DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) 
NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_PDP (parentLocalName, localName, parentKeyVersion, parentKeyName)) 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:15.179719979Z level=info msg="Migration successfully executed" id="Add non-unique index alert_notification_state_alert_id" duration=935.889µs 16:25:57 kafka | sasl.login.refresh.min.period.seconds = 60 16:25:57 policy-pap | [2024-02-21T16:23:54.466+00:00|INFO|ServiceManager|main] Policy PAP starting Response Message Dispatcher 16:25:57 policy-db-migrator | -------------- 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:15.184678818Z level=info msg="Executing migration" id="Add non-unique index alert_rule_tag_alert_id" 16:25:57 kafka | sasl.login.refresh.window.factor = 0.8 16:25:57 policy-pap | [2024-02-21T16:23:54.470+00:00|INFO|TopicBase|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=66b9586c-d4bb-4933-993d-6431c832b08c, consumerInstance=policy-pap, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: registering org.onap.policy.common.endpoints.listeners.MessageTypeDispatcher@1cf4d454 16:25:57 policy-db-migrator | 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:15.185551187Z level=info msg="Migration successfully executed" id="Add non-unique index alert_rule_tag_alert_id" duration=872.629µs 16:25:57 kafka | sasl.login.refresh.window.jitter = 0.05 16:25:57 policy-pap | [2024-02-21T16:23:54.481+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, 
toString()=SingleThreadedBusTopicSource [consumerGroup=66b9586c-d4bb-4933-993d-6431c832b08c, consumerInstance=policy-pap, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting 16:25:57 policy-db-migrator | 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:15.188490675Z level=info msg="Executing migration" id="Drop old annotation table v4" 16:25:57 kafka | sasl.login.retry.backoff.max.ms = 10000 16:25:57 policy-pap | [2024-02-21T16:23:54.481+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: 16:25:57 policy-db-migrator | > upgrade 0480-pdpstatistics.sql 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:15.188571976Z level=info msg="Migration successfully executed" id="Drop old annotation table v4" duration=81.811µs 16:25:57 kafka | sasl.login.retry.backoff.ms = 100 16:25:57 policy-pap | allow.auto.create.topics = true 16:25:57 policy-db-migrator | -------------- 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:15.191683807Z level=info msg="Executing migration" id="create annotation table v5" 16:25:57 kafka | sasl.mechanism.controller.protocol = GSSAPI 16:25:57 policy-pap | auto.commit.interval.ms = 5000 16:25:57 policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpstatistics (PDPGROUPNAME VARCHAR(120) NULL, PDPSUBGROUPNAME VARCHAR(120) NULL, POLICYDEPLOYCOUNT BIGINT DEFAULT NULL, POLICYDEPLOYFAILCOUNT BIGINT DEFAULT NULL, POLICYDEPLOYSUCCESSCOUNT BIGINT DEFAULT NULL, POLICYEXECUTEDCOUNT BIGINT DEFAULT NULL, POLICYEXECUTEDFAILCOUNT BIGINT DEFAULT NULL, POLICYEXECUTEDSUCCESSCOUNT BIGINT DEFAULT NULL, timeStamp datetime NOT NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY 
PK_PDPSTATISTICS (timeStamp, name, version)) 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:15.192510635Z level=info msg="Migration successfully executed" id="create annotation table v5" duration=826.528µs 16:25:57 kafka | sasl.mechanism.inter.broker.protocol = GSSAPI 16:25:57 policy-pap | auto.include.jmx.reporter = true 16:25:57 policy-db-migrator | -------------- 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:15.19801672Z level=info msg="Executing migration" id="add index annotation 0 v3" 16:25:57 kafka | sasl.oauthbearer.clock.skew.seconds = 30 16:25:57 policy-pap | auto.offset.reset = latest 16:25:57 policy-db-migrator | 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:15.19896027Z level=info msg="Migration successfully executed" id="add index annotation 0 v3" duration=944.2µs 16:25:57 kafka | sasl.oauthbearer.expected.audience = null 16:25:57 policy-pap | bootstrap.servers = [kafka:9092] 16:25:57 policy-db-migrator | 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:15.201878299Z level=info msg="Executing migration" id="add index annotation 1 v3" 16:25:57 kafka | sasl.oauthbearer.expected.issuer = null 16:25:57 policy-pap | check.crcs = true 16:25:57 policy-db-migrator | > upgrade 0490-pdpsubgroup_pdp.sql 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:15.202753766Z level=info msg="Migration successfully executed" id="add index annotation 1 v3" duration=876.368µs 16:25:57 kafka | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 16:25:57 policy-pap | client.dns.lookup = use_all_dns_ips 16:25:57 policy-db-migrator | -------------- 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:15.205709136Z level=info msg="Executing migration" id="add index annotation 2 v3" 16:25:57 kafka | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 16:25:57 policy-pap | client.id = consumer-66b9586c-d4bb-4933-993d-6431c832b08c-3 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:15.206695286Z level=info msg="Migration successfully 
executed" id="add index annotation 2 v3" duration=985.84µs 16:25:57 policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpsubgroup_pdp (pdpParentKeyName VARCHAR(120) NOT NULL, pdpParentKeyVersion VARCHAR(15) NOT NULL, pdpParentLocalName VARCHAR(120) NOT NULL, pdpLocalName VARCHAR(120) NOT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_PDPSUBGROUP_PDP (pdpParentKeyName, pdpParentKeyVersion, pdpParentLocalName, pdpLocalName, parentLocalName, localName, parentKeyVersion, parentKeyName)) 16:25:57 kafka | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 16:25:57 policy-pap | client.rack = 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:15.211782446Z level=info msg="Executing migration" id="add index annotation 3 v3" 16:25:57 policy-db-migrator | -------------- 16:25:57 kafka | sasl.oauthbearer.jwks.endpoint.url = null 16:25:57 policy-pap | connections.max.idle.ms = 540000 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:15.212632085Z level=info msg="Migration successfully executed" id="add index annotation 3 v3" duration=851.339µs 16:25:57 policy-db-migrator | 16:25:57 kafka | sasl.oauthbearer.scope.claim.name = scope 16:25:57 policy-pap | default.api.timeout.ms = 60000 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:15.216983217Z level=info msg="Executing migration" id="add index annotation 4 v3" 16:25:57 policy-db-migrator | 16:25:57 kafka | sasl.oauthbearer.sub.claim.name = sub 16:25:57 policy-pap | enable.auto.commit = true 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:15.218043928Z level=info msg="Migration successfully executed" id="add index annotation 4 v3" duration=1.061711ms 16:25:57 policy-db-migrator | > upgrade 0500-pdpsubgroup.sql 16:25:57 kafka | sasl.oauthbearer.token.endpoint.url = null 16:25:57 policy-pap | exclude.internal.topics = true 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:15.220982177Z 
level=info msg="Executing migration" id="Update annotation table charset" 16:25:57 policy-db-migrator | -------------- 16:25:57 kafka | sasl.server.callback.handler.class = null 16:25:57 policy-pap | fetch.max.bytes = 52428800 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:15.221010708Z level=info msg="Migration successfully executed" id="Update annotation table charset" duration=31.851µs 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:15.226079188Z level=info msg="Executing migration" id="Add column region_id to annotation table" 16:25:57 kafka | sasl.server.max.receive.size = 524288 16:25:57 policy-pap | fetch.max.wait.ms = 500 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:15.23044188Z level=info msg="Migration successfully executed" id="Add column region_id to annotation table" duration=4.361942ms 16:25:57 kafka | security.inter.broker.protocol = PLAINTEXT 16:25:57 policy-pap | fetch.min.bytes = 1 16:25:57 policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpsubgroup (CURRENTINSTANCECOUNT INT DEFAULT NULL, DESIREDINSTANCECOUNT INT DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_PDPSUBGROUP (parentLocalName, localName, parentKeyVersion, parentKeyName)) 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:15.233627642Z level=info msg="Executing migration" id="Drop category_id index" 16:25:57 kafka | security.providers = null 16:25:57 policy-pap | group.id = 66b9586c-d4bb-4933-993d-6431c832b08c 16:25:57 policy-db-migrator | -------------- 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:15.234526461Z level=info msg="Migration successfully executed" id="Drop category_id index" duration=899.759µs 16:25:57 kafka | server.max.startup.time.ms = 9223372036854775807 16:25:57 policy-pap | group.instance.id = null 16:25:57 policy-db-migrator | 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:15.236670002Z level=info 
msg="Executing migration" id="Add column tags to annotation table" 16:25:57 kafka | socket.connection.setup.timeout.max.ms = 30000 16:25:57 policy-pap | heartbeat.interval.ms = 3000 16:25:57 policy-db-migrator | 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:15.240696772Z level=info msg="Migration successfully executed" id="Add column tags to annotation table" duration=4.02662ms 16:25:57 kafka | socket.connection.setup.timeout.ms = 10000 16:25:57 policy-pap | interceptor.classes = [] 16:25:57 policy-db-migrator | > upgrade 0510-toscacapabilityassignment.sql 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:15.245589921Z level=info msg="Executing migration" id="Create annotation_tag table v2" 16:25:57 kafka | socket.listen.backlog.size = 50 16:25:57 policy-pap | internal.leave.group.on.close = true 16:25:57 policy-db-migrator | -------------- 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:15.246219337Z level=info msg="Migration successfully executed" id="Create annotation_tag table v2" duration=632.186µs 16:25:57 kafka | socket.receive.buffer.bytes = 102400 16:25:57 policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false 16:25:57 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilityassignment (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, type_name VARCHAR(255) NULL, type_version VARCHAR(255) NULL, PRIMARY KEY PK_TOSCACAPABILITYASSIGNMENT(name, version)) 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:15.250301357Z level=info msg="Executing migration" id="Add unique index annotation_tag.annotation_id_tag_id" 16:25:57 kafka | socket.request.max.bytes = 104857600 16:25:57 policy-pap | isolation.level = read_uncommitted 16:25:57 policy-db-migrator | -------------- 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:15.251823323Z level=info msg="Migration successfully executed" id="Add unique index 
annotation_tag.annotation_id_tag_id" duration=1.519185ms 16:25:57 kafka | socket.send.buffer.bytes = 102400 16:25:57 policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 16:25:57 policy-db-migrator | 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:15.254956714Z level=info msg="Executing migration" id="drop index UQE_annotation_tag_annotation_id_tag_id - v2" 16:25:57 kafka | ssl.cipher.suites = [] 16:25:57 policy-pap | max.partition.fetch.bytes = 1048576 16:25:57 policy-db-migrator | 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:15.256230026Z level=info msg="Migration successfully executed" id="drop index UQE_annotation_tag_annotation_id_tag_id - v2" duration=1.271832ms 16:25:57 kafka | ssl.client.auth = none 16:25:57 policy-pap | max.poll.interval.ms = 300000 16:25:57 policy-db-migrator | > upgrade 0520-toscacapabilityassignments.sql 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:15.261378037Z level=info msg="Executing migration" id="Rename table annotation_tag to annotation_tag_v2 - v2" 16:25:57 kafka | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 16:25:57 policy-pap | max.poll.records = 500 16:25:57 policy-db-migrator | -------------- 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:15.278219673Z level=info msg="Migration successfully executed" id="Rename table annotation_tag to annotation_tag_v2 - v2" duration=16.836927ms 16:25:57 kafka | ssl.endpoint.identification.algorithm = https 16:25:57 policy-pap | metadata.max.age.ms = 300000 16:25:57 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilityassignments (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCACAPABILITYASSIGNMENTS (name, version)) 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:15.281189593Z level=info msg="Executing migration" id="Create annotation_tag table v3" 16:25:57 kafka | ssl.engine.factory.class = null 16:25:57 policy-pap | metric.reporters = [] 16:25:57 policy-db-migrator | 
-------------- 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:15.281676118Z level=info msg="Migration successfully executed" id="Create annotation_tag table v3" duration=486.635µs 16:25:57 kafka | ssl.key.password = null 16:25:57 policy-pap | metrics.num.samples = 2 16:25:57 policy-db-migrator | 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:15.284530106Z level=info msg="Executing migration" id="create index UQE_annotation_tag_annotation_id_tag_id - Add unique index annotation_tag.annotation_id_tag_id V3" 16:25:57 kafka | ssl.keymanager.algorithm = SunX509 16:25:57 policy-pap | metrics.recording.level = INFO 16:25:57 policy-db-migrator | 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:15.285441855Z level=info msg="Migration successfully executed" id="create index UQE_annotation_tag_annotation_id_tag_id - Add unique index annotation_tag.annotation_id_tag_id V3" duration=909.679µs 16:25:57 kafka | ssl.keystore.certificate.chain = null 16:25:57 policy-pap | metrics.sample.window.ms = 30000 16:25:57 policy-db-migrator | > upgrade 0530-toscacapabilityassignments_toscacapabilityassignment.sql 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:15.290687567Z level=info msg="Executing migration" id="copy annotation_tag v2 to v3" 16:25:57 kafka | ssl.keystore.key = null 16:25:57 policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] 16:25:57 policy-db-migrator | -------------- 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:15.291006Z level=info msg="Migration successfully executed" id="copy annotation_tag v2 to v3" duration=319.093µs 16:25:57 kafka | ssl.keystore.location = null 16:25:57 policy-pap | receive.buffer.bytes = 65536 16:25:57 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilityassignments_toscacapabilityassignment (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, 
conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCACAPABILITYASSIGNMENTS_TOSCACAPABILITYASSIGNMENT (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) 16:25:57 kafka | ssl.keystore.password = null 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:15.294750568Z level=info msg="Executing migration" id="drop table annotation_tag_v2" 16:25:57 policy-pap | reconnect.backoff.max.ms = 1000 16:25:57 policy-db-migrator | -------------- 16:25:57 kafka | ssl.keystore.type = JKS 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:15.295303372Z level=info msg="Migration successfully executed" id="drop table annotation_tag_v2" duration=552.414µs 16:25:57 policy-pap | reconnect.backoff.ms = 50 16:25:57 policy-db-migrator | 16:25:57 kafka | ssl.principal.mapping.rules = DEFAULT 16:25:57 policy-pap | request.timeout.ms = 30000 16:25:57 policy-db-migrator | 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:15.298963629Z level=info msg="Executing migration" id="Update alert annotations and set TEXT to empty" 16:25:57 kafka | ssl.protocol = TLSv1.3 16:25:57 policy-pap | retry.backoff.ms = 100 16:25:57 policy-db-migrator | > upgrade 0540-toscacapabilitytype.sql 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:15.299359593Z level=info msg="Migration successfully executed" id="Update alert annotations and set TEXT to empty" duration=403.364µs 16:25:57 kafka | ssl.provider = null 16:25:57 policy-db-migrator | -------------- 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:15.303601794Z level=info msg="Executing migration" id="Add created time to annotation table" 16:25:57 policy-pap | sasl.client.callback.handler.class = null 16:25:57 kafka | ssl.secure.random.implementation = null 16:25:57 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilitytype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name 
VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCACAPABILITYTYPE (name, version)) 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:15.307878747Z level=info msg="Migration successfully executed" id="Add created time to annotation table" duration=4.276823ms 16:25:57 policy-pap | sasl.jaas.config = null 16:25:57 kafka | ssl.trustmanager.algorithm = PKIX 16:25:57 policy-db-migrator | -------------- 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:15.313052808Z level=info msg="Executing migration" id="Add updated time to annotation table" 16:25:57 policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit 16:25:57 kafka | ssl.truststore.certificates = null 16:25:57 policy-db-migrator | 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:15.317262799Z level=info msg="Migration successfully executed" id="Add updated time to annotation table" duration=4.209751ms 16:25:57 policy-pap | sasl.kerberos.min.time.before.relogin = 60000 16:25:57 kafka | ssl.truststore.location = null 16:25:57 policy-db-migrator | 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:15.320156108Z level=info msg="Executing migration" id="Add index for created in annotation table" 16:25:57 policy-pap | sasl.kerberos.service.name = null 16:25:57 kafka | ssl.truststore.password = null 16:25:57 policy-db-migrator | > upgrade 0550-toscacapabilitytypes.sql 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:15.321036917Z level=info msg="Migration successfully executed" id="Add index for created in annotation table" duration=880.809µs 16:25:57 policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 16:25:57 kafka | ssl.truststore.type = JKS 16:25:57 policy-db-migrator | -------------- 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:15.32641605Z level=info msg="Executing migration" id="Add index for updated in annotation table" 16:25:57 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 16:25:57 
kafka | transaction.abort.timed.out.transaction.cleanup.interval.ms = 10000 16:25:57 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilitytypes (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCACAPABILITYTYPES (name, version)) 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:15.327818735Z level=info msg="Migration successfully executed" id="Add index for updated in annotation table" duration=1.405205ms 16:25:57 policy-pap | sasl.login.callback.handler.class = null 16:25:57 kafka | transaction.max.timeout.ms = 900000 16:25:57 policy-db-migrator | -------------- 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:15.333099887Z level=info msg="Executing migration" id="Convert existing annotations from seconds to milliseconds" 16:25:57 policy-pap | sasl.login.class = null 16:25:57 kafka | transaction.partition.verification.enable = true 16:25:57 policy-db-migrator | 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:15.33339944Z level=info msg="Migration successfully executed" id="Convert existing annotations from seconds to milliseconds" duration=301.403µs 16:25:57 policy-pap | sasl.login.connect.timeout.ms = null 16:25:57 kafka | transaction.remove.expired.transaction.cleanup.interval.ms = 3600000 16:25:57 policy-db-migrator | 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:15.336297189Z level=info msg="Executing migration" id="Add epoch_end column" 16:25:57 policy-pap | sasl.login.read.timeout.ms = null 16:25:57 kafka | transaction.state.log.load.buffer.size = 5242880 16:25:57 policy-db-migrator | > upgrade 0560-toscacapabilitytypes_toscacapabilitytype.sql 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:15.339830483Z level=info msg="Migration successfully executed" id="Add epoch_end column" duration=3.530444ms 16:25:57 policy-pap | sasl.login.refresh.buffer.seconds = 300 16:25:57 kafka | transaction.state.log.min.isr = 2 16:25:57 policy-db-migrator | -------------- 16:25:57 grafana | logger=migrator 
t=2024-02-21T16:23:15.344788183Z level=info msg="Executing migration" id="Add index for epoch_end" 16:25:57 policy-pap | sasl.login.refresh.min.period.seconds = 60 16:25:57 kafka | transaction.state.log.num.partitions = 50 16:25:57 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilitytypes_toscacapabilitytype (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCACAPABILITYTYPES_TOSCACAPABILITYTYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:15.345947714Z level=info msg="Migration successfully executed" id="Add index for epoch_end" duration=1.16285ms 16:25:57 policy-pap | sasl.login.refresh.window.factor = 0.8 16:25:57 kafka | transaction.state.log.replication.factor = 3 16:25:57 policy-db-migrator | -------------- 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:15.34956009Z level=info msg="Executing migration" id="Make epoch_end the same as epoch" 16:25:57 policy-pap | sasl.login.refresh.window.jitter = 0.05 16:25:57 kafka | transaction.state.log.segment.bytes = 104857600 16:25:57 policy-db-migrator | 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:15.349833732Z level=info msg="Migration successfully executed" id="Make epoch_end the same as epoch" duration=276.263µs 16:25:57 policy-pap | sasl.login.retry.backoff.max.ms = 10000 16:25:57 kafka | transactional.id.expiration.ms = 604800000 16:25:57 policy-db-migrator | 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:15.353517179Z level=info msg="Executing migration" id="Move region to single row" 16:25:57 policy-pap | sasl.login.retry.backoff.ms = 100 16:25:57 kafka | unclean.leader.election.enable = false 16:25:57 policy-db-migrator | > upgrade 0570-toscadatatype.sql 16:25:57 
grafana | logger=migrator t=2024-02-21T16:23:15.354098074Z level=info msg="Migration successfully executed" id="Move region to single row" duration=584.945µs 16:25:57 policy-pap | sasl.mechanism = GSSAPI 16:25:57 kafka | unstable.api.versions.enable = false 16:25:57 policy-db-migrator | -------------- 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:15.36069476Z level=info msg="Executing migration" id="Remove index org_id_epoch from annotation table" 16:25:57 policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 16:25:57 kafka | zookeeper.clientCnxnSocket = null 16:25:57 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscadatatype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCADATATYPE (name, version)) 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:15.361845521Z level=info msg="Migration successfully executed" id="Remove index org_id_epoch from annotation table" duration=1.156761ms 16:25:57 policy-pap | sasl.oauthbearer.expected.audience = null 16:25:57 kafka | zookeeper.connect = zookeeper:2181 16:25:57 policy-db-migrator | -------------- 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:15.366475537Z level=info msg="Executing migration" id="Remove index org_id_dashboard_id_panel_id_epoch from annotation table" 16:25:57 policy-pap | sasl.oauthbearer.expected.issuer = null 16:25:57 kafka | zookeeper.connection.timeout.ms = null 16:25:57 policy-db-migrator | 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:15.368059082Z level=info msg="Migration successfully executed" id="Remove index org_id_dashboard_id_panel_id_epoch from annotation table" duration=1.582425ms 16:25:57 policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 16:25:57 kafka | zookeeper.max.in.flight.requests = 10 16:25:57 policy-db-migrator | 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:15.371429456Z level=info 
msg="Executing migration" id="Add index for org_id_dashboard_id_epoch_end_epoch on annotation table" 16:25:57 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 16:25:57 kafka | zookeeper.metadata.migration.enable = false 16:25:57 policy-db-migrator | > upgrade 0580-toscadatatypes.sql 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:15.372606038Z level=info msg="Migration successfully executed" id="Add index for org_id_dashboard_id_epoch_end_epoch on annotation table" duration=1.179592ms 16:25:57 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 16:25:57 kafka | zookeeper.session.timeout.ms = 18000 16:25:57 policy-db-migrator | -------------- 16:25:57 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:15.383425485Z level=info msg="Executing migration" id="Add index for org_id_epoch_end_epoch on annotation table" 16:25:57 kafka | zookeeper.set.acl = false 16:25:57 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscadatatypes (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCADATATYPES (name, version)) 16:25:57 policy-pap | sasl.oauthbearer.scope.claim.name = scope 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:15.385757848Z level=info msg="Migration successfully executed" id="Add index for org_id_epoch_end_epoch on annotation table" duration=2.328523ms 16:25:57 kafka | zookeeper.ssl.cipher.suites = null 16:25:57 policy-db-migrator | -------------- 16:25:57 policy-pap | sasl.oauthbearer.sub.claim.name = sub 16:25:57 kafka | zookeeper.ssl.client.enable = false 16:25:57 policy-db-migrator | 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:15.388419924Z level=info msg="Executing migration" id="Remove index org_id_epoch_epoch_end from annotation table" 16:25:57 policy-pap | sasl.oauthbearer.token.endpoint.url = null 16:25:57 kafka | zookeeper.ssl.crl.enable = false 16:25:57 policy-db-migrator | 16:25:57 grafana | logger=migrator 
t=2024-02-21T16:23:15.389631536Z level=info msg="Migration successfully executed" id="Remove index org_id_epoch_epoch_end from annotation table" duration=1.211422ms 16:25:57 policy-pap | security.protocol = PLAINTEXT 16:25:57 policy-db-migrator | > upgrade 0590-toscadatatypes_toscadatatype.sql 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:15.39616144Z level=info msg="Executing migration" id="Add index for alert_id on annotation table" 16:25:57 kafka | zookeeper.ssl.enabled.protocols = null 16:25:57 policy-pap | security.providers = null 16:25:57 policy-db-migrator | -------------- 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:15.397452954Z level=info msg="Migration successfully executed" id="Add index for alert_id on annotation table" duration=1.291593ms 16:25:57 kafka | zookeeper.ssl.endpoint.identification.algorithm = HTTPS 16:25:57 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscadatatypes_toscadatatype (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCADATATYPES_TOSCADATATYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:15.403215761Z level=info msg="Executing migration" id="Increase tags column to length 4096" 16:25:57 policy-pap | send.buffer.bytes = 131072 16:25:57 policy-db-migrator | -------------- 16:25:57 policy-pap | session.timeout.ms = 45000 16:25:57 kafka | zookeeper.ssl.keystore.location = null 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:15.403323792Z level=info msg="Migration successfully executed" id="Increase tags column to length 4096" duration=109.741µs 16:25:57 policy-db-migrator | 16:25:57 policy-pap | socket.connection.setup.timeout.max.ms = 30000 16:25:57 kafka | zookeeper.ssl.keystore.password = null 
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:15.40827148Z level=info msg="Executing migration" id="create test_data table"
16:25:57 policy-db-migrator | 
16:25:57 policy-pap | socket.connection.setup.timeout.ms = 10000
16:25:57 kafka | zookeeper.ssl.keystore.type = null
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:15.409753145Z level=info msg="Migration successfully executed" id="create test_data table" duration=1.478375ms
16:25:57 policy-db-migrator | > upgrade 0600-toscanodetemplate.sql
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:15.416031297Z level=info msg="Executing migration" id="create dashboard_version table v1"
16:25:57 policy-pap | ssl.cipher.suites = null
16:25:57 policy-db-migrator | --------------
16:25:57 kafka | zookeeper.ssl.ocsp.enable = false
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:15.416982847Z level=info msg="Migration successfully executed" id="create dashboard_version table v1" duration=952.39µs
16:25:57 policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
16:25:57 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetemplate (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, type_name VARCHAR(255) NULL, type_version VARCHAR(255) NULL, capabilitiesName VARCHAR(120) NULL, capabilitiesVersion VARCHAR(20) NULL, requirementsVersion VARCHAR(20) NULL, requirementsName VARCHAR(120) NULL, PRIMARY KEY PK_TOSCANODETEMPLATE (name, version))
16:25:57 kafka | zookeeper.ssl.protocol = TLSv1.2
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:15.424530131Z level=info msg="Executing migration" id="add index dashboard_version.dashboard_id"
16:25:57 policy-pap | ssl.endpoint.identification.algorithm = https
16:25:57 policy-db-migrator | --------------
16:25:57 kafka | zookeeper.ssl.truststore.location = null
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:15.425751714Z level=info msg="Migration successfully executed" id="add index dashboard_version.dashboard_id" duration=1.221813ms
16:25:57 policy-pap | ssl.engine.factory.class = null
16:25:57 policy-db-migrator | 
16:25:57 kafka | zookeeper.ssl.truststore.password = null
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:15.429315779Z level=info msg="Executing migration" id="add unique index dashboard_version.dashboard_id and dashboard_version.version"
16:25:57 policy-pap | ssl.key.password = null
16:25:57 policy-db-migrator | 
16:25:57 kafka | zookeeper.ssl.truststore.type = null
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:15.430745803Z level=info msg="Migration successfully executed" id="add unique index dashboard_version.dashboard_id and dashboard_version.version" duration=1.429434ms
16:25:57 policy-pap | ssl.keymanager.algorithm = SunX509
16:25:57 policy-db-migrator | > upgrade 0610-toscanodetemplates.sql
16:25:57 kafka | (kafka.server.KafkaConfig)
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:15.435331548Z level=info msg="Executing migration" id="Set dashboard version to 1 where 0"
16:25:57 policy-pap | ssl.keystore.certificate.chain = null
16:25:57 policy-db-migrator | --------------
16:25:57 kafka | [2024-02-21 16:23:16,544] INFO [ThrottledChannelReaper-Fetch]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
16:25:57 policy-pap | ssl.keystore.key = null
16:25:57 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetemplates (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCANODETEMPLATES (name, version))
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:15.43551842Z level=info msg="Migration successfully executed" id="Set dashboard version to 1 where 0" duration=186.962µs
16:25:57 kafka | [2024-02-21 16:23:16,545] INFO [ThrottledChannelReaper-Produce]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
16:25:57 policy-pap | ssl.keystore.location = null
16:25:57 policy-db-migrator | --------------
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:15.443032266Z level=info msg="Executing migration" id="save existing dashboard data in dashboard_version table v1"
16:25:57 kafka | [2024-02-21 16:23:16,546] INFO [ThrottledChannelReaper-Request]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:15.443418769Z level=info msg="Migration successfully executed" id="save existing dashboard data in dashboard_version table v1" duration=387.893µs
16:25:57 policy-db-migrator | 
16:25:57 kafka | [2024-02-21 16:23:16,550] INFO [ThrottledChannelReaper-ControllerMutation]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
16:25:57 policy-pap | ssl.keystore.password = null
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:15.447048265Z level=info msg="Executing migration" id="alter dashboard_version.data to mediumtext v1"
16:25:57 policy-db-migrator | 
16:25:57 kafka | [2024-02-21 16:23:16,583] INFO Loading logs from log dirs ArraySeq(/var/lib/kafka/data) (kafka.log.LogManager)
16:25:57 policy-pap | ssl.keystore.type = JKS
16:25:57 policy-db-migrator | > upgrade 0620-toscanodetemplates_toscanodetemplate.sql
16:25:57 kafka | [2024-02-21 16:23:16,590] INFO No logs found to be loaded in /var/lib/kafka/data (kafka.log.LogManager)
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:15.447099046Z level=info msg="Migration successfully executed" id="alter dashboard_version.data to mediumtext v1" duration=51.381µs
16:25:57 policy-pap | ssl.protocol = TLSv1.3
16:25:57 policy-db-migrator | --------------
16:25:57 kafka | [2024-02-21 16:23:16,599] INFO Loaded 0 logs in 16ms (kafka.log.LogManager)
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:15.452366708Z level=info msg="Executing migration" id="create team table"
16:25:57 policy-pap | ssl.provider = null
16:25:57 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetemplates_toscanodetemplate (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCANODETEMPLATES_TOSCANODETEMPLATE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion))
16:25:57 kafka | [2024-02-21 16:23:16,601] INFO Starting log cleanup with a period of 300000 ms. (kafka.log.LogManager)
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:15.453270817Z level=info msg="Migration successfully executed" id="create team table" duration=903.829µs
16:25:57 policy-pap | ssl.secure.random.implementation = null
16:25:57 policy-db-migrator | --------------
16:25:57 kafka | [2024-02-21 16:23:16,602] INFO Starting log flusher with a default period of 9223372036854775807 ms. (kafka.log.LogManager)
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:15.457293657Z level=info msg="Executing migration" id="add index team.org_id"
16:25:57 policy-pap | ssl.trustmanager.algorithm = PKIX
16:25:57 policy-db-migrator | 
16:25:57 kafka | [2024-02-21 16:23:16,614] INFO Starting the log cleaner (kafka.log.LogCleaner)
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:15.458497939Z level=info msg="Migration successfully executed" id="add index team.org_id" duration=1.203932ms
16:25:57 policy-pap | ssl.truststore.certificates = null
16:25:57 policy-db-migrator | 
16:25:57 kafka | [2024-02-21 16:23:16,667] INFO [kafka-log-cleaner-thread-0]: Starting (kafka.log.LogCleaner$CleanerThread)
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:15.463693041Z level=info msg="Executing migration" id="add unique index team_org_id_name"
16:25:57 policy-pap | ssl.truststore.location = null
16:25:57 policy-db-migrator | > upgrade 0630-toscanodetype.sql
16:25:57 kafka | [2024-02-21 16:23:16,729] INFO [feature-zk-node-event-process-thread]: Starting (kafka.server.FinalizedFeatureChangeListener$ChangeNotificationProcessorThread)
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:15.464835242Z level=info msg="Migration successfully executed" id="add unique index team_org_id_name" duration=1.142232ms
16:25:57 policy-pap | ssl.truststore.password = null
16:25:57 policy-db-migrator | --------------
16:25:57 kafka | [2024-02-21 16:23:16,745] INFO Feature ZK node at path: /feature does not exist (kafka.server.FinalizedFeatureChangeListener)
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:15.470250816Z level=info msg="Executing migration" id="Add column uid in team"
16:25:57 policy-pap | ssl.truststore.type = JKS
16:25:57 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, requirementsVersion VARCHAR(20) NULL, requirementsName VARCHAR(120) NULL, PRIMARY KEY PK_TOSCANODETYPE (name, version))
16:25:57 kafka | [2024-02-21 16:23:16,774] INFO [zk-broker-1-to-controller-forwarding-channel-manager]: Starting (kafka.server.BrokerToControllerRequestThread)
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:15.474160615Z level=info msg="Migration successfully executed" id="Add column uid in team" duration=3.910059ms
16:25:57 policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
16:25:57 policy-db-migrator | --------------
16:25:57 kafka | [2024-02-21 16:23:17,131] INFO Updated connection-accept-rate max connection creation rate to 2147483647 (kafka.network.ConnectionQuotas)
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:15.478106504Z level=info msg="Executing migration" id="Update uid column values in team"
16:25:57 policy-pap | 
16:25:57 policy-db-migrator | 
16:25:57 kafka | [2024-02-21 16:23:17,154] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1] Created data-plane acceptor and processors for endpoint : ListenerName(PLAINTEXT) (kafka.network.SocketServer)
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:15.478288616Z level=info msg="Migration successfully executed" id="Update uid column values in team" duration=182.432µs
16:25:57 policy-pap | [2024-02-21T16:23:54.487+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1
16:25:57 policy-db-migrator | 
16:25:57 kafka | [2024-02-21 16:23:17,155] INFO Updated connection-accept-rate max connection creation rate to 2147483647 (kafka.network.ConnectionQuotas)
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:15.482909961Z level=info msg="Executing migration" id="Add unique index team_org_id_uid"
16:25:57 policy-pap | [2024-02-21T16:23:54.487+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5
16:25:57 policy-db-migrator | > upgrade 0640-toscanodetypes.sql
16:25:57 kafka | [2024-02-21 16:23:17,164] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1] Created data-plane acceptor and processors for endpoint : ListenerName(PLAINTEXT_HOST) (kafka.network.SocketServer)
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:15.484246465Z level=info msg="Migration successfully executed" id="Add unique index team_org_id_uid" duration=1.335494ms
16:25:57 policy-pap | [2024-02-21T16:23:54.487+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1708532634487
16:25:57 policy-db-migrator | --------------
16:25:57 kafka | [2024-02-21 16:23:17,169] INFO [zk-broker-1-to-controller-alter-partition-channel-manager]: Starting (kafka.server.BrokerToControllerRequestThread)
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:15.489480627Z level=info msg="Executing migration" id="create team member table"
16:25:57 policy-pap | [2024-02-21T16:23:54.488+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-66b9586c-d4bb-4933-993d-6431c832b08c-3, groupId=66b9586c-d4bb-4933-993d-6431c832b08c] Subscribed to topic(s): policy-pdp-pap
16:25:57 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetypes (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCANODETYPES (name, version))
16:25:57 kafka | [2024-02-21 16:23:17,194] INFO [ExpirationReaper-1-Produce]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:15.490651389Z level=info msg="Migration successfully executed" id="create team member table" duration=1.170532ms
16:25:57 policy-pap | [2024-02-21T16:23:54.488+00:00|INFO|ServiceManager|main] Policy PAP starting Heartbeat Message Dispatcher
16:25:57 policy-db-migrator | --------------
16:25:57 kafka | [2024-02-21 16:23:17,195] INFO [ExpirationReaper-1-Fetch]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:15.495859151Z level=info msg="Executing migration" id="add index team_member.org_id"
16:25:57 policy-pap | [2024-02-21T16:23:54.488+00:00|INFO|TopicBase|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=policy-pap, consumerInstance=655e2ddc-1862-416c-8663-3742d457d411, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-heartbeat, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: registering org.onap.policy.common.endpoints.listeners.MessageTypeDispatcher@1f1e15de
16:25:57 policy-db-migrator | 
16:25:57 kafka | [2024-02-21 16:23:17,199] INFO [ExpirationReaper-1-DeleteRecords]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:15.498269535Z level=info msg="Migration successfully executed" id="add index team_member.org_id" duration=2.480365ms
16:25:57 policy-pap | [2024-02-21T16:23:54.488+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=policy-pap, consumerInstance=655e2ddc-1862-416c-8663-3742d457d411, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-heartbeat, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting
16:25:57 policy-db-migrator | 
16:25:57 kafka | [2024-02-21 16:23:17,201] INFO [ExpirationReaper-1-ElectLeader]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:15.504103323Z level=info msg="Executing migration" id="add unique index team_member_org_id_team_id_user_id"
16:25:57 policy-pap | [2024-02-21T16:23:54.489+00:00|INFO|ConsumerConfig|main] ConsumerConfig values:
16:25:57 policy-db-migrator | > upgrade 0650-toscanodetypes_toscanodetype.sql
16:25:57 kafka | [2024-02-21 16:23:17,202] INFO [ExpirationReaper-1-RemoteFetch]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:15.5048643Z level=info msg="Migration successfully executed" id="add unique index team_member_org_id_team_id_user_id" duration=760.877µs
16:25:57 policy-pap | allow.auto.create.topics = true
16:25:57 policy-db-migrator | --------------
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:15.50882797Z level=info msg="Executing migration" id="add index team_member.team_id"
16:25:57 policy-pap | auto.commit.interval.ms = 5000
16:25:57 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetypes_toscanodetype (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCANODETYPES_TOSCANODETYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion))
16:25:57 policy-pap | auto.include.jmx.reporter = true
16:25:57 kafka | [2024-02-21 16:23:17,215] INFO [LogDirFailureHandler]: Starting (kafka.server.ReplicaManager$LogDirFailureHandler)
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:15.50983369Z level=info msg="Migration successfully executed" id="add index team_member.team_id" duration=1.00747ms
16:25:57 policy-pap | auto.offset.reset = latest
16:25:57 kafka | [2024-02-21 16:23:17,216] INFO [AddPartitionsToTxnSenderThread-1]: Starting (kafka.server.AddPartitionsToTxnManager)
16:25:57 policy-db-migrator | --------------
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:15.513098283Z level=info msg="Executing migration" id="Add column email to team table"
16:25:57 policy-pap | bootstrap.servers = [kafka:9092]
16:25:57 kafka | [2024-02-21 16:23:17,239] INFO Creating /brokers/ids/1 (is it secure? false) (kafka.zk.KafkaZkClient)
16:25:57 policy-pap | check.crcs = true
16:25:57 policy-db-migrator | 
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:15.519647977Z level=info msg="Migration successfully executed" id="Add column email to team table" duration=6.548604ms
16:25:57 kafka | [2024-02-21 16:23:17,265] INFO Stat of the created znode at /brokers/ids/1 is: 27,27,1708532597253,1708532597253,1,0,0,72057613696040961,258,0,27
16:25:57 policy-pap | client.dns.lookup = use_all_dns_ips
16:25:57 policy-db-migrator | 
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:15.524826478Z level=info msg="Executing migration" id="Add column external to team_member table"
16:25:57 kafka | (kafka.zk.KafkaZkClient)
16:25:57 policy-pap | client.id = consumer-policy-pap-4
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:15.53003939Z level=info msg="Migration successfully executed" id="Add column external to team_member table" duration=5.213062ms
16:25:57 kafka | [2024-02-21 16:23:17,266] INFO Registered broker 1 at path /brokers/ids/1 with addresses: PLAINTEXT://kafka:9092,PLAINTEXT_HOST://localhost:29092, czxid (broker epoch): 27 (kafka.zk.KafkaZkClient)
16:25:57 policy-db-migrator | > upgrade 0660-toscaparameter.sql
16:25:57 policy-pap | client.rack =
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:15.533499375Z level=info msg="Executing migration" id="Add column permission to team_member table"
16:25:57 kafka | [2024-02-21 16:23:17,318] INFO [ControllerEventThread controllerId=1] Starting (kafka.controller.ControllerEventManager$ControllerEventThread)
16:25:57 policy-db-migrator | --------------
16:25:57 policy-pap | connections.max.idle.ms = 540000
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:15.53703296Z level=info msg="Migration successfully executed" id="Add column permission to team_member table" duration=3.532895ms
16:25:57 kafka | [2024-02-21 16:23:17,325] INFO [ExpirationReaper-1-topic]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
16:25:57 policy-pap | default.api.timeout.ms = 60000
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:15.540718337Z level=info msg="Executing migration" id="create dashboard acl table"
16:25:57 kafka | [2024-02-21 16:23:17,331] INFO [ExpirationReaper-1-Heartbeat]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
16:25:57 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscaparameter (VALUE VARCHAR(255) NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCAPARAMETER (parentLocalName, localName, parentKeyVersion, parentKeyName))
16:25:57 policy-pap | enable.auto.commit = true
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:15.541809817Z level=info msg="Migration successfully executed" id="create dashboard acl table" duration=1.0923ms
16:25:57 kafka | [2024-02-21 16:23:17,331] INFO [ExpirationReaper-1-Rebalance]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
16:25:57 policy-db-migrator | --------------
16:25:57 policy-pap | exclude.internal.topics = true
16:25:57 kafka | [2024-02-21 16:23:17,337] INFO Successfully created /controller_epoch with initial epoch 0 (kafka.zk.KafkaZkClient)
16:25:57 policy-db-migrator | 
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:15.546616966Z level=info msg="Executing migration" id="add index dashboard_acl_dashboard_id"
16:25:57 policy-pap | fetch.max.bytes = 52428800
16:25:57 kafka | [2024-02-21 16:23:17,345] INFO [GroupCoordinator 1]: Starting up. (kafka.coordinator.group.GroupCoordinator)
16:25:57 policy-db-migrator | 
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:15.547821787Z level=info msg="Migration successfully executed" id="add index dashboard_acl_dashboard_id" duration=1.203101ms
16:25:57 policy-pap | fetch.max.wait.ms = 500
16:25:57 policy-db-migrator | > upgrade 0670-toscapolicies.sql
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:15.552337072Z level=info msg="Executing migration" id="add unique index dashboard_acl_dashboard_id_user_id"
16:25:57 kafka | [2024-02-21 16:23:17,347] INFO [Controller id=1] 1 successfully elected as the controller. Epoch incremented to 1 and epoch zk version is now 1 (kafka.controller.KafkaController)
16:25:57 policy-pap | fetch.min.bytes = 1
16:25:57 policy-db-migrator | --------------
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:15.554207101Z level=info msg="Migration successfully executed" id="add unique index dashboard_acl_dashboard_id_user_id" duration=1.872939ms
16:25:57 kafka | [2024-02-21 16:23:17,349] INFO [GroupCoordinator 1]: Startup complete. (kafka.coordinator.group.GroupCoordinator)
16:25:57 policy-pap | group.id = policy-pap
16:25:57 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicies (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCAPOLICIES (name, version))
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:15.559472403Z level=info msg="Executing migration" id="add unique index dashboard_acl_dashboard_id_team_id"
16:25:57 kafka | [2024-02-21 16:23:17,352] INFO [Controller id=1] Creating FeatureZNode at path: /feature with contents: FeatureZNode(2,Enabled,Map()) (kafka.controller.KafkaController)
16:25:57 policy-pap | group.instance.id = null
16:25:57 policy-db-migrator | --------------
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:15.560759395Z level=info msg="Migration successfully executed" id="add unique index dashboard_acl_dashboard_id_team_id" duration=1.286502ms
16:25:57 kafka | [2024-02-21 16:23:17,357] INFO Feature ZK node created at path: /feature (kafka.server.FinalizedFeatureChangeListener)
16:25:57 policy-pap | heartbeat.interval.ms = 3000
16:25:57 policy-db-migrator | 
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:15.566348511Z level=info msg="Executing migration" id="add index dashboard_acl_user_id"
16:25:57 kafka | [2024-02-21 16:23:17,364] INFO [TransactionCoordinator id=1] Starting up. (kafka.coordinator.transaction.TransactionCoordinator)
16:25:57 policy-pap | interceptor.classes = []
16:25:57 policy-db-migrator | 
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:15.567661265Z level=info msg="Migration successfully executed" id="add index dashboard_acl_user_id" duration=1.313264ms
16:25:57 kafka | [2024-02-21 16:23:17,371] INFO [TransactionCoordinator id=1] Startup complete. (kafka.coordinator.transaction.TransactionCoordinator)
16:25:57 policy-pap | internal.leave.group.on.close = true
16:25:57 policy-db-migrator | > upgrade 0680-toscapolicies_toscapolicy.sql
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:15.572476492Z level=info msg="Executing migration" id="add index dashboard_acl_team_id"
16:25:57 kafka | [2024-02-21 16:23:17,371] INFO [TxnMarkerSenderThread-1]: Starting (kafka.coordinator.transaction.TransactionMarkerChannelManager)
16:25:57 policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false
16:25:57 policy-db-migrator | --------------
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:15.57421164Z level=info msg="Migration successfully executed" id="add index dashboard_acl_team_id" duration=1.729348ms
16:25:57 kafka | [2024-02-21 16:23:17,382] INFO [MetadataCache brokerId=1] Updated cache from existing None to latest Features(version=3.6-IV2, finalizedFeatures={}, finalizedFeaturesEpoch=0). (kafka.server.metadata.ZkMetadataCache)
16:25:57 policy-pap | isolation.level = read_uncommitted
16:25:57 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicies_toscapolicy (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCAPOLICIES_TOSCAPOLICY (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion))
16:25:57 kafka | [2024-02-21 16:23:17,383] INFO [Controller id=1] Registering handlers (kafka.controller.KafkaController)
16:25:57 policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
16:25:57 policy-db-migrator | --------------
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:15.578533223Z level=info msg="Executing migration" id="add index dashboard_acl_org_id_role"
16:25:57 kafka | [2024-02-21 16:23:17,389] INFO [Controller id=1] Deleting log dir event notifications (kafka.controller.KafkaController)
16:25:57 policy-pap | max.partition.fetch.bytes = 1048576
16:25:57 policy-db-migrator | 
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:15.58021994Z level=info msg="Migration successfully executed" id="add index dashboard_acl_org_id_role" duration=1.687437ms
16:25:57 kafka | [2024-02-21 16:23:17,395] INFO [Controller id=1] Deleting isr change notifications (kafka.controller.KafkaController)
16:25:57 policy-pap | max.poll.interval.ms = 300000
16:25:57 policy-db-migrator | 
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:15.584789085Z level=info msg="Executing migration" id="add index dashboard_permission"
16:25:57 kafka | [2024-02-21 16:23:17,399] INFO [Controller id=1] Initializing controller context (kafka.controller.KafkaController)
16:25:57 policy-pap | max.poll.records = 500
16:25:57 policy-db-migrator | > upgrade 0690-toscapolicy.sql
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:15.585853046Z level=info msg="Migration successfully executed" id="add index dashboard_permission" duration=1.063991ms
16:25:57 kafka | [2024-02-21 16:23:17,413] INFO [ExpirationReaper-1-AlterAcls]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
16:25:57 policy-pap | metadata.max.age.ms = 300000
16:25:57 policy-db-migrator | --------------
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:15.590081317Z level=info msg="Executing migration" id="save default acl rules in dashboard_acl table"
16:25:57 kafka | [2024-02-21 16:23:17,419] INFO [Controller id=1] Initialized broker epochs cache: HashMap(1 -> 27) (kafka.controller.KafkaController)
16:25:57 policy-pap | metric.reporters = []
16:25:57 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicy (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, type_name VARCHAR(255) NULL, type_version VARCHAR(255) NULL, PRIMARY KEY PK_TOSCAPOLICY (name, version))
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:15.590751624Z level=info msg="Migration successfully executed" id="save default acl rules in dashboard_acl table" duration=669.837µs
16:25:57 kafka | [2024-02-21 16:23:17,424] DEBUG [Controller id=1] Register BrokerModifications handler for Set(1) (kafka.controller.KafkaController)
16:25:57 policy-pap | metrics.num.samples = 2
16:25:57 policy-db-migrator | --------------
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:15.595501231Z level=info msg="Executing migration" id="delete acl rules for deleted dashboards and folders"
16:25:57 kafka | [2024-02-21 16:23:17,430] DEBUG [Channel manager on controller 1]: Controller 1 trying to connect to broker 1 (kafka.controller.ControllerChannelManager)
16:25:57 policy-pap | metrics.recording.level = INFO
16:25:57 policy-db-migrator | 
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:15.595713934Z level=info msg="Migration successfully executed" id="delete acl rules for deleted dashboards and folders" duration=211.933µs
16:25:57 kafka | [2024-02-21 16:23:17,442] INFO [RequestSendThread controllerId=1] Starting (kafka.controller.RequestSendThread)
16:25:57 policy-pap | metrics.sample.window.ms = 30000
16:25:57 policy-db-migrator | 
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:15.600646322Z level=info msg="Executing migration" id="create tag table"
16:25:57 kafka | [2024-02-21 16:23:17,444] INFO [Controller id=1] Currently active brokers in the cluster: Set(1) (kafka.controller.KafkaController)
16:25:57 policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor]
16:25:57 policy-db-migrator | > upgrade 0700-toscapolicytype.sql
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:15.601760464Z level=info msg="Migration successfully executed" id="create tag table" duration=1.113132ms
16:25:57 policy-pap | receive.buffer.bytes = 65536
16:25:57 policy-db-migrator | --------------
16:25:57 kafka | [2024-02-21 16:23:17,444] INFO [Controller id=1] Currently shutting brokers in the cluster: HashSet() (kafka.controller.KafkaController)
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:15.607619162Z level=info msg="Executing migration" id="add index tag.key_value"
16:25:57 policy-pap | reconnect.backoff.max.ms = 1000
16:25:57 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicytype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCAPOLICYTYPE (name, version))
16:25:57 kafka | [2024-02-21 16:23:17,445] INFO [Controller id=1] Current list of topics in the cluster: HashSet() (kafka.controller.KafkaController)
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:15.609139427Z level=info msg="Migration successfully executed" id="add index tag.key_value" duration=1.481654ms
16:25:57 policy-pap | reconnect.backoff.ms = 50
16:25:57 policy-db-migrator | --------------
16:25:57 kafka | [2024-02-21 16:23:17,445] INFO [/config/changes-event-process-thread]: Starting (kafka.common.ZkNodeChangeNotificationListener$ChangeEventProcessThread)
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:15.612990995Z level=info msg="Executing migration" id="create login attempt table"
16:25:57 policy-db-migrator | 
16:25:57 kafka | [2024-02-21 16:23:17,445] INFO [Controller id=1] Fetching topic deletions in progress (kafka.controller.KafkaController)
16:25:57 policy-pap | request.timeout.ms = 30000
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:15.613797894Z level=info msg="Migration successfully executed" id="create login attempt table" duration=808.909µs
16:25:57 policy-db-migrator | 
16:25:57 kafka | [2024-02-21 16:23:17,449] INFO [Controller id=1] List of topics to be deleted: (kafka.controller.KafkaController)
16:25:57 policy-pap | retry.backoff.ms = 100
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:15.618414369Z level=info msg="Executing migration" id="add index login_attempt.username"
16:25:57 policy-db-migrator | > upgrade 0710-toscapolicytypes.sql
16:25:57 kafka | [2024-02-21 16:23:17,449] INFO [Controller id=1] List of topics ineligible for deletion: (kafka.controller.KafkaController)
16:25:57 policy-pap | sasl.client.callback.handler.class = null
16:25:57 policy-db-migrator | --------------
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:15.619397729Z level=info msg="Migration successfully executed" id="add index login_attempt.username" duration=982.22µs
16:25:57 kafka | [2024-02-21 16:23:17,449] INFO [Controller id=1] Initializing topic deletion manager (kafka.controller.KafkaController)
16:25:57 policy-pap | sasl.jaas.config = null
16:25:57 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicytypes (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCAPOLICYTYPES (name, version))
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:15.623422619Z level=info msg="Executing migration" id="drop index IDX_login_attempt_username - v1"
16:25:57 kafka | [2024-02-21 16:23:17,450] INFO [Topic Deletion Manager 1] Initializing manager with initial deletions: Set(), initial ineligible deletions: HashSet() (kafka.controller.TopicDeletionManager)
16:25:57 policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit
16:25:57 policy-db-migrator | --------------
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:15.624405959Z level=info msg="Migration successfully executed" id="drop index IDX_login_attempt_username - v1" duration=983.84µs
16:25:57 kafka | [2024-02-21 16:23:17,451] INFO [Controller id=1] Sending update metadata request (kafka.controller.KafkaController)
16:25:57 policy-pap | sasl.kerberos.min.time.before.relogin = 60000
16:25:57 policy-db-migrator | 
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:15.62954964Z level=info msg="Executing migration" id="Rename table login_attempt to login_attempt_tmp_qwerty - v1"
16:25:57 kafka | [2024-02-21 16:23:17,455] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet(1) for 0 partitions (state.change.logger)
16:25:57 policy-pap | sasl.kerberos.service.name = null
16:25:57 policy-db-migrator | 
16:25:57 kafka | [2024-02-21 16:23:17,460] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1] Enabling request processing. (kafka.network.SocketServer)
16:25:57 policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:15.649835102Z level=info msg="Migration successfully executed" id="Rename table login_attempt to login_attempt_tmp_qwerty - v1" duration=20.285012ms
16:25:57 policy-db-migrator | > upgrade 0720-toscapolicytypes_toscapolicytype.sql
16:25:57 kafka | [2024-02-21 16:23:17,463] INFO Awaiting socket connections on 0.0.0.0:29092. (kafka.network.DataPlaneAcceptor)
16:25:57 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:15.65465883Z level=info msg="Executing migration" id="create login_attempt v2"
16:25:57 policy-db-migrator | --------------
16:25:57 kafka | [2024-02-21 16:23:17,463] INFO [ReplicaStateMachine controllerId=1] Initializing replica state (kafka.controller.ZkReplicaStateMachine)
16:25:57 policy-pap | sasl.login.callback.handler.class = null
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:15.655580169Z level=info msg="Migration successfully executed" id="create login_attempt v2" duration=931.47µs
16:25:57 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicytypes_toscapolicytype (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCAPOLICYTYPES_TOSCAPOLICYTYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion))
16:25:57 kafka | [2024-02-21 16:23:17,464] INFO [ReplicaStateMachine controllerId=1] Triggering online replica state changes (kafka.controller.ZkReplicaStateMachine)
16:25:57 policy-pap | sasl.login.class = null
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:15.661348416Z level=info msg="Executing migration" id="create index IDX_login_attempt_username - v2"
16:25:57 policy-db-migrator | --------------
16:25:57 kafka | [2024-02-21 16:23:17,466] INFO [ReplicaStateMachine controllerId=1] Triggering offline replica state changes (kafka.controller.ZkReplicaStateMachine)
16:25:57 policy-pap | sasl.login.connect.timeout.ms = null
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:15.662355336Z level=info msg="Migration successfully executed" id="create index IDX_login_attempt_username - v2" duration=1.00674ms
16:25:57 policy-db-migrator | 
16:25:57 kafka | [2024-02-21 16:23:17,467] DEBUG [ReplicaStateMachine controllerId=1] Started replica state machine with initial state -> HashMap() (kafka.controller.ZkReplicaStateMachine)
16:25:57 policy-pap | sasl.login.read.timeout.ms = null
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:15.666912342Z level=info msg="Executing migration" id="copy login_attempt v1 to v2"
16:25:57 policy-db-migrator | 
16:25:57 kafka | [2024-02-21 16:23:17,469] INFO [PartitionStateMachine controllerId=1] Initializing partition state (kafka.controller.ZkPartitionStateMachine)
16:25:57 policy-pap | sasl.login.refresh.buffer.seconds = 300
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:15.667650648Z level=info msg="Migration successfully executed" id="copy login_attempt v1 to v2" duration=737.996µs
16:25:57 policy-db-migrator | > upgrade 0730-toscaproperty.sql
16:25:57 kafka | [2024-02-21 16:23:17,470] INFO [PartitionStateMachine controllerId=1] Triggering online partition state changes
(kafka.controller.ZkPartitionStateMachine) 16:25:57 policy-pap | sasl.login.refresh.min.period.seconds = 60 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:15.672017262Z level=info msg="Executing migration" id="drop login_attempt_tmp_qwerty" 16:25:57 policy-db-migrator | -------------- 16:25:57 kafka | [2024-02-21 16:23:17,472] DEBUG [PartitionStateMachine controllerId=1] Started partition state machine with initial state -> HashMap() (kafka.controller.ZkPartitionStateMachine) 16:25:57 policy-pap | sasl.login.refresh.window.factor = 0.8 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:15.673548977Z level=info msg="Migration successfully executed" id="drop login_attempt_tmp_qwerty" duration=1.535115ms 16:25:57 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscaproperty (DEFAULTVALUE VARCHAR(255) DEFAULT NULL, `DESCRIPTION` VARCHAR(255) DEFAULT NULL, ENTRYSCHEMA LONGBLOB DEFAULT NULL, REQUIRED BOOLEAN DEFAULT 0, STATUS INT DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, PRIMARY KEY PK_TOSCAPROPERTY (parentLocalName, localName, parentKeyVersion, parentKeyName)) 16:25:57 kafka | [2024-02-21 16:23:17,473] INFO [Controller id=1] Ready to serve as the new controller with epoch 1 (kafka.controller.KafkaController) 16:25:57 policy-pap | sasl.login.refresh.window.jitter = 0.05 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:15.677866741Z level=info msg="Executing migration" id="create user auth table" 16:25:57 policy-db-migrator | -------------- 16:25:57 kafka | [2024-02-21 16:23:17,476] INFO [Controller id=1, targetBrokerId=1] Node 1 disconnected. 
(org.apache.kafka.clients.NetworkClient) 16:25:57 policy-pap | sasl.login.retry.backoff.max.ms = 10000 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:15.678702038Z level=info msg="Migration successfully executed" id="create user auth table" duration=836.787µs 16:25:57 policy-db-migrator | 16:25:57 kafka | [2024-02-21 16:23:17,478] WARN [Controller id=1, targetBrokerId=1] Connection to node 1 (kafka/172.17.0.6:9092) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient) 16:25:57 policy-pap | sasl.login.retry.backoff.ms = 100 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:15.683730259Z level=info msg="Executing migration" id="create index IDX_user_auth_auth_module_auth_id - v1" 16:25:57 policy-db-migrator | 16:25:57 kafka | [2024-02-21 16:23:17,479] INFO [Controller id=1] Partitions undergoing preferred replica election: (kafka.controller.KafkaController) 16:25:57 policy-pap | sasl.mechanism = GSSAPI 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:15.685163303Z level=info msg="Migration successfully executed" id="create index IDX_user_auth_auth_module_auth_id - v1" duration=1.432944ms 16:25:57 kafka | [2024-02-21 16:23:17,479] INFO [Controller id=1] Partitions that completed preferred replica election: (kafka.controller.KafkaController) 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:15.689891139Z level=info msg="Executing migration" id="alter user_auth.auth_id to length 190" 16:25:57 policy-db-migrator | > upgrade 0740-toscarelationshiptype.sql 16:25:57 policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 16:25:57 policy-pap | sasl.oauthbearer.expected.audience = null 16:25:57 kafka | [2024-02-21 16:23:17,479] INFO [Controller id=1] Skipping preferred replica election for partitions due to topic deletion: (kafka.controller.KafkaController) 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:15.68996063Z level=info msg="Migration successfully executed" id="alter user_auth.auth_id to length 190" 
duration=70.541µs 16:25:57 policy-db-migrator | -------------- 16:25:57 policy-pap | sasl.oauthbearer.expected.issuer = null 16:25:57 kafka | [2024-02-21 16:23:17,480] INFO [Controller id=1] Resuming preferred replica election for partitions: (kafka.controller.KafkaController) 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:15.694730758Z level=info msg="Executing migration" id="Add OAuth access token to user_auth" 16:25:57 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarelationshiptype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCARELATIONSHIPTYPE (name, version)) 16:25:57 policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 16:25:57 kafka | [2024-02-21 16:23:17,483] INFO Awaiting socket connections on 0.0.0.0:9092. (kafka.network.DataPlaneAcceptor) 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:15.700553696Z level=info msg="Migration successfully executed" id="Add OAuth access token to user_auth" duration=5.822308ms 16:25:57 policy-db-migrator | -------------- 16:25:57 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 16:25:57 kafka | [2024-02-21 16:23:17,484] WARN [RequestSendThread controllerId=1] Controller 1's connection to broker kafka:9092 (id: 1 rack: null) was unsuccessful (kafka.controller.RequestSendThread) 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:15.706050371Z level=info msg="Executing migration" id="Add OAuth refresh token to user_auth" 16:25:57 policy-db-migrator | 16:25:57 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 16:25:57 kafka | java.io.IOException: Connection to kafka:9092 (id: 1 rack: null) failed. 
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:15.711807328Z level=info msg="Migration successfully executed" id="Add OAuth refresh token to user_auth" duration=5.756447ms
16:25:57 policy-db-migrator |
16:25:57 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null
16:25:57 kafka | at org.apache.kafka.clients.NetworkClientUtils.awaitReady(NetworkClientUtils.java:70)
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:15.715778637Z level=info msg="Executing migration" id="Add OAuth token type to user_auth"
16:25:57 policy-db-migrator | > upgrade 0750-toscarelationshiptypes.sql
16:25:57 policy-pap | sasl.oauthbearer.scope.claim.name = scope
16:25:57 kafka | at kafka.controller.RequestSendThread.brokerReady(ControllerChannelManager.scala:298)
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:15.719772277Z level=info msg="Migration successfully executed" id="Add OAuth token type to user_auth" duration=3.99481ms
16:25:57 policy-pap | sasl.oauthbearer.sub.claim.name = sub
16:25:57 kafka | at kafka.controller.RequestSendThread.doWork(ControllerChannelManager.scala:251)
16:25:57 policy-db-migrator | --------------
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:15.724221141Z level=info msg="Executing migration" id="Add OAuth expiry to user_auth"
16:25:57 policy-pap | sasl.oauthbearer.token.endpoint.url = null
16:25:57 kafka | at org.apache.kafka.server.util.ShutdownableThread.run(ShutdownableThread.java:130)
16:25:57 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarelationshiptypes (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCARELATIONSHIPTYPES (name, version))
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:15.732563754Z level=info msg="Migration successfully executed" id="Add OAuth expiry to user_auth" duration=8.321403ms
16:25:57 policy-pap | security.protocol = PLAINTEXT
16:25:57 kafka | [2024-02-21 16:23:17,485] INFO [Controller id=1] Starting replica leader election (PREFERRED) for partitions triggered by ZkTriggered (kafka.controller.KafkaController)
16:25:57 policy-db-migrator | --------------
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:15.738424272Z level=info msg="Executing migration" id="Add index to user_id column in user_auth"
16:25:57 policy-pap | security.providers = null
16:25:57 kafka | [2024-02-21 16:23:17,489] INFO [Controller id=1, targetBrokerId=1] Client requested connection close from node 1 (org.apache.kafka.clients.NetworkClient)
16:25:57 policy-db-migrator |
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:15.739579864Z level=info msg="Migration successfully executed" id="Add index to user_id column in user_auth" duration=1.140772ms
16:25:57 policy-pap | send.buffer.bytes = 131072
16:25:57 kafka | [2024-02-21 16:23:17,494] INFO Kafka version: 7.6.0-ccs (org.apache.kafka.common.utils.AppInfoParser)
16:25:57 policy-db-migrator |
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:15.74318789Z level=info msg="Executing migration" id="Add OAuth ID token to user_auth"
16:25:57 policy-pap | session.timeout.ms = 45000
16:25:57 kafka | [2024-02-21 16:23:17,494] INFO Kafka commitId: 1991cb733c81d6791626f88253a042b2ec835ab8 (org.apache.kafka.common.utils.AppInfoParser)
16:25:57 policy-db-migrator | > upgrade 0760-toscarelationshiptypes_toscarelationshiptype.sql
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:15.749194369Z level=info msg="Migration successfully executed" id="Add OAuth ID token to user_auth" duration=6.006409ms
16:25:57 policy-pap | socket.connection.setup.timeout.max.ms = 30000
16:25:57 kafka | [2024-02-21 16:23:17,494] INFO Kafka startTimeMs: 1708532597490 (org.apache.kafka.common.utils.AppInfoParser)
16:25:57 policy-db-migrator | --------------
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:15.754109128Z level=info msg="Executing migration" id="create server_lock table"
16:25:57 policy-pap | socket.connection.setup.timeout.ms = 10000
16:25:57 kafka | [2024-02-21 16:23:17,495] INFO [KafkaServer id=1] started (kafka.server.KafkaServer)
16:25:57 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarelationshiptypes_toscarelationshiptype (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCARELATIONSHIPTYPES_TOSCARELATIONSHIPTYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion))
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:15.755217849Z level=info msg="Migration successfully executed" id="create server_lock table" duration=1.109671ms
16:25:57 policy-pap | ssl.cipher.suites = null
16:25:57 kafka | [2024-02-21 16:23:17,509] INFO [Controller id=1] Starting the controller scheduler (kafka.controller.KafkaController)
16:25:57 policy-db-migrator | --------------
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:15.760930506Z level=info msg="Executing migration" id="add index server_lock.operation_uid"
16:25:57 policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
16:25:57 kafka | [2024-02-21 16:23:17,593] INFO [RequestSendThread controllerId=1] Controller 1 connected to kafka:9092 (id: 1 rack: null) for sending state change requests (kafka.controller.RequestSendThread)
16:25:57 policy-db-migrator |
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:15.76236898Z level=info msg="Migration successfully executed" id="add index server_lock.operation_uid" duration=1.438644ms
16:25:57 policy-pap | ssl.endpoint.identification.algorithm = https
16:25:57 kafka | [2024-02-21 16:23:17,663] TRACE [Controller id=1 epoch=1] Received response UpdateMetadataResponseData(errorCode=0) for request UPDATE_METADATA with correlation id 0 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger)
16:25:57 policy-db-migrator |
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:15.766200539Z level=info msg="Executing migration" id="create user auth token table"
16:25:57 policy-pap | ssl.engine.factory.class = null
16:25:57 kafka | [2024-02-21 16:23:17,676] INFO [zk-broker-1-to-controller-alter-partition-channel-manager]: Recorded new controller, from now on will use node kafka:9092 (id: 1 rack: null) (kafka.server.BrokerToControllerRequestThread)
16:25:57 policy-db-migrator | > upgrade 0770-toscarequirement.sql
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:15.768103197Z level=info msg="Migration successfully executed" id="create user auth token table" duration=1.902318ms
16:25:57 policy-pap | ssl.key.password = null
16:25:57 kafka | [2024-02-21 16:23:17,686] INFO [zk-broker-1-to-controller-forwarding-channel-manager]: Recorded new controller, from now on will use node kafka:9092 (id: 1 rack: null) (kafka.server.BrokerToControllerRequestThread)
16:25:57 policy-db-migrator | --------------
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:15.772797874Z level=info msg="Executing migration" id="add unique index user_auth_token.auth_token"
16:25:57 policy-pap | ssl.keymanager.algorithm = SunX509
16:25:57 kafka | [2024-02-21 16:23:22,511] INFO [Controller id=1] Processing automatic preferred replica leader election (kafka.controller.KafkaController)
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:15.774058376Z level=info msg="Migration successfully executed" id="add unique index user_auth_token.auth_token" duration=1.260742ms
16:25:57 policy-pap | ssl.keystore.certificate.chain = null
16:25:57 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarequirement (CAPABILITY VARCHAR(255) NULL, `DESCRIPTION` VARCHAR(255) NULL, NODE VARCHAR(255) NULL, RELATIONSHIP VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, type_name VARCHAR(255) NULL, type_version VARCHAR(255) NULL, PRIMARY KEY PK_TOSCAREQUIREMENT (name, version))
16:25:57 kafka | [2024-02-21 16:23:22,512] TRACE [Controller id=1] Checking need to trigger auto leader balancing (kafka.controller.KafkaController)
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:15.78136443Z level=info msg="Executing migration" id="add unique index user_auth_token.prev_auth_token"
16:25:57 policy-pap | ssl.keystore.key = null
16:25:57 policy-db-migrator | --------------
16:25:57 kafka | [2024-02-21 16:23:55,028] DEBUG [Controller id=1] There is no producerId block yet (Zk path version 0), creating the first block (kafka.controller.KafkaController)
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:15.782111396Z level=info msg="Migration successfully executed" id="add unique index user_auth_token.prev_auth_token" duration=747.796µs
16:25:57 policy-pap | ssl.keystore.location = null
16:25:57 policy-db-migrator |
16:25:57 kafka | [2024-02-21 16:23:55,032] INFO Creating topic policy-pdp-pap with configuration {} and initial partition assignment HashMap(0 -> ArrayBuffer(1)) (kafka.zk.AdminZkClient)
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:15.78641557Z level=info msg="Executing migration" id="add index user_auth_token.user_id"
16:25:57 policy-pap | ssl.keystore.password = null
16:25:57 policy-db-migrator |
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:15.787633321Z level=info msg="Migration successfully executed" id="add index user_auth_token.user_id" duration=1.217451ms
16:25:57 kafka | [2024-02-21 16:23:55,033] INFO Creating topic __consumer_offsets with configuration {compression.type=producer, cleanup.policy=compact, segment.bytes=104857600} and initial partition assignment HashMap(0 -> ArrayBuffer(1), 1 -> ArrayBuffer(1), 2 -> ArrayBuffer(1), 3 -> ArrayBuffer(1), 4 -> ArrayBuffer(1), 5 -> ArrayBuffer(1), 6 -> ArrayBuffer(1), 7 -> ArrayBuffer(1), 8 -> ArrayBuffer(1), 9 -> ArrayBuffer(1), 10 -> ArrayBuffer(1), 11 -> ArrayBuffer(1), 12 -> ArrayBuffer(1), 13 -> ArrayBuffer(1), 14 -> ArrayBuffer(1), 15 -> ArrayBuffer(1), 16 -> ArrayBuffer(1), 17 -> ArrayBuffer(1), 18 -> ArrayBuffer(1), 19 -> ArrayBuffer(1), 20 -> ArrayBuffer(1), 21 -> ArrayBuffer(1), 22 -> ArrayBuffer(1), 23 -> ArrayBuffer(1), 24 -> ArrayBuffer(1), 25 -> ArrayBuffer(1), 26 -> ArrayBuffer(1), 27 -> ArrayBuffer(1), 28 -> ArrayBuffer(1), 29 -> ArrayBuffer(1), 30 -> ArrayBuffer(1), 31 -> ArrayBuffer(1), 32 -> ArrayBuffer(1), 33 -> ArrayBuffer(1), 34 -> ArrayBuffer(1), 35 -> ArrayBuffer(1), 36 -> ArrayBuffer(1), 37 -> ArrayBuffer(1), 38 -> ArrayBuffer(1), 39 -> ArrayBuffer(1), 40 -> ArrayBuffer(1), 41 -> ArrayBuffer(1), 42 -> ArrayBuffer(1), 43 -> ArrayBuffer(1), 44 -> ArrayBuffer(1), 45 -> ArrayBuffer(1), 46 -> ArrayBuffer(1), 47 -> ArrayBuffer(1), 48 -> ArrayBuffer(1), 49 -> ArrayBuffer(1)) (kafka.zk.AdminZkClient)
16:25:57 policy-pap | ssl.keystore.type = JKS
16:25:57 policy-db-migrator | > upgrade 0780-toscarequirements.sql
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:15.791957415Z level=info msg="Executing migration" id="Add revoked_at to the user auth token"
16:25:57 kafka | [2024-02-21 16:23:55,045] INFO [Controller id=1] Acquired new producerId block ProducerIdsBlock(assignedBrokerId=1, firstProducerId=0, size=1000) by writing to Zk with path version 1 (kafka.controller.KafkaController)
16:25:57 policy-pap | ssl.protocol = TLSv1.3
16:25:57 policy-db-migrator | --------------
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:15.802152246Z level=info msg="Migration successfully executed" id="Add revoked_at to the user auth token" duration=10.194251ms
16:25:57 kafka | [2024-02-21 16:23:55,074] INFO [Controller id=1] New topics: [Set(policy-pdp-pap)], deleted topics: [HashSet()], new partition replica assignment [Set(TopicIdReplicaAssignment(policy-pdp-pap,Some(w964DUBQRBenrkUFkcM3Zw),Map(policy-pdp-pap-0 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=))))] (kafka.controller.KafkaController)
16:25:57 policy-pap | ssl.provider = null
16:25:57 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarequirements (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCAREQUIREMENTS (name, version))
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:15.807776912Z level=info msg="Executing migration" id="add index user_auth_token.revoked_at"
16:25:57 kafka | [2024-02-21 16:23:55,075] INFO [Controller id=1] New partition creation callback for policy-pdp-pap-0 (kafka.controller.KafkaController)
16:25:57 policy-pap | ssl.secure.random.implementation = null
16:25:57 policy-db-migrator | --------------
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:15.80860775Z level=info msg="Migration successfully executed" id="add index user_auth_token.revoked_at" duration=830.788µs
16:25:57 kafka | [2024-02-21 16:23:55,076] INFO [Controller id=1 epoch=1] Changed partition policy-pdp-pap-0 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
16:25:57 policy-pap | ssl.trustmanager.algorithm = PKIX
16:25:57 policy-db-migrator |
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:15.812632151Z level=info msg="Executing migration" id="create cache_data table"
16:25:57 kafka | [2024-02-21 16:23:55,077] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger)
16:25:57 policy-pap | ssl.truststore.certificates = null
16:25:57 policy-db-migrator |
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:15.814168986Z level=info msg="Migration successfully executed" id="create cache_data table" duration=1.534684ms
16:25:57 kafka | [2024-02-21 16:23:55,080] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition policy-pdp-pap-0 from NonExistentReplica to NewReplica (state.change.logger)
16:25:57 policy-pap | ssl.truststore.location = null
16:25:57 policy-db-migrator | > upgrade 0790-toscarequirements_toscarequirement.sql
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:15.818466438Z level=info msg="Executing migration" id="add unique index cache_data.cache_key"
16:25:57 kafka | [2024-02-21 16:23:55,080] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger)
16:25:57 policy-pap | ssl.truststore.password = null
16:25:57 policy-db-migrator | --------------
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:15.819486578Z level=info msg="Migration successfully executed" id="add unique index cache_data.cache_key" duration=1.01865ms
16:25:57 kafka | [2024-02-21 16:23:55,102] INFO [Controller id=1 epoch=1] Changed partition policy-pdp-pap-0 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
16:25:57 policy-pap | ssl.truststore.type = JKS
16:25:57 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarequirements_toscarequirement (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCAREQUIREMENTS_TOSCAREQUIREMENT (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion))
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:15.824848522Z level=info msg="Executing migration" id="create short_url table v1"
16:25:57 policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
16:25:57 policy-db-migrator | --------------
16:25:57 kafka | [2024-02-21 16:23:55,106] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='policy-pdp-pap', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition policy-pdp-pap-0 (state.change.logger)
16:25:57 policy-pap | 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:15.825454758Z level=info msg="Migration successfully executed" id="create short_url table v1" duration=606.166µs 16:25:57 policy-db-migrator | 16:25:57 kafka | [2024-02-21 16:23:55,107] INFO [Controller id=1 epoch=1] Sending LeaderAndIsr request to broker 1 with 1 become-leader and 0 become-follower partitions (state.change.logger) 16:25:57 policy-pap | [2024-02-21T16:23:54.493+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:15.829453688Z level=info msg="Executing migration" id="add index short_url.org_id-uid" 16:25:57 policy-db-migrator | 16:25:57 kafka | [2024-02-21 16:23:55,110] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet(1) for 1 partitions (state.change.logger) 16:25:57 policy-pap | [2024-02-21T16:23:54.493+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:15.83069734Z level=info msg="Migration successfully executed" id="add index short_url.org_id-uid" duration=1.242042ms 16:25:57 policy-db-migrator | > upgrade 0800-toscaservicetemplate.sql 16:25:57 kafka | [2024-02-21 16:23:55,110] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition policy-pdp-pap-0 from NewReplica to OnlineReplica (state.change.logger) 16:25:57 policy-pap | [2024-02-21T16:23:54.493+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1708532634493 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:15.835420536Z level=info msg="Executing migration" id="alter table short_url alter column created_by type to bigint" 16:25:57 policy-db-migrator | -------------- 16:25:57 kafka | [2024-02-21 16:23:55,111] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger) 16:25:57 policy-pap | [2024-02-21T16:23:54.493+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-policy-pap-4, 
groupId=policy-pap] Subscribed to topic(s): policy-pdp-pap 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:15.835554008Z level=info msg="Migration successfully executed" id="alter table short_url alter column created_by type to bigint" duration=134.522µs 16:25:57 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscaservicetemplate (`DESCRIPTION` VARCHAR(255) NULL, TOSCADEFINITIONSVERSION VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, capabilityTypesVersion VARCHAR(20) NULL, capabilityTypesName VARCHAR(120) NULL, dataTypesName VARCHAR(120) NULL, dataTypesVersion VARCHAR(20) NULL, nodeTypesVersion VARCHAR(20) NULL, nodeTypesName VARCHAR(120) NULL, policyTypesName VARCHAR(120) NULL, policyTypesVersion VARCHAR(20) NULL, relationshipTypesVersion VARCHAR(20) NULL, relationshipTypesName VARCHAR(120) NULL, topologyTemplateLocalName VARCHAR(120) NULL, topologyTemplateParentKeyName VARCHAR(120) NULL, topologyTemplateParentKeyVersion VARCHAR(15) NULL, topologyTemplateParentLocalName VARCHAR(120) NULL, PRIMARY KEY PK_TOSCASERVICETEMPLATE (name, version)) 16:25:57 kafka | [2024-02-21 16:23:55,117] INFO [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 for 1 partitions (state.change.logger) 16:25:57 policy-pap | [2024-02-21T16:23:54.493+00:00|INFO|ServiceManager|main] Policy PAP starting topics 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:15.84080471Z level=info msg="Executing migration" id="delete alert_definition table" 16:25:57 policy-db-migrator | -------------- 16:25:57 kafka | [2024-02-21 16:23:55,118] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='policy-pdp-pap', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 
(state.change.logger) 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:15.841180314Z level=info msg="Migration successfully executed" id="delete alert_definition table" duration=376.224µs 16:25:57 policy-pap | [2024-02-21T16:23:54.493+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=policy-pap, consumerInstance=655e2ddc-1862-416c-8663-3742d457d411, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-heartbeat,5,main], topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-heartbeat, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting 16:25:57 policy-db-migrator | 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:15.845900491Z level=info msg="Executing migration" id="recreate alert_definition table" 16:25:57 kafka | [2024-02-21 16:23:55,124] INFO [Controller id=1] New topics: [Set(__consumer_offsets)], deleted topics: [HashSet()], new partition replica assignment [Set(TopicIdReplicaAssignment(__consumer_offsets,Some(KYlK5kQpQoexS8xo1QgwvA),HashMap(__consumer_offsets-22 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-30 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-25 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-35 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-37 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-38 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-13 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), 
__consumer_offsets-8 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-21 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-4 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-27 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-7 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-9 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-46 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-41 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-33 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-23 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-49 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-47 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-16 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-28 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-31 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-36 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-42 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-3 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-18 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-15 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-24 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-17 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-48 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-19 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-11 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-2 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-43 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-6 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-14 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-20 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-0 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-44 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-39 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-12 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-45 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-1 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-5 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-26 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-29 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-34 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-10 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-32 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-40 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=))))] (kafka.controller.KafkaController)
16:25:57
policy-pap | [2024-02-21T16:23:54.494+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=66b9586c-d4bb-4933-993d-6431c832b08c, consumerInstance=policy-pap, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-pdp-pap,5,main], topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting
16:25:57 policy-db-migrator |
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:15.846722599Z level=info msg="Migration successfully executed" id="recreate alert_definition table" duration=822.048µs
16:25:57 kafka | [2024-02-21 16:23:55,124] INFO [Controller id=1] New partition creation callback for __consumer_offsets-22,__consumer_offsets-30,__consumer_offsets-25,__consumer_offsets-35,__consumer_offsets-37,__consumer_offsets-38,__consumer_offsets-13,__consumer_offsets-8,__consumer_offsets-21,__consumer_offsets-4,__consumer_offsets-27,__consumer_offsets-7,__consumer_offsets-9,__consumer_offsets-46,__consumer_offsets-41,__consumer_offsets-33,__consumer_offsets-23,__consumer_offsets-49,__consumer_offsets-47,__consumer_offsets-16,__consumer_offsets-28,__consumer_offsets-31,__consumer_offsets-36,__consumer_offsets-42,__consumer_offsets-3,__consumer_offsets-18,__consumer_offsets-15,__consumer_offsets-24,__consumer_offsets-17,__consumer_offsets-48,__consumer_offsets-19,__consumer_offsets-11,__consumer_offsets-2,__consumer_offsets-43,__consumer_offsets-6,__consumer_offsets-14,__consumer_offsets-20,__consumer_offsets-0,__consumer_offsets-44,__consumer_offsets-39,__consumer_offsets-12,__consumer_offsets-45,__consumer_offsets-1,__consumer_offsets-5,__consumer_offsets-26,__consumer_offsets-29,__consumer_offsets-34,__consumer_offsets-10,__consumer_offsets-32,__consumer_offsets-40 (kafka.controller.KafkaController)
16:25:57 policy-pap | [2024-02-21T16:23:54.494+00:00|INFO|InlineBusTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=b0ae57b3-1307-4849-aa26-4af200585dc7, alive=false, publisher=null]]: starting
16:25:57 policy-db-migrator | > upgrade 0810-toscatopologytemplate.sql
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:15.851391826Z level=info msg="Executing migration" id="add index in alert_definition on org_id and title columns"
16:25:57 kafka | [2024-02-21 16:23:55,125] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-22 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
16:25:57 policy-pap | [2024-02-21T16:23:54.512+00:00|INFO|ProducerConfig|main] ProducerConfig values:
16:25:57 policy-db-migrator | --------------
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:15.852492866Z level=info msg="Migration successfully executed" id="add index in alert_definition on org_id and title columns" duration=1.10136ms
16:25:57 kafka | [2024-02-21 16:23:55,125] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-30 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
16:25:57 policy-pap | acks = -1
16:25:57 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscatopologytemplate (`description` VARCHAR(255) NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, nodeTemplatessVersion VARCHAR(20) NULL, nodeTemplatesName VARCHAR(120) NULL, policyVersion VARCHAR(20) NULL, policyName VARCHAR(120) NULL, PRIMARY KEY PK_TOSCATOPOLOGYTEMPLATE (parentLocalName, localName, parentKeyVersion, parentKeyName))
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:15.858266394Z level=info msg="Executing migration" id="add index in alert_definition on org_id and uid columns"
16:25:57 kafka | [2024-02-21 16:23:55,125] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-25 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
16:25:57 policy-pap | auto.include.jmx.reporter = true
16:25:57 policy-db-migrator | --------------
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:15.859463866Z level=info msg="Migration successfully executed" id="add index in alert_definition on org_id and uid columns" duration=1.199602ms
16:25:57 kafka | [2024-02-21 16:23:55,125] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-35 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
16:25:57 policy-pap | batch.size = 16384
16:25:57 policy-db-migrator |
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:15.865319785Z level=info msg="Executing migration" id="alter alert_definition table data column to mediumtext in mysql"
16:25:57 kafka | [2024-02-21 16:23:55,125] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-37 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
16:25:57 policy-pap | bootstrap.servers = [kafka:9092]
16:25:57 policy-db-migrator |
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:15.865436156Z level=info msg="Migration successfully executed" id="alter alert_definition table data column to mediumtext in mysql" duration=166.2µs
16:25:57 kafka | [2024-02-21 16:23:55,125] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-38 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
16:25:57 policy-pap | buffer.memory = 33554432
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:15.872215213Z level=info msg="Executing migration" id="drop index in alert_definition on org_id and title columns"
16:25:57 policy-db-migrator | > upgrade 0820-toscatrigger.sql
16:25:57 kafka | [2024-02-21 16:23:55,125] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-13 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
16:25:57 policy-pap | client.dns.lookup = use_all_dns_ips
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:15.875191382Z level=info msg="Migration successfully executed" id="drop index in alert_definition on org_id and title columns" duration=2.978669ms
16:25:57 policy-db-migrator | --------------
16:25:57 kafka | [2024-02-21 16:23:55,125] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-8 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
16:25:57 policy-pap | client.id = producer-1
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:15.880437494Z level=info msg="Executing migration" id="drop index in alert_definition on org_id and uid columns"
16:25:57
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscatrigger (ACTION VARCHAR(255) NULL, toscaCondition LONGBLOB DEFAULT NULL, toscaConstraint LONGBLOB DEFAULT NULL, `DESCRIPTION` VARCHAR(255) NULL, EVALUATIONS INT DEFAULT NULL, EVENTTYPE VARCHAR(255) NULL, METHOD VARCHAR(255) NULL, `PERIOD` LONGBLOB DEFAULT NULL, SCHEDULE LONGBLOB DEFAULT NULL, TARGETFILTER LONGBLOB DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_TOSCATRIGGER (parentLocalName, localName, parentKeyVersion, parentKeyName))
16:25:57 kafka | [2024-02-21 16:23:55,125] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-21 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
16:25:57 policy-pap | compression.type = none
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:15.881503505Z level=info msg="Migration successfully executed" id="drop index in alert_definition on org_id and uid columns" duration=1.062711ms
16:25:57 policy-db-migrator | --------------
16:25:57 kafka | [2024-02-21 16:23:55,125] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-4 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
16:25:57 policy-pap | connections.max.idle.ms = 540000
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:15.887325463Z level=info msg="Executing migration" id="add unique index in alert_definition on org_id and title columns"
16:25:57 policy-db-migrator |
16:25:57 kafka | [2024-02-21 16:23:55,125] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-27 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
16:25:57 policy-pap | delivery.timeout.ms = 120000
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:15.888738867Z level=info msg="Migration successfully executed" id="add unique index in alert_definition on org_id and title columns" duration=1.412964ms
16:25:57 policy-db-migrator |
16:25:57 kafka | [2024-02-21 16:23:55,125] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-7 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
16:25:57 policy-pap | enable.idempotence = true
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:15.893113661Z level=info msg="Executing migration" id="add unique index in alert_definition on org_id and uid columns"
16:25:57 policy-db-migrator | > upgrade 0830-FK_ToscaNodeTemplate_capabilitiesName.sql
16:25:57 kafka | [2024-02-21 16:23:55,125] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-9 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
16:25:57 policy-pap | interceptor.classes = []
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:15.894920248Z level=info msg="Migration successfully executed" id="add unique index in alert_definition on org_id and uid columns" duration=1.790807ms
16:25:57 policy-db-migrator | --------------
16:25:57 kafka | [2024-02-21 16:23:55,125] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-46 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
16:25:57 policy-pap | key.serializer = class org.apache.kafka.common.serialization.StringSerializer
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:15.900373172Z level=info msg="Executing migration" id="Add column paused in alert_definition"
16:25:57 policy-db-migrator | CREATE INDEX FK_ToscaNodeTemplate_capabilitiesName ON toscanodetemplate(capabilitiesName, capabilitiesVersion)
16:25:57 kafka | [2024-02-21 16:23:55,125] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-41 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
16:25:57 policy-pap | linger.ms = 0
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:15.906437673Z level=info msg="Migration successfully executed" id="Add column paused in alert_definition" duration=6.063831ms
16:25:57 policy-db-migrator | --------------
16:25:57 kafka | [2024-02-21 16:23:55,125] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-33 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
16:25:57 policy-pap | max.block.ms = 60000
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:15.911391212Z level=info msg="Executing migration" id="drop alert_definition table"
16:25:57 policy-db-migrator |
16:25:57 kafka | [2024-02-21 16:23:55,125] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-23 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
16:25:57 policy-pap | max.in.flight.requests.per.connection = 5
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:15.912377842Z level=info msg="Migration successfully executed" id="drop alert_definition table" duration=985.29µs
16:25:57 policy-db-migrator |
16:25:57 kafka | [2024-02-21 16:23:55,125] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-49 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
16:25:57 policy-pap | max.request.size = 1048576
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:15.917092299Z level=info msg="Executing migration" id="delete alert_definition_version table"
16:25:57 policy-db-migrator | > upgrade 0840-FK_ToscaNodeTemplate_requirementsName.sql
16:25:57 kafka | [2024-02-21 16:23:55,125] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-47 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
16:25:57 policy-pap | metadata.max.age.ms = 300000
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:15.917348171Z level=info msg="Migration successfully executed" id="delete alert_definition_version table" duration=255.062µs
16:25:57 policy-db-migrator | --------------
16:25:57 kafka | [2024-02-21 16:23:55,125] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-16 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
16:25:57 policy-pap | metadata.max.idle.ms = 300000
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:15.924557634Z level=info msg="Executing migration" id="recreate alert_definition_version table"
16:25:57 policy-db-migrator | CREATE INDEX FK_ToscaNodeTemplate_requirementsName ON toscanodetemplate(requirementsName, requirementsVersion)
16:25:57 kafka | [2024-02-21 16:23:55,125] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-28 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
16:25:57 policy-pap | metric.reporters = []
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:15.925671504Z level=info msg="Migration successfully executed" id="recreate alert_definition_version table" duration=1.150911ms
16:25:57 policy-db-migrator | --------------
16:25:57 kafka | [2024-02-21 16:23:55,125] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-31 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
16:25:57 policy-pap | metrics.num.samples = 2
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:15.931048628Z level=info msg="Executing migration" id="add index in alert_definition_version table on alert_definition_id and version columns"
16:25:57 policy-db-migrator |
16:25:57 kafka | [2024-02-21 16:23:55,125] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-36 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
16:25:57 policy-pap | metrics.recording.level = INFO
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:15.932093548Z level=info msg="Migration successfully executed" id="add index in alert_definition_version table on alert_definition_id and version columns" duration=1.04507ms
16:25:57 policy-db-migrator |
16:25:57 kafka | [2024-02-21 16:23:55,126] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-42 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
16:25:57 policy-pap | metrics.sample.window.ms = 30000
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:15.936166568Z level=info msg="Executing migration" id="add index in alert_definition_version table on alert_definition_uid and version columns"
16:25:57 policy-db-migrator | > upgrade 0850-FK_ToscaNodeType_requirementsName.sql
16:25:57 kafka | [2024-02-21 16:23:55,126] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-3 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
16:25:57 policy-pap | partitioner.adaptive.partitioning.enable = true
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:15.939472721Z level=info msg="Migration successfully executed" id="add index in alert_definition_version table on alert_definition_uid and version columns" duration=3.304923ms
16:25:57 policy-db-migrator | --------------
16:25:57 kafka | [2024-02-21 16:23:55,126] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-18 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
16:25:57 policy-pap | partitioner.availability.timeout.ms = 0
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:15.945722454Z level=info msg="Executing migration" id="alter alert_definition_version table data column to mediumtext in mysql"
16:25:57 policy-db-migrator | CREATE INDEX FK_ToscaNodeType_requirementsName ON toscanodetype(requirementsName, requirementsVersion)
16:25:57 kafka | [2024-02-21 16:23:55,126] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-15 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
16:25:57 policy-pap | partitioner.class = null
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:15.945853575Z level=info msg="Migration successfully executed" id="alter alert_definition_version table data column to mediumtext in mysql" duration=135.891µs
16:25:57 policy-db-migrator | --------------
16:25:57 kafka | [2024-02-21 16:23:55,126] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-24 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
16:25:57 policy-pap | partitioner.ignore.keys = false
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:15.952192458Z level=info msg="Executing migration" id="drop alert_definition_version table"
16:25:57 policy-db-migrator |
16:25:57 kafka | [2024-02-21 16:23:55,126] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-17 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
16:25:57 policy-pap | receive.buffer.bytes = 32768
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:15.954230078Z level=info msg="Migration successfully executed" id="drop alert_definition_version table" duration=2.04095ms
16:25:57 policy-db-migrator |
16:25:57 kafka | [2024-02-21 16:23:55,126] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-48 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
16:25:57 kafka | [2024-02-21 16:23:55,126] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-19 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:15.963554631Z level=info msg="Executing migration" id="create alert_instance table"
16:25:57 policy-db-migrator | > upgrade 0860-FK_ToscaServiceTemplate_capabilityTypesName.sql
16:25:57 kafka | [2024-02-21 16:23:55,126] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-11 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:15.964685922Z level=info msg="Migration successfully executed" id="create alert_instance table" duration=1.131031ms
16:25:57 kafka | [2024-02-21 16:23:55,126] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-2 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
16:25:57 policy-db-migrator | --------------
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:15.969078396Z level=info msg="Executing migration" id="add index in alert_instance table on def_org_id, def_uid and current_state columns"
16:25:57 kafka | [2024-02-21 16:23:55,126] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-43 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
16:25:57 policy-db-migrator | CREATE INDEX FK_ToscaServiceTemplate_capabilityTypesName ON toscaservicetemplate(capabilityTypesName, capabilityTypesVersion)
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:15.970069415Z level=info msg="Migration successfully executed" id="add index in alert_instance table on def_org_id, def_uid and current_state columns" duration=990.469µs
16:25:57 kafka | [2024-02-21 16:23:55,126] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-6 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
16:25:57 policy-db-migrator | --------------
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:15.974568101Z level=info msg="Executing migration" id="add index in alert_instance table on def_org_id, current_state columns"
16:25:57 kafka | [2024-02-21 16:23:55,126] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-14 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
16:25:57 policy-db-migrator |
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:15.975629121Z level=info msg="Migration successfully executed" id="add index in alert_instance table on def_org_id, current_state columns" duration=1.06099ms
16:25:57 kafka | [2024-02-21 16:23:55,126] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-20 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
16:25:57 policy-db-migrator |
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:15.980263327Z level=info msg="Executing migration" id="add column current_state_end to alert_instance"
16:25:57 kafka | [2024-02-21 16:23:55,126] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-0 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
16:25:57 policy-db-migrator | > upgrade 0870-FK_ToscaServiceTemplate_dataTypesName.sql
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:15.98659334Z level=info msg="Migration successfully executed" id="add column current_state_end to alert_instance" duration=6.329652ms
16:25:57 kafka | [2024-02-21 16:23:55,126] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-44 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
16:25:57 policy-db-migrator | --------------
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:15.992855712Z level=info msg="Executing migration" id="remove index def_org_id, def_uid, current_state on alert_instance"
16:25:57 kafka | [2024-02-21 16:23:55,126] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-39 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
16:25:57 policy-db-migrator | CREATE INDEX FK_ToscaServiceTemplate_dataTypesName ON toscaservicetemplate(dataTypesName, dataTypesVersion)
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:15.994080695Z level=info msg="Migration successfully executed" id="remove index def_org_id, def_uid, current_state on alert_instance" duration=1.224993ms
16:25:57 kafka | [2024-02-21 16:23:55,126] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-12 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
16:25:57 policy-pap | reconnect.backoff.max.ms = 1000
16:25:57 policy-db-migrator | --------------
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:15.998972013Z level=info msg="Executing migration" id="remove index def_org_id, current_state on alert_instance"
16:25:57 kafka | [2024-02-21 16:23:55,126] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-45 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
16:25:57 policy-pap | reconnect.backoff.ms = 50
16:25:57 policy-db-migrator |
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:16.000609159Z level=info msg="Migration successfully executed" id="remove index def_org_id, current_state on alert_instance" duration=1.618696ms
16:25:57 kafka | [2024-02-21 16:23:55,126] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-1 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
16:25:57 policy-pap | request.timeout.ms = 30000
16:25:57 policy-db-migrator |
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:16.00515195Z level=info msg="Executing migration" id="rename def_org_id to rule_org_id in alert_instance"
16:25:57 kafka | [2024-02-21 16:23:55,126] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-5 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
16:25:57 policy-pap | retries = 2147483647
16:25:57 policy-db-migrator | > upgrade 0880-FK_ToscaServiceTemplate_nodeTypesName.sql
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:16.080215625Z level=info msg="Migration successfully executed" id="rename def_org_id to rule_org_id in alert_instance" duration=75.061205ms
16:25:57 kafka | [2024-02-21 16:23:55,126] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-26 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
16:25:57 policy-pap | retry.backoff.ms = 100
16:25:57 policy-db-migrator | --------------
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:16.086364804Z level=info msg="Executing migration" id="rename def_uid to rule_uid in alert_instance"
16:25:57 kafka | [2024-02-21 16:23:55,126] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-29 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
16:25:57 policy-pap | sasl.client.callback.handler.class = null
16:25:57 policy-db-migrator | CREATE INDEX FK_ToscaServiceTemplate_nodeTypesName ON toscaservicetemplate(nodeTypesName, nodeTypesVersion)
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:16.124714706Z level=info msg="Migration successfully executed" id="rename def_uid to rule_uid in alert_instance" duration=38.350642ms
16:25:57 kafka | [2024-02-21 16:23:55,127] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-34 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
16:25:57 policy-pap | sasl.jaas.config = null
16:25:57 policy-db-migrator | --------------
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:16.130799384Z level=info msg="Executing migration" id="add index rule_org_id, rule_uid, current_state on alert_instance"
16:25:57 kafka | [2024-02-21 16:23:55,127] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-10 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
16:25:57 policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit
16:25:57 policy-db-migrator |
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:16.133140129Z level=info msg="Migration successfully executed" id="add index rule_org_id, rule_uid, current_state on alert_instance" duration=2.338525ms
16:25:57 kafka | [2024-02-21 16:23:55,127] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-32 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
16:25:57 policy-pap | sasl.kerberos.min.time.before.relogin = 60000
16:25:57 policy-db-migrator |
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:16.140002713Z level=info msg="Executing migration" id="add index rule_org_id, current_state on alert_instance"
16:25:57 kafka | [2024-02-21 16:23:55,127] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-40 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
16:25:57 policy-pap | sasl.kerberos.service.name = null
16:25:57 policy-db-migrator | > upgrade 0890-FK_ToscaServiceTemplate_policyTypesName.sql
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:16.140928628Z level=info msg="Migration successfully executed" id="add index rule_org_id, current_state on alert_instance" duration=925.965µs
16:25:57 kafka | [2024-02-21 16:23:55,127] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger)
16:25:57 policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05
16:25:57 policy-db-migrator | --------------
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:16.150407318Z level=info msg="Executing migration" id="add current_reason column related to current_state"
16:25:57 kafka | [2024-02-21 16:23:55,128] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-32 from NonExistentReplica to NewReplica (state.change.logger)
16:25:57 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8
16:25:57 policy-db-migrator | CREATE INDEX FK_ToscaServiceTemplate_policyTypesName ON toscaservicetemplate(policyTypesName, policyTypesVersion)
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:16.157466152Z level=info msg="Migration successfully executed" id="add current_reason column related to current_state" duration=7.058844ms
16:25:57 kafka | [2024-02-21 16:23:55,128] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-5 from NonExistentReplica to NewReplica (state.change.logger)
16:25:57 policy-pap | sasl.login.callback.handler.class = null
16:25:57 policy-db-migrator | --------------
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:16.161863001Z level=info msg="Executing migration" id="add result_fingerprint column to alert_instance"
16:25:57 kafka | [2024-02-21 16:23:55,128] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-44 from NonExistentReplica to NewReplica (state.change.logger)
16:25:57 policy-pap | sasl.login.class = null
16:25:57 policy-db-migrator |
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:16.173552325Z level=info msg="Migration successfully executed" id="add result_fingerprint column to alert_instance" duration=11.686004ms
16:25:57 kafka | [2024-02-21 16:23:55,128] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-48 from NonExistentReplica to NewReplica (state.change.logger)
16:25:57 policy-pap | sasl.login.connect.timeout.ms = null
16:25:57 policy-db-migrator |
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:16.178523926Z level=info msg="Executing migration" id="create alert_rule table"
16:25:57 kafka | [2024-02-21 16:23:55,128] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-46 from NonExistentReplica to NewReplica (state.change.logger)
16:25:57 policy-pap | sasl.login.read.timeout.ms = null
16:25:57 policy-db-migrator | > upgrade 0900-FK_ToscaServiceTemplate_relationshipTypesName.sql
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:16.179667733Z level=info msg="Migration successfully executed" id="create alert_rule table" duration=1.147707ms
16:25:57 kafka | [2024-02-21 16:23:55,128] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-20 from NonExistentReplica to NewReplica (state.change.logger)
16:25:57 policy-pap | sasl.login.refresh.buffer.seconds = 300
16:25:57 policy-db-migrator | --------------
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:16.186383655Z level=info msg="Executing migration" id="add index in alert_rule on org_id and title columns"
16:25:57 kafka | [2024-02-21 16:23:55,128] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-43 from NonExistentReplica to NewReplica (state.change.logger)
16:25:57 policy-pap | sasl.login.refresh.min.period.seconds = 60
16:25:57 policy-db-migrator | CREATE INDEX FK_ToscaServiceTemplate_relationshipTypesName ON toscaservicetemplate(relationshipTypesName, relationshipTypesVersion)
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:16.187603904Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id and title columns" duration=1.220259ms
16:25:57 kafka | [2024-02-21 16:23:55,128] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-24 from NonExistentReplica to NewReplica (state.change.logger)
16:25:57 policy-pap | sasl.login.refresh.window.factor = 0.8
16:25:57 policy-db-migrator | --------------
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:16.196060857Z level=info msg="Executing migration" id="add index in alert_rule on org_id and uid columns"
16:25:57 kafka | [2024-02-21 16:23:55,128] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-6 from NonExistentReplica to NewReplica (state.change.logger)
16:25:57 policy-pap | sasl.login.refresh.window.jitter = 0.05
16:25:57 policy-db-migrator |
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:16.197143063Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id and uid columns" duration=1.082786ms
16:25:57 kafka | [2024-02-21 16:23:55,128] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-18 from NonExistentReplica to NewReplica (state.change.logger)
16:25:57 policy-pap | sasl.login.retry.backoff.max.ms = 10000
16:25:57 policy-db-migrator |
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:16.20136485Z level=info msg="Executing migration" id="add index in alert_rule on org_id, namespace_uid, group_uid columns"
16:25:57 kafka | [2024-02-21 16:23:55,128] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-21 from NonExistentReplica to NewReplica (state.change.logger)
16:25:57 policy-pap | sasl.login.retry.backoff.ms = 100
16:25:57 policy-db-migrator | > upgrade 0910-FK_ToscaTopologyTemplate_nodeTemplatesName.sql
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:16.202503087Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, namespace_uid, group_uid columns" duration=1.138147ms
16:25:57 kafka | [2024-02-21 16:23:55,128] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-1 from NonExistentReplica to NewReplica (state.change.logger)
16:25:57 policy-pap | sasl.mechanism = GSSAPI
16:25:57 policy-db-migrator | --------------
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:16.211155002Z level=info msg="Executing migration" id="alter alert_rule table data column to mediumtext in mysql"
16:25:57 kafka | [2024-02-21 16:23:55,128] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-14 from NonExistentReplica to NewReplica (state.change.logger)
16:25:57 policy-pap | sasl.oauthbearer.clock.skew.seconds = 30
16:25:57 policy-db-migrator | CREATE INDEX FK_ToscaTopologyTemplate_nodeTemplatesName ON toscatopologytemplate(nodeTemplatesName, nodeTemplatessVersion)
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:16.211257592Z level=info
msg="Migration successfully executed" id="alter alert_rule table data column to mediumtext in mysql" duration=106.26µs 16:25:57 kafka | [2024-02-21 16:23:55,128] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-34 from NonExistentReplica to NewReplica (state.change.logger) 16:25:57 policy-pap | sasl.oauthbearer.expected.audience = null 16:25:57 policy-db-migrator | -------------- 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:16.217754204Z level=info msg="Executing migration" id="add column for to alert_rule" 16:25:57 kafka | [2024-02-21 16:23:55,128] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-16 from NonExistentReplica to NewReplica (state.change.logger) 16:25:57 policy-pap | sasl.oauthbearer.expected.issuer = null 16:25:57 policy-db-migrator | 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:16.224277474Z level=info msg="Migration successfully executed" id="add column for to alert_rule" duration=6.5247ms 16:25:57 kafka | [2024-02-21 16:23:55,128] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-29 from NonExistentReplica to NewReplica (state.change.logger) 16:25:57 policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 16:25:57 policy-db-migrator | 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:16.227530635Z level=info msg="Executing migration" id="add column annotations to alert_rule" 16:25:57 kafka | [2024-02-21 16:23:55,128] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-11 from NonExistentReplica to NewReplica (state.change.logger) 16:25:57 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 16:25:57 policy-db-migrator | > upgrade 0920-FK_ToscaTopologyTemplate_policyName.sql 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:16.234001976Z level=info msg="Migration successfully executed" id="add column annotations to alert_rule" 
duration=6.471951ms 16:25:57 kafka | [2024-02-21 16:23:55,129] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-0 from NonExistentReplica to NewReplica (state.change.logger) 16:25:57 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 16:25:57 policy-db-migrator | -------------- 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:16.238275823Z level=info msg="Executing migration" id="add column labels to alert_rule" 16:25:57 kafka | [2024-02-21 16:23:55,129] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-22 from NonExistentReplica to NewReplica (state.change.logger) 16:25:57 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null 16:25:57 policy-db-migrator | CREATE INDEX FK_ToscaTopologyTemplate_policyName ON toscatopologytemplate(policyName, policyVersion) 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:16.245323608Z level=info msg="Migration successfully executed" id="add column labels to alert_rule" duration=7.041215ms 16:25:57 kafka | [2024-02-21 16:23:55,129] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-47 from NonExistentReplica to NewReplica (state.change.logger) 16:25:57 policy-pap | sasl.oauthbearer.scope.claim.name = scope 16:25:57 policy-db-migrator | -------------- 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:16.251825609Z level=info msg="Executing migration" id="remove unique index from alert_rule on org_id, title columns" 16:25:57 policy-pap | sasl.oauthbearer.sub.claim.name = sub 16:25:57 policy-db-migrator | 16:25:57 kafka | [2024-02-21 16:23:55,129] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-36 from NonExistentReplica to NewReplica (state.change.logger) 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:16.252528633Z level=info msg="Migration successfully executed" id="remove unique index from alert_rule on org_id, title columns" 
duration=701.134µs 16:25:57 policy-pap | sasl.oauthbearer.token.endpoint.url = null 16:25:57 policy-db-migrator | 16:25:57 kafka | [2024-02-21 16:23:55,129] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-28 from NonExistentReplica to NewReplica (state.change.logger) 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:16.25836765Z level=info msg="Executing migration" id="add index in alert_rule on org_id, namespase_uid and title columns" 16:25:57 policy-pap | security.protocol = PLAINTEXT 16:25:57 policy-db-migrator | > upgrade 0940-PdpPolicyStatus_PdpGroup.sql 16:25:57 kafka | [2024-02-21 16:23:55,129] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-42 from NonExistentReplica to NewReplica (state.change.logger) 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:16.260522814Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, namespase_uid and title columns" duration=2.160984ms 16:25:57 policy-pap | security.providers = null 16:25:57 policy-db-migrator | -------------- 16:25:57 kafka | [2024-02-21 16:23:55,129] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-9 from NonExistentReplica to NewReplica (state.change.logger) 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:16.26617908Z level=info msg="Executing migration" id="add dashboard_uid column to alert_rule" 16:25:57 policy-pap | send.buffer.bytes = 131072 16:25:57 policy-db-migrator | CREATE INDEX PdpPolicyStatus_PdpGroup ON pdppolicystatus(PDPGROUP) 16:25:57 kafka | [2024-02-21 16:23:55,129] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-37 from NonExistentReplica to NewReplica (state.change.logger) 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:16.273012333Z level=info msg="Migration successfully executed" id="add dashboard_uid column to alert_rule" duration=6.838113ms 16:25:57 
policy-pap | socket.connection.setup.timeout.max.ms = 30000 16:25:57 policy-db-migrator | -------------- 16:25:57 kafka | [2024-02-21 16:23:55,129] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-13 from NonExistentReplica to NewReplica (state.change.logger) 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:16.276924628Z level=info msg="Executing migration" id="add panel_id column to alert_rule" 16:25:57 policy-pap | socket.connection.setup.timeout.ms = 10000 16:25:57 policy-db-migrator | 16:25:57 kafka | [2024-02-21 16:23:55,129] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-30 from NonExistentReplica to NewReplica (state.change.logger) 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:16.283279747Z level=info msg="Migration successfully executed" id="add panel_id column to alert_rule" duration=6.354589ms 16:25:57 policy-pap | ssl.cipher.suites = null 16:25:57 policy-db-migrator | 16:25:57 kafka | [2024-02-21 16:23:55,129] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-35 from NonExistentReplica to NewReplica (state.change.logger) 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:16.29005359Z level=info msg="Executing migration" id="add index in alert_rule on org_id, dashboard_uid and panel_id columns" 16:25:57 policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 16:25:57 policy-db-migrator | > upgrade 0950-TscaServiceTemplatetopologyTemplateParentLocalName.sql 16:25:57 kafka | [2024-02-21 16:23:55,129] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-39 from NonExistentReplica to NewReplica (state.change.logger) 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:16.292353645Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, dashboard_uid and panel_id columns" duration=2.302165ms 16:25:57 policy-pap | 
ssl.endpoint.identification.algorithm = https 16:25:57 policy-db-migrator | -------------- 16:25:57 kafka | [2024-02-21 16:23:55,129] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-12 from NonExistentReplica to NewReplica (state.change.logger) 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:16.302605159Z level=info msg="Executing migration" id="add rule_group_idx column to alert_rule" 16:25:57 policy-pap | ssl.engine.factory.class = null 16:25:57 policy-db-migrator | CREATE INDEX TscaServiceTemplatetopologyTemplateParentLocalName ON toscaservicetemplate(topologyTemplateParentLocalName, topologyTemplateLocalName, topologyTemplateParentKeyVersion, topologyTemplateParentKeyName) 16:25:57 kafka | [2024-02-21 16:23:55,129] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-27 from NonExistentReplica to NewReplica (state.change.logger) 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:16.307116008Z level=info msg="Migration successfully executed" id="add rule_group_idx column to alert_rule" duration=4.514449ms 16:25:57 policy-pap | ssl.key.password = null 16:25:57 policy-db-migrator | -------------- 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:16.310974063Z level=info msg="Executing migration" id="add is_paused column to alert_rule table" 16:25:57 policy-pap | ssl.keymanager.algorithm = SunX509 16:25:57 policy-db-migrator | 16:25:57 kafka | [2024-02-21 16:23:55,129] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-45 from NonExistentReplica to NewReplica (state.change.logger) 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:16.317678205Z level=info msg="Migration successfully executed" id="add is_paused column to alert_rule table" duration=6.705192ms 16:25:57 policy-pap | ssl.keystore.certificate.chain = null 16:25:57 policy-db-migrator | 16:25:57 kafka | [2024-02-21 16:23:55,129] TRACE [Controller id=1 epoch=1] Changed 
state of replica 1 for partition __consumer_offsets-19 from NonExistentReplica to NewReplica (state.change.logger) 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:16.321059526Z level=info msg="Executing migration" id="fix is_paused column for alert_rule table" 16:25:57 policy-pap | ssl.keystore.key = null 16:25:57 policy-db-migrator | > upgrade 0960-FK_ToscaNodeTemplate_capabilitiesName.sql 16:25:57 kafka | [2024-02-21 16:23:55,129] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-49 from NonExistentReplica to NewReplica (state.change.logger) 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:16.321126847Z level=info msg="Migration successfully executed" id="fix is_paused column for alert_rule table" duration=66.581µs 16:25:57 policy-pap | ssl.keystore.location = null 16:25:57 policy-db-migrator | -------------- 16:25:57 kafka | [2024-02-21 16:23:55,129] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-40 from NonExistentReplica to NewReplica (state.change.logger) 16:25:57 policy-pap | ssl.keystore.password = null 16:25:57 policy-db-migrator | ALTER TABLE toscanodetemplate ADD CONSTRAINT FK_ToscaNodeTemplate_capabilitiesName FOREIGN KEY (capabilitiesName, capabilitiesVersion) REFERENCES toscacapabilityassignments (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:16.327219226Z level=info msg="Executing migration" id="create alert_rule_version table" 16:25:57 kafka | [2024-02-21 16:23:55,129] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-41 from NonExistentReplica to NewReplica (state.change.logger) 16:25:57 policy-pap | ssl.keystore.type = JKS 16:25:57 policy-db-migrator | -------------- 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:16.328387892Z level=info msg="Migration successfully executed" id="create alert_rule_version table" duration=1.167506ms 16:25:57 
kafka | [2024-02-21 16:23:55,129] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-38 from NonExistentReplica to NewReplica (state.change.logger) 16:25:57 policy-pap | ssl.protocol = TLSv1.3 16:25:57 policy-db-migrator | 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:16.332762111Z level=info msg="Executing migration" id="add index in alert_rule_version table on rule_org_id, rule_uid and version columns" 16:25:57 kafka | [2024-02-21 16:23:55,129] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-8 from NonExistentReplica to NewReplica (state.change.logger) 16:25:57 policy-pap | ssl.provider = null 16:25:57 policy-db-migrator | 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:16.334417671Z level=info msg="Migration successfully executed" id="add index in alert_rule_version table on rule_org_id, rule_uid and version columns" duration=1.65527ms 16:25:57 kafka | [2024-02-21 16:23:55,129] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-7 from NonExistentReplica to NewReplica (state.change.logger) 16:25:57 policy-pap | ssl.secure.random.implementation = null 16:25:57 policy-db-migrator | > upgrade 0970-FK_ToscaNodeTemplate_requirementsName.sql 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:16.340031216Z level=info msg="Executing migration" id="add index in alert_rule_version table on rule_org_id, rule_namespace_uid and rule_group columns" 16:25:57 kafka | [2024-02-21 16:23:55,129] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-33 from NonExistentReplica to NewReplica (state.change.logger) 16:25:57 policy-pap | ssl.trustmanager.algorithm = PKIX 16:25:57 policy-db-migrator | -------------- 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:16.341274834Z level=info msg="Migration successfully executed" id="add index in alert_rule_version table on rule_org_id, rule_namespace_uid and 
rule_group columns" duration=1.240168ms 16:25:57 kafka | [2024-02-21 16:23:55,129] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-25 from NonExistentReplica to NewReplica (state.change.logger) 16:25:57 policy-pap | ssl.truststore.certificates = null 16:25:57 policy-db-migrator | ALTER TABLE toscanodetemplate ADD CONSTRAINT FK_ToscaNodeTemplate_requirementsName FOREIGN KEY (requirementsName, requirementsVersion) REFERENCES toscarequirements (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:16.348356629Z level=info msg="Executing migration" id="alter alert_rule_version table data column to mediumtext in mysql" 16:25:57 kafka | [2024-02-21 16:23:55,129] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-31 from NonExistentReplica to NewReplica (state.change.logger) 16:25:57 policy-pap | ssl.truststore.location = null 16:25:57 policy-db-migrator | -------------- 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:16.348463939Z level=info msg="Migration successfully executed" id="alter alert_rule_version table data column to mediumtext in mysql" duration=112.04µs 16:25:57 kafka | [2024-02-21 16:23:55,129] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-23 from NonExistentReplica to NewReplica (state.change.logger) 16:25:57 policy-pap | ssl.truststore.password = null 16:25:57 policy-db-migrator | 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:16.352629066Z level=info msg="Executing migration" id="add column for to alert_rule_version" 16:25:57 kafka | [2024-02-21 16:23:55,129] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-10 from NonExistentReplica to NewReplica (state.change.logger) 16:25:57 policy-pap | ssl.truststore.type = JKS 16:25:57 policy-db-migrator | 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:16.361577522Z 
level=info msg="Migration successfully executed" id="add column for to alert_rule_version" duration=8.948856ms 16:25:57 kafka | [2024-02-21 16:23:55,129] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-2 from NonExistentReplica to NewReplica (state.change.logger) 16:25:57 policy-pap | transaction.timeout.ms = 60000 16:25:57 policy-db-migrator | > upgrade 0980-FK_ToscaNodeType_requirementsName.sql 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:16.365724259Z level=info msg="Executing migration" id="add column annotations to alert_rule_version" 16:25:57 kafka | [2024-02-21 16:23:55,129] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-17 from NonExistentReplica to NewReplica (state.change.logger) 16:25:57 policy-pap | transactional.id = null 16:25:57 policy-db-migrator | -------------- 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:16.371138653Z level=info msg="Migration successfully executed" id="add column annotations to alert_rule_version" duration=5.413814ms 16:25:57 kafka | [2024-02-21 16:23:55,129] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-4 from NonExistentReplica to NewReplica (state.change.logger) 16:25:57 policy-db-migrator | ALTER TABLE toscanodetype ADD CONSTRAINT FK_ToscaNodeType_requirementsName FOREIGN KEY (requirementsName, requirementsVersion) REFERENCES toscarequirements (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:16.375789402Z level=info msg="Executing migration" id="add column labels to alert_rule_version" 16:25:57 policy-pap | value.serializer = class org.apache.kafka.common.serialization.StringSerializer 16:25:57 kafka | [2024-02-21 16:23:55,129] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-15 from NonExistentReplica to NewReplica (state.change.logger) 16:25:57 policy-db-migrator | 
-------------- 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:16.381978521Z level=info msg="Migration successfully executed" id="add column labels to alert_rule_version" duration=6.188969ms 16:25:57 policy-pap | 16:25:57 kafka | [2024-02-21 16:23:55,129] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-26 from NonExistentReplica to NewReplica (state.change.logger) 16:25:57 policy-db-migrator | 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:16.387128553Z level=info msg="Executing migration" id="add rule_group_idx column to alert_rule_version" 16:25:57 policy-pap | [2024-02-21T16:23:54.525+00:00|INFO|KafkaProducer|main] [Producer clientId=producer-1] Instantiated an idempotent producer. 16:25:57 kafka | [2024-02-21 16:23:55,129] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-3 from NonExistentReplica to NewReplica (state.change.logger) 16:25:57 policy-db-migrator | 16:25:57 policy-pap | [2024-02-21T16:23:54.541+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 16:25:57 kafka | [2024-02-21 16:23:55,129] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger) 16:25:57 policy-db-migrator | > upgrade 0990-FK_ToscaServiceTemplate_capabilityTypesName.sql 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:16.392751819Z level=info msg="Migration successfully executed" id="add rule_group_idx column to alert_rule_version" duration=5.622146ms 16:25:57 policy-pap | [2024-02-21T16:23:54.541+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 16:25:57 kafka | [2024-02-21 16:23:55,164] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition policy-pdp-pap-0 (state.change.logger) 16:25:57 policy-db-migrator | -------------- 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:16.397826401Z level=info 
msg="Executing migration" id="add is_paused column to alert_rule_versions table" 16:25:57 policy-pap | [2024-02-21T16:23:54.541+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1708532634541 16:25:57 kafka | [2024-02-21 16:23:55,180] INFO [ReplicaFetcherManager on broker 1] Removed fetcher for partitions Set(policy-pdp-pap-0) (kafka.server.ReplicaFetcherManager) 16:25:57 policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT FK_ToscaServiceTemplate_capabilityTypesName FOREIGN KEY (capabilityTypesName, capabilityTypesVersion) REFERENCES toscacapabilitytypes (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:16.40396319Z level=info msg="Migration successfully executed" id="add is_paused column to alert_rule_versions table" duration=6.136449ms 16:25:57 policy-pap | [2024-02-21T16:23:54.542+00:00|INFO|InlineKafkaTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=b0ae57b3-1307-4849-aa26-4af200585dc7, alive=false, publisher=KafkaPublisherWrapper []]]: KAFKA SINK created 16:25:57 kafka | [2024-02-21 16:23:55,181] INFO [Broker id=1] Stopped fetchers as part of LeaderAndIsr request correlationId 1 from controller 1 epoch 1 as part of the become-leader transition for 1 partitions (state.change.logger) 16:25:57 policy-db-migrator | -------------- 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:16.413222349Z level=info msg="Executing migration" id="fix is_paused column for alert_rule_version table" 16:25:57 policy-pap | [2024-02-21T16:23:54.542+00:00|INFO|InlineBusTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=c8a860fc-59e1-4b63-bded-3d8abfc38ee3, alive=false, publisher=null]]: starting 16:25:57 kafka | [2024-02-21 16:23:55,279] INFO [LogLoader partition=policy-pdp-pap-0, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 
(kafka.log.UnifiedLog$) 16:25:57 policy-db-migrator | 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:16.41336962Z level=info msg="Migration successfully executed" id="fix is_paused column for alert_rule_version table" duration=148.041µs 16:25:57 policy-pap | [2024-02-21T16:23:54.542+00:00|INFO|ProducerConfig|main] ProducerConfig values: 16:25:57 kafka | [2024-02-21 16:23:55,291] INFO Created log for partition policy-pdp-pap-0 in /var/lib/kafka/data/policy-pdp-pap-0 with properties {} (kafka.log.LogManager) 16:25:57 policy-db-migrator | 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:16.416357529Z level=info msg="Executing migration" id=create_alert_configuration_table 16:25:57 policy-pap | acks = -1 16:25:57 kafka | [2024-02-21 16:23:55,294] INFO [Partition policy-pdp-pap-0 broker=1] No checkpointed highwatermark is found for partition policy-pdp-pap-0 (kafka.cluster.Partition) 16:25:57 policy-db-migrator | > upgrade 1000-FK_ToscaServiceTemplate_dataTypesName.sql 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:16.417228794Z level=info msg="Migration successfully executed" id=create_alert_configuration_table duration=871.015µs 16:25:57 policy-pap | auto.include.jmx.reporter = true 16:25:57 kafka | [2024-02-21 16:23:55,296] INFO [Partition policy-pdp-pap-0 broker=1] Log loaded for partition policy-pdp-pap-0 with initial high watermark 0 (kafka.cluster.Partition) 16:25:57 policy-db-migrator | -------------- 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:16.421534651Z level=info msg="Executing migration" id="Add column default in alert_configuration" 16:25:57 policy-pap | batch.size = 16384 16:25:57 kafka | [2024-02-21 16:23:55,301] INFO [Broker id=1] Leader policy-pdp-pap-0 with topic id Some(w964DUBQRBenrkUFkcM3Zw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 16:25:57 policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT FK_ToscaServiceTemplate_dataTypesName FOREIGN KEY (dataTypesName, dataTypesVersion) REFERENCES toscadatatypes (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:16.427880172Z level=info msg="Migration successfully executed" id="Add column default in alert_configuration" duration=6.344781ms 16:25:57 policy-pap | bootstrap.servers = [kafka:9092] 16:25:57 kafka | [2024-02-21 16:23:55,308] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-22 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 16:25:57 policy-db-migrator | -------------- 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:16.433116394Z level=info msg="Executing migration" id="alert alert_configuration alertmanager_configuration column from TEXT to MEDIUMTEXT if mysql" 16:25:57 policy-pap | buffer.memory = 33554432 16:25:57 kafka | [2024-02-21 16:23:55,309] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-30 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 16:25:57 policy-db-migrator | 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:16.433210725Z level=info msg="Migration successfully executed" id="alert alert_configuration alertmanager_configuration column from TEXT to MEDIUMTEXT if mysql" duration=94.851µs 16:25:57 policy-pap | client.dns.lookup = use_all_dns_ips 16:25:57 kafka | [2024-02-21 16:23:55,309] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-25 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, 
isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
16:25:57 policy-db-migrator | 
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:16.438853751Z level=info msg="Executing migration" id="add column org_id in alert_configuration"
16:25:57 policy-pap | client.id = producer-2
16:25:57 kafka | [2024-02-21 16:23:55,309] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-35 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
16:25:57 policy-db-migrator | > upgrade 1010-FK_ToscaServiceTemplate_nodeTypesName.sql
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:16.44828716Z level=info msg="Migration successfully executed" id="add column org_id in alert_configuration" duration=9.433939ms
16:25:57 policy-pap | compression.type = none
16:25:57 kafka | [2024-02-21 16:23:55,309] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-37 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
16:25:57 policy-db-migrator | --------------
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:16.453527424Z level=info msg="Executing migration" id="add index in alert_configuration table on org_id column"
16:25:57 policy-pap | connections.max.idle.ms = 540000
16:25:57 kafka | [2024-02-21 16:23:55,309] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-38 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
16:25:57 policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT FK_ToscaServiceTemplate_nodeTypesName FOREIGN KEY (nodeTypesName, nodeTypesVersion) REFERENCES toscanodetypes (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:16.455161624Z level=info msg="Migration successfully executed" id="add index in alert_configuration table on org_id column" duration=1.63414ms
16:25:57 policy-pap | delivery.timeout.ms = 120000
16:25:57 kafka | [2024-02-21 16:23:55,309] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-13 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
16:25:57 policy-db-migrator | --------------
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:16.460121385Z level=info msg="Executing migration" id="add configuration_hash column to alert_configuration"
16:25:57 kafka | [2024-02-21 16:23:55,309] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-8 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
16:25:57 policy-db-migrator | 
16:25:57 policy-pap | enable.idempotence = true
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:16.466725167Z level=info msg="Migration successfully executed" id="add configuration_hash column to alert_configuration" duration=6.603112ms
16:25:57 kafka | [2024-02-21 16:23:55,309] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-21 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
16:25:57 policy-db-migrator | 
16:25:57 policy-pap | interceptor.classes = []
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:16.47202119Z level=info msg="Executing migration" id=create_ngalert_configuration_table
16:25:57 kafka | [2024-02-21 16:23:55,309] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-4 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
16:25:57 policy-db-migrator | > upgrade 1020-FK_ToscaServiceTemplate_policyTypesName.sql
16:25:57 policy-pap | key.serializer = class org.apache.kafka.common.serialization.StringSerializer
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:16.472674925Z level=info msg="Migration successfully executed" id=create_ngalert_configuration_table duration=653.205µs
16:25:57 kafka | [2024-02-21 16:23:55,312] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-27 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
16:25:57 policy-db-migrator | --------------
16:25:57 policy-pap | linger.ms = 0
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:16.476891041Z level=info msg="Executing migration" id="add index in ngalert_configuration on org_id column"
16:25:57 kafka | [2024-02-21 16:23:55,312] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-7 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
16:25:57 policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT FK_ToscaServiceTemplate_policyTypesName FOREIGN KEY (policyTypesName, policyTypesVersion) REFERENCES toscapolicytypes (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT
16:25:57 policy-pap | max.block.ms = 60000
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:16.478070379Z level=info msg="Migration successfully executed" id="add index in ngalert_configuration on org_id column" duration=1.180118ms
16:25:57 kafka | [2024-02-21 16:23:55,312] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-9 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
16:25:57 policy-db-migrator | --------------
16:25:57 policy-pap | max.in.flight.requests.per.connection = 5
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:16.482555907Z level=info msg="Executing migration" id="add column send_alerts_to in ngalert_configuration"
16:25:57 kafka | [2024-02-21 16:23:55,312] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-46 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
16:25:57 policy-db-migrator | 
16:25:57 policy-pap | max.request.size = 1048576
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:16.489216459Z level=info msg="Migration successfully executed" id="add column send_alerts_to in ngalert_configuration" duration=6.661692ms
16:25:57 kafka | [2024-02-21 16:23:55,313] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-41 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
16:25:57 policy-db-migrator | 
16:25:57 policy-pap | metadata.max.age.ms = 300000
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:16.494117719Z level=info msg="Executing migration" id="create provenance_type table"
16:25:57 kafka | [2024-02-21 16:23:55,313] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-33 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
16:25:57 policy-db-migrator | > upgrade 1030-FK_ToscaServiceTemplate_relationshipTypesName.sql
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:16.494830944Z level=info msg="Migration successfully executed" id="create provenance_type table" duration=713.025µs
16:25:57 policy-db-migrator | --------------
16:25:57 policy-pap | metadata.max.idle.ms = 300000
16:25:57 kafka | [2024-02-21 16:23:55,314] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-23 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
16:25:57 policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT FK_ToscaServiceTemplate_relationshipTypesName FOREIGN KEY (relationshipTypesName, relationshipTypesVersion) REFERENCES toscarelationshiptypes (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT
16:25:57 policy-pap | metric.reporters = []
16:25:57 kafka | [2024-02-21 16:23:55,314] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-49 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:16.499443083Z level=info msg="Executing migration" id="add index to uniquify (record_key, record_type, org_id) columns"
16:25:57 policy-db-migrator | --------------
16:25:57 policy-pap | metrics.num.samples = 2
16:25:57 kafka | [2024-02-21 16:23:55,314] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-47 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:16.500987903Z level=info msg="Migration successfully executed" id="add index to uniquify (record_key, record_type, org_id) columns" duration=1.54141ms
16:25:57 policy-db-migrator | 
16:25:57 policy-pap | metrics.recording.level = INFO
16:25:57 kafka | [2024-02-21 16:23:55,314] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-16 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:16.507173261Z level=info msg="Executing migration" id="create alert_image table"
16:25:57 policy-db-migrator | 
16:25:57 policy-pap | metrics.sample.window.ms = 30000
16:25:57 kafka | [2024-02-21 16:23:55,314] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-28 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:16.50845066Z level=info msg="Migration successfully executed" id="create alert_image table" duration=1.277089ms
16:25:57 policy-db-migrator | > upgrade 1040-FK_ToscaTopologyTemplate_nodeTemplatesName.sql
16:25:57 policy-pap | partitioner.adaptive.partitioning.enable = true
16:25:57 kafka | [2024-02-21 16:23:55,314] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-31 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:16.516509531Z level=info msg="Executing migration" id="add unique index on token to alert_image table"
16:25:57 policy-db-migrator | --------------
16:25:57 policy-pap | partitioner.availability.timeout.ms = 0
16:25:57 kafka | [2024-02-21 16:23:55,314] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-36 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:16.518060381Z level=info msg="Migration successfully executed" id="add unique index on token to alert_image table" duration=1.55064ms
16:25:57 policy-db-migrator | ALTER TABLE toscatopologytemplate ADD CONSTRAINT FK_ToscaTopologyTemplate_nodeTemplatesName FOREIGN KEY (nodeTemplatesName, nodeTemplatessVersion) REFERENCES toscanodetemplates (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT
16:25:57 policy-pap | partitioner.class = null
16:25:57 kafka | [2024-02-21 16:23:55,314] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-42 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:16.52432046Z level=info msg="Executing migration" id="support longer URLs in alert_image table"
16:25:57 policy-db-migrator | --------------
16:25:57 policy-pap | partitioner.ignore.keys = false
16:25:57 kafka | [2024-02-21 16:23:55,316] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-3 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:16.524444491Z level=info msg="Migration successfully executed" id="support longer URLs in alert_image table" duration=125.171µs
16:25:57 policy-db-migrator | 
16:25:57 kafka | [2024-02-21 16:23:55,316] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-18 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
16:25:57 policy-pap | receive.buffer.bytes = 32768
16:25:57 policy-db-migrator | 
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:16.5289962Z level=info msg="Executing migration" id=create_alert_configuration_history_table
16:25:57 policy-pap | reconnect.backoff.max.ms = 1000
16:25:57 kafka | [2024-02-21 16:23:55,316] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-15 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
16:25:57 policy-db-migrator | > upgrade 1050-FK_ToscaTopologyTemplate_policyName.sql
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:16.529936115Z level=info msg="Migration successfully executed" id=create_alert_configuration_history_table duration=939.465µs
16:25:57 policy-pap | reconnect.backoff.ms = 50
16:25:57 kafka | [2024-02-21 16:23:55,316] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-24 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
16:25:57 policy-db-migrator | --------------
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:16.534546505Z level=info msg="Executing migration" id="drop non-unique orgID index on alert_configuration"
16:25:57 policy-pap | request.timeout.ms = 30000
16:25:57 kafka | [2024-02-21 16:23:55,316] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-17 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:16.536308806Z level=info msg="Migration successfully executed" id="drop non-unique orgID index on alert_configuration" duration=1.760761ms
16:25:57 policy-pap | retries = 2147483647
16:25:57 policy-db-migrator | ALTER TABLE toscatopologytemplate ADD CONSTRAINT FK_ToscaTopologyTemplate_policyName FOREIGN KEY (policyName, policyVersion) REFERENCES toscapolicies (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT
16:25:57 kafka | [2024-02-21 16:23:55,316] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-48 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
16:25:57 policy-pap | retry.backoff.ms = 100
16:25:57 policy-db-migrator | --------------
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:16.540045739Z level=info msg="Executing migration" id="drop unique orgID index on alert_configuration if exists"
16:25:57 kafka | [2024-02-21 16:23:55,316] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-19 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
16:25:57 policy-pap | sasl.client.callback.handler.class = null
16:25:57 policy-db-migrator | 
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:16.540806013Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop unique orgID index on alert_configuration if exists"
16:25:57 kafka | [2024-02-21 16:23:55,316] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-11 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
16:25:57 policy-pap | sasl.jaas.config = null
16:25:57 policy-db-migrator | 
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:16.544551507Z level=info msg="Executing migration" id="extract alertmanager configuration history to separate table"
16:25:57 kafka | [2024-02-21 16:23:55,316] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-2 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
16:25:57 policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:16.545083891Z level=info msg="Migration successfully executed" id="extract alertmanager configuration history to separate table" duration=532.404µs
16:25:57 kafka | [2024-02-21 16:23:55,324] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-43 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
16:25:57 policy-db-migrator | > upgrade 1060-TscaServiceTemplatetopologyTemplateParentLocalName.sql
16:25:57 policy-pap | sasl.kerberos.min.time.before.relogin = 60000
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:16.549376168Z level=info msg="Executing migration" id="add unique index on orgID to alert_configuration"
16:25:57 kafka | [2024-02-21 16:23:55,324] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-6 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
16:25:57 policy-db-migrator | --------------
16:25:57 policy-pap | sasl.kerberos.service.name = null
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:16.550546605Z level=info msg="Migration successfully executed" id="add unique index on orgID to alert_configuration" duration=1.170387ms
16:25:57 policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05
16:25:57 kafka | [2024-02-21 16:23:55,324] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-14 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
16:25:57 policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT TscaServiceTemplatetopologyTemplateParentLocalName FOREIGN KEY (topologyTemplateParentLocalName, topologyTemplateLocalName, topologyTemplateParentKeyVersion, topologyTemplateParentKeyName) REFERENCES toscatopologytemplate (parentLocalName, localName, parentKeyVersion, parentKeyName) ON UPDATE RESTRICT ON DELETE RESTRICT
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:16.55457468Z level=info msg="Executing migration" id="add last_applied column to alert_configuration_history"
16:25:57 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8
16:25:57 kafka | [2024-02-21 16:23:55,324] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-20 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
16:25:57 policy-db-migrator | --------------
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:16.561495315Z level=info msg="Migration successfully executed" id="add last_applied column to alert_configuration_history" duration=6.919785ms
16:25:57 policy-pap | sasl.login.callback.handler.class = null
16:25:57 kafka | [2024-02-21 16:23:55,324] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-0 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
16:25:57 policy-db-migrator | 
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:16.565015447Z level=info msg="Executing migration" id="create library_element table v1"
16:25:57 policy-pap | sasl.login.class = null
16:25:57 kafka | [2024-02-21 16:23:55,324] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-44 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
16:25:57 policy-db-migrator | 
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:16.565982733Z level=info msg="Migration successfully executed" id="create library_element table v1" duration=966.925µs
16:25:57 policy-pap | sasl.login.connect.timeout.ms = null
16:25:57 kafka | [2024-02-21 16:23:55,324] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-39 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
16:25:57 policy-db-migrator | > upgrade 0100-pdp.sql
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:16.571444907Z level=info msg="Executing migration" id="add index library_element org_id-folder_id-name-kind"
16:25:57 policy-pap | sasl.login.read.timeout.ms = null
16:25:57 kafka | [2024-02-21 16:23:55,324] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-12 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
16:25:57 policy-db-migrator | --------------
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:16.573975244Z level=info msg="Migration successfully executed" id="add index library_element org_id-folder_id-name-kind" duration=2.533297ms
16:25:57 policy-pap | sasl.login.refresh.buffer.seconds = 300
16:25:57 kafka | [2024-02-21 16:23:55,324] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-45 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
16:25:57 policy-db-migrator | ALTER TABLE pdp ADD COLUMN LASTUPDATE datetime NOT NULL DEFAULT CURRENT_TIMESTAMP AFTER HEALTHY
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:16.578286051Z level=info msg="Executing migration" id="create library_element_connection table v1"
16:25:57 policy-pap | sasl.login.refresh.min.period.seconds = 60
16:25:57 kafka | [2024-02-21 16:23:55,324] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-1 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
16:25:57 policy-db-migrator | --------------
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:16.580235643Z level=info msg="Migration successfully executed" id="create library_element_connection table v1" duration=1.991773ms
16:25:57 policy-pap | sasl.login.refresh.window.factor = 0.8
16:25:57 kafka | [2024-02-21 16:23:55,324] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-5 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
16:25:57 policy-db-migrator | 
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:16.584127978Z level=info msg="Executing migration" id="add index library_element_connection element_id-kind-connection_id"
16:25:57 policy-pap | sasl.login.refresh.window.jitter = 0.05
16:25:57 kafka | [2024-02-21 16:23:55,324] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-26 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
16:25:57 policy-db-migrator | 
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:16.585247735Z level=info msg="Migration successfully executed" id="add index library_element_connection element_id-kind-connection_id" duration=1.124117ms
16:25:57 policy-pap | sasl.login.retry.backoff.max.ms = 10000
16:25:57 kafka | [2024-02-21 16:23:55,324] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-29 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
16:25:57 policy-db-migrator | > upgrade 0110-idx_tsidx1.sql
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:16.59093556Z level=info msg="Executing migration" id="add unique index library_element org_id_uid"
16:25:57 policy-pap | sasl.login.retry.backoff.ms = 100
16:25:57 kafka | [2024-02-21 16:23:55,324] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-34 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
16:25:57 policy-db-migrator | --------------
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:16.592092887Z level=info msg="Migration successfully executed" id="add unique index library_element org_id_uid" duration=1.159716ms
16:25:57 policy-pap | sasl.mechanism = GSSAPI
16:25:57 kafka | [2024-02-21 16:23:55,324] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-10 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
16:25:57 policy-db-migrator | CREATE INDEX IDX_TSIDX1 ON pdpstatistics(timeStamp, name, version)
16:25:57 policy-db-migrator | --------------
16:25:57 policy-pap | sasl.oauthbearer.clock.skew.seconds = 30
16:25:57 kafka | [2024-02-21 16:23:55,324] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-32 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
16:25:57 policy-db-migrator | 
16:25:57 policy-pap | sasl.oauthbearer.expected.audience = null
16:25:57 policy-pap | sasl.oauthbearer.expected.issuer = null
16:25:57 kafka | [2024-02-21 16:23:55,324] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-40 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:16.595452469Z level=info msg="Executing migration" id="increase max description length to 2048"
16:25:57 policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
16:25:57 policy-db-migrator | 
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:16.595525429Z level=info msg="Migration successfully executed" id="increase max description length to 2048" duration=73.23µs
16:25:57 kafka | [2024-02-21 16:23:55,325] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=13, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-13 (state.change.logger)
16:25:57 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
16:25:57 policy-db-migrator | > upgrade 0120-pk_pdpstatistics.sql
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:16.59883331Z level=info msg="Executing migration" id="alter library_element model to mediumtext"
16:25:57 kafka | [2024-02-21 16:23:55,325] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=46, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-46 (state.change.logger)
16:25:57 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
16:25:57 policy-db-migrator | --------------
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:16.599046422Z level=info msg="Migration successfully executed" id="alter library_element model to mediumtext" duration=213.022µs
16:25:57 kafka | [2024-02-21 16:23:55,325] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=9, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-9 (state.change.logger)
16:25:57 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null
16:25:57 policy-db-migrator | ALTER TABLE pdpstatistics DROP PRIMARY KEY
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:16.603283208Z level=info msg="Executing migration" id="clone move dashboard alerts to unified alerting"
16:25:57 kafka | [2024-02-21 16:23:55,325] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=42, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-42 (state.change.logger)
16:25:57 policy-pap | sasl.oauthbearer.scope.claim.name = scope
16:25:57 policy-db-migrator | --------------
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:16.603676991Z level=info msg="Migration successfully executed" id="clone move dashboard alerts to unified alerting" duration=393.863µs
16:25:57 kafka | [2024-02-21 16:23:55,325] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=21, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-21 (state.change.logger)
16:25:57 policy-pap | sasl.oauthbearer.sub.claim.name = sub
16:25:57 policy-db-migrator | 
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:16.607868197Z level=info msg="Executing migration" id="create data_keys table"
16:25:57 kafka | [2024-02-21 16:23:55,325] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=17, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-17 (state.change.logger)
16:25:57 policy-pap | sasl.oauthbearer.token.endpoint.url = null
16:25:57 policy-db-migrator | 
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:16.608874523Z level=info msg="Migration successfully executed" id="create data_keys table" duration=1.006086ms
16:25:57 kafka | [2024-02-21 16:23:55,325] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=30, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-30 (state.change.logger)
16:25:57 policy-pap | security.protocol = PLAINTEXT
16:25:57 policy-db-migrator | > upgrade 0130-pdpstatistics.sql
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:16.613349121Z level=info msg="Executing migration" id="create secrets table"
16:25:57 kafka | [2024-02-21 16:23:55,325] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=26, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-26 (state.change.logger)
16:25:57 policy-pap | security.providers = null
16:25:57 policy-db-migrator | --------------
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:16.614431298Z level=info msg="Migration successfully executed" id="create secrets table" duration=1.080197ms
16:25:57 kafka | [2024-02-21 16:23:55,325] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=5, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-5 (state.change.logger)
16:25:57 policy-pap | send.buffer.bytes = 131072
16:25:57 policy-db-migrator | ALTER TABLE pdpstatistics ADD COLUMN POLICYUNDEPLOYCOUNT BIGINT DEFAULT NULL AFTER POLICYEXECUTEDSUCCESSCOUNT, ADD COLUMN POLICYUNDEPLOYFAILCOUNT BIGINT DEFAULT NULL, ADD COLUMN POLICYUNDEPLOYSUCCESSCOUNT BIGINT DEFAULT NULL, ADD COLUMN ID BIGINT NOT NULL
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:16.618123392Z level=info msg="Executing migration" id="rename data_keys name column to id"
16:25:57 kafka | [2024-02-21 16:23:55,325] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=38, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-38 (state.change.logger)
16:25:57 policy-pap | socket.connection.setup.timeout.max.ms = 30000
16:25:57 policy-db-migrator | --------------
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:16.666919379Z level=info msg="Migration successfully executed" id="rename data_keys name column to id" duration=48.797527ms
16:25:57 kafka | [2024-02-21 16:23:55,325] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=1, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-1 (state.change.logger)
16:25:57 policy-pap | socket.connection.setup.timeout.ms = 10000
16:25:57 policy-db-migrator | 
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:16.673784013Z level=info msg="Executing migration" id="add name column into data_keys"
16:25:57 kafka | [2024-02-21 16:23:55,325] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=34, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-34 (state.change.logger)
16:25:57 policy-pap | ssl.cipher.suites = null
16:25:57 policy-db-migrator | 
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:16.685014524Z level=info msg="Migration successfully executed" id="add name column into data_keys" duration=11.232191ms
16:25:57 kafka | [2024-02-21 16:23:55,325] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=16, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-16 (state.change.logger)
16:25:57 policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
16:25:57 policy-db-migrator | > upgrade 0140-pk_pdpstatistics.sql
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:16.690239937Z level=info msg="Executing migration" id="copy data_keys id column values into name"
16:25:57 kafka | [2024-02-21 16:23:55,325] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=45, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-45 (state.change.logger)
16:25:57 policy-pap | ssl.endpoint.identification.algorithm = https
16:25:57 policy-db-migrator | --------------
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:16.690358958Z level=info msg="Migration successfully executed" id="copy data_keys id column values into name" duration=119.981µs
16:25:57 kafka | [2024-02-21 16:23:55,325] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=12, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-12 (state.change.logger)
16:25:57 policy-pap | ssl.engine.factory.class = null
16:25:57 policy-db-migrator | UPDATE pdpstatistics as p JOIN (SELECT name, version, timeStamp, ROW_NUMBER() OVER (ORDER BY timeStamp ASC) AS row_num FROM pdpstatistics GROUP BY name, version, timeStamp) AS t ON (p.name=t.name AND p.version=t.version AND p.timeStamp = t.timeStamp) SET p.id=t.row_num
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:16.693614578Z level=info msg="Executing migration" id="rename data_keys name column to label"
16:25:57 kafka | [2024-02-21 16:23:55,325] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=41, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-41 (state.change.logger)
16:25:57 policy-pap | ssl.key.password = null
16:25:57 policy-db-migrator | --------------
16:25:57 grafana | 
logger=migrator t=2024-02-21T16:23:16.735430472Z level=info msg="Migration successfully executed" id="rename data_keys name column to label" duration=41.812264ms 16:25:57 kafka | [2024-02-21 16:23:55,325] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=24, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-24 (state.change.logger) 16:25:57 policy-pap | ssl.keymanager.algorithm = SunX509 16:25:57 policy-db-migrator | 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:16.745987548Z level=info msg="Executing migration" id="rename data_keys id column back to name" 16:25:57 kafka | [2024-02-21 16:23:55,325] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=20, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-20 (state.change.logger) 16:25:57 policy-pap | ssl.keystore.certificate.chain = null 16:25:57 policy-db-migrator | -------------- 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:16.795681672Z level=info msg="Migration successfully executed" id="rename data_keys id column back to name" duration=49.691283ms 16:25:57 kafka | [2024-02-21 16:23:55,325] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=49, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-49 (state.change.logger) 16:25:57 policy-pap | 
ssl.keystore.key = null 16:25:57 policy-db-migrator | ALTER TABLE pdpstatistics ADD CONSTRAINT PK_PDPSTATISTICS PRIMARY KEY (ID, name, version) 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:16.807506666Z level=info msg="Executing migration" id="create kv_store table v1" 16:25:57 kafka | [2024-02-21 16:23:55,326] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-0 (state.change.logger) 16:25:57 policy-pap | ssl.keystore.location = null 16:25:57 policy-db-migrator | -------------- 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:16.808848465Z level=info msg="Migration successfully executed" id="create kv_store table v1" duration=1.348869ms 16:25:57 policy-pap | ssl.keystore.password = null 16:25:57 kafka | [2024-02-21 16:23:55,326] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=29, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-29 (state.change.logger) 16:25:57 policy-db-migrator | 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:16.812103805Z level=info msg="Executing migration" id="add index kv_store.org_id-namespace-key" 16:25:57 policy-pap | ssl.keystore.type = JKS 16:25:57 kafka | [2024-02-21 16:23:55,326] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=25, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], 
isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-25 (state.change.logger) 16:25:57 policy-db-migrator | 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:16.813266963Z level=info msg="Migration successfully executed" id="add index kv_store.org_id-namespace-key" duration=1.163028ms 16:25:57 policy-pap | ssl.protocol = TLSv1.3 16:25:57 kafka | [2024-02-21 16:23:55,326] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=8, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-8 (state.change.logger) 16:25:57 policy-db-migrator | > upgrade 0150-pdpstatistics.sql 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:16.818117773Z level=info msg="Executing migration" id="update dashboard_uid and panel_id from existing annotations" 16:25:57 policy-pap | ssl.provider = null 16:25:57 kafka | [2024-02-21 16:23:55,326] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=37, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-37 (state.change.logger) 16:25:57 policy-db-migrator | -------------- 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:16.818459786Z level=info msg="Migration successfully executed" id="update dashboard_uid and panel_id from existing annotations" duration=342.093µs 16:25:57 policy-pap | ssl.secure.random.implementation = null 16:25:57 kafka | [2024-02-21 16:23:55,326] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=4, 
controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-4 (state.change.logger) 16:25:57 policy-db-migrator | ALTER TABLE pdpstatistics MODIFY COLUMN timeStamp datetime(6) NULL 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:16.821321214Z level=info msg="Executing migration" id="create permission table" 16:25:57 policy-pap | ssl.trustmanager.algorithm = PKIX 16:25:57 kafka | [2024-02-21 16:23:55,326] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=33, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-33 (state.change.logger) 16:25:57 policy-db-migrator | -------------- 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:16.82226543Z level=info msg="Migration successfully executed" id="create permission table" duration=944.386µs 16:25:57 policy-pap | ssl.truststore.certificates = null 16:25:57 kafka | [2024-02-21 16:23:55,326] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=15, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-15 (state.change.logger) 16:25:57 policy-db-migrator | 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:16.828306838Z level=info msg="Executing migration" id="add unique index permission.role_id" 16:25:57 policy-pap | ssl.truststore.location = null 16:25:57 kafka | [2024-02-21 16:23:55,326] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request 
LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=48, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-48 (state.change.logger) 16:25:57 policy-db-migrator | 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:16.829466485Z level=info msg="Migration successfully executed" id="add unique index permission.role_id" duration=1.162837ms 16:25:57 policy-pap | ssl.truststore.password = null 16:25:57 kafka | [2024-02-21 16:23:55,326] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=11, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-11 (state.change.logger) 16:25:57 policy-db-migrator | > upgrade 0160-jpapdpstatistics_enginestats.sql 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:16.834761728Z level=info msg="Executing migration" id="add unique index role_id_action_scope" 16:25:57 kafka | [2024-02-21 16:23:55,326] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=44, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-44 (state.change.logger) 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:16.836780512Z level=info msg="Migration successfully executed" id="add unique index role_id_action_scope" duration=2.018874ms 16:25:57 policy-pap | ssl.truststore.type = JKS 16:25:57 policy-db-migrator | -------------- 16:25:57 kafka | [2024-02-21 16:23:55,326] TRACE [Controller id=1 epoch=1] 
Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=23, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-23 (state.change.logger) 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:16.840095372Z level=info msg="Executing migration" id="create role table" 16:25:57 policy-pap | transaction.timeout.ms = 60000 16:25:57 policy-db-migrator | ALTER TABLE jpapdpstatistics_enginestats ADD COLUMN ID BIGINT DEFAULT NULL AFTER UPTIME 16:25:57 kafka | [2024-02-21 16:23:55,326] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=19, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-19 (state.change.logger) 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:16.841468851Z level=info msg="Migration successfully executed" id="create role table" duration=1.373269ms 16:25:57 policy-pap | transactional.id = null 16:25:57 policy-db-migrator | -------------- 16:25:57 kafka | [2024-02-21 16:23:55,326] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=32, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-32 (state.change.logger) 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:16.84454969Z level=info msg="Executing migration" id="add column display_name" 16:25:57 policy-pap | value.serializer = class org.apache.kafka.common.serialization.StringSerializer 16:25:57 
policy-db-migrator | 16:25:57 kafka | [2024-02-21 16:23:55,326] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=28, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-28 (state.change.logger) 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:16.851873026Z level=info msg="Migration successfully executed" id="add column display_name" duration=7.322706ms 16:25:57 policy-pap | 16:25:57 policy-db-migrator | 16:25:57 kafka | [2024-02-21 16:23:55,326] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=7, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-7 (state.change.logger) 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:16.85725528Z level=info msg="Executing migration" id="add column group_name" 16:25:57 policy-pap | [2024-02-21T16:23:54.543+00:00|INFO|KafkaProducer|main] [Producer clientId=producer-2] Instantiated an idempotent producer. 
16:25:57 policy-db-migrator | > upgrade 0170-jpapdpstatistics_enginestats.sql 16:25:57 kafka | [2024-02-21 16:23:55,326] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=40, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-40 (state.change.logger) 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:16.865552852Z level=info msg="Migration successfully executed" id="add column group_name" duration=8.298842ms 16:25:57 policy-pap | [2024-02-21T16:23:54.546+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 16:25:57 policy-db-migrator | -------------- 16:25:57 kafka | [2024-02-21 16:23:55,327] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=3, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-3 (state.change.logger) 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:16.869516148Z level=info msg="Executing migration" id="add index role.org_id" 16:25:57 policy-pap | [2024-02-21T16:23:54.546+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 16:25:57 policy-db-migrator | UPDATE jpapdpstatistics_enginestats a 16:25:57 kafka | [2024-02-21 16:23:55,327] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=36, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-36 (state.change.logger) 16:25:57 grafana | 
logger=migrator t=2024-02-21T16:23:16.870297413Z level=info msg="Migration successfully executed" id="add index role.org_id" duration=781.265µs 16:25:57 policy-pap | [2024-02-21T16:23:54.546+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1708532634546 16:25:57 policy-db-migrator | JOIN pdpstatistics b 16:25:57 kafka | [2024-02-21 16:23:55,327] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=47, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-47 (state.change.logger) 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:16.873149691Z level=info msg="Executing migration" id="add unique index role_org_id_name" 16:25:57 policy-pap | [2024-02-21T16:23:54.546+00:00|INFO|InlineKafkaTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=c8a860fc-59e1-4b63-bded-3d8abfc38ee3, alive=false, publisher=KafkaPublisherWrapper []]]: KAFKA SINK created 16:25:57 policy-db-migrator | ON a.name = b.name AND a.version = b.version AND a.timeStamp = b.timeStamp 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:16.873931815Z level=info msg="Migration successfully executed" id="add unique index role_org_id_name" duration=781.944µs 16:25:57 kafka | [2024-02-21 16:23:55,327] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=14, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-14 (state.change.logger) 16:25:57 policy-pap | [2024-02-21T16:23:54.546+00:00|INFO|ServiceManager|main] Policy PAP starting PAP Activator 16:25:57 
policy-db-migrator | SET a.id = b.id 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:16.876986285Z level=info msg="Executing migration" id="add index role_org_id_uid" 16:25:57 kafka | [2024-02-21 16:23:55,327] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=43, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-43 (state.change.logger) 16:25:57 policy-pap | [2024-02-21T16:23:54.547+00:00|INFO|ServiceManager|main] Policy PAP starting PDP publisher 16:25:57 policy-db-migrator | -------------- 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:16.878123422Z level=info msg="Migration successfully executed" id="add index role_org_id_uid" duration=1.134317ms 16:25:57 kafka | [2024-02-21 16:23:55,327] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=10, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-10 (state.change.logger) 16:25:57 policy-pap | [2024-02-21T16:23:54.549+00:00|INFO|ServiceManager|main] Policy PAP starting Policy Notification publisher 16:25:57 policy-db-migrator | 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:16.883176054Z level=info msg="Executing migration" id="create team role table" 16:25:57 kafka | [2024-02-21 16:23:55,327] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=22, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) 
to broker 1 for partition __consumer_offsets-22 (state.change.logger) 16:25:57 policy-pap | [2024-02-21T16:23:54.550+00:00|INFO|ServiceManager|main] Policy PAP starting PDP update timers 16:25:57 policy-db-migrator | 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:16.88401707Z level=info msg="Migration successfully executed" id="create team role table" duration=841.006µs 16:25:57 kafka | [2024-02-21 16:23:55,327] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=18, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-18 (state.change.logger) 16:25:57 policy-pap | [2024-02-21T16:23:54.551+00:00|INFO|ServiceManager|main] Policy PAP starting PDP state-change timers 16:25:57 policy-db-migrator | > upgrade 0180-jpapdpstatistics_enginestats.sql 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:16.887443222Z level=info msg="Executing migration" id="add index team_role.org_id" 16:25:57 kafka | [2024-02-21 16:23:55,327] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=31, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-31 (state.change.logger) 16:25:57 policy-pap | [2024-02-21T16:23:54.556+00:00|INFO|ServiceManager|main] Policy PAP starting PDP modification lock 16:25:57 policy-db-migrator | -------------- 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:16.888664159Z level=info msg="Migration successfully executed" id="add index team_role.org_id" duration=1.220627ms 16:25:57 kafka | [2024-02-21 16:23:55,327] TRACE [Controller id=1 epoch=1] Sending 
become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=27, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-27 (state.change.logger) 16:25:57 policy-pap | [2024-02-21T16:23:54.556+00:00|INFO|ServiceManager|main] Policy PAP starting PDP modification requests 16:25:57 policy-db-migrator | ALTER TABLE jpapdpstatistics_enginestats DROP COLUMN timeStamp 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:16.894396895Z level=info msg="Executing migration" id="add unique index team_role_org_id_team_id_role_id" 16:25:57 kafka | [2024-02-21 16:23:55,327] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=39, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-39 (state.change.logger) 16:25:57 policy-pap | [2024-02-21T16:23:54.558+00:00|INFO|TimerManager|Thread-9] timer manager update started 16:25:57 policy-db-migrator | -------------- 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:16.896529198Z level=info msg="Migration successfully executed" id="add unique index team_role_org_id_team_id_role_id" duration=2.130933ms 16:25:57 kafka | [2024-02-21 16:23:55,327] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=6, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-6 (state.change.logger) 16:25:57 policy-pap | [2024-02-21T16:23:54.559+00:00|INFO|ServiceManager|main] 
Policy PAP starting PDP expiration timer 16:25:57 policy-db-migrator | 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:16.900786535Z level=info msg="Executing migration" id="add index team_role.team_id" 16:25:57 kafka | [2024-02-21 16:23:55,327] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=35, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-35 (state.change.logger) 16:25:57 policy-pap | [2024-02-21T16:23:54.561+00:00|INFO|ServiceManager|main] Policy PAP started 16:25:57 policy-db-migrator | 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:16.902506326Z level=info msg="Migration successfully executed" id="add index team_role.team_id" duration=1.719181ms 16:25:57 kafka | [2024-02-21 16:23:55,327] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=2, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-2 (state.change.logger) 16:25:57 policy-pap | [2024-02-21T16:23:54.562+00:00|INFO|TimerManager|Thread-10] timer manager state-change started 16:25:57 policy-db-migrator | > upgrade 0190-jpapolicyaudit.sql 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:16.905817047Z level=info msg="Executing migration" id="create user role table" 16:25:57 kafka | [2024-02-21 16:23:55,327] INFO [Controller id=1 epoch=1] Sending LeaderAndIsr request to broker 1 with 50 become-leader and 0 become-follower partitions (state.change.logger) 16:25:57 policy-pap | [2024-02-21T16:23:54.562+00:00|INFO|PolicyPapApplication|main] Started PolicyPapApplication in 11.353 seconds 
(process running for 12.032) 16:25:57 policy-db-migrator | -------------- 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:16.906573752Z level=info msg="Migration successfully executed" id="create user role table" duration=756.565µs 16:25:57 kafka | [2024-02-21 16:23:55,328] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet(1) for 50 partitions (state.change.logger) 16:25:57 policy-pap | [2024-02-21T16:23:55.022+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 2 : {policy-pdp-pap=UNKNOWN_TOPIC_OR_PARTITION} 16:25:57 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapolicyaudit (ACTION INT DEFAULT NULL, PDPGROUP VARCHAR(255) NULL, PDPTYPE VARCHAR(255) NULL, TIMESTAMP datetime DEFAULT NULL, USER VARCHAR(255) NULL, ID BIGINT NOT NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_JPAPOLICYAUDIT (ID, name, version)) 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:16.912253068Z level=info msg="Executing migration" id="add index user_role.org_id" 16:25:57 kafka | [2024-02-21 16:23:55,323] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition policy-pdp-pap-0 (state.change.logger) 16:25:57 policy-pap | [2024-02-21T16:23:55.027+00:00|INFO|Metadata|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Cluster ID: uKz8K1qZQP67IEMis280Uw 16:25:57 policy-db-migrator | -------------- 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:16.913812267Z level=info msg="Migration successfully executed" id="add index user_role.org_id" duration=1.558539ms 16:25:57 kafka | [2024-02-21 16:23:55,330] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-32 from NewReplica to OnlineReplica (state.change.logger) 16:25:57 policy-pap | 
[2024-02-21T16:23:55.027+00:00|INFO|Metadata|kafka-producer-network-thread | producer-2] [Producer clientId=producer-2] Cluster ID: uKz8K1qZQP67IEMis280Uw 16:25:57 policy-db-migrator | 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:16.917745502Z level=info msg="Executing migration" id="add unique index user_role_org_id_user_id_role_id" 16:25:57 kafka | [2024-02-21 16:23:55,330] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-5 from NewReplica to OnlineReplica (state.change.logger) 16:25:57 policy-pap | [2024-02-21T16:23:55.028+00:00|INFO|Metadata|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Cluster ID: uKz8K1qZQP67IEMis280Uw 16:25:57 policy-db-migrator | 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:16.919390522Z level=info msg="Migration successfully executed" id="add unique index user_role_org_id_user_id_role_id" duration=1.64685ms 16:25:57 kafka | [2024-02-21 16:23:55,330] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-44 from NewReplica to OnlineReplica (state.change.logger) 16:25:57 policy-pap | [2024-02-21T16:23:55.066+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-66b9586c-d4bb-4933-993d-6431c832b08c-3, groupId=66b9586c-d4bb-4933-993d-6431c832b08c] Error while fetching metadata with correlation id 2 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 16:25:57 policy-db-migrator | > upgrade 0200-JpaPolicyAuditIndex_timestamp.sql 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:16.92367273Z level=info msg="Executing migration" id="add index user_role.user_id" 16:25:57 kafka | [2024-02-21 16:23:55,330] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-48 from NewReplica to OnlineReplica (state.change.logger) 16:25:57 policy-pap | [2024-02-21T16:23:55.067+00:00|INFO|Metadata|KAFKA-source-policy-pdp-pap] [Consumer 
clientId=consumer-66b9586c-d4bb-4933-993d-6431c832b08c-3, groupId=66b9586c-d4bb-4933-993d-6431c832b08c] Cluster ID: uKz8K1qZQP67IEMis280Uw 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:16.924799487Z level=info msg="Migration successfully executed" id="add index user_role.user_id" duration=1.126726ms 16:25:57 kafka | [2024-02-21 16:23:55,330] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-46 from NewReplica to OnlineReplica (state.change.logger) 16:25:57 policy-db-migrator | -------------- 16:25:57 policy-pap | [2024-02-21T16:23:55.128+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 4 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:16.931897382Z level=info msg="Executing migration" id="create builtin role table" 16:25:57 kafka | [2024-02-21 16:23:55,330] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-20 from NewReplica to OnlineReplica (state.change.logger) 16:25:57 policy-db-migrator | CREATE INDEX JpaPolicyAuditIndex_timestamp ON jpapolicyaudit(TIMESTAMP) 16:25:57 policy-pap | [2024-02-21T16:23:55.156+00:00|INFO|TransactionManager|kafka-producer-network-thread | producer-2] [Producer clientId=producer-2] ProducerId set to 0 with epoch 0 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:16.932900918Z level=info msg="Migration successfully executed" id="create builtin role table" duration=1.004126ms 16:25:57 kafka | [2024-02-21 16:23:55,330] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-43 from NewReplica to OnlineReplica (state.change.logger) 16:25:57 policy-db-migrator | -------------- 16:25:57 policy-pap | [2024-02-21T16:23:55.157+00:00|INFO|TransactionManager|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] ProducerId set to 1 
with epoch 0 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:16.940571416Z level=info msg="Executing migration" id="add index builtin_role.role_id" 16:25:57 kafka | [2024-02-21 16:23:55,330] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-24 from NewReplica to OnlineReplica (state.change.logger) 16:25:57 policy-db-migrator | 16:25:57 policy-pap | [2024-02-21T16:23:55.208+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-66b9586c-d4bb-4933-993d-6431c832b08c-3, groupId=66b9586c-d4bb-4933-993d-6431c832b08c] Error while fetching metadata with correlation id 4 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:16.942211516Z level=info msg="Migration successfully executed" id="add index builtin_role.role_id" duration=1.63693ms 16:25:57 kafka | [2024-02-21 16:23:55,330] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-6 from NewReplica to OnlineReplica (state.change.logger) 16:25:57 policy-db-migrator | 16:25:57 policy-pap | [2024-02-21T16:23:55.257+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 6 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:16.949524902Z level=info msg="Executing migration" id="add index builtin_role.name" 16:25:57 kafka | [2024-02-21 16:23:55,330] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-18 from NewReplica to OnlineReplica (state.change.logger) 16:25:57 policy-db-migrator | > upgrade 0210-sequence.sql 16:25:57 policy-pap | [2024-02-21T16:23:55.319+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-66b9586c-d4bb-4933-993d-6431c832b08c-3, groupId=66b9586c-d4bb-4933-993d-6431c832b08c] Error while fetching metadata with correlation id 6 : 
{policy-pdp-pap=LEADER_NOT_AVAILABLE} 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:16.950483468Z level=info msg="Migration successfully executed" id="add index builtin_role.name" duration=962.766µs 16:25:57 kafka | [2024-02-21 16:23:55,330] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-21 from NewReplica to OnlineReplica (state.change.logger) 16:25:57 policy-db-migrator | -------------- 16:25:57 policy-pap | [2024-02-21T16:23:55.380+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 8 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:16.955649861Z level=info msg="Executing migration" id="Add column org_id to builtin_role table" 16:25:57 kafka | [2024-02-21 16:23:55,330] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-1 from NewReplica to OnlineReplica (state.change.logger) 16:25:57 policy-db-migrator | CREATE TABLE IF NOT EXISTS sequence (SEQ_NAME VARCHAR(50) NOT NULL, SEQ_COUNT DECIMAL(38) DEFAULT NULL, PRIMARY KEY PK_SEQUENCE (SEQ_NAME)) 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:16.961624799Z level=info msg="Migration successfully executed" id="Add column org_id to builtin_role table" duration=5.971618ms 16:25:57 policy-pap | [2024-02-21T16:23:55.913+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Discovered group coordinator kafka:9092 (id: 2147483646 rack: null) 16:25:57 kafka | [2024-02-21 16:23:55,330] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-14 from NewReplica to OnlineReplica (state.change.logger) 16:25:57 policy-db-migrator | -------------- 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:16.967429576Z level=info msg="Executing migration" id="add index 
builtin_role.org_id" 16:25:57 policy-pap | [2024-02-21T16:23:55.922+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] (Re-)joining group 16:25:57 kafka | [2024-02-21 16:23:55,330] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-34 from NewReplica to OnlineReplica (state.change.logger) 16:25:57 policy-db-migrator | 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:16.968388362Z level=info msg="Migration successfully executed" id="add index builtin_role.org_id" duration=959.946µs 16:25:57 policy-pap | [2024-02-21T16:23:55.956+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-66b9586c-d4bb-4933-993d-6431c832b08c-3, groupId=66b9586c-d4bb-4933-993d-6431c832b08c] Discovered group coordinator kafka:9092 (id: 2147483646 rack: null) 16:25:57 kafka | [2024-02-21 16:23:55,330] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-16 from NewReplica to OnlineReplica (state.change.logger) 16:25:57 policy-db-migrator | 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:16.977308048Z level=info msg="Executing migration" id="add unique index builtin_role_org_id_role_id_role" 16:25:57 policy-pap | [2024-02-21T16:23:55.958+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-66b9586c-d4bb-4933-993d-6431c832b08c-3, groupId=66b9586c-d4bb-4933-993d-6431c832b08c] (Re-)joining group 16:25:57 kafka | [2024-02-21 16:23:55,330] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-29 from NewReplica to OnlineReplica (state.change.logger) 16:25:57 policy-db-migrator | > upgrade 0220-sequence.sql 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:16.978452705Z level=info msg="Migration successfully executed" id="add unique index builtin_role_org_id_role_id_role" duration=1.144737ms 16:25:57 policy-pap | 
[2024-02-21T16:23:55.964+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Request joining group due to: need to re-join with the given member-id: consumer-policy-pap-4-acf8e027-b4f1-4dd8-99c9-6ad7d643cff4 16:25:57 policy-db-migrator | -------------- 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:16.983376386Z level=info msg="Executing migration" id="Remove unique index role_org_id_uid" 16:25:57 kafka | [2024-02-21 16:23:55,330] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-11 from NewReplica to OnlineReplica (state.change.logger) 16:25:57 policy-pap | [2024-02-21T16:23:55.966+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Request joining group due to: rebalance failed due to 'The group member needs to have a valid member id before actually entering a consumer group.' (MemberIdRequiredException) 16:25:57 policy-db-migrator | INSERT INTO sequence(SEQ_NAME, SEQ_COUNT) VALUES('SEQ_GEN', (SELECT IFNULL(max(id),0) FROM pdpstatistics)) 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:16.984458813Z level=info msg="Migration successfully executed" id="Remove unique index role_org_id_uid" duration=1.082477ms 16:25:57 kafka | [2024-02-21 16:23:55,330] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-0 from NewReplica to OnlineReplica (state.change.logger) 16:25:57 policy-pap | [2024-02-21T16:23:55.966+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] (Re-)joining group 16:25:57 policy-db-migrator | -------------- 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:16.987567493Z level=info msg="Executing migration" id="add unique index role.uid" 16:25:57 kafka | [2024-02-21 16:23:55,330] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for 
partition __consumer_offsets-22 from NewReplica to OnlineReplica (state.change.logger) 16:25:57 policy-pap | [2024-02-21T16:23:55.966+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-66b9586c-d4bb-4933-993d-6431c832b08c-3, groupId=66b9586c-d4bb-4933-993d-6431c832b08c] Request joining group due to: need to re-join with the given member-id: consumer-66b9586c-d4bb-4933-993d-6431c832b08c-3-bc875603-de5e-4ef0-a728-f230d256911f 16:25:57 policy-db-migrator | 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:16.98869954Z level=info msg="Migration successfully executed" id="add unique index role.uid" duration=1.131767ms 16:25:57 kafka | [2024-02-21 16:23:55,330] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-47 from NewReplica to OnlineReplica (state.change.logger) 16:25:57 policy-pap | [2024-02-21T16:23:55.966+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-66b9586c-d4bb-4933-993d-6431c832b08c-3, groupId=66b9586c-d4bb-4933-993d-6431c832b08c] Request joining group due to: rebalance failed due to 'The group member needs to have a valid member id before actually entering a consumer group.' 
(MemberIdRequiredException) 16:25:57 policy-db-migrator | 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:16.991801109Z level=info msg="Executing migration" id="create seed assignment table" 16:25:57 kafka | [2024-02-21 16:23:55,330] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-36 from NewReplica to OnlineReplica (state.change.logger) 16:25:57 policy-pap | [2024-02-21T16:23:55.966+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-66b9586c-d4bb-4933-993d-6431c832b08c-3, groupId=66b9586c-d4bb-4933-993d-6431c832b08c] (Re-)joining group 16:25:57 policy-db-migrator | > upgrade 0100-jpatoscapolicy_targets.sql 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:16.992606224Z level=info msg="Migration successfully executed" id="create seed assignment table" duration=804.845µs 16:25:57 kafka | [2024-02-21 16:23:55,330] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-28 from NewReplica to OnlineReplica (state.change.logger) 16:25:57 policy-pap | [2024-02-21T16:23:58.997+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Successfully joined group with generation Generation{generationId=1, memberId='consumer-policy-pap-4-acf8e027-b4f1-4dd8-99c9-6ad7d643cff4', protocol='range'} 16:25:57 policy-db-migrator | -------------- 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:16.997423664Z level=info msg="Executing migration" id="add unique index builtin_role_role_name" 16:25:57 kafka | [2024-02-21 16:23:55,330] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-42 from NewReplica to OnlineReplica (state.change.logger) 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:16.998506012Z level=info msg="Migration successfully executed" id="add unique index builtin_role_role_name" duration=1.083158ms 16:25:57 policy-pap | 
[2024-02-21T16:23:59.007+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Finished assignment for group at generation 1: {consumer-policy-pap-4-acf8e027-b4f1-4dd8-99c9-6ad7d643cff4=Assignment(partitions=[policy-pdp-pap-0])} 16:25:57 policy-db-migrator | ALTER TABLE jpatoscapolicy_targets ADD COLUMN toscaPolicyName VARCHAR(120) NOT NULL, ADD COLUMN toscaPolicyVersion VARCHAR(20) NOT NULL, ADD CONSTRAINT PK_JPATOSCAPOLICY_TARGETS PRIMARY KEY (toscaPolicyName, toscaPolicyVersion) 16:25:57 kafka | [2024-02-21 16:23:55,330] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-9 from NewReplica to OnlineReplica (state.change.logger) 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:17.005293303Z level=info msg="Executing migration" id="add column hidden to role table" 16:25:57 policy-db-migrator | -------------- 16:25:57 kafka | [2024-02-21 16:23:55,330] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-37 from NewReplica to OnlineReplica (state.change.logger) 16:25:57 policy-pap | [2024-02-21T16:23:59.008+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-66b9586c-d4bb-4933-993d-6431c832b08c-3, groupId=66b9586c-d4bb-4933-993d-6431c832b08c] Successfully joined group with generation Generation{generationId=1, memberId='consumer-66b9586c-d4bb-4933-993d-6431c832b08c-3-bc875603-de5e-4ef0-a728-f230d256911f', protocol='range'} 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:17.01294946Z level=info msg="Migration successfully executed" id="add column hidden to role table" duration=7.656397ms 16:25:57 kafka | [2024-02-21 16:23:55,331] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-13 from NewReplica to OnlineReplica (state.change.logger) 16:25:57 policy-pap | 
[2024-02-21T16:23:59.008+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-66b9586c-d4bb-4933-993d-6431c832b08c-3, groupId=66b9586c-d4bb-4933-993d-6431c832b08c] Finished assignment for group at generation 1: {consumer-66b9586c-d4bb-4933-993d-6431c832b08c-3-bc875603-de5e-4ef0-a728-f230d256911f=Assignment(partitions=[policy-pdp-pap-0])} 16:25:57 policy-db-migrator | 16:25:57 kafka | [2024-02-21 16:23:55,331] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-30 from NewReplica to OnlineReplica (state.change.logger) 16:25:57 policy-pap | [2024-02-21T16:23:59.034+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-66b9586c-d4bb-4933-993d-6431c832b08c-3, groupId=66b9586c-d4bb-4933-993d-6431c832b08c] Successfully synced group in generation Generation{generationId=1, memberId='consumer-66b9586c-d4bb-4933-993d-6431c832b08c-3-bc875603-de5e-4ef0-a728-f230d256911f', protocol='range'} 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:17.018498526Z level=info msg="Executing migration" id="permission kind migration" 16:25:57 policy-db-migrator | 16:25:57 kafka | [2024-02-21 16:23:55,331] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-35 from NewReplica to OnlineReplica (state.change.logger) 16:25:57 policy-pap | [2024-02-21T16:23:59.035+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Successfully synced group in generation Generation{generationId=1, memberId='consumer-policy-pap-4-acf8e027-b4f1-4dd8-99c9-6ad7d643cff4', protocol='range'} 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:17.027249074Z level=info msg="Migration successfully executed" id="permission kind migration" duration=8.750327ms 16:25:57 policy-db-migrator | > upgrade 0110-jpatoscapolicytype_targets.sql 16:25:57 kafka | [2024-02-21 16:23:55,331] TRACE [Controller id=1 
epoch=1] Changed state of replica 1 for partition __consumer_offsets-39 from NewReplica to OnlineReplica (state.change.logger) 16:25:57 policy-pap | [2024-02-21T16:23:59.035+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-66b9586c-d4bb-4933-993d-6431c832b08c-3, groupId=66b9586c-d4bb-4933-993d-6431c832b08c] Notifying assignor about the new Assignment(partitions=[policy-pdp-pap-0]) 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:17.032574107Z level=info msg="Executing migration" id="permission attribute migration" 16:25:57 policy-db-migrator | -------------- 16:25:57 kafka | [2024-02-21 16:23:55,331] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-12 from NewReplica to OnlineReplica (state.change.logger) 16:25:57 policy-pap | [2024-02-21T16:23:59.035+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Notifying assignor about the new Assignment(partitions=[policy-pdp-pap-0]) 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:17.040972191Z level=info msg="Migration successfully executed" id="permission attribute migration" duration=8.397274ms 16:25:57 policy-db-migrator | ALTER TABLE jpatoscapolicytype_targets ADD COLUMN toscaPolicyTypeName VARCHAR(120) NOT NULL, ADD COLUMN toscaPolicyTypeVersion VARCHAR(20) NOT NULL, ADD CONSTRAINT PK_JPATOSCAPOLICYTYPE_TARGETS PRIMARY KEY (toscaPolicyTypeName, toscaPolicyTypeVersion) 16:25:57 policy-pap | [2024-02-21T16:23:59.041+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-66b9586c-d4bb-4933-993d-6431c832b08c-3, groupId=66b9586c-d4bb-4933-993d-6431c832b08c] Adding newly assigned partitions: policy-pdp-pap-0 16:25:57 policy-db-migrator | -------------- 16:25:57 kafka | [2024-02-21 16:23:55,331] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-27 from NewReplica to OnlineReplica 
(state.change.logger) 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:17.046145152Z level=info msg="Executing migration" id="permission identifier migration" 16:25:57 policy-pap | [2024-02-21T16:23:59.041+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Adding newly assigned partitions: policy-pdp-pap-0 16:25:57 policy-db-migrator | 16:25:57 kafka | [2024-02-21 16:23:55,331] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-45 from NewReplica to OnlineReplica (state.change.logger) 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:17.05394154Z level=info msg="Migration successfully executed" id="permission identifier migration" duration=7.798318ms 16:25:57 policy-pap | [2024-02-21T16:23:59.059+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-66b9586c-d4bb-4933-993d-6431c832b08c-3, groupId=66b9586c-d4bb-4933-993d-6431c832b08c] Found no committed offset for partition policy-pdp-pap-0 16:25:57 policy-db-migrator | 16:25:57 kafka | [2024-02-21 16:23:55,331] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-19 from NewReplica to OnlineReplica (state.change.logger) 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:17.059641507Z level=info msg="Executing migration" id="add permission identifier index" 16:25:57 policy-pap | [2024-02-21T16:23:59.059+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Found no committed offset for partition policy-pdp-pap-0 16:25:57 policy-db-migrator | > upgrade 0120-toscatrigger.sql 16:25:57 kafka | [2024-02-21 16:23:55,331] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-49 from NewReplica to OnlineReplica (state.change.logger) 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:17.060777409Z level=info 
msg="Migration successfully executed" id="add permission identifier index" duration=1.135722ms 16:25:57 policy-pap | [2024-02-21T16:23:59.076+00:00|INFO|SubscriptionState|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-66b9586c-d4bb-4933-993d-6431c832b08c-3, groupId=66b9586c-d4bb-4933-993d-6431c832b08c] Resetting offset for partition policy-pdp-pap-0 to position FetchPosition{offset=1, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[kafka:9092 (id: 1 rack: null)], epoch=0}}. 16:25:57 policy-db-migrator | -------------- 16:25:57 kafka | [2024-02-21 16:23:55,331] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-40 from NewReplica to OnlineReplica (state.change.logger) 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:17.063631688Z level=info msg="Executing migration" id="create query_history table v1" 16:25:57 policy-pap | [2024-02-21T16:23:59.076+00:00|INFO|SubscriptionState|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Resetting offset for partition policy-pdp-pap-0 to position FetchPosition{offset=1, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[kafka:9092 (id: 1 rack: null)], epoch=0}}. 
16:25:57 policy-db-migrator | DROP TABLE IF EXISTS toscatrigger 16:25:57 kafka | [2024-02-21 16:23:55,331] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-41 from NewReplica to OnlineReplica (state.change.logger) 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:17.064567107Z level=info msg="Migration successfully executed" id="create query_history table v1" duration=935.429µs 16:25:57 policy-pap | [2024-02-21T16:24:04.470+00:00|INFO|[/policy/pap/v1]|http-nio-6969-exec-3] Initializing Spring DispatcherServlet 'dispatcherServlet' 16:25:57 policy-db-migrator | -------------- 16:25:57 kafka | [2024-02-21 16:23:55,331] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-38 from NewReplica to OnlineReplica (state.change.logger) 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:17.070670788Z level=info msg="Executing migration" id="add index query_history.org_id-created_by-datasource_uid" 16:25:57 policy-pap | [2024-02-21T16:24:04.470+00:00|INFO|DispatcherServlet|http-nio-6969-exec-3] Initializing Servlet 'dispatcherServlet' 16:25:57 kafka | [2024-02-21 16:23:55,331] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-8 from NewReplica to OnlineReplica (state.change.logger) 16:25:57 policy-db-migrator | 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:17.072729918Z level=info msg="Migration successfully executed" id="add index query_history.org_id-created_by-datasource_uid" duration=2.06305ms 16:25:57 policy-pap | [2024-02-21T16:24:04.473+00:00|INFO|DispatcherServlet|http-nio-6969-exec-3] Completed initialization in 3 ms 16:25:57 kafka | [2024-02-21 16:23:55,331] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-7 from NewReplica to OnlineReplica (state.change.logger) 16:25:57 policy-db-migrator | 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:17.080340485Z level=info msg="Executing 
migration" id="alter table query_history alter column created_by type to bigint" 16:25:57 policy-pap | [2024-02-21T16:24:16.252+00:00|INFO|OrderedServiceImpl|KAFKA-source-policy-pdp-pap] ***** OrderedServiceImpl implementers: 16:25:57 kafka | [2024-02-21 16:23:55,331] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-33 from NewReplica to OnlineReplica (state.change.logger) 16:25:57 policy-db-migrator | > upgrade 0130-jpatoscapolicytype_triggers.sql 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:17.080472416Z level=info msg="Migration successfully executed" id="alter table query_history alter column created_by type to bigint" duration=135.911µs 16:25:57 policy-pap | [] 16:25:57 kafka | [2024-02-21 16:23:55,331] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-25 from NewReplica to OnlineReplica (state.change.logger) 16:25:57 policy-db-migrator | -------------- 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:17.086666728Z level=info msg="Executing migration" id="rbac disabled migrator" 16:25:57 policy-pap | [2024-02-21T16:24:16.253+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 16:25:57 kafka | [2024-02-21 16:23:55,331] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-31 from NewReplica to OnlineReplica (state.change.logger) 16:25:57 policy-db-migrator | ALTER TABLE jpatoscapolicytype_triggers MODIFY COLUMN triggers LONGBLOB 16:25:57 policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"a37b5cb3-4e17-4754-961b-0ca37490e58f","timestampMs":1708532656218,"name":"apex-ae39a0b7-8e72-45eb-a1aa-d0752cfdffdc","pdpGroup":"defaultGroup"} 16:25:57 kafka | [2024-02-21 16:23:55,331] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-23 from NewReplica to OnlineReplica (state.change.logger) 
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:17.086705628Z level=info msg="Migration successfully executed" id="rbac disabled migrator" duration=52.69µs 16:25:57 policy-db-migrator | -------------- 16:25:57 kafka | [2024-02-21 16:23:55,331] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-10 from NewReplica to OnlineReplica (state.change.logger) 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:17.090007551Z level=info msg="Executing migration" id="teams permissions migration" 16:25:57 policy-pap | [2024-02-21T16:24:16.253+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 16:25:57 policy-db-migrator | 16:25:57 kafka | [2024-02-21 16:23:55,331] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-2 from NewReplica to OnlineReplica (state.change.logger) 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:17.090500296Z level=info msg="Migration successfully executed" id="teams permissions migration" duration=494.405µs 16:25:57 policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"a37b5cb3-4e17-4754-961b-0ca37490e58f","timestampMs":1708532656218,"name":"apex-ae39a0b7-8e72-45eb-a1aa-d0752cfdffdc","pdpGroup":"defaultGroup"} 16:25:57 policy-db-migrator | 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:17.093691709Z level=info msg="Executing migration" id="dashboard permissions" 16:25:57 policy-pap | [2024-02-21T16:24:16.263+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-pdp-pap] no listeners for autonomous message of type PdpStatus 16:25:57 kafka | [2024-02-21 16:23:55,331] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-17 from NewReplica to OnlineReplica (state.change.logger) 16:25:57 policy-db-migrator | > upgrade 0140-toscaparameter.sql 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:17.094355145Z 
level=info msg="Migration successfully executed" id="dashboard permissions" duration=664.646µs
16:25:57 policy-pap | [2024-02-21T16:24:16.368+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-ae39a0b7-8e72-45eb-a1aa-d0752cfdffdc PdpUpdate starting
16:25:57 kafka | [2024-02-21 16:23:55,331] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-4 from NewReplica to OnlineReplica (state.change.logger)
16:25:57 policy-db-migrator | --------------
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:17.097782199Z level=info msg="Executing migration" id="dashboard permissions uid scopes"
16:25:57 policy-pap | [2024-02-21T16:24:16.368+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-ae39a0b7-8e72-45eb-a1aa-d0752cfdffdc PdpUpdate starting listener
16:25:57 kafka | [2024-02-21 16:23:55,331] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-15 from NewReplica to OnlineReplica (state.change.logger)
16:25:57 policy-db-migrator | DROP TABLE IF EXISTS toscaparameter
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:17.098455556Z level=info msg="Migration successfully executed" id="dashboard permissions uid scopes" duration=673.577µs
16:25:57 policy-pap | [2024-02-21T16:24:16.368+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-ae39a0b7-8e72-45eb-a1aa-d0752cfdffdc PdpUpdate starting timer
16:25:57 kafka | [2024-02-21 16:23:55,331] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-26 from NewReplica to OnlineReplica (state.change.logger)
16:25:57 policy-db-migrator | --------------
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:17.102583338Z level=info msg="Executing migration" id="drop managed folder create actions"
16:25:57 policy-db-migrator | 
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:17.10278939Z level=info msg="Migration successfully executed" id="drop managed folder create actions" duration=206.362µs
16:25:57 policy-pap | [2024-02-21T16:24:16.369+00:00|INFO|TimerManager|KAFKA-source-policy-heartbeat] update timer registered Timer [name=f5f1c365-585c-4d0b-8cb8-9cf6cb4aaae3, expireMs=1708532686369]
16:25:57 kafka | [2024-02-21 16:23:55,331] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-3 from NewReplica to OnlineReplica (state.change.logger)
16:25:57 policy-db-migrator | 
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:17.106055582Z level=info msg="Executing migration" id="alerting notification permissions"
16:25:57 policy-pap | [2024-02-21T16:24:16.371+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-ae39a0b7-8e72-45eb-a1aa-d0752cfdffdc PdpUpdate starting enqueue
16:25:57 kafka | [2024-02-21 16:23:55,331] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger)
16:25:57 policy-db-migrator | > upgrade 0150-toscaproperty.sql
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:17.106386075Z level=info msg="Migration successfully executed" id="alerting notification permissions" duration=330.703µs
16:25:57 policy-pap | [2024-02-21T16:24:16.372+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-ae39a0b7-8e72-45eb-a1aa-d0752cfdffdc PdpUpdate started
16:25:57 kafka | [2024-02-21 16:23:55,339] INFO [Broker id=1] Finished LeaderAndIsr request in 224ms correlationId 1 from controller 1 for 1 partitions (state.change.logger)
16:25:57 policy-db-migrator | --------------
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:17.109464326Z level=info msg="Executing migration" id="create query_history_star table v1"
16:25:57 policy-pap | [2024-02-21T16:24:16.371+00:00|INFO|TimerManager|Thread-9] update timer waiting 29998ms Timer [name=f5f1c365-585c-4d0b-8cb8-9cf6cb4aaae3, expireMs=1708532686369]
16:25:57 kafka | [2024-02-21 16:23:55,355] TRACE [Controller id=1 epoch=1] Received response LeaderAndIsrResponseData(errorCode=0, partitionErrors=[], topics=[LeaderAndIsrTopicError(topicId=w964DUBQRBenrkUFkcM3Zw, partitionErrors=[LeaderAndIsrPartitionError(topicName='', partitionIndex=0, errorCode=0)])]) for request LEADER_AND_ISR with correlation id 1 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger)
16:25:57 policy-db-migrator | DROP TABLE IF EXISTS jpatoscaproperty_constraints
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:17.110336275Z level=info msg="Migration successfully executed" id="create query_history_star table v1" duration=871.579µs
16:25:57 policy-pap | [2024-02-21T16:24:16.374+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap]
16:25:57 kafka | [2024-02-21 16:23:55,366] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='policy-pdp-pap', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition policy-pdp-pap-0 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
16:25:57 policy-db-migrator | --------------
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:17.11482774Z level=info msg="Executing migration" id="add index query_history.user_id-query_uid"
16:25:57 policy-pap | {"source":"pap-c468fa43-447b-4ce3-b7f9-e0c2bed1c584","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"messageName":"PDP_UPDATE","requestId":"f5f1c365-585c-4d0b-8cb8-9cf6cb4aaae3","timestampMs":1708532656344,"name":"apex-ae39a0b7-8e72-45eb-a1aa-d0752cfdffdc","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
16:25:57 kafka | [2024-02-21 16:23:55,368] INFO [Broker id=1] Add 1 partitions and deleted 0 partitions from metadata cache in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
16:25:57 policy-db-migrator | 
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:17.116113863Z level=info msg="Migration successfully executed" id="add index query_history.user_id-query_uid" duration=1.284703ms
16:25:57 policy-pap | [2024-02-21T16:24:16.410+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat]
16:25:57 kafka | [2024-02-21 16:23:55,372] TRACE [Controller id=1 epoch=1] Received response UpdateMetadataResponseData(errorCode=0) for request UPDATE_METADATA with correlation id 2 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger)
16:25:57 policy-db-migrator | --------------
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:17.121245204Z level=info msg="Executing migration" id="add column org_id in query_history_star"
16:25:57 policy-pap | {"source":"pap-c468fa43-447b-4ce3-b7f9-e0c2bed1c584","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"messageName":"PDP_UPDATE","requestId":"f5f1c365-585c-4d0b-8cb8-9cf6cb4aaae3","timestampMs":1708532656344,"name":"apex-ae39a0b7-8e72-45eb-a1aa-d0752cfdffdc","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
16:25:57 kafka | [2024-02-21 16:23:55,375] INFO [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 for 50 partitions (state.change.logger)
16:25:57 policy-db-migrator | DROP TABLE IF EXISTS jpatoscaproperty_metadata
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:17.12979886Z level=info msg="Migration successfully executed" id="add column org_id in query_history_star" duration=8.548906ms
16:25:57 policy-pap | [2024-02-21T16:24:16.411+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_UPDATE
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:17.134201534Z level=info msg="Executing migration" id="alter table query_history_star_mig column user_id type to bigint"
16:25:57 policy-db-migrator | --------------
16:25:57 policy-pap | [2024-02-21T16:24:16.413+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
16:25:57 kafka | [2024-02-21 16:23:55,375] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=13, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:17.134288485Z level=info msg="Migration successfully executed" id="alter table query_history_star_mig column user_id type to bigint" duration=88.251µs
16:25:57 policy-db-migrator | 
16:25:57 policy-pap | {"source":"pap-c468fa43-447b-4ce3-b7f9-e0c2bed1c584","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"messageName":"PDP_UPDATE","requestId":"f5f1c365-585c-4d0b-8cb8-9cf6cb4aaae3","timestampMs":1708532656344,"name":"apex-ae39a0b7-8e72-45eb-a1aa-d0752cfdffdc","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
16:25:57 kafka | [2024-02-21 16:23:55,375] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=46, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:17.137424986Z level=info msg="Executing migration" id="create correlation table v1"
16:25:57 policy-db-migrator | --------------
16:25:57 policy-pap | [2024-02-21T16:24:16.414+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_UPDATE
16:25:57 kafka | [2024-02-21 16:23:55,375] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=9, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:17.138519878Z level=info msg="Migration successfully executed" id="create correlation table v1" duration=1.094161ms
16:25:57 policy-db-migrator | DROP TABLE IF EXISTS toscaproperty
16:25:57 policy-pap | [2024-02-21T16:24:16.447+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat]
16:25:57 kafka | [2024-02-21 16:23:55,375] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=42, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:17.142509587Z level=info msg="Executing migration" id="add index correlations.uid"
16:25:57 policy-db-migrator | --------------
16:25:57 policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"fbba62bc-ce73-4500-9710-efcf835e3651","timestampMs":1708532656423,"name":"apex-ae39a0b7-8e72-45eb-a1aa-d0752cfdffdc","pdpGroup":"defaultGroup"}
16:25:57 kafka | [2024-02-21 16:23:55,375] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=21, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:17.14379093Z level=info msg="Migration successfully executed" id="add index correlations.uid" duration=1.282263ms
16:25:57 policy-db-migrator | 
16:25:57 policy-pap | [2024-02-21T16:24:16.461+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
16:25:57 kafka | [2024-02-21 16:23:55,375] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=17, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:17.149330576Z level=info msg="Executing migration" id="add index correlations.source_uid"
16:25:57 policy-db-migrator | 
16:25:57 policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"fbba62bc-ce73-4500-9710-efcf835e3651","timestampMs":1708532656423,"name":"apex-ae39a0b7-8e72-45eb-a1aa-d0752cfdffdc","pdpGroup":"defaultGroup"}
16:25:57 kafka | [2024-02-21 16:23:55,375] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=30, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:17.150235815Z level=info msg="Migration successfully executed" id="add index correlations.source_uid" duration=907.569µs
16:25:57 policy-pap | [2024-02-21T16:24:16.461+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-pdp-pap] no listeners for autonomous message of type PdpStatus
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:17.153607559Z level=info msg="Executing migration" id="add correlation config column"
16:25:57 policy-db-migrator | > upgrade 0160-jpapolicyaudit_pk.sql
16:25:57 kafka | [2024-02-21 16:23:55,375] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=26, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
16:25:57 policy-pap | [2024-02-21T16:24:16.468+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:17.164882121Z level=info msg="Migration successfully executed" id="add correlation config column" duration=11.269422ms
16:25:57 policy-db-migrator | --------------
16:25:57 kafka | [2024-02-21 16:23:55,375] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=5, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
16:25:57 policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"f5f1c365-585c-4d0b-8cb8-9cf6cb4aaae3","responseStatus":"SUCCESS","responseMessage":"Pdp update successful."},"messageName":"PDP_STATUS","requestId":"0c59c656-c7d8-4254-9f3e-eca6fc98c068","timestampMs":1708532656424,"name":"apex-ae39a0b7-8e72-45eb-a1aa-d0752cfdffdc","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:17.169075863Z level=info msg="Executing migration" id="drop index IDX_correlation_uid - v1"
16:25:57 policy-db-migrator | ALTER TABLE jpapolicyaudit DROP PRIMARY KEY
16:25:57 kafka | [2024-02-21 16:23:55,376] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=38, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
16:25:57 policy-pap | [2024-02-21T16:24:16.482+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-ae39a0b7-8e72-45eb-a1aa-d0752cfdffdc PdpUpdate stopping
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:17.170002013Z level=info msg="Migration successfully executed" id="drop index IDX_correlation_uid - v1" duration=927.99µs
16:25:57 policy-db-migrator | --------------
16:25:57 kafka | [2024-02-21 16:23:55,376] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=1, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
16:25:57 policy-pap | [2024-02-21T16:24:16.483+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-ae39a0b7-8e72-45eb-a1aa-d0752cfdffdc PdpUpdate stopping enqueue
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:17.172899672Z level=info msg="Executing migration" id="drop index IDX_correlation_source_uid - v1"
16:25:57 policy-db-migrator | 
16:25:57 kafka | [2024-02-21 16:23:55,376] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=34, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
16:25:57 policy-pap | [2024-02-21T16:24:16.483+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-ae39a0b7-8e72-45eb-a1aa-d0752cfdffdc PdpUpdate stopping timer
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:17.174072193Z level=info msg="Migration successfully executed" id="drop index IDX_correlation_source_uid - v1" duration=1.172651ms
16:25:57 policy-db-migrator | --------------
16:25:57 kafka | [2024-02-21 16:23:55,376] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=16, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
16:25:57 policy-pap | [2024-02-21T16:24:16.483+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer cancelled Timer [name=f5f1c365-585c-4d0b-8cb8-9cf6cb4aaae3, expireMs=1708532686369]
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:17.178979032Z level=info msg="Executing migration" id="Rename table correlation to correlation_tmp_qwerty - v1"
16:25:57 policy-db-migrator | ALTER TABLE jpapolicyaudit ADD CONSTRAINT PK_JPAPOLICYAUDIT PRIMARY KEY (ID)
16:25:57 kafka | [2024-02-21 16:23:55,376] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=45, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
16:25:57 policy-pap | [2024-02-21T16:24:16.483+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-ae39a0b7-8e72-45eb-a1aa-d0752cfdffdc PdpUpdate stopping listener
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:17.210801621Z level=info msg="Migration successfully executed" id="Rename table correlation to correlation_tmp_qwerty - v1" duration=31.819099ms
16:25:57 policy-db-migrator | --------------
16:25:57 kafka | [2024-02-21 16:23:55,376] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=12, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
16:25:57 policy-pap | [2024-02-21T16:24:16.485+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-ae39a0b7-8e72-45eb-a1aa-d0752cfdffdc PdpUpdate stopped
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:17.216395067Z level=info msg="Executing migration" id="create correlation v2"
16:25:57 policy-db-migrator | 
16:25:57 kafka | [2024-02-21 16:23:55,376] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=41, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
16:25:57 policy-pap | [2024-02-21T16:24:16.487+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat]
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:17.218023063Z level=info msg="Migration successfully executed" id="create correlation v2" duration=1.629197ms
16:25:57 policy-db-migrator | 
16:25:57 kafka | [2024-02-21 16:23:55,376] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=24, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
16:25:57 policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"f5f1c365-585c-4d0b-8cb8-9cf6cb4aaae3","responseStatus":"SUCCESS","responseMessage":"Pdp update successful."},"messageName":"PDP_STATUS","requestId":"0c59c656-c7d8-4254-9f3e-eca6fc98c068","timestampMs":1708532656424,"name":"apex-ae39a0b7-8e72-45eb-a1aa-d0752cfdffdc","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:17.223238766Z level=info msg="Executing migration" id="create index IDX_correlation_uid - v2"
16:25:57 policy-db-migrator | > upgrade 0170-pdpstatistics_pk.sql
16:25:57 kafka | [2024-02-21 16:23:55,376] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=20, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
16:25:57 policy-pap | [2024-02-21T16:24:16.487+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id f5f1c365-585c-4d0b-8cb8-9cf6cb4aaae3
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:17.224950153Z level=info msg="Migration successfully executed" id="create index IDX_correlation_uid - v2" duration=1.721847ms
16:25:57 policy-db-migrator | --------------
16:25:57 kafka | [2024-02-21 16:23:55,376] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=49, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
16:25:57 policy-pap | [2024-02-21T16:24:16.491+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] apex-ae39a0b7-8e72-45eb-a1aa-d0752cfdffdc PdpUpdate successful
16:25:57 policy-db-migrator | ALTER TABLE pdpstatistics DROP PRIMARY KEY
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:17.230024084Z level=info msg="Executing migration" id="create index IDX_correlation_source_uid - v2"
16:25:57 kafka | [2024-02-21 16:23:55,376] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
16:25:57 policy-pap | [2024-02-21T16:24:16.491+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] apex-ae39a0b7-8e72-45eb-a1aa-d0752cfdffdc start publishing next request
16:25:57 policy-db-migrator | --------------
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:17.230989864Z level=info msg="Migration successfully executed" id="create index IDX_correlation_source_uid - v2" duration=966.88µs
16:25:57 policy-pap | [2024-02-21T16:24:16.491+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-ae39a0b7-8e72-45eb-a1aa-d0752cfdffdc PdpStateChange starting
16:25:57 kafka | [2024-02-21 16:23:55,376] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=29, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
16:25:57 policy-db-migrator | 
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:17.234464579Z level=info msg="Executing migration" id="create index IDX_correlation_org_id - v2"
16:25:57 policy-pap | [2024-02-21T16:24:16.491+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-ae39a0b7-8e72-45eb-a1aa-d0752cfdffdc PdpStateChange starting listener
16:25:57 kafka | [2024-02-21 16:23:55,377] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=25, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
16:25:57 policy-db-migrator | --------------
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:17.235794201Z level=info msg="Migration successfully executed" id="create index IDX_correlation_org_id - v2" duration=1.325562ms
16:25:57 policy-pap | [2024-02-21T16:24:16.492+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-ae39a0b7-8e72-45eb-a1aa-d0752cfdffdc PdpStateChange starting timer
16:25:57 kafka | [2024-02-21 16:23:55,377] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=8, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
16:25:57 policy-db-migrator | ALTER TABLE pdpstatistics ADD CONSTRAINT PK_PDPSTATISTICS PRIMARY KEY (ID)
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:17.243414167Z level=info msg="Executing migration" id="copy correlation v1 to v2"
16:25:57 policy-pap | [2024-02-21T16:24:16.492+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] state-change timer registered Timer [name=c6d9c8ae-9f96-4cb8-97e7-9591942eb564, expireMs=1708532686492]
16:25:57 kafka | [2024-02-21 16:23:55,377] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=37, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
16:25:57 policy-db-migrator | --------------
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:17.243834772Z level=info msg="Migration successfully executed" id="copy correlation v1 to v2" duration=424.005µs
16:25:57 policy-pap | [2024-02-21T16:24:16.492+00:00|INFO|TimerManager|Thread-10] state-change timer waiting 30000ms Timer [name=c6d9c8ae-9f96-4cb8-97e7-9591942eb564, expireMs=1708532686492]
16:25:57 kafka | [2024-02-21 16:23:55,377] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=4, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
16:25:57 policy-db-migrator | 
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:17.247491249Z level=info msg="Executing migration" id="drop correlation_tmp_qwerty"
16:25:57 policy-pap | [2024-02-21T16:24:16.492+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-ae39a0b7-8e72-45eb-a1aa-d0752cfdffdc PdpStateChange starting enqueue
16:25:57 kafka | [2024-02-21 16:23:55,377] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=33, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
16:25:57 policy-db-migrator | 
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:17.248665241Z level=info msg="Migration successfully executed" id="drop correlation_tmp_qwerty" duration=1.173992ms
16:25:57 policy-pap | [2024-02-21T16:24:16.493+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap]
16:25:57 kafka | [2024-02-21 16:23:55,377] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=15, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
16:25:57 policy-db-migrator | > upgrade 0180-jpatoscanodetemplate_metadata.sql
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:17.253126635Z level=info msg="Executing migration" id="add provisioning column"
16:25:57 kafka | [2024-02-21 16:23:55,377] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=48, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
16:25:57 policy-pap | {"source":"pap-c468fa43-447b-4ce3-b7f9-e0c2bed1c584","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"c6d9c8ae-9f96-4cb8-97e7-9591942eb564","timestampMs":1708532656344,"name":"apex-ae39a0b7-8e72-45eb-a1aa-d0752cfdffdc","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
16:25:57 policy-db-migrator | --------------
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:17.261665251Z level=info msg="Migration successfully executed" id="add provisioning column" duration=8.538616ms
16:25:57 kafka | [2024-02-21 16:23:55,377] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=11, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
16:25:57 policy-pap | [2024-02-21T16:24:16.493+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-ae39a0b7-8e72-45eb-a1aa-d0752cfdffdc PdpStateChange started
16:25:57 policy-db-migrator | ALTER TABLE jpatoscanodetemplate_metadata MODIFY COLUMN METADATA LONGTEXT
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:17.26564463Z level=info msg="Executing migration" id="create entity_events table"
16:25:57 kafka | [2024-02-21 16:23:55,377] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=44, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
16:25:57 policy-pap | [2024-02-21T16:24:16.507+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat]
16:25:57 policy-db-migrator | --------------
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:17.266218626Z level=info msg="Migration successfully executed" id="create entity_events table" duration=577.316µs
16:25:57 kafka | [2024-02-21 16:23:55,377] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=23, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
16:25:57 policy-pap | {"source":"pap-c468fa43-447b-4ce3-b7f9-e0c2bed1c584","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"c6d9c8ae-9f96-4cb8-97e7-9591942eb564","timestampMs":1708532656344,"name":"apex-ae39a0b7-8e72-45eb-a1aa-d0752cfdffdc","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
16:25:57 policy-db-migrator | 
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:17.269397108Z level=info msg="Executing migration" id="create dashboard public config v1"
16:25:57 kafka | [2024-02-21 16:23:55,377] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=19, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
16:25:57 policy-pap | [2024-02-21T16:24:16.507+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_STATE_CHANGE
16:25:57 policy-db-migrator | 
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:17.270262187Z level=info msg="Migration successfully executed" id="create dashboard public config v1" duration=865.129µs
16:25:57 kafka | [2024-02-21 16:23:55,380] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=32, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
16:25:57 policy-pap | [2024-02-21T16:24:16.518+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat]
16:25:57 policy-db-migrator | > upgrade 0100-upgrade.sql
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:17.2735402Z level=info msg="Executing migration" id="drop index UQE_dashboard_public_config_uid - v1"
16:25:57 kafka | [2024-02-21 16:23:55,380] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=28, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
16:25:57 policy-pap | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpStateChange","policies":[],"response":{"responseTo":"c6d9c8ae-9f96-4cb8-97e7-9591942eb564","responseStatus":"SUCCESS","responseMessage":"State changed to active. No policies found."},"messageName":"PDP_STATUS","requestId":"d9af465b-121e-4aca-a001-a758b96d7663","timestampMs":1708532656506,"name":"apex-ae39a0b7-8e72-45eb-a1aa-d0752cfdffdc","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
16:25:57 policy-db-migrator | --------------
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:17.273980114Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop index UQE_dashboard_public_config_uid - v1"
16:25:57 kafka | [2024-02-21 16:23:55,380] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=7, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
16:25:57 policy-pap | [2024-02-21T16:24:16.519+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id c6d9c8ae-9f96-4cb8-97e7-9591942eb564
16:25:57 policy-db-migrator | select 'upgrade to 1100 completed' as msg
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:17.277716251Z level=info msg="Executing migration" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v1"
16:25:57 kafka | [2024-02-21 16:23:55,380] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=40, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
16:25:57 policy-pap | [2024-02-21T16:24:16.536+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
16:25:57 policy-db-migrator | --------------
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:17.278158876Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v1"
16:25:57 kafka | [2024-02-21 16:23:55,380] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=3, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
16:25:57 policy-pap | {"source":"pap-c468fa43-447b-4ce3-b7f9-e0c2bed1c584","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"c6d9c8ae-9f96-4cb8-97e7-9591942eb564","timestampMs":1708532656344,"name":"apex-ae39a0b7-8e72-45eb-a1aa-d0752cfdffdc","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
16:25:57 policy-db-migrator | 
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:17.280539949Z level=info msg="Executing migration" id="Drop old dashboard public config table"
16:25:57 kafka | [2024-02-21 16:23:55,380] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=36, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
16:25:57 policy-pap | [2024-02-21T16:24:16.536+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATE_CHANGE
16:25:57 policy-db-migrator | msg
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:17.2816018Z level=info msg="Migration successfully executed" id="Drop old dashboard public config table" duration=1.061851ms
16:25:57 kafka | [2024-02-21 16:23:55,380] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=47, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
16:25:57 policy-pap | [2024-02-21T16:24:16.538+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
16:25:57 policy-db-migrator | upgrade to 1100 completed
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:17.284777232Z level=info msg="Executing migration" id="recreate dashboard public config v1"
16:25:57 kafka | [2024-02-21 16:23:55,381] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=14, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
16:25:57 policy-pap | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpStateChange","policies":[],"response":{"responseTo":"c6d9c8ae-9f96-4cb8-97e7-9591942eb564","responseStatus":"SUCCESS","responseMessage":"State changed to active. 
No policies found."},"messageName":"PDP_STATUS","requestId":"d9af465b-121e-4aca-a001-a758b96d7663","timestampMs":1708532656506,"name":"apex-ae39a0b7-8e72-45eb-a1aa-d0752cfdffdc","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 16:25:57 policy-db-migrator | 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:17.285770902Z level=info msg="Migration successfully executed" id="recreate dashboard public config v1" duration=994.55µs 16:25:57 kafka | [2024-02-21 16:23:55,381] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=43, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 16:25:57 policy-pap | [2024-02-21T16:24:16.539+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-ae39a0b7-8e72-45eb-a1aa-d0752cfdffdc PdpStateChange stopping 16:25:57 policy-db-migrator | > upgrade 0100-jpapolicyaudit_renameuser.sql 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:17.290017065Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_uid - v1" 16:25:57 policy-pap | [2024-02-21T16:24:16.539+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-ae39a0b7-8e72-45eb-a1aa-d0752cfdffdc PdpStateChange stopping enqueue 16:25:57 policy-db-migrator | -------------- 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:17.291189916Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_uid - v1" duration=1.173871ms 16:25:57 kafka | [2024-02-21 16:23:55,381] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=10, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from 
controller 1 epoch 1 (state.change.logger) 16:25:57 policy-pap | [2024-02-21T16:24:16.539+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-ae39a0b7-8e72-45eb-a1aa-d0752cfdffdc PdpStateChange stopping timer 16:25:57 policy-db-migrator | ALTER TABLE jpapolicyaudit RENAME COLUMN USER TO USERNAME 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:17.294518499Z level=info msg="Executing migration" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v1" 16:25:57 kafka | [2024-02-21 16:23:55,381] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=22, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 16:25:57 policy-db-migrator | -------------- 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:17.295741952Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v1" duration=1.224193ms 16:25:57 kafka | [2024-02-21 16:23:55,381] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=18, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 16:25:57 policy-pap | [2024-02-21T16:24:16.539+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] state-change timer cancelled Timer [name=c6d9c8ae-9f96-4cb8-97e7-9591942eb564, expireMs=1708532686492] 16:25:57 policy-db-migrator | 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:17.299424898Z level=info msg="Executing migration" id="drop index UQE_dashboard_public_config_uid - v2" 16:25:57 kafka | [2024-02-21 16:23:55,381] TRACE [Broker id=1] Received 
LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=31, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 16:25:57 policy-pap | [2024-02-21T16:24:16.540+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-ae39a0b7-8e72-45eb-a1aa-d0752cfdffdc PdpStateChange stopping listener 16:25:57 policy-db-migrator | 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:17.301781503Z level=info msg="Migration successfully executed" id="drop index UQE_dashboard_public_config_uid - v2" duration=2.356415ms 16:25:57 kafka | [2024-02-21 16:23:55,381] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=27, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 16:25:57 policy-pap | [2024-02-21T16:24:16.540+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-ae39a0b7-8e72-45eb-a1aa-d0752cfdffdc PdpStateChange stopped 16:25:57 policy-db-migrator | > upgrade 0110-idx_tsidx1.sql 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:17.306117196Z level=info msg="Executing migration" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v2" 16:25:57 kafka | [2024-02-21 16:23:55,382] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=39, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 16:25:57 policy-pap | 
[2024-02-21T16:24:16.540+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] apex-ae39a0b7-8e72-45eb-a1aa-d0752cfdffdc PdpStateChange successful 16:25:57 policy-db-migrator | -------------- 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:17.307182177Z level=info msg="Migration successfully executed" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v2" duration=1.065071ms 16:25:57 kafka | [2024-02-21 16:23:55,382] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=6, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 16:25:57 policy-pap | [2024-02-21T16:24:16.540+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] apex-ae39a0b7-8e72-45eb-a1aa-d0752cfdffdc start publishing next request 16:25:57 policy-db-migrator | DROP INDEX IDX_TSIDX1 ON pdpstatistics 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:17.312865534Z level=info msg="Executing migration" id="Drop public config table" 16:25:57 kafka | [2024-02-21 16:23:55,382] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=35, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 16:25:57 policy-pap | [2024-02-21T16:24:16.540+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-ae39a0b7-8e72-45eb-a1aa-d0752cfdffdc PdpUpdate starting 16:25:57 policy-db-migrator | -------------- 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:17.313460649Z level=info msg="Migration successfully executed" id="Drop public config table" duration=596.565µs 16:25:57 kafka | [2024-02-21 16:23:55,383] TRACE [Broker id=1] 
Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=2, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 16:25:57 policy-pap | [2024-02-21T16:24:16.540+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-ae39a0b7-8e72-45eb-a1aa-d0752cfdffdc PdpUpdate starting listener 16:25:57 policy-db-migrator | 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:17.317596251Z level=info msg="Executing migration" id="Recreate dashboard public config v2" 16:25:57 kafka | [2024-02-21 16:23:55,399] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-3 (state.change.logger) 16:25:57 policy-pap | [2024-02-21T16:24:16.541+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-ae39a0b7-8e72-45eb-a1aa-d0752cfdffdc PdpUpdate starting timer 16:25:57 policy-db-migrator | -------------- 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:17.318353589Z level=info msg="Migration successfully executed" id="Recreate dashboard public config v2" duration=757.638µs 16:25:57 kafka | [2024-02-21 16:23:55,399] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-18 (state.change.logger) 16:25:57 policy-pap | [2024-02-21T16:24:16.541+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer registered Timer [name=1fab6bfd-d361-42cb-9055-21dc8bc0ac21, expireMs=1708532686541] 16:25:57 policy-db-migrator | CREATE INDEX IDXTSIDX1 ON pdpstatistics(timeStamp, name, version) 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:17.322082725Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_uid - v2" 16:25:57 
kafka | [2024-02-21 16:23:55,399] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-41 (state.change.logger) 16:25:57 policy-pap | [2024-02-21T16:24:16.541+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-ae39a0b7-8e72-45eb-a1aa-d0752cfdffdc PdpUpdate starting enqueue 16:25:57 policy-db-migrator | -------------- 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:17.323352848Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_uid - v2" duration=1.269563ms 16:25:57 kafka | [2024-02-21 16:23:55,399] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-10 (state.change.logger) 16:25:57 policy-pap | [2024-02-21T16:24:16.541+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap] 16:25:57 policy-db-migrator | 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:17.32849043Z level=info msg="Executing migration" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v2" 16:25:57 kafka | [2024-02-21 16:23:55,399] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-33 (state.change.logger) 16:25:57 policy-pap | {"source":"pap-c468fa43-447b-4ce3-b7f9-e0c2bed1c584","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"1fab6bfd-d361-42cb-9055-21dc8bc0ac21","timestampMs":1708532656527,"name":"apex-ae39a0b7-8e72-45eb-a1aa-d0752cfdffdc","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 16:25:57 policy-db-migrator | 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:17.330018385Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v2" 
duration=1.528505ms 16:25:57 kafka | [2024-02-21 16:23:55,399] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-48 (state.change.logger) 16:25:57 policy-pap | [2024-02-21T16:24:16.542+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-ae39a0b7-8e72-45eb-a1aa-d0752cfdffdc PdpUpdate started 16:25:57 policy-db-migrator | > upgrade 0120-audit_sequence.sql 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:17.334344468Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_access_token - v2" 16:25:57 kafka | [2024-02-21 16:23:55,399] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-19 (state.change.logger) 16:25:57 policy-pap | [2024-02-21T16:24:16.550+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 16:25:57 policy-db-migrator | -------------- 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:17.335372779Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_access_token - v2" duration=1.029191ms 16:25:57 kafka | [2024-02-21 16:23:55,399] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-34 (state.change.logger) 16:25:57 policy-pap | {"source":"pap-c468fa43-447b-4ce3-b7f9-e0c2bed1c584","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"1fab6bfd-d361-42cb-9055-21dc8bc0ac21","timestampMs":1708532656527,"name":"apex-ae39a0b7-8e72-45eb-a1aa-d0752cfdffdc","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 16:25:57 policy-db-migrator | CREATE TABLE IF NOT EXISTS audit_sequence (SEQ_NAME VARCHAR(50) NOT NULL, SEQ_COUNT DECIMAL(38) DEFAULT NULL, PRIMARY KEY 
PK_SEQUENCE (SEQ_NAME)) 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:17.33857064Z level=info msg="Executing migration" id="Rename table dashboard_public_config to dashboard_public - v2" 16:25:57 kafka | [2024-02-21 16:23:55,400] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-4 (state.change.logger) 16:25:57 policy-pap | [2024-02-21T16:24:16.551+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 16:25:57 policy-db-migrator | -------------- 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:17.378167917Z level=info msg="Migration successfully executed" id="Rename table dashboard_public_config to dashboard_public - v2" duration=39.591397ms 16:25:57 kafka | [2024-02-21 16:23:55,400] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-11 (state.change.logger) 16:25:57 policy-pap | {"source":"pap-c468fa43-447b-4ce3-b7f9-e0c2bed1c584","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"1fab6bfd-d361-42cb-9055-21dc8bc0ac21","timestampMs":1708532656527,"name":"apex-ae39a0b7-8e72-45eb-a1aa-d0752cfdffdc","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 16:25:57 policy-db-migrator | 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:17.383169308Z level=info msg="Executing migration" id="add annotations_enabled column" 16:25:57 kafka | [2024-02-21 16:23:55,400] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-26 (state.change.logger) 16:25:57 policy-pap | [2024-02-21T16:24:16.551+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_UPDATE 16:25:57 policy-db-migrator | -------------- 16:25:57 
grafana | logger=migrator t=2024-02-21T16:23:17.392296168Z level=info msg="Migration successfully executed" id="add annotations_enabled column" duration=9.1231ms 16:25:57 kafka | [2024-02-21 16:23:55,400] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-49 (state.change.logger) 16:25:57 policy-pap | [2024-02-21T16:24:16.550+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_UPDATE 16:25:57 policy-db-migrator | INSERT INTO audit_sequence(SEQ_NAME, SEQ_COUNT) VALUES('SEQ_GEN', (SELECT IFNULL(max(id),0) FROM jpapolicyaudit)) 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:17.397730333Z level=info msg="Executing migration" id="add time_selection_enabled column" 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:17.407105047Z level=info msg="Migration successfully executed" id="add time_selection_enabled column" duration=9.375454ms 16:25:57 kafka | [2024-02-21 16:23:55,400] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-39 (state.change.logger) 16:25:57 policy-pap | [2024-02-21T16:24:16.560+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 16:25:57 policy-db-migrator | -------------- 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:17.410898595Z level=info msg="Executing migration" id="delete orphaned public dashboards" 16:25:57 kafka | [2024-02-21 16:23:55,400] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-9 (state.change.logger) 16:25:57 policy-pap | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for 
PdpUpdate","policies":[],"response":{"responseTo":"1fab6bfd-d361-42cb-9055-21dc8bc0ac21","responseStatus":"SUCCESS","responseMessage":"Pdp already updated"},"messageName":"PDP_STATUS","requestId":"231be759-d5b8-4dae-bd16-94f291e82d6d","timestampMs":1708532656552,"name":"apex-ae39a0b7-8e72-45eb-a1aa-d0752cfdffdc","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 16:25:57 policy-db-migrator | 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:17.411118648Z level=info msg="Migration successfully executed" id="delete orphaned public dashboards" duration=219.933µs 16:25:57 kafka | [2024-02-21 16:23:55,400] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-24 (state.change.logger) 16:25:57 policy-pap | [2024-02-21T16:24:16.560+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 16:25:57 policy-db-migrator | 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:17.414176007Z level=info msg="Executing migration" id="add share column" 16:25:57 kafka | [2024-02-21 16:23:55,400] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-31 (state.change.logger) 16:25:57 policy-pap | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"1fab6bfd-d361-42cb-9055-21dc8bc0ac21","responseStatus":"SUCCESS","responseMessage":"Pdp already updated"},"messageName":"PDP_STATUS","requestId":"231be759-d5b8-4dae-bd16-94f291e82d6d","timestampMs":1708532656552,"name":"apex-ae39a0b7-8e72-45eb-a1aa-d0752cfdffdc","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 16:25:57 policy-db-migrator | > upgrade 0130-statistics_sequence.sql 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:17.422515301Z level=info msg="Migration successfully executed" id="add share 
column" duration=8.337764ms 16:25:57 kafka | [2024-02-21 16:23:55,400] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-46 (state.change.logger) 16:25:57 policy-pap | [2024-02-21T16:24:16.561+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id 1fab6bfd-d361-42cb-9055-21dc8bc0ac21 16:25:57 policy-db-migrator | -------------- 16:25:57 kafka | [2024-02-21 16:23:55,400] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-1 (state.change.logger) 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:17.428007746Z level=info msg="Executing migration" id="backfill empty share column fields with default of public" 16:25:57 policy-db-migrator | CREATE TABLE IF NOT EXISTS statistics_sequence (SEQ_NAME VARCHAR(50) NOT NULL, SEQ_COUNT DECIMAL(38) DEFAULT NULL, PRIMARY KEY PK_SEQUENCE (SEQ_NAME)) 16:25:57 policy-pap | [2024-02-21T16:24:16.561+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-ae39a0b7-8e72-45eb-a1aa-d0752cfdffdc PdpUpdate stopping 16:25:57 kafka | [2024-02-21 16:23:55,400] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-16 (state.change.logger) 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:17.428299369Z level=info msg="Migration successfully executed" id="backfill empty share column fields with default of public" duration=291.843µs 16:25:57 policy-db-migrator | -------------- 16:25:57 policy-pap | [2024-02-21T16:24:16.561+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-ae39a0b7-8e72-45eb-a1aa-d0752cfdffdc PdpUpdate stopping enqueue 16:25:57 kafka | [2024-02-21 16:23:55,400] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 
starting the become-leader transition for partition __consumer_offsets-2 (state.change.logger) 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:17.433599652Z level=info msg="Executing migration" id="create file table" 16:25:57 policy-db-migrator | 16:25:57 kafka | [2024-02-21 16:23:55,400] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-25 (state.change.logger) 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:17.434621942Z level=info msg="Migration successfully executed" id="create file table" duration=1.02647ms 16:25:57 policy-db-migrator | -------------- 16:25:57 policy-pap | [2024-02-21T16:24:16.562+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-ae39a0b7-8e72-45eb-a1aa-d0752cfdffdc PdpUpdate stopping timer 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:17.437890845Z level=info msg="Executing migration" id="file table idx: path natural pk" 16:25:57 policy-db-migrator | INSERT INTO statistics_sequence(SEQ_NAME, SEQ_COUNT) VALUES('SEQ_GEN', (SELECT IFNULL(max(id),0) FROM pdpstatistics)) 16:25:57 kafka | [2024-02-21 16:23:55,400] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-40 (state.change.logger) 16:25:57 policy-pap | [2024-02-21T16:24:16.562+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer cancelled Timer [name=1fab6bfd-d361-42cb-9055-21dc8bc0ac21, expireMs=1708532686541] 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:17.439182499Z level=info msg="Migration successfully executed" id="file table idx: path natural pk" duration=1.291374ms 16:25:57 policy-db-migrator | -------------- 16:25:57 kafka | [2024-02-21 16:23:55,400] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition 
__consumer_offsets-47 (state.change.logger) 16:25:57 policy-pap | [2024-02-21T16:24:16.562+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-ae39a0b7-8e72-45eb-a1aa-d0752cfdffdc PdpUpdate stopping listener 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:17.444047827Z level=info msg="Executing migration" id="file table idx: parent_folder_path_hash fast folder retrieval" 16:25:57 policy-db-migrator | 16:25:57 kafka | [2024-02-21 16:23:55,400] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-17 (state.change.logger) 16:25:57 policy-pap | [2024-02-21T16:24:16.562+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-ae39a0b7-8e72-45eb-a1aa-d0752cfdffdc PdpUpdate stopped 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:17.445493351Z level=info msg="Migration successfully executed" id="file table idx: parent_folder_path_hash fast folder retrieval" duration=1.445024ms 16:25:57 policy-db-migrator | -------------- 16:25:57 kafka | [2024-02-21 16:23:55,400] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-32 (state.change.logger) 16:25:57 policy-pap | [2024-02-21T16:24:16.567+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] apex-ae39a0b7-8e72-45eb-a1aa-d0752cfdffdc PdpUpdate successful 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:17.448899895Z level=info msg="Executing migration" id="create file_meta table" 16:25:57 policy-db-migrator | TRUNCATE TABLE sequence 16:25:57 kafka | [2024-02-21 16:23:55,400] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-37 (state.change.logger) 16:25:57 policy-pap | [2024-02-21T16:24:16.568+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] 
apex-ae39a0b7-8e72-45eb-a1aa-d0752cfdffdc has no more requests 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:17.450138288Z level=info msg="Migration successfully executed" id="create file_meta table" duration=1.238493ms 16:25:57 policy-db-migrator | -------------- 16:25:57 kafka | [2024-02-21 16:23:55,401] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-7 (state.change.logger) 16:25:57 policy-pap | [2024-02-21T16:24:25.121+00:00|INFO|GsonMessageBodyHandler|pool-2-thread-1] Using GSON for REST calls 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:17.454103817Z level=info msg="Executing migration" id="file table idx: path key" 16:25:57 policy-db-migrator | 16:25:57 kafka | [2024-02-21 16:23:55,401] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-22 (state.change.logger) 16:25:57 policy-pap | [2024-02-21T16:24:25.131+00:00|INFO|GsonMessageBodyHandler|pool-2-thread-1] Using GSON for REST calls 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:17.455362851Z level=info msg="Migration successfully executed" id="file table idx: path key" duration=1.261573ms 16:25:57 policy-db-migrator | 16:25:57 kafka | [2024-02-21 16:23:55,401] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-29 (state.change.logger) 16:25:57 policy-pap | [2024-02-21T16:24:25.633+00:00|INFO|SessionData|http-nio-6969-exec-6] unknown group testGroup 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:17.461535253Z level=info msg="Executing migration" id="set path collation in file table" 16:25:57 policy-db-migrator | > upgrade 0100-pdpstatistics.sql 16:25:57 kafka | [2024-02-21 16:23:55,401] TRACE [Broker id=1] Handling LeaderAndIsr request 
correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-44 (state.change.logger) 16:25:57 policy-pap | [2024-02-21T16:24:26.200+00:00|INFO|SessionData|http-nio-6969-exec-6] create cached group testGroup 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:17.461620292Z level=info msg="Migration successfully executed" id="set path collation in file table" duration=87.05µs 16:25:57 policy-db-migrator | -------------- 16:25:57 kafka | [2024-02-21 16:23:55,401] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-14 (state.change.logger) 16:25:57 policy-pap | [2024-02-21T16:24:26.201+00:00|INFO|SessionData|http-nio-6969-exec-6] creating DB group testGroup 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:17.464853735Z level=info msg="Executing migration" id="migrate contents column to mediumblob for MySQL" 16:25:57 policy-db-migrator | DROP INDEX IDXTSIDX1 ON pdpstatistics 16:25:57 kafka | [2024-02-21 16:23:55,401] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-23 (state.change.logger) 16:25:57 policy-pap | [2024-02-21T16:24:26.757+00:00|INFO|SessionData|http-nio-6969-exec-10] cache group testGroup 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:17.464925846Z level=info msg="Migration successfully executed" id="migrate contents column to mediumblob for MySQL" duration=73.311µs 16:25:57 policy-db-migrator | -------------- 16:25:57 kafka | [2024-02-21 16:23:55,402] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-38 (state.change.logger) 16:25:57 policy-pap | [2024-02-21T16:24:26.987+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-10] Registering a deploy for policy 
onap.restart.tca 1.0.0 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:17.467716044Z level=info msg="Executing migration" id="managed permissions migration" 16:25:57 policy-db-migrator | 16:25:57 kafka | [2024-02-21 16:23:55,402] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-8 (state.change.logger) 16:25:57 policy-pap | [2024-02-21T16:24:27.105+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-10] Registering a deploy for policy operational.apex.decisionMaker 1.0.0 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:17.46835645Z level=info msg="Migration successfully executed" id="managed permissions migration" duration=640.666µs 16:25:57 policy-db-migrator | -------------- 16:25:57 kafka | [2024-02-21 16:23:55,402] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-45 (state.change.logger) 16:25:57 policy-pap | [2024-02-21T16:24:27.105+00:00|INFO|SessionData|http-nio-6969-exec-10] update cached group testGroup 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:17.473438151Z level=info msg="Executing migration" id="managed folder permissions alert actions migration" 16:25:57 policy-db-migrator | DROP TABLE pdpstatistics 16:25:57 kafka | [2024-02-21 16:23:55,402] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-15 (state.change.logger) 16:25:57 policy-pap | [2024-02-21T16:24:27.105+00:00|INFO|SessionData|http-nio-6969-exec-10] updating DB group testGroup 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:17.473597713Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions migration" duration=159.532µs 16:25:57 policy-db-migrator | -------------- 16:25:57 kafka | [2024-02-21 
16:23:55,403] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-30 (state.change.logger) 16:25:57 policy-pap | [2024-02-21T16:24:27.118+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-10] sending audit records to database: [PolicyAudit(auditId=null, pdpGroup=testGroup, pdpType=pdpTypeA, policy=onap.restart.tca 1.0.0, action=DEPLOYMENT, timestamp=2024-02-21T16:24:26Z, user=policyadmin), PolicyAudit(auditId=null, pdpGroup=testGroup, pdpType=pdpTypeC, policy=operational.apex.decisionMaker 1.0.0, action=DEPLOYMENT, timestamp=2024-02-21T16:24:27Z, user=policyadmin)] 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:17.475632993Z level=info msg="Executing migration" id="RBAC action name migrator" 16:25:57 policy-db-migrator | 16:25:57 kafka | [2024-02-21 16:23:55,403] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-0 (state.change.logger) 16:25:57 policy-pap | [2024-02-21T16:24:27.863+00:00|INFO|SessionData|http-nio-6969-exec-4] cache group testGroup 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:17.476224919Z level=info msg="Migration successfully executed" id="RBAC action name migrator" duration=591.506µs 16:25:57 policy-db-migrator | 16:25:57 kafka | [2024-02-21 16:23:55,403] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-35 (state.change.logger) 16:25:57 policy-pap | [2024-02-21T16:24:27.864+00:00|INFO|PdpGroupDeleteProvider|http-nio-6969-exec-4] remove policy onap.restart.tca 1.0.0 from subgroup testGroup pdpTypeA count=0 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:17.478913546Z level=info msg="Executing migration" id="Add UID column to playlist" 16:25:57 policy-db-migrator | > upgrade 
0110-jpapdpstatistics_enginestats.sql 16:25:57 kafka | [2024-02-21 16:23:55,403] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-5 (state.change.logger) 16:25:57 policy-pap | [2024-02-21T16:24:27.864+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-4] Registering an undeploy for policy onap.restart.tca 1.0.0 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:17.485528872Z level=info msg="Migration successfully executed" id="Add UID column to playlist" duration=6.615826ms 16:25:57 policy-db-migrator | -------------- 16:25:57 kafka | [2024-02-21 16:23:55,403] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-20 (state.change.logger) 16:25:57 policy-pap | [2024-02-21T16:24:27.864+00:00|INFO|SessionData|http-nio-6969-exec-4] update cached group testGroup 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:17.488343011Z level=info msg="Executing migration" id="Update uid column values in playlist" 16:25:57 policy-db-migrator | DROP TABLE jpapdpstatistics_enginestats 16:25:57 kafka | [2024-02-21 16:23:55,403] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-27 (state.change.logger) 16:25:57 policy-pap | [2024-02-21T16:24:27.865+00:00|INFO|SessionData|http-nio-6969-exec-4] updating DB group testGroup 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:17.488466282Z level=info msg="Migration successfully executed" id="Update uid column values in playlist" duration=123.321µs 16:25:57 policy-db-migrator | -------------- 16:25:57 kafka | [2024-02-21 16:23:55,403] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-42 
(state.change.logger) 16:25:57 policy-pap | [2024-02-21T16:24:27.879+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-4] sending audit records to database: [PolicyAudit(auditId=null, pdpGroup=testGroup, pdpType=pdpTypeA, policy=onap.restart.tca 1.0.0, action=UNDEPLOYMENT, timestamp=2024-02-21T16:24:27Z, user=policyadmin)] 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:17.49329458Z level=info msg="Executing migration" id="Add index for uid in playlist" 16:25:57 policy-db-migrator | 16:25:57 kafka | [2024-02-21 16:23:55,403] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-12 (state.change.logger) 16:25:57 policy-pap | [2024-02-21T16:24:28.234+00:00|INFO|SessionData|http-nio-6969-exec-5] cache group defaultGroup 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:17.494478232Z level=info msg="Migration successfully executed" id="Add index for uid in playlist" duration=1.183572ms 16:25:57 policy-db-migrator | 16:25:57 policy-pap | [2024-02-21T16:24:28.234+00:00|INFO|SessionData|http-nio-6969-exec-5] cache group testGroup 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:17.498792645Z level=info msg="Executing migration" id="update group index for alert rules" 16:25:57 policy-db-migrator | > upgrade 0120-statistics_sequence.sql 16:25:57 kafka | [2024-02-21 16:23:55,403] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-21 (state.change.logger) 16:25:57 policy-pap | [2024-02-21T16:24:28.234+00:00|INFO|PdpGroupDeleteProvider|http-nio-6969-exec-5] remove policy operational.apex.decisionMaker 1.0.0 from subgroup testGroup pdpTypeC count=0 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:17.49932152Z level=info msg="Migration successfully executed" id="update group index for alert rules" duration=527.665µs 16:25:57 policy-db-migrator 
| -------------- 16:25:57 kafka | [2024-02-21 16:23:55,403] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-36 (state.change.logger) 16:25:57 policy-pap | [2024-02-21T16:24:28.234+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-5] Registering an undeploy for policy operational.apex.decisionMaker 1.0.0 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:17.506629513Z level=info msg="Executing migration" id="managed folder permissions alert actions repeated migration" 16:25:57 policy-db-migrator | DROP TABLE statistics_sequence 16:25:57 kafka | [2024-02-21 16:23:55,403] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-6 (state.change.logger) 16:25:57 policy-pap | [2024-02-21T16:24:28.234+00:00|INFO|SessionData|http-nio-6969-exec-5] update cached group testGroup 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:17.506991977Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions repeated migration" duration=364.074µs 16:25:57 policy-db-migrator | -------------- 16:25:57 kafka | [2024-02-21 16:23:55,403] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-43 (state.change.logger) 16:25:57 policy-pap | [2024-02-21T16:24:28.234+00:00|INFO|SessionData|http-nio-6969-exec-5] updating DB group testGroup 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:17.512243509Z level=info msg="Executing migration" id="admin only folder/dashboard permission" 16:25:57 policy-db-migrator | 16:25:57 kafka | [2024-02-21 16:23:55,403] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-13 
(state.change.logger) 16:25:57 policy-pap | [2024-02-21T16:24:28.244+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-5] sending audit records to database: [PolicyAudit(auditId=null, pdpGroup=testGroup, pdpType=pdpTypeC, policy=operational.apex.decisionMaker 1.0.0, action=UNDEPLOYMENT, timestamp=2024-02-21T16:24:28Z, user=policyadmin)] 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:17.512878866Z level=info msg="Migration successfully executed" id="admin only folder/dashboard permission" duration=641.987µs 16:25:57 policy-db-migrator | policyadmin: OK: upgrade (1300) 16:25:57 kafka | [2024-02-21 16:23:55,403] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-28 (state.change.logger) 16:25:57 policy-pap | [2024-02-21T16:24:46.370+00:00|INFO|TimerManager|Thread-9] update timer discarded (expired) Timer [name=f5f1c365-585c-4d0b-8cb8-9cf6cb4aaae3, expireMs=1708532686369] 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:17.516669484Z level=info msg="Executing migration" id="add action column to seed_assignment" 16:25:57 policy-db-migrator | name version 16:25:57 kafka | [2024-02-21 16:23:55,405] INFO [ReplicaFetcherManager on broker 1] Removed fetcher for partitions HashSet(__consumer_offsets-22, __consumer_offsets-30, __consumer_offsets-25, __consumer_offsets-35, __consumer_offsets-37, __consumer_offsets-38, __consumer_offsets-13, __consumer_offsets-8, __consumer_offsets-21, __consumer_offsets-4, __consumer_offsets-27, __consumer_offsets-7, __consumer_offsets-9, __consumer_offsets-46, __consumer_offsets-41, __consumer_offsets-33, __consumer_offsets-23, __consumer_offsets-49, __consumer_offsets-47, __consumer_offsets-16, __consumer_offsets-28, __consumer_offsets-31, __consumer_offsets-36, __consumer_offsets-42, __consumer_offsets-3, __consumer_offsets-18, __consumer_offsets-15, __consumer_offsets-24, __consumer_offsets-17, __consumer_offsets-48, 
__consumer_offsets-19, __consumer_offsets-11, __consumer_offsets-2, __consumer_offsets-43, __consumer_offsets-6, __consumer_offsets-14, __consumer_offsets-20, __consumer_offsets-0, __consumer_offsets-44, __consumer_offsets-39, __consumer_offsets-12, __consumer_offsets-45, __consumer_offsets-1, __consumer_offsets-5, __consumer_offsets-26, __consumer_offsets-29, __consumer_offsets-34, __consumer_offsets-10, __consumer_offsets-32, __consumer_offsets-40) (kafka.server.ReplicaFetcherManager) 16:25:57 policy-pap | [2024-02-21T16:24:46.492+00:00|INFO|TimerManager|Thread-10] state-change timer discarded (expired) Timer [name=c6d9c8ae-9f96-4cb8-97e7-9591942eb564, expireMs=1708532686492] 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:17.525313261Z level=info msg="Migration successfully executed" id="add action column to seed_assignment" duration=8.646397ms 16:25:57 policy-db-migrator | policyadmin 1300 16:25:57 kafka | [2024-02-21 16:23:55,405] INFO [Broker id=1] Stopped fetchers as part of LeaderAndIsr request correlationId 3 from controller 1 epoch 1 as part of the become-leader transition for 50 partitions (state.change.logger) 16:25:57 policy-pap | [2024-02-21T16:24:48.813+00:00|INFO|SessionData|http-nio-6969-exec-1] cache group testGroup 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:17.528803415Z level=info msg="Executing migration" id="add scope column to seed_assignment" 16:25:57 policy-db-migrator | ID script operation from_version to_version tag success atTime 16:25:57 policy-pap | [2024-02-21T16:24:48.816+00:00|INFO|SessionData|http-nio-6969-exec-1] deleting DB group testGroup 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:17.536797236Z level=info msg="Migration successfully executed" id="add scope column to seed_assignment" duration=7.992531ms 16:25:57 kafka | [2024-02-21 16:23:55,422] INFO [LogLoader partition=__consumer_offsets-3, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 
(kafka.log.UnifiedLog$) 16:25:57 policy-db-migrator | 1 0100-jpapdpgroup_properties.sql upgrade 0 0800 2102241623220800u 1 2024-02-21 16:23:22 16:25:57 policy-pap | [2024-02-21T16:25:54.560+00:00|INFO|PdpModifyRequestMap|pool-3-thread-1] check for PDP records older than 360000ms 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:17.540187509Z level=info msg="Executing migration" id="remove unique index builtin_role_role_name before nullable update" 16:25:57 kafka | [2024-02-21 16:23:55,426] INFO Created log for partition __consumer_offsets-3 in /var/lib/kafka/data/__consumer_offsets-3 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 16:25:57 policy-db-migrator | 2 0110-jpapdpstatistics_enginestats.sql upgrade 0 0800 2102241623220800u 1 2024-02-21 16:23:22 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:17.54122975Z level=info msg="Migration successfully executed" id="remove unique index builtin_role_role_name before nullable update" duration=1.042721ms 16:25:57 kafka | [2024-02-21 16:23:55,427] INFO [Partition __consumer_offsets-3 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-3 (kafka.cluster.Partition) 16:25:57 policy-db-migrator | 3 0120-jpapdpsubgroup_policies.sql upgrade 0 0800 2102241623220800u 1 2024-02-21 16:23:22 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:17.545491953Z level=info msg="Executing migration" id="update seed_assignment role_name column to nullable" 16:25:57 policy-db-migrator | 4 0130-jpapdpsubgroup_properties.sql upgrade 0 0800 2102241623220800u 1 2024-02-21 16:23:22 16:25:57 kafka | [2024-02-21 16:23:55,427] INFO [Partition __consumer_offsets-3 broker=1] Log loaded for partition __consumer_offsets-3 with initial high watermark 0 (kafka.cluster.Partition) 16:25:57 policy-db-migrator | 5 0140-jpapdpsubgroup_supportedpolicytypes.sql upgrade 0 0800 2102241623220800u 1 2024-02-21 16:23:22 16:25:57 grafana | logger=migrator 
t=2024-02-21T16:23:17.680399953Z level=info msg="Migration successfully executed" id="update seed_assignment role_name column to nullable" duration=134.90265ms 16:25:57 policy-db-migrator | 6 0150-jpatoscacapabilityassignment_attributes.sql upgrade 0 0800 2102241623220800u 1 2024-02-21 16:23:22 16:25:57 kafka | [2024-02-21 16:23:55,427] INFO [Broker id=1] Leader __consumer_offsets-3 with topic id Some(KYlK5kQpQoexS8xo1QgwvA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:17.684333103Z level=info msg="Executing migration" id="add unique index builtin_role_name back" 16:25:57 policy-db-migrator | 7 0160-jpatoscacapabilityassignment_metadata.sql upgrade 0 0800 2102241623220800u 1 2024-02-21 16:23:22 16:25:57 kafka | [2024-02-21 16:23:55,449] INFO [LogLoader partition=__consumer_offsets-18, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:17.685209342Z level=info msg="Migration successfully executed" id="add unique index builtin_role_name back" duration=873.969µs 16:25:57 kafka | [2024-02-21 16:23:55,450] INFO Created log for partition __consumer_offsets-18 in /var/lib/kafka/data/__consumer_offsets-18 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 16:25:57 policy-db-migrator | 8 0170-jpatoscacapabilityassignment_occurrences.sql upgrade 0 0800 2102241623220800u 1 2024-02-21 16:23:22 16:25:57 kafka | [2024-02-21 16:23:55,450] INFO [Partition __consumer_offsets-18 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-18 (kafka.cluster.Partition) 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:17.689537305Z level=info msg="Executing 
migration" id="add unique index builtin_role_action_scope" 16:25:57 policy-db-migrator | 9 0180-jpatoscacapabilityassignment_properties.sql upgrade 0 0800 2102241623220800u 1 2024-02-21 16:23:22 16:25:57 kafka | [2024-02-21 16:23:55,450] INFO [Partition __consumer_offsets-18 broker=1] Log loaded for partition __consumer_offsets-18 with initial high watermark 0 (kafka.cluster.Partition) 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:17.690415714Z level=info msg="Migration successfully executed" id="add unique index builtin_role_action_scope" duration=877.809µs 16:25:57 policy-db-migrator | 10 0190-jpatoscacapabilitytype_metadata.sql upgrade 0 0800 2102241623220800u 1 2024-02-21 16:23:22 16:25:57 kafka | [2024-02-21 16:23:55,451] INFO [Broker id=1] Leader __consumer_offsets-18 with topic id Some(KYlK5kQpQoexS8xo1QgwvA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 16:25:57 policy-db-migrator | 11 0200-jpatoscacapabilitytype_properties.sql upgrade 0 0800 2102241623220800u 1 2024-02-21 16:23:22 16:25:57 kafka | [2024-02-21 16:23:55,462] INFO [LogLoader partition=__consumer_offsets-41, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:17.694672836Z level=info msg="Executing migration" id="add primary key to seed_assigment" 16:25:57 kafka | [2024-02-21 16:23:55,465] INFO Created log for partition __consumer_offsets-41 in /var/lib/kafka/data/__consumer_offsets-41 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:17.727881479Z level=info msg="Migration successfully executed" id="add primary key to seed_assigment" duration=33.203343ms 16:25:57 policy-db-migrator | 12 0210-jpatoscadatatype_constraints.sql upgrade 0 0800 2102241623220800u 1 2024-02-21 16:23:22 16:25:57 kafka | [2024-02-21 16:23:55,465] INFO [Partition __consumer_offsets-41 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-41 (kafka.cluster.Partition) 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:17.731695327Z level=info msg="Executing migration" id="managed folder permissions alert actions repeated fixed migration" 16:25:57 policy-db-migrator | 13 0220-jpatoscadatatype_metadata.sql upgrade 0 0800 2102241623220800u 1 2024-02-21 16:23:22 16:25:57 kafka | [2024-02-21 16:23:55,465] INFO [Partition __consumer_offsets-41 broker=1] Log loaded for partition __consumer_offsets-41 with initial high watermark 0 (kafka.cluster.Partition) 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:17.731851589Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions repeated fixed migration" duration=156.152µs 16:25:57 policy-db-migrator | 14 
0230-jpatoscadatatype_properties.sql upgrade 0 0800 2102241623220800u 1 2024-02-21 16:23:23 16:25:57 kafka | [2024-02-21 16:23:55,465] INFO [Broker id=1] Leader __consumer_offsets-41 with topic id Some(KYlK5kQpQoexS8xo1QgwvA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 16:25:57 kafka | [2024-02-21 16:23:55,477] INFO [LogLoader partition=__consumer_offsets-10, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 16:25:57 policy-db-migrator | 15 0240-jpatoscanodetemplate_metadata.sql upgrade 0 0800 2102241623220800u 1 2024-02-21 16:23:23 16:25:57 kafka | [2024-02-21 16:23:55,478] INFO Created log for partition __consumer_offsets-10 in /var/lib/kafka/data/__consumer_offsets-10 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 16:25:57 policy-db-migrator | 16 0250-jpatoscanodetemplate_properties.sql upgrade 0 0800 2102241623220800u 1 2024-02-21 16:23:23 16:25:57 kafka | [2024-02-21 16:23:55,478] INFO [Partition __consumer_offsets-10 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-10 (kafka.cluster.Partition) 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:17.737366334Z level=info msg="Executing migration" id="managed folder permissions library panel actions migration" 16:25:57 policy-db-migrator | 17 0260-jpatoscanodetype_metadata.sql upgrade 0 0800 2102241623220800u 1 2024-02-21 16:23:23 16:25:57 kafka | [2024-02-21 16:23:55,478] INFO [Partition __consumer_offsets-10 broker=1] Log loaded for partition __consumer_offsets-10 with initial high watermark 0 (kafka.cluster.Partition) 16:25:57 policy-db-migrator | 18 0270-jpatoscanodetype_properties.sql upgrade 0 0800 2102241623220800u 1 2024-02-21 16:23:23 16:25:57 kafka | [2024-02-21 
16:23:55,478] INFO [Broker id=1] Leader __consumer_offsets-10 with topic id Some(KYlK5kQpQoexS8xo1QgwvA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:17.737519466Z level=info msg="Migration successfully executed" id="managed folder permissions library panel actions migration" duration=153.582µs 16:25:57 policy-db-migrator | 19 0280-jpatoscapolicy_metadata.sql upgrade 0 0800 2102241623220800u 1 2024-02-21 16:23:23 16:25:57 kafka | [2024-02-21 16:23:55,492] INFO [LogLoader partition=__consumer_offsets-33, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:17.740265703Z level=info msg="Executing migration" id="migrate external alertmanagers to datsourcse" 16:25:57 policy-db-migrator | 20 0290-jpatoscapolicy_properties.sql upgrade 0 0800 2102241623220800u 1 2024-02-21 16:23:23 16:25:57 kafka | [2024-02-21 16:23:55,493] INFO Created log for partition __consumer_offsets-33 in /var/lib/kafka/data/__consumer_offsets-33 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:17.740603727Z level=info msg="Migration successfully executed" id="migrate external alertmanagers to datsourcse" duration=338.604µs 16:25:57 policy-db-migrator | 21 0300-jpatoscapolicy_targets.sql upgrade 0 0800 2102241623220800u 1 2024-02-21 16:23:23 16:25:57 kafka | [2024-02-21 16:23:55,493] INFO [Partition __consumer_offsets-33 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-33 (kafka.cluster.Partition) 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:17.744373574Z level=info msg="Executing migration" id="create 
folder table" 16:25:57 policy-db-migrator | 22 0310-jpatoscapolicytype_metadata.sql upgrade 0 0800 2102241623220800u 1 2024-02-21 16:23:23 16:25:57 kafka | [2024-02-21 16:23:55,493] INFO [Partition __consumer_offsets-33 broker=1] Log loaded for partition __consumer_offsets-33 with initial high watermark 0 (kafka.cluster.Partition) 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:17.74592041Z level=info msg="Migration successfully executed" id="create folder table" duration=1.547856ms 16:25:57 policy-db-migrator | 23 0320-jpatoscapolicytype_properties.sql upgrade 0 0800 2102241623220800u 1 2024-02-21 16:23:23 16:25:57 kafka | [2024-02-21 16:23:55,493] INFO [Broker id=1] Leader __consumer_offsets-33 with topic id Some(KYlK5kQpQoexS8xo1QgwvA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:17.750287603Z level=info msg="Executing migration" id="Add index for parent_uid" 16:25:57 policy-db-migrator | 24 0330-jpatoscapolicytype_targets.sql upgrade 0 0800 2102241623220800u 1 2024-02-21 16:23:23 16:25:57 kafka | [2024-02-21 16:23:55,506] INFO [LogLoader partition=__consumer_offsets-48, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:17.751518616Z level=info msg="Migration successfully executed" id="Add index for parent_uid" duration=1.230963ms 16:25:57 policy-db-migrator | 25 0340-jpatoscapolicytype_triggers.sql upgrade 0 0800 2102241623220800u 1 2024-02-21 16:23:23 16:25:57 kafka | [2024-02-21 16:23:55,508] INFO Created log for partition __consumer_offsets-48 in /var/lib/kafka/data/__consumer_offsets-48 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 16:25:57 
grafana | logger=migrator t=2024-02-21T16:23:17.755728748Z level=info msg="Executing migration" id="Add unique index for folder.uid and folder.org_id" 16:25:57 policy-db-migrator | 26 0350-jpatoscaproperty_constraints.sql upgrade 0 0800 2102241623220800u 1 2024-02-21 16:23:23 16:25:57 kafka | [2024-02-21 16:23:55,508] INFO [Partition __consumer_offsets-48 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-48 (kafka.cluster.Partition) 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:17.757002161Z level=info msg="Migration successfully executed" id="Add unique index for folder.uid and folder.org_id" duration=1.272873ms 16:25:57 policy-db-migrator | 27 0360-jpatoscaproperty_metadata.sql upgrade 0 0800 2102241623220800u 1 2024-02-21 16:23:23 16:25:57 kafka | [2024-02-21 16:23:55,509] INFO [Partition __consumer_offsets-48 broker=1] Log loaded for partition __consumer_offsets-48 with initial high watermark 0 (kafka.cluster.Partition) 16:25:57 grafana | logger=migrator t=2024-02-21T16:23:17.760071842Z level=info msg="Executing migration" id="Update folder title length" 16:25:57 policy-db-migrator | 28 0370-jpatoscarelationshiptype_metadata.sql upgrade 0 0800 2102241623220800u 1 2024-02-21 16:23:23 16:25:57 kafka | [2024-02-21 16:23:55,509] INFO [Broker id=1] Leader __consumer_offsets-48 with topic id Some(KYlK5kQpQoexS8xo1QgwvA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger)
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:17.760100452Z level=info msg="Migration successfully executed" id="Update folder title length" duration=29.35µs
16:25:57 policy-db-migrator | 29 0380-jpatoscarelationshiptype_properties.sql upgrade 0 0800 2102241623220800u 1 2024-02-21 16:23:23
16:25:57 kafka | [2024-02-21 16:23:55,519] INFO [LogLoader partition=__consumer_offsets-19, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:17.763213783Z level=info msg="Executing migration" id="Add unique index for folder.title and folder.parent_uid"
16:25:57 policy-db-migrator | 30 0390-jpatoscarequirement_metadata.sql upgrade 0 0800 2102241623220800u 1 2024-02-21 16:23:23
16:25:57 kafka | [2024-02-21 16:23:55,520] INFO Created log for partition __consumer_offsets-19 in /var/lib/kafka/data/__consumer_offsets-19 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:17.764329574Z level=info msg="Migration successfully executed" id="Add unique index for folder.title and folder.parent_uid" duration=1.115251ms
16:25:57 policy-db-migrator | 31 0400-jpatoscarequirement_occurrences.sql upgrade 0 0800 2102241623220800u 1 2024-02-21 16:23:23
16:25:57 kafka | [2024-02-21 16:23:55,520] INFO [Partition __consumer_offsets-19 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-19 (kafka.cluster.Partition)
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:17.768412686Z level=info msg="Executing migration" id="Remove unique index for folder.title and folder.parent_uid"
16:25:57 policy-db-migrator | 32 0410-jpatoscarequirement_properties.sql upgrade 0 0800 2102241623220800u 1 2024-02-21 16:23:24
16:25:57 kafka | [2024-02-21 16:23:55,520] INFO [Partition __consumer_offsets-19 broker=1] Log loaded for partition __consumer_offsets-19 with initial high watermark 0 (kafka.cluster.Partition)
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:17.769757919Z level=info msg="Migration successfully executed" id="Remove unique index for folder.title and folder.parent_uid" duration=1.344443ms
16:25:57 policy-db-migrator | 33 0420-jpatoscaservicetemplate_metadata.sql upgrade 0 0800 2102241623220800u 1 2024-02-21 16:23:24
16:25:57 kafka | [2024-02-21 16:23:55,520] INFO [Broker id=1] Leader __consumer_offsets-19 with topic id Some(KYlK5kQpQoexS8xo1QgwvA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
16:25:57 policy-db-migrator | 34 0430-jpatoscatopologytemplate_inputs.sql upgrade 0 0800 2102241623220800u 1 2024-02-21 16:23:24
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:17.773663838Z level=info msg="Executing migration" id="Add unique index for title, parent_uid, and org_id"
16:25:57 policy-db-migrator | 35 0440-pdpgroup_pdpsubgroup.sql upgrade 0 0800 2102241623220800u 1 2024-02-21 16:23:24
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:17.775641608Z level=info msg="Migration successfully executed" id="Add unique index for title, parent_uid, and org_id" duration=1.97852ms
16:25:57 kafka | [2024-02-21 16:23:55,528] INFO [LogLoader partition=__consumer_offsets-34, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
16:25:57 policy-db-migrator | 36 0450-pdpgroup.sql upgrade 0 0800 2102241623220800u 1 2024-02-21 16:23:24
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:17.779343065Z level=info msg="Executing migration" id="Sync dashboard and folder table"
16:25:57 kafka | [2024-02-21 16:23:55,528] INFO Created log for partition __consumer_offsets-34 in /var/lib/kafka/data/__consumer_offsets-34 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
16:25:57 policy-db-migrator | 37 0460-pdppolicystatus.sql upgrade 0 0800 2102241623220800u 1 2024-02-21 16:23:24
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:17.77980422Z level=info msg="Migration successfully executed" id="Sync dashboard and folder table" duration=461.525µs
16:25:57 kafka | [2024-02-21 16:23:55,528] INFO [Partition __consumer_offsets-34 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-34 (kafka.cluster.Partition)
16:25:57 policy-db-migrator | 38 0470-pdp.sql upgrade 0 0800 2102241623220800u 1 2024-02-21 16:23:24
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:17.784094412Z level=info msg="Executing migration" id="Remove ghost folders from the folder table"
16:25:57 kafka | [2024-02-21 16:23:55,528] INFO [Partition __consumer_offsets-34 broker=1] Log loaded for partition __consumer_offsets-34 with initial high watermark 0 (kafka.cluster.Partition)
16:25:57 policy-db-migrator | 39 0480-pdpstatistics.sql upgrade 0 0800 2102241623220800u 1 2024-02-21 16:23:24
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:17.784355025Z level=info msg="Migration successfully executed" id="Remove ghost folders from the folder table" duration=260.593µs
16:25:57 kafka | [2024-02-21 16:23:55,528] INFO [Broker id=1] Leader __consumer_offsets-34 with topic id Some(KYlK5kQpQoexS8xo1QgwvA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:17.787222164Z level=info msg="Executing migration" id="create anon_device table"
16:25:57 policy-db-migrator | 40 0490-pdpsubgroup_pdp.sql upgrade 0 0800 2102241623220800u 1 2024-02-21 16:23:24
16:25:57 kafka | [2024-02-21 16:23:55,537] INFO [LogLoader partition=__consumer_offsets-4, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:17.788100643Z level=info msg="Migration successfully executed" id="create anon_device table" duration=878.259µs
16:25:57 policy-db-migrator | 41 0500-pdpsubgroup.sql upgrade 0 0800 2102241623220800u 1 2024-02-21 16:23:24
16:25:57 kafka | [2024-02-21 16:23:55,538] INFO Created log for partition __consumer_offsets-4 in /var/lib/kafka/data/__consumer_offsets-4 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:17.791320306Z level=info msg="Executing migration" id="add unique index anon_device.device_id"
16:25:57 policy-db-migrator | 42 0510-toscacapabilityassignment.sql upgrade 0 0800 2102241623220800u 1 2024-02-21 16:23:24
16:25:57 kafka | [2024-02-21 16:23:55,538] INFO [Partition __consumer_offsets-4 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-4 (kafka.cluster.Partition)
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:17.793438337Z level=info msg="Migration successfully executed" id="add unique index anon_device.device_id" duration=2.117451ms
16:25:57 policy-db-migrator | 43 0520-toscacapabilityassignments.sql upgrade 0 0800 2102241623220800u 1 2024-02-21 16:23:24
16:25:57 kafka | [2024-02-21 16:23:55,538] INFO [Partition __consumer_offsets-4 broker=1] Log loaded for partition __consumer_offsets-4 with initial high watermark 0 (kafka.cluster.Partition)
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:17.797591468Z level=info msg="Executing migration" id="add index anon_device.updated_at"
16:25:57 policy-db-migrator | 44 0530-toscacapabilityassignments_toscacapabilityassignment.sql upgrade 0 0800 2102241623220800u 1 2024-02-21 16:23:24
16:25:57 kafka | [2024-02-21 16:23:55,538] INFO [Broker id=1] Leader __consumer_offsets-4 with topic id Some(KYlK5kQpQoexS8xo1QgwvA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:17.798669599Z level=info msg="Migration successfully executed" id="add index anon_device.updated_at" duration=1.077951ms
16:25:57 policy-db-migrator | 45 0540-toscacapabilitytype.sql upgrade 0 0800 2102241623220800u 1 2024-02-21 16:23:24
16:25:57 kafka | [2024-02-21 16:23:55,545] INFO [LogLoader partition=__consumer_offsets-11, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:17.801915612Z level=info msg="Executing migration" id="create signing_key table"
16:25:57 policy-db-migrator | 46 0550-toscacapabilitytypes.sql upgrade 0 0800 2102241623220800u 1 2024-02-21 16:23:24
16:25:57 kafka | [2024-02-21 16:23:55,545] INFO Created log for partition __consumer_offsets-11 in /var/lib/kafka/data/__consumer_offsets-11 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:17.80269Z level=info msg="Migration successfully executed" id="create signing_key table" duration=775.188µs
16:25:57 policy-db-migrator | 47 0560-toscacapabilitytypes_toscacapabilitytype.sql upgrade 0 0800 2102241623220800u 1 2024-02-21 16:23:24
16:25:57 kafka | [2024-02-21 16:23:55,545] INFO [Partition __consumer_offsets-11 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-11 (kafka.cluster.Partition)
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:17.806106394Z level=info msg="Executing migration" id="add unique index signing_key.key_id"
16:25:57 policy-db-migrator | 48 0570-toscadatatype.sql upgrade 0 0800 2102241623220800u 1 2024-02-21 16:23:24
16:25:57 kafka | [2024-02-21 16:23:55,546] INFO [Partition __consumer_offsets-11 broker=1] Log loaded for partition __consumer_offsets-11 with initial high watermark 0 (kafka.cluster.Partition)
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:17.807247385Z level=info msg="Migration successfully executed" id="add unique index signing_key.key_id" duration=1.140951ms
16:25:57 policy-db-migrator | 49 0580-toscadatatypes.sql upgrade 0 0800 2102241623220800u 1 2024-02-21 16:23:24
16:25:57 kafka | [2024-02-21 16:23:55,546] INFO [Broker id=1] Leader __consumer_offsets-11 with topic id Some(KYlK5kQpQoexS8xo1QgwvA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:17.811120095Z level=info msg="Executing migration" id="set legacy alert migration status in kvstore"
16:25:57 policy-db-migrator | 50 0590-toscadatatypes_toscadatatype.sql upgrade 0 0800 2102241623220800u 1 2024-02-21 16:23:24
16:25:57 kafka | [2024-02-21 16:23:55,552] INFO [LogLoader partition=__consumer_offsets-26, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:17.812232185Z level=info msg="Migration successfully executed" id="set legacy alert migration status in kvstore" duration=1.10952ms
16:25:57 policy-db-migrator | 51 0600-toscanodetemplate.sql upgrade 0 0800 2102241623220800u 1 2024-02-21 16:23:24
16:25:57 kafka | [2024-02-21 16:23:55,553] INFO Created log for partition __consumer_offsets-26 in /var/lib/kafka/data/__consumer_offsets-26 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:17.815311676Z level=info msg="Executing migration" id="migrate record of created folders during legacy migration to kvstore"
16:25:57 policy-db-migrator | 52 0610-toscanodetemplates.sql upgrade 0 0800 2102241623220800u 1 2024-02-21 16:23:24
16:25:57 kafka | [2024-02-21 16:23:55,553] INFO [Partition __consumer_offsets-26 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-26 (kafka.cluster.Partition)
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:17.815573679Z level=info msg="Migration successfully executed" id="migrate record of created folders during legacy migration to kvstore" duration=262.703µs
16:25:57 policy-db-migrator | 53 0620-toscanodetemplates_toscanodetemplate.sql upgrade 0 0800 2102241623220800u 1 2024-02-21 16:23:24
16:25:57 kafka | [2024-02-21 16:23:55,553] INFO [Partition __consumer_offsets-26 broker=1] Log loaded for partition __consumer_offsets-26 with initial high watermark 0 (kafka.cluster.Partition)
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:17.818968283Z level=info msg="Executing migration" id="Add folder_uid for dashboard"
16:25:57 policy-db-migrator | 54 0630-toscanodetype.sql upgrade 0 0800 2102241623220800u 1 2024-02-21 16:23:24
16:25:57 kafka | [2024-02-21 16:23:55,553] INFO [Broker id=1] Leader __consumer_offsets-26 with topic id Some(KYlK5kQpQoexS8xo1QgwvA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:17.828287187Z level=info msg="Migration successfully executed" id="Add folder_uid for dashboard" duration=9.318454ms
16:25:57 policy-db-migrator | 55 0640-toscanodetypes.sql upgrade 0 0800 2102241623220800u 1 2024-02-21 16:23:24
16:25:57 kafka | [2024-02-21 16:23:55,559] INFO [LogLoader partition=__consumer_offsets-49, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:17.832475358Z level=info msg="Executing migration" id="Populate dashboard folder_uid column"
16:25:57 policy-db-migrator | 56 0650-toscanodetypes_toscanodetype.sql upgrade 0 0800 2102241623220800u 1 2024-02-21 16:23:24
16:25:57 kafka | [2024-02-21 16:23:55,560] INFO Created log for partition __consumer_offsets-49 in /var/lib/kafka/data/__consumer_offsets-49 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:17.833017134Z level=info msg="Migration successfully executed" id="Populate dashboard folder_uid column" duration=542.316µs
16:25:57 policy-db-migrator | 57 0660-toscaparameter.sql upgrade 0 0800 2102241623220800u 1 2024-02-21 16:23:25
16:25:57 kafka | [2024-02-21 16:23:55,560] INFO [Partition __consumer_offsets-49 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-49 (kafka.cluster.Partition)
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:17.835813232Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_folder_uid_title"
16:25:57 policy-db-migrator | 58 0670-toscapolicies.sql upgrade 0 0800 2102241623220800u 1 2024-02-21 16:23:25
16:25:57 kafka | [2024-02-21 16:23:55,560] INFO [Partition __consumer_offsets-49 broker=1] Log loaded for partition __consumer_offsets-49 with initial high watermark 0 (kafka.cluster.Partition)
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:17.836673851Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_folder_uid_title" duration=860.189µs
16:25:57 policy-db-migrator | 59 0680-toscapolicies_toscapolicy.sql upgrade 0 0800 2102241623220800u 1 2024-02-21 16:23:25
16:25:57 kafka | [2024-02-21 16:23:55,560] INFO [Broker id=1] Leader __consumer_offsets-49 with topic id Some(KYlK5kQpQoexS8xo1QgwvA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:17.839505259Z level=info msg="Executing migration" id="create sso_setting table"
16:25:57 policy-db-migrator | 60 0690-toscapolicy.sql upgrade 0 0800 2102241623220800u 1 2024-02-21 16:23:25
16:25:57 kafka | [2024-02-21 16:23:55,566] INFO [LogLoader partition=__consumer_offsets-39, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:17.840515389Z level=info msg="Migration successfully executed" id="create sso_setting table" duration=1.01013ms
16:25:57 policy-db-migrator | 61 0700-toscapolicytype.sql upgrade 0 0800 2102241623220800u 1 2024-02-21 16:23:25
16:25:57 kafka | [2024-02-21 16:23:55,566] INFO Created log for partition __consumer_offsets-39 in /var/lib/kafka/data/__consumer_offsets-39 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:17.845732511Z level=info msg="Executing migration" id="copy kvstore migration status to each org"
16:25:57 policy-db-migrator | 62 0710-toscapolicytypes.sql upgrade 0 0800 2102241623220800u 1 2024-02-21 16:23:25
16:25:57 kafka | [2024-02-21 16:23:55,566] INFO [Partition __consumer_offsets-39 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-39 (kafka.cluster.Partition)
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:17.846500859Z level=info msg="Migration successfully executed" id="copy kvstore migration status to each org" duration=769.578µs
16:25:57 policy-db-migrator | 63 0720-toscapolicytypes_toscapolicytype.sql upgrade 0 0800 2102241623220800u 1 2024-02-21 16:23:25
16:25:57 kafka | [2024-02-21 16:23:55,567] INFO [Partition __consumer_offsets-39 broker=1] Log loaded for partition __consumer_offsets-39 with initial high watermark 0 (kafka.cluster.Partition)
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:17.84957271Z level=info msg="Executing migration" id="add back entry for orgid=0 migrated status"
16:25:57 policy-db-migrator | 64 0730-toscaproperty.sql upgrade 0 0800 2102241623220800u 1 2024-02-21 16:23:25
16:25:57 kafka | [2024-02-21 16:23:55,567] INFO [Broker id=1] Leader __consumer_offsets-39 with topic id Some(KYlK5kQpQoexS8xo1QgwvA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:17.849853913Z level=info msg="Migration successfully executed" id="add back entry for orgid=0 migrated status" duration=282.283µs
16:25:57 policy-db-migrator | 65 0740-toscarelationshiptype.sql upgrade 0 0800 2102241623220800u 1 2024-02-21 16:23:25
16:25:57 kafka | [2024-02-21 16:23:55,572] INFO [LogLoader partition=__consumer_offsets-9, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
16:25:57 grafana | logger=migrator t=2024-02-21T16:23:17.852151286Z level=info msg="migrations completed" performed=526 skipped=0 duration=3.753718593s
16:25:57 policy-db-migrator | 66 0750-toscarelationshiptypes.sql upgrade 0 0800 2102241623220800u 1 2024-02-21 16:23:25
16:25:57 kafka | [2024-02-21 16:23:55,573] INFO Created log for partition __consumer_offsets-9 in /var/lib/kafka/data/__consumer_offsets-9 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
16:25:57 kafka | [2024-02-21 16:23:55,573] INFO [Partition __consumer_offsets-9 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-9 (kafka.cluster.Partition)
16:25:57 grafana | logger=sqlstore t=2024-02-21T16:23:17.861632162Z level=info msg="Created default admin" user=admin
16:25:57 policy-db-migrator | 67 0760-toscarelationshiptypes_toscarelationshiptype.sql upgrade 0 0800 2102241623220800u 1 2024-02-21 16:23:25
16:25:57 kafka | [2024-02-21 16:23:55,573] INFO [Partition __consumer_offsets-9 broker=1] Log loaded for partition __consumer_offsets-9 with initial high watermark 0 (kafka.cluster.Partition)
16:25:57 grafana | logger=sqlstore t=2024-02-21T16:23:17.861907864Z level=info msg="Created default organization"
16:25:57 policy-db-migrator | 68 0770-toscarequirement.sql upgrade 0 0800 2102241623220800u 1 2024-02-21 16:23:25
16:25:57 kafka | [2024-02-21 16:23:55,573] INFO [Broker id=1] Leader __consumer_offsets-9 with topic id Some(KYlK5kQpQoexS8xo1QgwvA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
16:25:57 grafana | logger=secrets t=2024-02-21T16:23:17.866206337Z level=info msg="Envelope encryption state" enabled=true currentprovider=secretKey.v1
16:25:57 policy-db-migrator | 69 0780-toscarequirements.sql upgrade 0 0800 2102241623220800u 1 2024-02-21 16:23:25
16:25:57 kafka | [2024-02-21 16:23:55,579] INFO [LogLoader partition=__consumer_offsets-24, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
16:25:57 grafana | logger=plugin.store t=2024-02-21T16:23:17.885705283Z level=info msg="Loading plugins..."
16:25:57 policy-db-migrator | 70 0790-toscarequirements_toscarequirement.sql upgrade 0 0800 2102241623220800u 1 2024-02-21 16:23:25
16:25:57 kafka | [2024-02-21 16:23:55,580] INFO Created log for partition __consumer_offsets-24 in /var/lib/kafka/data/__consumer_offsets-24 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
16:25:57 grafana | logger=local.finder t=2024-02-21T16:23:17.937398672Z level=warn msg="Skipping finding plugins as directory does not exist" path=/usr/share/grafana/plugins-bundled
16:25:57 policy-db-migrator | 71 0800-toscaservicetemplate.sql upgrade 0 0800 2102241623220800u 1 2024-02-21 16:23:25
16:25:57 kafka | [2024-02-21 16:23:55,580] INFO [Partition __consumer_offsets-24 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-24 (kafka.cluster.Partition)
16:25:57 grafana | logger=plugin.store t=2024-02-21T16:23:17.937553883Z level=info msg="Plugins loaded" count=55 duration=51.85123ms
16:25:57 policy-db-migrator | 72 0810-toscatopologytemplate.sql upgrade 0 0800 2102241623220800u 1 2024-02-21 16:23:25
16:25:57 kafka | [2024-02-21 16:23:55,580] INFO [Partition __consumer_offsets-24 broker=1] Log loaded for partition __consumer_offsets-24 with initial high watermark 0 (kafka.cluster.Partition)
16:25:57 grafana | logger=query_data t=2024-02-21T16:23:17.940034289Z level=info msg="Query Service initialization"
16:25:57 policy-db-migrator | 73 0820-toscatrigger.sql upgrade 0 0800 2102241623220800u 1 2024-02-21 16:23:25
16:25:57 kafka | [2024-02-21 16:23:55,580] INFO [Broker id=1] Leader __consumer_offsets-24 with topic id Some(KYlK5kQpQoexS8xo1QgwvA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
16:25:57 grafana | logger=live.push_http t=2024-02-21T16:23:17.943746245Z level=info msg="Live Push Gateway initialization"
16:25:57 policy-db-migrator | 74 0830-FK_ToscaNodeTemplate_capabilitiesName.sql upgrade 0 0800 2102241623220800u 1 2024-02-21 16:23:25
16:25:57 kafka | [2024-02-21 16:23:55,588] INFO [LogLoader partition=__consumer_offsets-31, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
16:25:57 grafana | logger=ngalert.migration t=2024-02-21T16:23:17.953502914Z level=info msg=Starting
16:25:57 policy-db-migrator | 75 0840-FK_ToscaNodeTemplate_requirementsName.sql upgrade 0 0800 2102241623220800u 1 2024-02-21 16:23:25
16:25:57 kafka | [2024-02-21 16:23:55,589] INFO Created log for partition __consumer_offsets-31 in /var/lib/kafka/data/__consumer_offsets-31 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
16:25:57 grafana | logger=ngalert.migration orgID=1 t=2024-02-21T16:23:17.954319971Z level=info msg="Migrating alerts for organisation"
16:25:57 policy-db-migrator | 76 0850-FK_ToscaNodeType_requirementsName.sql upgrade 0 0800 2102241623220800u 1 2024-02-21 16:23:25
16:25:57 kafka | [2024-02-21 16:23:55,589] INFO [Partition __consumer_offsets-31 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-31 (kafka.cluster.Partition)
16:25:57 grafana | logger=ngalert.migration orgID=1 t=2024-02-21T16:23:17.955072709Z level=info msg="Alerts found to migrate" alerts=0
16:25:57 policy-db-migrator | 77 0860-FK_ToscaServiceTemplate_capabilityTypesName.sql upgrade 0 0800 2102241623220800u 1 2024-02-21 16:23:25
16:25:57 kafka | [2024-02-21 16:23:55,590] INFO [Partition __consumer_offsets-31 broker=1] Log loaded for partition __consumer_offsets-31 with initial high watermark 0 (kafka.cluster.Partition)
16:25:57 grafana | logger=ngalert.migration CurrentType=Legacy DesiredType=UnifiedAlerting CleanOnDowngrade=false CleanOnUpgrade=false t=2024-02-21T16:23:17.957276441Z level=info msg="Completed legacy migration"
16:25:57 policy-db-migrator | 78 0870-FK_ToscaServiceTemplate_dataTypesName.sql upgrade 0 0800 2102241623220800u 1 2024-02-21 16:23:25
16:25:57 kafka | [2024-02-21 16:23:55,590] INFO [Broker id=1] Leader __consumer_offsets-31 with topic id Some(KYlK5kQpQoexS8xo1QgwvA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
16:25:57 grafana | logger=infra.usagestats.collector t=2024-02-21T16:23:17.987883749Z level=info msg="registering usage stat providers" usageStatsProvidersLen=2
16:25:57 policy-db-migrator | 79 0880-FK_ToscaServiceTemplate_nodeTypesName.sql upgrade 0 0800 2102241623220800u 1 2024-02-21 16:23:25
16:25:57 kafka | [2024-02-21 16:23:55,603] INFO [LogLoader partition=__consumer_offsets-46, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
16:25:57 policy-db-migrator | 80 0890-FK_ToscaServiceTemplate_policyTypesName.sql upgrade 0 0800 2102241623220800u 1 2024-02-21 16:23:25
16:25:57 grafana | logger=provisioning.datasources t=2024-02-21T16:23:17.99003851Z level=info msg="inserting datasource from configuration" name=PolicyPrometheus uid=dkSf71fnz
16:25:57 kafka | [2024-02-21 16:23:55,605] INFO Created log for partition __consumer_offsets-46 in /var/lib/kafka/data/__consumer_offsets-46 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
16:25:57 policy-db-migrator | 81 0900-FK_ToscaServiceTemplate_relationshipTypesName.sql upgrade 0 0800 2102241623220800u 1 2024-02-21 16:23:26
16:25:57 grafana | logger=provisioning.alerting t=2024-02-21T16:23:18.037279571Z level=info msg="starting to provision alerting"
16:25:57 kafka | [2024-02-21 16:23:55,605] INFO [Partition __consumer_offsets-46 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-46 (kafka.cluster.Partition)
16:25:57 policy-db-migrator | 82 0910-FK_ToscaTopologyTemplate_nodeTemplatesName.sql upgrade 0 0800 2102241623220800u 1 2024-02-21 16:23:26
16:25:57 grafana | logger=provisioning.alerting t=2024-02-21T16:23:18.037304941Z level=info msg="finished to provision alerting"
16:25:57 kafka | [2024-02-21 16:23:55,605] INFO [Partition __consumer_offsets-46 broker=1] Log loaded for partition __consumer_offsets-46 with initial high watermark 0 (kafka.cluster.Partition)
16:25:57 policy-db-migrator | 83 0920-FK_ToscaTopologyTemplate_policyName.sql upgrade 0 0800 2102241623220800u 1 2024-02-21 16:23:26
16:25:57 grafana | logger=grafanaStorageLogger t=2024-02-21T16:23:18.037819354Z level=info msg="Storage starting"
16:25:57 kafka | [2024-02-21 16:23:55,605] INFO [Broker id=1] Leader __consumer_offsets-46 with topic id Some(KYlK5kQpQoexS8xo1QgwvA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
16:25:57 policy-db-migrator | 84 0940-PdpPolicyStatus_PdpGroup.sql upgrade 0 0800 2102241623220800u 1 2024-02-21 16:23:26
16:25:57 grafana | logger=ngalert.state.manager t=2024-02-21T16:23:18.038320117Z level=info msg="Warming state cache for startup"
16:25:57 kafka | [2024-02-21 16:23:55,613] INFO [LogLoader partition=__consumer_offsets-1, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
16:25:57 policy-db-migrator | 85 0950-TscaServiceTemplatetopologyTemplateParentLocalName.sql upgrade 0 0800 2102241623220800u 1 2024-02-21 16:23:26
16:25:57 grafana | logger=ngalert.multiorg.alertmanager t=2024-02-21T16:23:18.038907631Z level=info msg="Starting MultiOrg Alertmanager"
16:25:57 kafka | [2024-02-21 16:23:55,613] INFO Created log for partition __consumer_offsets-1 in /var/lib/kafka/data/__consumer_offsets-1 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
16:25:57 policy-db-migrator | 86 0960-FK_ToscaNodeTemplate_capabilitiesName.sql upgrade 0 0800 2102241623220800u 1 2024-02-21 16:23:26
16:25:57 grafana | logger=ngalert.state.manager t=2024-02-21T16:23:18.040383391Z level=info msg="State cache has been initialized" states=0 duration=2.062343ms
16:25:57 kafka | [2024-02-21 16:23:55,613] INFO [Partition __consumer_offsets-1 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-1 (kafka.cluster.Partition)
16:25:57 policy-db-migrator | 87 0970-FK_ToscaNodeTemplate_requirementsName.sql upgrade 0 0800 2102241623220800u 1 2024-02-21 16:23:26
16:25:57 grafana | logger=ngalert.scheduler t=2024-02-21T16:23:18.041527137Z level=info msg="Starting scheduler" tickInterval=10s
16:25:57 kafka | [2024-02-21 16:23:55,613] INFO [Partition __consumer_offsets-1 broker=1] Log loaded for partition __consumer_offsets-1 with initial high watermark 0 (kafka.cluster.Partition)
16:25:57 policy-db-migrator | 88 0980-FK_ToscaNodeType_requirementsName.sql upgrade 0 0800 2102241623220800u 1 2024-02-21 16:23:26
16:25:57 grafana | logger=ticker t=2024-02-21T16:23:18.041729959Z level=info msg=starting first_tick=2024-02-21T16:23:20Z
16:25:57 kafka | [2024-02-21 16:23:55,614] INFO [Broker id=1] Leader __consumer_offsets-1 with topic id Some(KYlK5kQpQoexS8xo1QgwvA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
16:25:57 policy-db-migrator | 89 0990-FK_ToscaServiceTemplate_capabilityTypesName.sql upgrade 0 0800 2102241623220800u 1 2024-02-21 16:23:26
16:25:57 grafana | logger=grafana-apiserver t=2024-02-21T16:23:18.044408225Z level=info msg="Authentication is disabled"
16:25:57 kafka | [2024-02-21 16:23:55,619] INFO [LogLoader partition=__consumer_offsets-16, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
16:25:57 policy-db-migrator | 90 1000-FK_ToscaServiceTemplate_dataTypesName.sql upgrade 0 0800 2102241623220800u 1 2024-02-21 16:23:26
16:25:57 grafana | logger=grafana-apiserver t=2024-02-21T16:23:18.048118089Z level=info msg="Adding GroupVersion playlist.grafana.app v0alpha1 to ResourceManager"
16:25:57 kafka | [2024-02-21 16:23:55,619] INFO Created log for partition __consumer_offsets-16 in /var/lib/kafka/data/__consumer_offsets-16 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
16:25:57 policy-db-migrator | 91 1010-FK_ToscaServiceTemplate_nodeTypesName.sql upgrade 0 0800 2102241623220800u 1 2024-02-21 16:23:26
16:25:57 grafana | logger=http.server t=2024-02-21T16:23:18.049772339Z level=info msg="HTTP Server Listen" address=[::]:3000 protocol=http subUrl= socket=
16:25:57 kafka | [2024-02-21 16:23:55,619] INFO [Partition __consumer_offsets-16 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-16 (kafka.cluster.Partition)
16:25:57 policy-db-migrator | 92 1020-FK_ToscaServiceTemplate_policyTypesName.sql upgrade 0 0800 2102241623220800u 1 2024-02-21 16:23:26
16:25:57 grafana | logger=plugins.update.checker t=2024-02-21T16:23:18.132197081Z level=info msg="Update check succeeded" duration=94.205186ms
16:25:57 kafka | [2024-02-21 16:23:55,619] INFO [Partition __consumer_offsets-16 broker=1] Log loaded for partition __consumer_offsets-16 with initial high watermark 0 (kafka.cluster.Partition)
16:25:57 policy-db-migrator | 93 1030-FK_ToscaServiceTemplate_relationshipTypesName.sql upgrade 0 0800 2102241623220800u 1 2024-02-21 16:23:26
16:25:57 grafana | logger=grafana.update.checker t=2024-02-21T16:23:18.179259602Z level=info msg="Update check succeeded" duration=141.73606ms
16:25:57 kafka | [2024-02-21 16:23:55,620] INFO [Broker id=1] Leader __consumer_offsets-16 with topic id Some(KYlK5kQpQoexS8xo1QgwvA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
16:25:57 policy-db-migrator | 94 1040-FK_ToscaTopologyTemplate_nodeTemplatesName.sql upgrade 0 0800 2102241623220800u 1 2024-02-21 16:23:26
16:25:57 grafana | logger=infra.usagestats t=2024-02-21T16:23:54.046766933Z level=info msg="Usage stats are ready to report"
16:25:57 kafka | [2024-02-21 16:23:55,627] INFO [LogLoader partition=__consumer_offsets-2, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
16:25:57 policy-db-migrator | 95 1050-FK_ToscaTopologyTemplate_policyName.sql upgrade 0 0800 2102241623220800u 1 2024-02-21 16:23:26
16:25:57 kafka | [2024-02-21 16:23:55,628] INFO Created log for partition __consumer_offsets-2 in /var/lib/kafka/data/__consumer_offsets-2 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
16:25:57 policy-db-migrator | 96 1060-TscaServiceTemplatetopologyTemplateParentLocalName.sql upgrade 0 0800 2102241623220800u 1 2024-02-21 16:23:26
16:25:57 kafka | [2024-02-21 16:23:55,628] INFO [Partition __consumer_offsets-2 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-2 (kafka.cluster.Partition)
16:25:57 policy-db-migrator | 97 0100-pdp.sql upgrade 0800 0900 2102241623220900u 1 2024-02-21 16:23:26
16:25:57 kafka | [2024-02-21 16:23:55,629] INFO [Partition __consumer_offsets-2 broker=1] Log loaded for partition __consumer_offsets-2 with initial high watermark 0 (kafka.cluster.Partition)
16:25:57 policy-db-migrator | 98 0110-idx_tsidx1.sql upgrade 0800 0900 2102241623220900u 1 2024-02-21 16:23:26
16:25:57 kafka | [2024-02-21 16:23:55,629] INFO [Broker id=1] Leader __consumer_offsets-2 with topic id Some(KYlK5kQpQoexS8xo1QgwvA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
16:25:57 policy-db-migrator | 99 0120-pk_pdpstatistics.sql upgrade 0800 0900 2102241623220900u 1 2024-02-21 16:23:27
16:25:57 kafka | [2024-02-21 16:23:55,638] INFO [LogLoader partition=__consumer_offsets-25, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
16:25:57 policy-db-migrator | 100 0130-pdpstatistics.sql upgrade 0800 0900 2102241623220900u 1 2024-02-21 16:23:27
16:25:57 kafka | [2024-02-21 16:23:55,639] INFO Created log for partition __consumer_offsets-25 in /var/lib/kafka/data/__consumer_offsets-25 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
16:25:57 policy-db-migrator | 101 0140-pk_pdpstatistics.sql upgrade 0800 0900 2102241623220900u 1 2024-02-21 16:23:27
16:25:57 kafka | [2024-02-21 16:23:55,639] INFO [Partition __consumer_offsets-25 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-25 (kafka.cluster.Partition)
16:25:57 policy-db-migrator | 102 0150-pdpstatistics.sql upgrade 0800 0900 2102241623220900u 1 2024-02-21 16:23:27
16:25:57 kafka | [2024-02-21 16:23:55,640] INFO [Partition __consumer_offsets-25 broker=1] Log loaded for partition __consumer_offsets-25 with initial high watermark 0 (kafka.cluster.Partition)
16:25:57 policy-db-migrator | 103 0160-jpapdpstatistics_enginestats.sql upgrade 0800 0900 2102241623220900u 1 2024-02-21 16:23:27
16:25:57 kafka | [2024-02-21 16:23:55,640] INFO [Broker id=1] Leader __consumer_offsets-25 with topic id Some(KYlK5kQpQoexS8xo1QgwvA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
16:25:57 policy-db-migrator | 104 0170-jpapdpstatistics_enginestats.sql upgrade 0800 0900 2102241623220900u 1 2024-02-21 16:23:27
16:25:57 kafka | [2024-02-21 16:23:55,649] INFO [LogLoader partition=__consumer_offsets-40, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
16:25:57 policy-db-migrator | 105 0180-jpapdpstatistics_enginestats.sql upgrade 0800 0900 2102241623220900u 1 2024-02-21 16:23:27
16:25:57 kafka | [2024-02-21 16:23:55,650] INFO Created log for partition __consumer_offsets-40 in /var/lib/kafka/data/__consumer_offsets-40 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
16:25:57 policy-db-migrator | 106 0190-jpapolicyaudit.sql upgrade 0800 0900 2102241623220900u 1 2024-02-21 16:23:27
16:25:57 kafka | [2024-02-21 16:23:55,650] INFO [Partition __consumer_offsets-40 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-40 (kafka.cluster.Partition)
16:25:57 policy-db-migrator | 107 0200-JpaPolicyAuditIndex_timestamp.sql upgrade 0800 0900 2102241623220900u 1 2024-02-21 16:23:27
16:25:57 kafka | [2024-02-21 16:23:55,650] INFO [Partition __consumer_offsets-40 broker=1] Log loaded for partition __consumer_offsets-40 with initial high watermark 0 (kafka.cluster.Partition)
16:25:57 policy-db-migrator | 108 0210-sequence.sql upgrade 0800 0900 2102241623220900u 1 2024-02-21 16:23:27
16:25:57 kafka | [2024-02-21 16:23:55,651] INFO [Broker id=1] Leader __consumer_offsets-40 with topic id Some(KYlK5kQpQoexS8xo1QgwvA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
16:25:57 policy-db-migrator | 109 0220-sequence.sql upgrade 0800 0900 2102241623220900u 1 2024-02-21 16:23:27
16:25:57 kafka | [2024-02-21 16:23:55,661] INFO [LogLoader partition=__consumer_offsets-47, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
16:25:57 policy-db-migrator | 110 0100-jpatoscapolicy_targets.sql upgrade 0900 1000 2102241623221000u 1 2024-02-21 16:23:27
16:25:57 kafka | [2024-02-21 16:23:55,661] INFO Created log for partition __consumer_offsets-47 in /var/lib/kafka/data/__consumer_offsets-47 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
16:25:57 policy-db-migrator | 111 0110-jpatoscapolicytype_targets.sql upgrade 0900 1000 2102241623221000u 1 2024-02-21 16:23:27
16:25:57 kafka | [2024-02-21 16:23:55,661] INFO [Partition __consumer_offsets-47 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-47 (kafka.cluster.Partition)
16:25:57 policy-db-migrator | 112 0120-toscatrigger.sql upgrade 0900 1000 2102241623221000u 1 2024-02-21 16:23:27
16:25:57 kafka | [2024-02-21 16:23:55,661] INFO [Partition __consumer_offsets-47 broker=1] Log loaded for partition __consumer_offsets-47 with initial high watermark 0 (kafka.cluster.Partition)
16:25:57 policy-db-migrator | 113 0130-jpatoscapolicytype_triggers.sql upgrade 0900 1000 2102241623221000u 1 2024-02-21 16:23:27
16:25:57 kafka | [2024-02-21 16:23:55,661] INFO [Broker id=1] Leader __consumer_offsets-47 with topic id Some(KYlK5kQpQoexS8xo1QgwvA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1.
(state.change.logger) 16:25:57 policy-db-migrator | 114 0140-toscaparameter.sql upgrade 0900 1000 2102241623221000u 1 2024-02-21 16:23:27 16:25:57 kafka | [2024-02-21 16:23:55,671] INFO [LogLoader partition=__consumer_offsets-17, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 16:25:57 policy-db-migrator | 115 0150-toscaproperty.sql upgrade 0900 1000 2102241623221000u 1 2024-02-21 16:23:27 16:25:57 kafka | [2024-02-21 16:23:55,672] INFO Created log for partition __consumer_offsets-17 in /var/lib/kafka/data/__consumer_offsets-17 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 16:25:57 kafka | [2024-02-21 16:23:55,673] INFO [Partition __consumer_offsets-17 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-17 (kafka.cluster.Partition) 16:25:57 policy-db-migrator | 116 0160-jpapolicyaudit_pk.sql upgrade 0900 1000 2102241623221000u 1 2024-02-21 16:23:27 16:25:57 kafka | [2024-02-21 16:23:55,673] INFO [Partition __consumer_offsets-17 broker=1] Log loaded for partition __consumer_offsets-17 with initial high watermark 0 (kafka.cluster.Partition) 16:25:57 policy-db-migrator | 117 0170-pdpstatistics_pk.sql upgrade 0900 1000 2102241623221000u 1 2024-02-21 16:23:27 16:25:57 kafka | [2024-02-21 16:23:55,673] INFO [Broker id=1] Leader __consumer_offsets-17 with topic id Some(KYlK5kQpQoexS8xo1QgwvA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 16:25:57 policy-db-migrator | 118 0180-jpatoscanodetemplate_metadata.sql upgrade 0900 1000 2102241623221000u 1 2024-02-21 16:23:28 16:25:57 kafka | [2024-02-21 16:23:55,681] INFO [LogLoader partition=__consumer_offsets-32, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 16:25:57 policy-db-migrator | 119 0100-upgrade.sql upgrade 1000 1100 2102241623221100u 1 2024-02-21 16:23:28 16:25:57 kafka | [2024-02-21 16:23:55,681] INFO Created log for partition __consumer_offsets-32 in /var/lib/kafka/data/__consumer_offsets-32 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 16:25:57 policy-db-migrator | 120 0100-jpapolicyaudit_renameuser.sql upgrade 1100 1200 2102241623221200u 1 2024-02-21 16:23:28 16:25:57 kafka | [2024-02-21 16:23:55,681] INFO [Partition __consumer_offsets-32 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-32 (kafka.cluster.Partition) 16:25:57 policy-db-migrator | 121 0110-idx_tsidx1.sql upgrade 1100 1200 2102241623221200u 1 2024-02-21 16:23:28 16:25:57 kafka | [2024-02-21 16:23:55,681] INFO [Partition __consumer_offsets-32 broker=1] Log loaded for partition __consumer_offsets-32 with initial high watermark 0 (kafka.cluster.Partition) 16:25:57 policy-db-migrator | 122 0120-audit_sequence.sql upgrade 1100 1200 2102241623221200u 1 2024-02-21 16:23:28 16:25:57 kafka | [2024-02-21 16:23:55,682] INFO [Broker id=1] Leader __consumer_offsets-32 with topic id Some(KYlK5kQpQoexS8xo1QgwvA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 16:25:57 policy-db-migrator | 123 0130-statistics_sequence.sql upgrade 1100 1200 2102241623221200u 1 2024-02-21 16:23:28 16:25:57 kafka | [2024-02-21 16:23:55,689] INFO [LogLoader partition=__consumer_offsets-37, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 16:25:57 policy-db-migrator | 124 0100-pdpstatistics.sql upgrade 1200 1300 2102241623221300u 1 2024-02-21 16:23:28 16:25:57 kafka | [2024-02-21 16:23:55,690] INFO Created log for partition __consumer_offsets-37 in /var/lib/kafka/data/__consumer_offsets-37 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 16:25:57 policy-db-migrator | 125 0110-jpapdpstatistics_enginestats.sql upgrade 1200 1300 2102241623221300u 1 2024-02-21 16:23:28 16:25:57 kafka | [2024-02-21 16:23:55,690] INFO [Partition __consumer_offsets-37 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-37 (kafka.cluster.Partition) 16:25:57 policy-db-migrator | 126 0120-statistics_sequence.sql upgrade 1200 1300 2102241623221300u 1 2024-02-21 16:23:28 16:25:57 kafka | [2024-02-21 16:23:55,690] INFO [Partition __consumer_offsets-37 broker=1] Log loaded for partition __consumer_offsets-37 with initial high watermark 0 (kafka.cluster.Partition) 16:25:57 policy-db-migrator | policyadmin: OK @ 1300 16:25:57 kafka | [2024-02-21 16:23:55,690] INFO [Broker id=1] Leader __consumer_offsets-37 with topic id Some(KYlK5kQpQoexS8xo1QgwvA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 16:25:57 kafka | [2024-02-21 16:23:55,699] INFO [LogLoader partition=__consumer_offsets-7, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 16:25:57 kafka | [2024-02-21 16:23:55,700] INFO Created log for partition __consumer_offsets-7 in /var/lib/kafka/data/__consumer_offsets-7 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 16:25:57 kafka | [2024-02-21 16:23:55,700] INFO [Partition __consumer_offsets-7 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-7 (kafka.cluster.Partition) 16:25:57 kafka | [2024-02-21 16:23:55,700] INFO [Partition __consumer_offsets-7 broker=1] Log loaded for partition __consumer_offsets-7 with initial high watermark 0 (kafka.cluster.Partition) 16:25:57 kafka | [2024-02-21 16:23:55,700] INFO [Broker id=1] Leader __consumer_offsets-7 with topic id Some(KYlK5kQpQoexS8xo1QgwvA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 16:25:57 kafka | [2024-02-21 16:23:55,706] INFO [LogLoader partition=__consumer_offsets-22, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 16:25:57 kafka | [2024-02-21 16:23:55,707] INFO Created log for partition __consumer_offsets-22 in /var/lib/kafka/data/__consumer_offsets-22 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 16:25:57 kafka | [2024-02-21 16:23:55,707] INFO [Partition __consumer_offsets-22 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-22 (kafka.cluster.Partition) 16:25:57 kafka | [2024-02-21 16:23:55,707] INFO [Partition __consumer_offsets-22 broker=1] Log loaded for partition __consumer_offsets-22 with initial high watermark 0 (kafka.cluster.Partition) 16:25:57 kafka | [2024-02-21 16:23:55,707] INFO [Broker id=1] Leader __consumer_offsets-22 with topic id Some(KYlK5kQpQoexS8xo1QgwvA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 16:25:57 kafka | [2024-02-21 16:23:55,712] INFO [LogLoader partition=__consumer_offsets-29, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 16:25:57 kafka | [2024-02-21 16:23:55,712] INFO Created log for partition __consumer_offsets-29 in /var/lib/kafka/data/__consumer_offsets-29 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 16:25:57 kafka | [2024-02-21 16:23:55,713] INFO [Partition __consumer_offsets-29 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-29 (kafka.cluster.Partition) 16:25:57 kafka | [2024-02-21 16:23:55,713] INFO [Partition __consumer_offsets-29 broker=1] Log loaded for partition __consumer_offsets-29 with initial high watermark 0 (kafka.cluster.Partition) 16:25:57 kafka | [2024-02-21 16:23:55,713] INFO [Broker id=1] Leader __consumer_offsets-29 with topic id Some(KYlK5kQpQoexS8xo1QgwvA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 16:25:57 kafka | [2024-02-21 16:23:55,718] INFO [LogLoader partition=__consumer_offsets-44, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 16:25:57 kafka | [2024-02-21 16:23:55,719] INFO Created log for partition __consumer_offsets-44 in /var/lib/kafka/data/__consumer_offsets-44 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 16:25:57 kafka | [2024-02-21 16:23:55,719] INFO [Partition __consumer_offsets-44 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-44 (kafka.cluster.Partition) 16:25:57 kafka | [2024-02-21 16:23:55,719] INFO [Partition __consumer_offsets-44 broker=1] Log loaded for partition __consumer_offsets-44 with initial high watermark 0 (kafka.cluster.Partition) 16:25:57 kafka | [2024-02-21 16:23:55,719] INFO [Broker id=1] Leader __consumer_offsets-44 with topic id Some(KYlK5kQpQoexS8xo1QgwvA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 16:25:57 kafka | [2024-02-21 16:23:55,726] INFO [LogLoader partition=__consumer_offsets-14, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 16:25:57 kafka | [2024-02-21 16:23:55,726] INFO Created log for partition __consumer_offsets-14 in /var/lib/kafka/data/__consumer_offsets-14 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 16:25:57 kafka | [2024-02-21 16:23:55,726] INFO [Partition __consumer_offsets-14 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-14 (kafka.cluster.Partition) 16:25:57 kafka | [2024-02-21 16:23:55,726] INFO [Partition __consumer_offsets-14 broker=1] Log loaded for partition __consumer_offsets-14 with initial high watermark 0 (kafka.cluster.Partition) 16:25:57 kafka | [2024-02-21 16:23:55,727] INFO [Broker id=1] Leader __consumer_offsets-14 with topic id Some(KYlK5kQpQoexS8xo1QgwvA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 16:25:57 kafka | [2024-02-21 16:23:55,735] INFO [LogLoader partition=__consumer_offsets-23, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 16:25:57 kafka | [2024-02-21 16:23:55,736] INFO Created log for partition __consumer_offsets-23 in /var/lib/kafka/data/__consumer_offsets-23 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 16:25:57 kafka | [2024-02-21 16:23:55,736] INFO [Partition __consumer_offsets-23 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-23 (kafka.cluster.Partition) 16:25:57 kafka | [2024-02-21 16:23:55,736] INFO [Partition __consumer_offsets-23 broker=1] Log loaded for partition __consumer_offsets-23 with initial high watermark 0 (kafka.cluster.Partition) 16:25:57 kafka | [2024-02-21 16:23:55,736] INFO [Broker id=1] Leader __consumer_offsets-23 with topic id Some(KYlK5kQpQoexS8xo1QgwvA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 16:25:57 kafka | [2024-02-21 16:23:55,744] INFO [LogLoader partition=__consumer_offsets-38, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 16:25:57 kafka | [2024-02-21 16:23:55,744] INFO Created log for partition __consumer_offsets-38 in /var/lib/kafka/data/__consumer_offsets-38 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 16:25:57 kafka | [2024-02-21 16:23:55,745] INFO [Partition __consumer_offsets-38 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-38 (kafka.cluster.Partition) 16:25:57 kafka | [2024-02-21 16:23:55,745] INFO [Partition __consumer_offsets-38 broker=1] Log loaded for partition __consumer_offsets-38 with initial high watermark 0 (kafka.cluster.Partition) 16:25:57 kafka | [2024-02-21 16:23:55,745] INFO [Broker id=1] Leader __consumer_offsets-38 with topic id Some(KYlK5kQpQoexS8xo1QgwvA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 16:25:57 kafka | [2024-02-21 16:23:55,752] INFO [LogLoader partition=__consumer_offsets-8, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 16:25:57 kafka | [2024-02-21 16:23:55,753] INFO Created log for partition __consumer_offsets-8 in /var/lib/kafka/data/__consumer_offsets-8 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 16:25:57 kafka | [2024-02-21 16:23:55,753] INFO [Partition __consumer_offsets-8 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-8 (kafka.cluster.Partition) 16:25:57 kafka | [2024-02-21 16:23:55,753] INFO [Partition __consumer_offsets-8 broker=1] Log loaded for partition __consumer_offsets-8 with initial high watermark 0 (kafka.cluster.Partition) 16:25:57 kafka | [2024-02-21 16:23:55,753] INFO [Broker id=1] Leader __consumer_offsets-8 with topic id Some(KYlK5kQpQoexS8xo1QgwvA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 16:25:57 kafka | [2024-02-21 16:23:55,760] INFO [LogLoader partition=__consumer_offsets-45, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 16:25:57 kafka | [2024-02-21 16:23:55,761] INFO Created log for partition __consumer_offsets-45 in /var/lib/kafka/data/__consumer_offsets-45 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 16:25:57 kafka | [2024-02-21 16:23:55,761] INFO [Partition __consumer_offsets-45 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-45 (kafka.cluster.Partition) 16:25:57 kafka | [2024-02-21 16:23:55,761] INFO [Partition __consumer_offsets-45 broker=1] Log loaded for partition __consumer_offsets-45 with initial high watermark 0 (kafka.cluster.Partition) 16:25:57 kafka | [2024-02-21 16:23:55,761] INFO [Broker id=1] Leader __consumer_offsets-45 with topic id Some(KYlK5kQpQoexS8xo1QgwvA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 16:25:57 kafka | [2024-02-21 16:23:55,766] INFO [LogLoader partition=__consumer_offsets-15, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 16:25:57 kafka | [2024-02-21 16:23:55,766] INFO Created log for partition __consumer_offsets-15 in /var/lib/kafka/data/__consumer_offsets-15 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 16:25:57 kafka | [2024-02-21 16:23:55,766] INFO [Partition __consumer_offsets-15 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-15 (kafka.cluster.Partition) 16:25:57 kafka | [2024-02-21 16:23:55,766] INFO [Partition __consumer_offsets-15 broker=1] Log loaded for partition __consumer_offsets-15 with initial high watermark 0 (kafka.cluster.Partition) 16:25:57 kafka | [2024-02-21 16:23:55,767] INFO [Broker id=1] Leader __consumer_offsets-15 with topic id Some(KYlK5kQpQoexS8xo1QgwvA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 16:25:57 kafka | [2024-02-21 16:23:55,773] INFO [LogLoader partition=__consumer_offsets-30, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 16:25:57 kafka | [2024-02-21 16:23:55,774] INFO Created log for partition __consumer_offsets-30 in /var/lib/kafka/data/__consumer_offsets-30 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 16:25:57 kafka | [2024-02-21 16:23:55,774] INFO [Partition __consumer_offsets-30 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-30 (kafka.cluster.Partition) 16:25:57 kafka | [2024-02-21 16:23:55,774] INFO [Partition __consumer_offsets-30 broker=1] Log loaded for partition __consumer_offsets-30 with initial high watermark 0 (kafka.cluster.Partition) 16:25:57 kafka | [2024-02-21 16:23:55,774] INFO [Broker id=1] Leader __consumer_offsets-30 with topic id Some(KYlK5kQpQoexS8xo1QgwvA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 16:25:57 kafka | [2024-02-21 16:23:55,782] INFO [LogLoader partition=__consumer_offsets-0, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 16:25:57 kafka | [2024-02-21 16:23:55,782] INFO Created log for partition __consumer_offsets-0 in /var/lib/kafka/data/__consumer_offsets-0 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 16:25:57 kafka | [2024-02-21 16:23:55,782] INFO [Partition __consumer_offsets-0 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-0 (kafka.cluster.Partition) 16:25:57 kafka | [2024-02-21 16:23:55,782] INFO [Partition __consumer_offsets-0 broker=1] Log loaded for partition __consumer_offsets-0 with initial high watermark 0 (kafka.cluster.Partition) 16:25:57 kafka | [2024-02-21 16:23:55,782] INFO [Broker id=1] Leader __consumer_offsets-0 with topic id Some(KYlK5kQpQoexS8xo1QgwvA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 16:25:57 kafka | [2024-02-21 16:23:55,790] INFO [LogLoader partition=__consumer_offsets-35, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 16:25:57 kafka | [2024-02-21 16:23:55,791] INFO Created log for partition __consumer_offsets-35 in /var/lib/kafka/data/__consumer_offsets-35 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 16:25:57 kafka | [2024-02-21 16:23:55,791] INFO [Partition __consumer_offsets-35 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-35 (kafka.cluster.Partition) 16:25:57 kafka | [2024-02-21 16:23:55,791] INFO [Partition __consumer_offsets-35 broker=1] Log loaded for partition __consumer_offsets-35 with initial high watermark 0 (kafka.cluster.Partition) 16:25:57 kafka | [2024-02-21 16:23:55,791] INFO [Broker id=1] Leader __consumer_offsets-35 with topic id Some(KYlK5kQpQoexS8xo1QgwvA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 16:25:57 kafka | [2024-02-21 16:23:55,796] INFO [LogLoader partition=__consumer_offsets-5, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 16:25:57 kafka | [2024-02-21 16:23:55,797] INFO Created log for partition __consumer_offsets-5 in /var/lib/kafka/data/__consumer_offsets-5 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 16:25:57 kafka | [2024-02-21 16:23:55,797] INFO [Partition __consumer_offsets-5 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-5 (kafka.cluster.Partition) 16:25:57 kafka | [2024-02-21 16:23:55,797] INFO [Partition __consumer_offsets-5 broker=1] Log loaded for partition __consumer_offsets-5 with initial high watermark 0 (kafka.cluster.Partition) 16:25:57 kafka | [2024-02-21 16:23:55,797] INFO [Broker id=1] Leader __consumer_offsets-5 with topic id Some(KYlK5kQpQoexS8xo1QgwvA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 16:25:57 kafka | [2024-02-21 16:23:55,807] INFO [LogLoader partition=__consumer_offsets-20, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 16:25:57 kafka | [2024-02-21 16:23:55,807] INFO Created log for partition __consumer_offsets-20 in /var/lib/kafka/data/__consumer_offsets-20 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 16:25:57 kafka | [2024-02-21 16:23:55,808] INFO [Partition __consumer_offsets-20 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-20 (kafka.cluster.Partition) 16:25:57 kafka | [2024-02-21 16:23:55,808] INFO [Partition __consumer_offsets-20 broker=1] Log loaded for partition __consumer_offsets-20 with initial high watermark 0 (kafka.cluster.Partition) 16:25:57 kafka | [2024-02-21 16:23:55,808] INFO [Broker id=1] Leader __consumer_offsets-20 with topic id Some(KYlK5kQpQoexS8xo1QgwvA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 16:25:57 kafka | [2024-02-21 16:23:55,815] INFO [LogLoader partition=__consumer_offsets-27, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 16:25:57 kafka | [2024-02-21 16:23:55,815] INFO Created log for partition __consumer_offsets-27 in /var/lib/kafka/data/__consumer_offsets-27 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 16:25:57 kafka | [2024-02-21 16:23:55,816] INFO [Partition __consumer_offsets-27 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-27 (kafka.cluster.Partition) 16:25:57 kafka | [2024-02-21 16:23:55,816] INFO [Partition __consumer_offsets-27 broker=1] Log loaded for partition __consumer_offsets-27 with initial high watermark 0 (kafka.cluster.Partition) 16:25:57 kafka | [2024-02-21 16:23:55,816] INFO [Broker id=1] Leader __consumer_offsets-27 with topic id Some(KYlK5kQpQoexS8xo1QgwvA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 16:25:57 kafka | [2024-02-21 16:23:55,826] INFO [LogLoader partition=__consumer_offsets-42, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 16:25:57 kafka | [2024-02-21 16:23:55,827] INFO Created log for partition __consumer_offsets-42 in /var/lib/kafka/data/__consumer_offsets-42 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 16:25:57 kafka | [2024-02-21 16:23:55,827] INFO [Partition __consumer_offsets-42 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-42 (kafka.cluster.Partition) 16:25:57 kafka | [2024-02-21 16:23:55,827] INFO [Partition __consumer_offsets-42 broker=1] Log loaded for partition __consumer_offsets-42 with initial high watermark 0 (kafka.cluster.Partition) 16:25:57 kafka | [2024-02-21 16:23:55,827] INFO [Broker id=1] Leader __consumer_offsets-42 with topic id Some(KYlK5kQpQoexS8xo1QgwvA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 16:25:57 kafka | [2024-02-21 16:23:55,833] INFO [LogLoader partition=__consumer_offsets-12, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 16:25:57 kafka | [2024-02-21 16:23:55,834] INFO Created log for partition __consumer_offsets-12 in /var/lib/kafka/data/__consumer_offsets-12 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 16:25:57 kafka | [2024-02-21 16:23:55,834] INFO [Partition __consumer_offsets-12 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-12 (kafka.cluster.Partition) 16:25:57 kafka | [2024-02-21 16:23:55,834] INFO [Partition __consumer_offsets-12 broker=1] Log loaded for partition __consumer_offsets-12 with initial high watermark 0 (kafka.cluster.Partition) 16:25:57 kafka | [2024-02-21 16:23:55,835] INFO [Broker id=1] Leader __consumer_offsets-12 with topic id Some(KYlK5kQpQoexS8xo1QgwvA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger)
16:25:57 kafka | [2024-02-21 16:23:55,843] INFO [LogLoader partition=__consumer_offsets-21, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
16:25:57 kafka | [2024-02-21 16:23:55,845] INFO Created log for partition __consumer_offsets-21 in /var/lib/kafka/data/__consumer_offsets-21 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
16:25:57 kafka | [2024-02-21 16:23:55,845] INFO [Partition __consumer_offsets-21 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-21 (kafka.cluster.Partition)
16:25:57 kafka | [2024-02-21 16:23:55,845] INFO [Partition __consumer_offsets-21 broker=1] Log loaded for partition __consumer_offsets-21 with initial high watermark 0 (kafka.cluster.Partition)
16:25:57 kafka | [2024-02-21 16:23:55,845] INFO [Broker id=1] Leader __consumer_offsets-21 with topic id Some(KYlK5kQpQoexS8xo1QgwvA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
16:25:57 kafka | [2024-02-21 16:23:55,852] INFO [LogLoader partition=__consumer_offsets-36, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
16:25:57 kafka | [2024-02-21 16:23:55,853] INFO Created log for partition __consumer_offsets-36 in /var/lib/kafka/data/__consumer_offsets-36 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
16:25:57 kafka | [2024-02-21 16:23:55,853] INFO [Partition __consumer_offsets-36 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-36 (kafka.cluster.Partition)
16:25:57 kafka | [2024-02-21 16:23:55,853] INFO [Partition __consumer_offsets-36 broker=1] Log loaded for partition __consumer_offsets-36 with initial high watermark 0 (kafka.cluster.Partition)
16:25:57 kafka | [2024-02-21 16:23:55,853] INFO [Broker id=1] Leader __consumer_offsets-36 with topic id Some(KYlK5kQpQoexS8xo1QgwvA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
16:25:57 kafka | [2024-02-21 16:23:55,860] INFO [LogLoader partition=__consumer_offsets-6, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
16:25:57 kafka | [2024-02-21 16:23:55,860] INFO Created log for partition __consumer_offsets-6 in /var/lib/kafka/data/__consumer_offsets-6 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
16:25:57 kafka | [2024-02-21 16:23:55,861] INFO [Partition __consumer_offsets-6 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-6 (kafka.cluster.Partition)
16:25:57 kafka | [2024-02-21 16:23:55,861] INFO [Partition __consumer_offsets-6 broker=1] Log loaded for partition __consumer_offsets-6 with initial high watermark 0 (kafka.cluster.Partition)
16:25:57 kafka | [2024-02-21 16:23:55,861] INFO [Broker id=1] Leader __consumer_offsets-6 with topic id Some(KYlK5kQpQoexS8xo1QgwvA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
16:25:57 kafka | [2024-02-21 16:23:55,868] INFO [LogLoader partition=__consumer_offsets-43, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
16:25:57 kafka | [2024-02-21 16:23:55,869] INFO Created log for partition __consumer_offsets-43 in /var/lib/kafka/data/__consumer_offsets-43 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
16:25:57 kafka | [2024-02-21 16:23:55,869] INFO [Partition __consumer_offsets-43 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-43 (kafka.cluster.Partition)
16:25:57 kafka | [2024-02-21 16:23:55,869] INFO [Partition __consumer_offsets-43 broker=1] Log loaded for partition __consumer_offsets-43 with initial high watermark 0 (kafka.cluster.Partition)
16:25:57 kafka | [2024-02-21 16:23:55,869] INFO [Broker id=1] Leader __consumer_offsets-43 with topic id Some(KYlK5kQpQoexS8xo1QgwvA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
16:25:57 kafka | [2024-02-21 16:23:55,875] INFO [LogLoader partition=__consumer_offsets-13, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
16:25:57 kafka | [2024-02-21 16:23:55,876] INFO Created log for partition __consumer_offsets-13 in /var/lib/kafka/data/__consumer_offsets-13 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
16:25:57 kafka | [2024-02-21 16:23:55,876] INFO [Partition __consumer_offsets-13 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-13 (kafka.cluster.Partition)
16:25:57 kafka | [2024-02-21 16:23:55,876] INFO [Partition __consumer_offsets-13 broker=1] Log loaded for partition __consumer_offsets-13 with initial high watermark 0 (kafka.cluster.Partition)
16:25:57 kafka | [2024-02-21 16:23:55,876] INFO [Broker id=1] Leader __consumer_offsets-13 with topic id Some(KYlK5kQpQoexS8xo1QgwvA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
16:25:57 kafka | [2024-02-21 16:23:55,882] INFO [LogLoader partition=__consumer_offsets-28, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
16:25:57 kafka | [2024-02-21 16:23:55,882] INFO Created log for partition __consumer_offsets-28 in /var/lib/kafka/data/__consumer_offsets-28 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
16:25:57 kafka | [2024-02-21 16:23:55,882] INFO [Partition __consumer_offsets-28 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-28 (kafka.cluster.Partition)
16:25:57 kafka | [2024-02-21 16:23:55,883] INFO [Partition __consumer_offsets-28 broker=1] Log loaded for partition __consumer_offsets-28 with initial high watermark 0 (kafka.cluster.Partition)
16:25:57 kafka | [2024-02-21 16:23:55,883] INFO [Broker id=1] Leader __consumer_offsets-28 with topic id Some(KYlK5kQpQoexS8xo1QgwvA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
16:25:57 kafka | [2024-02-21 16:23:55,886] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-3 (state.change.logger)
16:25:57 kafka | [2024-02-21 16:23:55,887] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-18 (state.change.logger)
16:25:57 kafka | [2024-02-21 16:23:55,887] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-41 (state.change.logger)
16:25:57 kafka | [2024-02-21 16:23:55,887] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-10 (state.change.logger)
16:25:57 kafka | [2024-02-21 16:23:55,887] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-33 (state.change.logger)
16:25:57 kafka | [2024-02-21 16:23:55,887] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-48 (state.change.logger)
16:25:57 kafka | [2024-02-21 16:23:55,887] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-19 (state.change.logger)
16:25:57 kafka | [2024-02-21 16:23:55,887] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-34 (state.change.logger)
16:25:57 kafka | [2024-02-21 16:23:55,887] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-4 (state.change.logger)
16:25:57 kafka | [2024-02-21 16:23:55,887] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-11 (state.change.logger)
16:25:57 kafka | [2024-02-21 16:23:55,887] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-26 (state.change.logger)
16:25:57 kafka | [2024-02-21 16:23:55,887] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-49 (state.change.logger)
16:25:57 kafka | [2024-02-21 16:23:55,887] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-39 (state.change.logger)
16:25:57 kafka | [2024-02-21 16:23:55,887] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-9 (state.change.logger)
16:25:57 kafka | [2024-02-21 16:23:55,887] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-24 (state.change.logger)
16:25:57 kafka | [2024-02-21 16:23:55,887] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-31 (state.change.logger)
16:25:57 kafka | [2024-02-21 16:23:55,887] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-46 (state.change.logger)
16:25:57 kafka | [2024-02-21 16:23:55,887] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-1 (state.change.logger)
16:25:57 kafka | [2024-02-21 16:23:55,887] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-16 (state.change.logger)
16:25:57 kafka | [2024-02-21 16:23:55,887] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-2 (state.change.logger)
16:25:57 kafka | [2024-02-21 16:23:55,887] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-25 (state.change.logger)
16:25:57 kafka | [2024-02-21 16:23:55,887] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-40 (state.change.logger)
16:25:57 kafka | [2024-02-21 16:23:55,887] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-47 (state.change.logger)
16:25:57 kafka | [2024-02-21 16:23:55,887] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-17 (state.change.logger)
16:25:57 kafka | [2024-02-21 16:23:55,887] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-32 (state.change.logger)
16:25:57 kafka | [2024-02-21 16:23:55,887] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-37 (state.change.logger)
16:25:57 kafka | [2024-02-21 16:23:55,887] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-7 (state.change.logger)
16:25:57 kafka | [2024-02-21 16:23:55,887] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-22 (state.change.logger)
16:25:57 kafka | [2024-02-21 16:23:55,887] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-29 (state.change.logger)
16:25:57 kafka | [2024-02-21 16:23:55,887] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-44 (state.change.logger)
16:25:57 kafka | [2024-02-21 16:23:55,887] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-14 (state.change.logger)
16:25:57 kafka | [2024-02-21 16:23:55,887] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-23 (state.change.logger)
16:25:57 kafka | [2024-02-21 16:23:55,887] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-38 (state.change.logger)
16:25:57 kafka | [2024-02-21 16:23:55,887] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-8 (state.change.logger)
16:25:57 kafka | [2024-02-21 16:23:55,887] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-45 (state.change.logger)
16:25:57 kafka | [2024-02-21 16:23:55,888] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-15 (state.change.logger)
16:25:57 kafka | [2024-02-21 16:23:55,888] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-30 (state.change.logger)
16:25:57 kafka | [2024-02-21 16:23:55,888] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-0 (state.change.logger)
16:25:57 kafka | [2024-02-21 16:23:55,888] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-35 (state.change.logger)
16:25:57 kafka | [2024-02-21 16:23:55,888] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-5 (state.change.logger)
16:25:57 kafka | [2024-02-21 16:23:55,888] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-20 (state.change.logger)
16:25:57 kafka | [2024-02-21 16:23:55,888] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-27 (state.change.logger)
16:25:57 kafka | [2024-02-21 16:23:55,888] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-42 (state.change.logger)
16:25:57 kafka | [2024-02-21 16:23:55,888] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-12 (state.change.logger)
16:25:57 kafka | [2024-02-21 16:23:55,888] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-21 (state.change.logger)
16:25:57 kafka | [2024-02-21 16:23:55,888] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-36 (state.change.logger)
16:25:57 kafka | [2024-02-21 16:23:55,888] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-6 (state.change.logger)
16:25:57 kafka | [2024-02-21 16:23:55,888] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-43 (state.change.logger)
16:25:57 kafka | [2024-02-21 16:23:55,888] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-13 (state.change.logger)
16:25:57 kafka | [2024-02-21 16:23:55,888] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-28 (state.change.logger)
16:25:57 kafka | [2024-02-21 16:23:55,889] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 3 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
16:25:57 kafka | [2024-02-21 16:23:55,891] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-3 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
16:25:57 kafka | [2024-02-21 16:23:55,892] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 18 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
16:25:57 kafka | [2024-02-21 16:23:55,892] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-18 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
16:25:57 kafka | [2024-02-21 16:23:55,892] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 41 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
16:25:57 kafka | [2024-02-21 16:23:55,892] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-41 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
16:25:57 kafka | [2024-02-21 16:23:55,892] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 10 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
16:25:57 kafka | [2024-02-21 16:23:55,892] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-10 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
16:25:57 kafka | [2024-02-21 16:23:55,892] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 33 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
16:25:57 kafka | [2024-02-21 16:23:55,892] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-33 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
16:25:57 kafka | [2024-02-21 16:23:55,892] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 48 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
16:25:57 kafka | [2024-02-21 16:23:55,892] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-48 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
16:25:57 kafka | [2024-02-21 16:23:55,892] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 19 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
16:25:57 kafka | [2024-02-21 16:23:55,892] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-19 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
16:25:57 kafka | [2024-02-21 16:23:55,892] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 34 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
16:25:57 kafka | [2024-02-21 16:23:55,892] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-34 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
16:25:57 kafka | [2024-02-21 16:23:55,892] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 4 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
16:25:57 kafka | [2024-02-21 16:23:55,892] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-4 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
16:25:57 kafka | [2024-02-21 16:23:55,892] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 11 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
16:25:57 kafka | [2024-02-21 16:23:55,892] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-11 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
16:25:57 kafka | [2024-02-21 16:23:55,893] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 26 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
16:25:57 kafka | [2024-02-21 16:23:55,893] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-26 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
16:25:57 kafka | [2024-02-21 16:23:55,893] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 49 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
16:25:57 kafka | [2024-02-21 16:23:55,893] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-49 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
16:25:57 kafka | [2024-02-21 16:23:55,893] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 39 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
16:25:57 kafka | [2024-02-21 16:23:55,893] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-39 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
16:25:57 kafka | [2024-02-21 16:23:55,893] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 9 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
16:25:57 kafka | [2024-02-21 16:23:55,893] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-9 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
16:25:57 kafka | [2024-02-21 16:23:55,893] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 24 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
16:25:57 kafka | [2024-02-21 16:23:55,893] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-24 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
16:25:57 kafka | [2024-02-21 16:23:55,893] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 31 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
16:25:57 kafka | [2024-02-21 16:23:55,893] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-31 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
16:25:57 kafka | [2024-02-21 16:23:55,893] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 46 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
16:25:57 kafka | [2024-02-21 16:23:55,893] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-46 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
16:25:57 kafka | [2024-02-21 16:23:55,893] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 1 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
16:25:57 kafka | [2024-02-21 16:23:55,893] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-1 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
16:25:57 kafka | [2024-02-21 16:23:55,893] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 16 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
16:25:57 kafka | [2024-02-21 16:23:55,893] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-16 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
16:25:57 kafka | [2024-02-21 16:23:55,893] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 2 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
16:25:57 kafka | [2024-02-21 16:23:55,893] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-2 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
16:25:57 kafka | [2024-02-21 16:23:55,893] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 25 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
16:25:57 kafka | [2024-02-21 16:23:55,893] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-25 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
16:25:57 kafka | [2024-02-21 16:23:55,893] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 40 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
16:25:57 kafka | [2024-02-21 16:23:55,893] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-40 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
16:25:57 kafka | [2024-02-21 16:23:55,893] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 47 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
16:25:57 kafka | [2024-02-21 16:23:55,893] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-47 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
16:25:57 kafka | [2024-02-21 16:23:55,893] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 17 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
16:25:57 kafka | [2024-02-21 16:23:55,893] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-17 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
16:25:57 kafka | [2024-02-21 16:23:55,893] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 32 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
16:25:57 kafka | [2024-02-21 16:23:55,893] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-32 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
16:25:57 kafka | [2024-02-21 16:23:55,893] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 37 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
16:25:57 kafka | [2024-02-21 16:23:55,893] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-37 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
16:25:57 kafka | [2024-02-21 16:23:55,893] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 7 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
16:25:57 kafka | [2024-02-21 16:23:55,893] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-7 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
16:25:57 kafka | [2024-02-21 16:23:55,894] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 22 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
16:25:57 kafka | [2024-02-21 16:23:55,894] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-22 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
16:25:57 kafka | [2024-02-21 16:23:55,894] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 29 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
16:25:57 kafka | [2024-02-21 16:23:55,894] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-29 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
16:25:57 kafka | [2024-02-21 16:23:55,894] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 44 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
16:25:57 kafka | [2024-02-21 16:23:55,894] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-44 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
16:25:57 kafka | [2024-02-21 16:23:55,894] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 14 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
16:25:57 kafka | [2024-02-21 16:23:55,894] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-14 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
16:25:57 kafka | [2024-02-21 16:23:55,894] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 23 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
16:25:57 kafka | [2024-02-21 16:23:55,894] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-23 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
16:25:57 kafka | [2024-02-21 16:23:55,894] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 38 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
16:25:57 kafka | [2024-02-21 16:23:55,894] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-38 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
16:25:57 kafka | [2024-02-21 16:23:55,894] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 8 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
16:25:57 kafka | [2024-02-21 16:23:55,894] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-8 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
16:25:57 kafka | [2024-02-21 16:23:55,894] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 45 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
16:25:57 kafka | [2024-02-21 16:23:55,895] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-45 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
16:25:57 kafka | [2024-02-21 16:23:55,895] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 15 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
16:25:57 kafka | [2024-02-21 16:23:55,895] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-15 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
16:25:57 kafka | [2024-02-21 16:23:55,895] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 30 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
16:25:57 kafka | [2024-02-21 16:23:55,895] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-30 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
16:25:57 kafka | [2024-02-21 16:23:55,895] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 0 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
16:25:57 kafka | [2024-02-21 16:23:55,895] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-0 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
16:25:57 kafka | [2024-02-21 16:23:55,895] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 35 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
16:25:57 kafka | [2024-02-21 16:23:55,895] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-35 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
16:25:57 kafka | [2024-02-21 16:23:55,895] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 5 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
16:25:57 kafka | [2024-02-21 16:23:55,895] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-5 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
16:25:57 kafka | [2024-02-21 16:23:55,895] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 20 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
16:25:57 kafka | [2024-02-21 16:23:55,895] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-20 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
16:25:57 kafka | [2024-02-21 16:23:55,895] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 27 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
16:25:57 kafka | [2024-02-21 16:23:55,895] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-27 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
16:25:57 kafka | [2024-02-21 16:23:55,896] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 42 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
16:25:57 kafka | [2024-02-21 16:23:55,896] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-42 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
16:25:57 kafka | [2024-02-21 16:23:55,896] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 12 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
16:25:57 kafka | [2024-02-21 16:23:55,896] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-12 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
16:25:57 kafka | [2024-02-21 16:23:55,896] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 21 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
16:25:57 kafka | [2024-02-21 16:23:55,896] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-21 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
16:25:57 kafka | [2024-02-21 16:23:55,896] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 36 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
16:25:57 kafka | [2024-02-21 16:23:55,896] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-36 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
16:25:57 kafka | [2024-02-21 16:23:55,896] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 6 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
16:25:57 kafka | [2024-02-21 16:23:55,896] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-6 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
16:25:57 kafka | [2024-02-21 16:23:55,896] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 43 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
16:25:57 kafka | [2024-02-21 16:23:55,896] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-43 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
16:25:57 kafka | [2024-02-21 16:23:55,896] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 13 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
16:25:57 kafka | [2024-02-21 16:23:55,896] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-13 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
16:25:57 kafka | [2024-02-21 16:23:55,896] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 28 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
16:25:57 kafka | [2024-02-21 16:23:55,897] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-28 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
16:25:57 kafka | [2024-02-21 16:23:55,897] INFO [Broker id=1] Finished LeaderAndIsr request in 522ms correlationId 3 from controller 1 for 50 partitions (state.change.logger)
16:25:57 kafka | [2024-02-21 16:23:55,900] TRACE [Controller id=1 epoch=1] Received response LeaderAndIsrResponseData(errorCode=0, partitionErrors=[], topics=[LeaderAndIsrTopicError(topicId=KYlK5kQpQoexS8xo1QgwvA, partitionErrors=[LeaderAndIsrPartitionError(topicName='', partitionIndex=13, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=46, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=9, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=42, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=21, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=17, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=30, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=26, errorCode=0), LeaderAndIsrPartitionError(topicName='',
partitionIndex=5, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=38, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=1, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=34, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=16, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=45, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=12, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=41, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=24, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=20, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=49, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=0, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=29, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=25, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=8, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=37, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=4, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=33, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=15, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=48, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=11, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=44, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=23, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=19, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=32, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=28, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=7, errorCode=0), LeaderAndIsrPartitionError(topicName='', 
partitionIndex=40, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=3, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=36, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=47, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=14, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=43, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=10, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=22, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=18, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=31, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=27, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=39, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=6, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=35, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=2, errorCode=0)])]) for request LEADER_AND_ISR with correlation id 3 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger) 16:25:57 kafka | [2024-02-21 16:23:55,904] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-3 in 12 milliseconds for epoch 0, of which 3 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 16:25:57 kafka | [2024-02-21 16:23:55,905] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-18 in 13 milliseconds for epoch 0, of which 13 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 16:25:57 kafka | [2024-02-21 16:23:55,905] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-41 in 13 milliseconds for epoch 0, of which 13 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) 16:25:57 kafka | [2024-02-21 16:23:55,905] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=13, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-13 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 16:25:57 kafka | [2024-02-21 16:23:55,908] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=46, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-46 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 16:25:57 kafka | [2024-02-21 16:23:55,908] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=9, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-9 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 16:25:57 kafka | [2024-02-21 16:23:55,908] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=42, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-42 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 16:25:57 kafka | [2024-02-21 16:23:55,908] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-10 in 16 milliseconds for epoch 0, of which 13 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) 16:25:57 kafka | [2024-02-21 16:23:55,908] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=21, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-21 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 16:25:57 kafka | [2024-02-21 16:23:55,908] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=17, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-17 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 16:25:57 kafka | [2024-02-21 16:23:55,908] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=30, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-30 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 16:25:57 kafka | [2024-02-21 16:23:55,908] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=26, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-26 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 16:25:57 kafka | [2024-02-21 16:23:55,908] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=5, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-5 in response to 
UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 16:25:57 kafka | [2024-02-21 16:23:55,908] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=38, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-38 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 16:25:57 kafka | [2024-02-21 16:23:55,908] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-33 in 16 milliseconds for epoch 0, of which 16 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 16:25:57 kafka | [2024-02-21 16:23:55,908] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=1, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-1 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 16:25:57 kafka | [2024-02-21 16:23:55,908] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=34, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-34 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 16:25:57 kafka | [2024-02-21 16:23:55,908] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=16, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-16 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 
(state.change.logger) 16:25:57 kafka | [2024-02-21 16:23:55,908] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=45, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-45 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 16:25:57 kafka | [2024-02-21 16:23:55,908] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=12, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-12 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 16:25:57 kafka | [2024-02-21 16:23:55,908] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-48 in 16 milliseconds for epoch 0, of which 16 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) 16:25:57 kafka | [2024-02-21 16:23:55,908] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=41, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-41 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 16:25:57 kafka | [2024-02-21 16:23:55,908] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=24, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-24 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 16:25:57 kafka | [2024-02-21 16:23:55,908] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=20, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-20 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 16:25:57 kafka | [2024-02-21 16:23:55,908] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=49, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-49 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 16:25:57 kafka | [2024-02-21 16:23:55,908] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-19 in 16 milliseconds for epoch 0, of which 16 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) 16:25:57 kafka | [2024-02-21 16:23:55,908] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-0 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 16:25:57 kafka | [2024-02-21 16:23:55,908] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=29, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-29 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 16:25:57 kafka | [2024-02-21 16:23:55,909] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=25, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-25 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 16:25:57 kafka | [2024-02-21 16:23:55,909] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=8, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-8 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 16:25:57 kafka | [2024-02-21 16:23:55,909] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-34 in 17 milliseconds for epoch 0, of which 16 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) 16:25:57 kafka | [2024-02-21 16:23:55,909] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=37, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-37 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 16:25:57 kafka | [2024-02-21 16:23:55,909] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=4, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-4 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 16:25:57 kafka | [2024-02-21 16:23:55,909] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=33, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-33 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 16:25:57 kafka | [2024-02-21 16:23:55,909] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=15, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-15 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 16:25:57 kafka | [2024-02-21 16:23:55,909] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=48, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-48 in response to 
UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 16:25:57 kafka | [2024-02-21 16:23:55,909] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=11, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-11 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 16:25:57 kafka | [2024-02-21 16:23:55,909] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=44, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-44 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 16:25:57 kafka | [2024-02-21 16:23:55,909] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=23, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-23 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 16:25:57 kafka | [2024-02-21 16:23:55,909] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=19, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-19 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 16:25:57 kafka | [2024-02-21 16:23:55,909] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-4 in 17 milliseconds for epoch 0, of which 17 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) 16:25:57 kafka | [2024-02-21 16:23:55,909] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=32, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-32 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 16:25:57 kafka | [2024-02-21 16:23:55,909] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=28, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-28 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 16:25:57 kafka | [2024-02-21 16:23:55,909] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=7, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-7 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 16:25:57 kafka | [2024-02-21 16:23:55,909] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=40, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-40 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 16:25:57 kafka | [2024-02-21 16:23:55,909] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=3, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-3 in response to 
UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 16:25:57 kafka | [2024-02-21 16:23:55,909] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=36, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-36 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 16:25:57 kafka | [2024-02-21 16:23:55,909] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-11 in 17 milliseconds for epoch 0, of which 17 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 16:25:57 kafka | [2024-02-21 16:23:55,909] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=47, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-47 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 16:25:57 kafka | [2024-02-21 16:23:55,909] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=14, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-14 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 16:25:57 kafka | [2024-02-21 16:23:55,909] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=43, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-43 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 
(state.change.logger) 16:25:57 kafka | [2024-02-21 16:23:55,909] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=10, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-10 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 16:25:57 kafka | [2024-02-21 16:23:55,909] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=22, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-22 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 16:25:57 kafka | [2024-02-21 16:23:55,909] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=18, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-18 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 16:25:57 kafka | [2024-02-21 16:23:55,909] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-26 in 16 milliseconds for epoch 0, of which 16 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) 16:25:57 kafka | [2024-02-21 16:23:55,909] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=31, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-31 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 16:25:57 kafka | [2024-02-21 16:23:55,909] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=27, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-27 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 16:25:57 kafka | [2024-02-21 16:23:55,909] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=39, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-39 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 16:25:57 kafka | [2024-02-21 16:23:55,909] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=6, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-6 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 16:25:57 kafka | [2024-02-21 16:23:55,909] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=35, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-35 in response to 
UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 16:25:57 kafka | [2024-02-21 16:23:55,909] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=2, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-2 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 16:25:57 kafka | [2024-02-21 16:23:55,909] INFO [Broker id=1] Add 50 partitions and deleted 0 partitions from metadata cache in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 16:25:57 kafka | [2024-02-21 16:23:55,912] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-49 in 19 milliseconds for epoch 0, of which 16 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 16:25:57 kafka | [2024-02-21 16:23:55,912] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-39 in 19 milliseconds for epoch 0, of which 19 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 16:25:57 kafka | [2024-02-21 16:23:55,912] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-9 in 19 milliseconds for epoch 0, of which 19 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) 16:25:57 kafka | [2024-02-21 16:23:55,913] TRACE [Controller id=1 epoch=1] Received response UpdateMetadataResponseData(errorCode=0) for request UPDATE_METADATA with correlation id 4 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger) 16:25:57 kafka | [2024-02-21 16:23:55,913] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-24 in 20 milliseconds for epoch 0, of which 20 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 16:25:57 kafka | [2024-02-21 16:23:55,913] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-31 in 20 milliseconds for epoch 0, of which 20 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 16:25:57 kafka | [2024-02-21 16:23:55,913] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-46 in 20 milliseconds for epoch 0, of which 20 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 16:25:57 kafka | [2024-02-21 16:23:55,914] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-1 in 21 milliseconds for epoch 0, of which 21 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 16:25:57 kafka | [2024-02-21 16:23:55,914] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-16 in 21 milliseconds for epoch 0, of which 21 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 16:25:57 kafka | [2024-02-21 16:23:55,914] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-2 in 21 milliseconds for epoch 0, of which 21 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) 16:25:57 kafka | [2024-02-21 16:23:55,914] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-25 in 21 milliseconds for epoch 0, of which 21 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 16:25:57 kafka | [2024-02-21 16:23:55,915] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-40 in 22 milliseconds for epoch 0, of which 21 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 16:25:57 kafka | [2024-02-21 16:23:55,915] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-47 in 22 milliseconds for epoch 0, of which 22 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 16:25:57 kafka | [2024-02-21 16:23:55,915] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-17 in 22 milliseconds for epoch 0, of which 22 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 16:25:57 kafka | [2024-02-21 16:23:55,915] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-32 in 22 milliseconds for epoch 0, of which 22 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 16:25:57 kafka | [2024-02-21 16:23:55,916] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-37 in 23 milliseconds for epoch 0, of which 22 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 16:25:57 kafka | [2024-02-21 16:23:55,916] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-7 in 22 milliseconds for epoch 0, of which 22 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) 16:25:57 kafka | [2024-02-21 16:23:55,917] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-22 in 23 milliseconds for epoch 0, of which 23 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 16:25:57 kafka | [2024-02-21 16:23:55,928] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-29 in 34 milliseconds for epoch 0, of which 34 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 16:25:57 kafka | [2024-02-21 16:23:55,929] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-44 in 35 milliseconds for epoch 0, of which 34 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 16:25:57 kafka | [2024-02-21 16:23:55,929] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-14 in 35 milliseconds for epoch 0, of which 35 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 16:25:57 kafka | [2024-02-21 16:23:55,929] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-23 in 35 milliseconds for epoch 0, of which 35 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 16:25:57 kafka | [2024-02-21 16:23:55,929] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-38 in 35 milliseconds for epoch 0, of which 35 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 16:25:57 kafka | [2024-02-21 16:23:55,929] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-8 in 35 milliseconds for epoch 0, of which 35 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) 16:25:57 kafka | [2024-02-21 16:23:55,929] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-45 in 34 milliseconds for epoch 0, of which 34 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 16:25:57 kafka | [2024-02-21 16:23:55,930] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-15 in 35 milliseconds for epoch 0, of which 34 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 16:25:57 kafka | [2024-02-21 16:23:55,930] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-30 in 35 milliseconds for epoch 0, of which 35 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 16:25:57 kafka | [2024-02-21 16:23:55,930] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-0 in 35 milliseconds for epoch 0, of which 35 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 16:25:57 kafka | [2024-02-21 16:23:55,930] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-35 in 35 milliseconds for epoch 0, of which 35 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 16:25:57 kafka | [2024-02-21 16:23:55,930] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-5 in 35 milliseconds for epoch 0, of which 35 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 16:25:57 kafka | [2024-02-21 16:23:55,930] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-20 in 35 milliseconds for epoch 0, of which 35 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) 16:25:57 kafka | [2024-02-21 16:23:55,931] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-27 in 35 milliseconds for epoch 0, of which 34 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 16:25:57 kafka | [2024-02-21 16:23:55,931] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-42 in 35 milliseconds for epoch 0, of which 35 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 16:25:57 kafka | [2024-02-21 16:23:55,931] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-12 in 35 milliseconds for epoch 0, of which 35 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 16:25:57 kafka | [2024-02-21 16:23:55,931] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-21 in 35 milliseconds for epoch 0, of which 35 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 16:25:57 kafka | [2024-02-21 16:23:55,931] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-36 in 35 milliseconds for epoch 0, of which 35 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 16:25:57 kafka | [2024-02-21 16:23:55,931] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-6 in 35 milliseconds for epoch 0, of which 35 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 16:25:57 kafka | [2024-02-21 16:23:55,932] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-43 in 36 milliseconds for epoch 0, of which 36 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) 16:25:57 kafka | [2024-02-21 16:23:55,932] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-13 in 36 milliseconds for epoch 0, of which 36 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 16:25:57 kafka | [2024-02-21 16:23:55,932] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-28 in 35 milliseconds for epoch 0, of which 35 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 16:25:57 kafka | [2024-02-21 16:23:55,952] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group policy-pap in Empty state. Created a new member id consumer-policy-pap-4-acf8e027-b4f1-4dd8-99c9-6ad7d643cff4 and request the member to rejoin with this id. (kafka.coordinator.group.GroupCoordinator) 16:25:57 kafka | [2024-02-21 16:23:55,963] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group 66b9586c-d4bb-4933-993d-6431c832b08c in Empty state. Created a new member id consumer-66b9586c-d4bb-4933-993d-6431c832b08c-3-bc875603-de5e-4ef0-a728-f230d256911f and request the member to rejoin with this id. 
(kafka.coordinator.group.GroupCoordinator) 16:25:57 kafka | [2024-02-21 16:23:55,980] INFO [GroupCoordinator 1]: Preparing to rebalance group policy-pap in state PreparingRebalance with old generation 0 (__consumer_offsets-24) (reason: Adding new member consumer-policy-pap-4-acf8e027-b4f1-4dd8-99c9-6ad7d643cff4 with group instance id None; client reason: rebalance failed due to MemberIdRequiredException) (kafka.coordinator.group.GroupCoordinator) 16:25:57 kafka | [2024-02-21 16:23:55,981] INFO [GroupCoordinator 1]: Preparing to rebalance group 66b9586c-d4bb-4933-993d-6431c832b08c in state PreparingRebalance with old generation 0 (__consumer_offsets-40) (reason: Adding new member consumer-66b9586c-d4bb-4933-993d-6431c832b08c-3-bc875603-de5e-4ef0-a728-f230d256911f with group instance id None; client reason: rebalance failed due to MemberIdRequiredException) (kafka.coordinator.group.GroupCoordinator) 16:25:57 kafka | [2024-02-21 16:23:56,564] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group 8aee6ac5-f217-4030-aeed-72326ff1d45e in Empty state. Created a new member id consumer-8aee6ac5-f217-4030-aeed-72326ff1d45e-2-13007dcf-09ef-43a6-8830-f4beea4e56b6 and request the member to rejoin with this id. 
(kafka.coordinator.group.GroupCoordinator) 16:25:57 kafka | [2024-02-21 16:23:56,568] INFO [GroupCoordinator 1]: Preparing to rebalance group 8aee6ac5-f217-4030-aeed-72326ff1d45e in state PreparingRebalance with old generation 0 (__consumer_offsets-1) (reason: Adding new member consumer-8aee6ac5-f217-4030-aeed-72326ff1d45e-2-13007dcf-09ef-43a6-8830-f4beea4e56b6 with group instance id None; client reason: rebalance failed due to MemberIdRequiredException) (kafka.coordinator.group.GroupCoordinator) 16:25:57 kafka | [2024-02-21 16:23:58,994] INFO [GroupCoordinator 1]: Stabilized group policy-pap generation 1 (__consumer_offsets-24) with 1 members (kafka.coordinator.group.GroupCoordinator) 16:25:57 kafka | [2024-02-21 16:23:59,006] INFO [GroupCoordinator 1]: Stabilized group 66b9586c-d4bb-4933-993d-6431c832b08c generation 1 (__consumer_offsets-40) with 1 members (kafka.coordinator.group.GroupCoordinator) 16:25:57 kafka | [2024-02-21 16:23:59,018] INFO [GroupCoordinator 1]: Assignment received from leader consumer-policy-pap-4-acf8e027-b4f1-4dd8-99c9-6ad7d643cff4 for group policy-pap for generation 1. The group has 1 members, 0 of which are static. (kafka.coordinator.group.GroupCoordinator) 16:25:57 kafka | [2024-02-21 16:23:59,018] INFO [GroupCoordinator 1]: Assignment received from leader consumer-66b9586c-d4bb-4933-993d-6431c832b08c-3-bc875603-de5e-4ef0-a728-f230d256911f for group 66b9586c-d4bb-4933-993d-6431c832b08c for generation 1. The group has 1 members, 0 of which are static. 
(kafka.coordinator.group.GroupCoordinator) 16:25:57 kafka | [2024-02-21 16:23:59,569] INFO [GroupCoordinator 1]: Stabilized group 8aee6ac5-f217-4030-aeed-72326ff1d45e generation 1 (__consumer_offsets-1) with 1 members (kafka.coordinator.group.GroupCoordinator) 16:25:57 kafka | [2024-02-21 16:23:59,586] INFO [GroupCoordinator 1]: Assignment received from leader consumer-8aee6ac5-f217-4030-aeed-72326ff1d45e-2-13007dcf-09ef-43a6-8830-f4beea4e56b6 for group 8aee6ac5-f217-4030-aeed-72326ff1d45e for generation 1. The group has 1 members, 0 of which are static. (kafka.coordinator.group.GroupCoordinator) 16:25:57 ++ echo 'Tearing down containers...' 16:25:57 Tearing down containers... 16:25:57 ++ docker-compose down -v --remove-orphans 16:25:57 Stopping policy-apex-pdp ... 16:25:57 Stopping policy-pap ... 16:25:57 Stopping policy-api ... 16:25:57 Stopping grafana ... 16:25:57 Stopping kafka ... 16:25:57 Stopping compose_zookeeper_1 ... 16:25:57 Stopping simulator ... 16:25:57 Stopping prometheus ... 16:25:57 Stopping mariadb ... 16:25:58 Stopping grafana ... done 16:25:58 Stopping prometheus ... done 16:26:08 Stopping policy-apex-pdp ... done 16:26:18 Stopping simulator ... done 16:26:18 Stopping policy-pap ... done 16:26:19 Stopping mariadb ... done 16:26:19 Stopping kafka ... done 16:26:19 Stopping compose_zookeeper_1 ... done 16:26:28 Stopping policy-api ... done 16:26:28 Removing policy-apex-pdp ... 16:26:28 Removing policy-pap ... 16:26:28 Removing policy-api ... 16:26:28 Removing policy-db-migrator ... 16:26:28 Removing grafana ... 16:26:28 Removing kafka ... 16:26:28 Removing compose_zookeeper_1 ... 16:26:28 Removing simulator ... 16:26:28 Removing prometheus ... 16:26:28 Removing mariadb ... 16:26:28 Removing policy-apex-pdp ... done 16:26:28 Removing simulator ... done 16:26:28 Removing grafana ... done 16:26:28 Removing policy-api ... done 16:26:28 Removing kafka ... done 16:26:28 Removing policy-db-migrator ... done 16:26:28 Removing policy-pap ... 
done 16:26:28 Removing mariadb ... done 16:26:28 Removing prometheus ... done 16:26:28 Removing compose_zookeeper_1 ... done 16:26:28 Removing network compose_default 16:26:29 ++ cd /w/workspace/policy-pap-master-project-csit-pap 16:26:29 + load_set 16:26:29 + _setopts=hxB 16:26:29 ++ echo braceexpand:hashall:interactive-comments:xtrace 16:26:29 ++ tr : ' ' 16:26:29 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') 16:26:29 + set +o braceexpand 16:26:29 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') 16:26:29 + set +o hashall 16:26:29 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') 16:26:29 + set +o interactive-comments 16:26:29 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') 16:26:29 + set +o xtrace 16:26:29 ++ echo hxB 16:26:29 ++ sed 's/./& /g' 16:26:29 + for i in $(echo "$_setopts" | sed 's/./& /g') 16:26:29 + set +h 16:26:29 + for i in $(echo "$_setopts" | sed 's/./& /g') 16:26:29 + set +x 16:26:29 + [[ -n /tmp/tmp.aKPhtjj3Wq ]] 16:26:29 + rsync -av /tmp/tmp.aKPhtjj3Wq/ /w/workspace/policy-pap-master-project-csit-pap/csit/archives/pap 16:26:29 sending incremental file list 16:26:29 ./ 16:26:29 log.html 16:26:29 output.xml 16:26:29 report.html 16:26:29 testplan.txt 16:26:29 16:26:29 sent 918,497 bytes received 95 bytes 1,837,184.00 bytes/sec 16:26:29 total size is 917,952 speedup is 1.00 16:26:29 + rm -rf /w/workspace/policy-pap-master-project-csit-pap/models 16:26:29 + exit 1 16:26:29 Build step 'Execute shell' marked build as failure 16:26:29 $ ssh-agent -k 16:26:29 unset SSH_AUTH_SOCK; 16:26:29 unset SSH_AGENT_PID; 16:26:29 echo Agent pid 2076 killed; 16:26:29 [ssh-agent] Stopped. 16:26:29 Robot results publisher started... 16:26:29 INFO: Checking test criticality is deprecated and will be dropped in a future release! 16:26:29 -Parsing output xml: 16:26:29 Done! 16:26:29 WARNING! Could not find file: **/log.html 16:26:29 WARNING! Could not find file: **/report.html 16:26:29 -Copying log files to build dir: 16:26:29 Done! 
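The GroupMetadataManager entries above record how long each `__consumer_offsets` partition took to load; when triaging a slow broker start-up it can help to pull those numbers out of the raw console log. A minimal sketch, assuming only the log line format shown above (the sample string is copied from this log; no job script does this):

```python
import re

# Matches GroupMetadataManager load entries like the ones in this log.
PATTERN = re.compile(
    r"Finished loading offsets and group metadata from "
    r"(__consumer_offsets-\d+) in (\d+) milliseconds"
)

def load_times(log_text):
    """Map each __consumer_offsets partition to its load time in ms."""
    return {m.group(1): int(m.group(2)) for m in PATTERN.finditer(log_text)}

sample = (
    "16:25:57 kafka | [2024-02-21 16:23:55,912] INFO [GroupMetadataManager "
    "brokerId=1] Finished loading offsets and group metadata from "
    "__consumer_offsets-49 in 19 milliseconds for epoch 0, of which 16 "
    "milliseconds was spent in the scheduler. "
    "(kafka.coordinator.group.GroupMetadataManager)"
)
print(load_times(sample))  # {'__consumer_offsets-49': 19}
```

Feeding the whole kafka section through `load_times` gives one entry per partition, which makes outliers (here everything is 19-36 ms) easy to spot.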
16:26:29 -Assigning results to build:
16:26:29 Done!
16:26:29 -Checking thresholds:
16:26:29 Done!
16:26:29 Done publishing Robot results.
16:26:29 [PostBuildScript] - [INFO] Executing post build scripts.
16:26:29 [policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins9843785924989029677.sh
16:26:29 ---> sysstat.sh
16:26:30 [policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins7012069526599531790.sh
16:26:30 ---> package-listing.sh
16:26:30 ++ facter osfamily
16:26:30 ++ tr '[:upper:]' '[:lower:]'
16:26:30 + OS_FAMILY=debian
16:26:30 + workspace=/w/workspace/policy-pap-master-project-csit-pap
16:26:30 + START_PACKAGES=/tmp/packages_start.txt
16:26:30 + END_PACKAGES=/tmp/packages_end.txt
16:26:30 + DIFF_PACKAGES=/tmp/packages_diff.txt
16:26:30 + PACKAGES=/tmp/packages_start.txt
16:26:30 + '[' /w/workspace/policy-pap-master-project-csit-pap ']'
16:26:30 + PACKAGES=/tmp/packages_end.txt
16:26:30 + case "${OS_FAMILY}" in
16:26:30 + dpkg -l
16:26:30 + grep '^ii'
16:26:30 + '[' -f /tmp/packages_start.txt ']'
16:26:30 + '[' -f /tmp/packages_end.txt ']'
16:26:30 + diff /tmp/packages_start.txt /tmp/packages_end.txt
16:26:30 + '[' /w/workspace/policy-pap-master-project-csit-pap ']'
16:26:30 + mkdir -p /w/workspace/policy-pap-master-project-csit-pap/archives/
16:26:30 + cp -f /tmp/packages_diff.txt /tmp/packages_end.txt /tmp/packages_start.txt /w/workspace/policy-pap-master-project-csit-pap/archives/
16:26:30 [policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins1898419286042050042.sh
16:26:30 ---> capture-instance-metadata.sh
16:26:30 Setup pyenv:
16:26:30   system
16:26:30   3.8.13
16:26:30   3.9.13
16:26:30 * 3.10.6 (set by /w/workspace/policy-pap-master-project-csit-pap/.python-version)
16:26:30 lf-activate-venv(): INFO: Reuse venv:/tmp/venv-X2wi from file:/tmp/.os_lf_venv
16:26:32 lf-activate-venv(): INFO: Installing: lftools
16:26:43 lf-activate-venv(): INFO: Adding /tmp/venv-X2wi/bin to PATH
16:26:43 INFO: Running in OpenStack, capturing instance metadata
16:26:43 [policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins11532948466488422200.sh
16:26:43 provisioning config files...
16:26:43 copy managed file [jenkins-log-archives-settings] to file:/w/workspace/policy-pap-master-project-csit-pap@tmp/config15280368578616699528tmp
16:26:43 Regular expression run condition: Expression=[^.*logs-s3.*], Label=[]
16:26:43 Run condition [Regular expression match] preventing perform for step [Provide Configuration files]
16:26:43 [EnvInject] - Injecting environment variables from a build step.
16:26:43 [EnvInject] - Injecting as environment variables the properties content
16:26:43 SERVER_ID=logs
16:26:43
16:26:43 [EnvInject] - Variables injected successfully.
16:26:43 [policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins3577833877164545558.sh
16:26:43 ---> create-netrc.sh
16:26:43 [policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins2989949783689330846.sh
16:26:43 ---> python-tools-install.sh
16:26:43 Setup pyenv:
16:26:43   system
16:26:43   3.8.13
16:26:43   3.9.13
16:26:43 * 3.10.6 (set by /w/workspace/policy-pap-master-project-csit-pap/.python-version)
16:26:43 lf-activate-venv(): INFO: Reuse venv:/tmp/venv-X2wi from file:/tmp/.os_lf_venv
16:26:45 lf-activate-venv(): INFO: Installing: lftools
16:26:53 lf-activate-venv(): INFO: Adding /tmp/venv-X2wi/bin to PATH
16:26:53 [policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins13201885022381923055.sh
16:26:53 ---> sudo-logs.sh
16:26:53 Archiving 'sudo' log..
16:26:53 [policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins17390762751157190629.sh
16:26:53 ---> job-cost.sh
16:26:53 Setup pyenv:
16:26:53   system
16:26:53   3.8.13
16:26:53   3.9.13
16:26:53 * 3.10.6 (set by /w/workspace/policy-pap-master-project-csit-pap/.python-version)
16:26:53 lf-activate-venv(): INFO: Reuse venv:/tmp/venv-X2wi from file:/tmp/.os_lf_venv
16:26:55 lf-activate-venv(): INFO: Installing: zipp==1.1.0 python-openstackclient urllib3~=1.26.15
16:27:00 lf-activate-venv(): INFO: Adding /tmp/venv-X2wi/bin to PATH
16:27:00 INFO: No Stack...
16:27:01 INFO: Retrieving Pricing Info for: v3-standard-8
16:27:01 INFO: Archiving Costs
16:27:01 [policy-pap-master-project-csit-pap] $ /bin/bash -l /tmp/jenkins4958289934230128184.sh
16:27:01 ---> logs-deploy.sh
16:27:01 Setup pyenv:
16:27:01   system
16:27:01   3.8.13
16:27:01   3.9.13
16:27:01 * 3.10.6 (set by /w/workspace/policy-pap-master-project-csit-pap/.python-version)
16:27:01 lf-activate-venv(): INFO: Reuse venv:/tmp/venv-X2wi from file:/tmp/.os_lf_venv
16:27:03 lf-activate-venv(): INFO: Installing: lftools
16:27:12 lf-activate-venv(): INFO: Adding /tmp/venv-X2wi/bin to PATH
16:27:12 INFO: Nexus URL https://nexus.onap.org path production/vex-yul-ecomp-jenkins-1/policy-pap-master-project-csit-pap/1586
16:27:12 INFO: archiving workspace using pattern(s): -p **/target/surefire-reports/*-output.txt
16:27:13 Archives upload complete.
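The package-listing.sh trace above snapshots `dpkg -l | grep '^ii'` into /tmp/packages_start.txt and /tmp/packages_end.txt and diffs the two files. A rough Python equivalent of that comparison (the snapshot contents below are hypothetical stand-ins, not data from this job):

```python
def package_diff(start_lines, end_lines):
    """Return (added, removed) package lines between two 'dpkg -l' snapshots."""
    start, end = set(start_lines), set(end_lines)
    return sorted(end - start), sorted(start - end)

# Hypothetical snapshots standing in for /tmp/packages_start.txt
# and /tmp/packages_end.txt from the trace above.
start = ["ii bash 4.4", "ii curl 7.58"]
end = ["ii bash 4.4", "ii curl 7.58", "ii sysstat 11.6"]
added, removed = package_diff(start, end)
print(added, removed)  # ['ii sysstat 11.6'] []
```

Unlike plain `diff`, this set-based version ignores reordering and reports additions and removals separately.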
16:27:13 INFO: archiving logs to Nexus
16:27:14 ---> uname -a:
16:27:14 Linux prd-ubuntu1804-docker-8c-8g-7437 4.15.0-192-generic #203-Ubuntu SMP Wed Aug 10 17:40:03 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
16:27:14
16:27:14 ---> lscpu:
16:27:14 Architecture:         x86_64
16:27:14 CPU op-mode(s):       32-bit, 64-bit
16:27:14 Byte Order:           Little Endian
16:27:14 CPU(s):               8
16:27:14 On-line CPU(s) list:  0-7
16:27:14 Thread(s) per core:   1
16:27:14 Core(s) per socket:   1
16:27:14 Socket(s):            8
16:27:14 NUMA node(s):         1
16:27:14 Vendor ID:            AuthenticAMD
16:27:14 CPU family:           23
16:27:14 Model:                49
16:27:14 Model name:           AMD EPYC-Rome Processor
16:27:14 Stepping:             0
16:27:14 CPU MHz:              2800.000
16:27:14 BogoMIPS:             5600.00
16:27:14 Virtualization:       AMD-V
16:27:14 Hypervisor vendor:    KVM
16:27:14 Virtualization type:  full
16:27:14 L1d cache:            32K
16:27:14 L1i cache:            32K
16:27:14 L2 cache:             512K
16:27:14 L3 cache:             16384K
16:27:14 NUMA node0 CPU(s):    0-7
16:27:14 Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm rep_good nopl xtopology cpuid extd_apicid tsc_known_freq pni pclmulqdq ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm cmp_legacy svm cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw topoext perfctr_core ssbd ibrs ibpb stibp vmmcall fsgsbase tsc_adjust bmi1 avx2 smep bmi2 rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves clzero xsaveerptr arat npt nrip_save umip rdpid arch_capabilities
16:27:14
16:27:14 ---> nproc:
16:27:14 8
16:27:14
16:27:14 ---> df -h:
16:27:14 Filesystem   Size  Used  Avail  Use%  Mounted on
16:27:14 udev          16G     0    16G    0%  /dev
16:27:14 tmpfs        3.2G  708K   3.2G    1%  /run
16:27:14 /dev/vda1    155G   14G   142G    9%  /
16:27:14 tmpfs         16G     0    16G    0%  /dev/shm
16:27:14 tmpfs        5.0M     0   5.0M    0%  /run/lock
16:27:14 tmpfs         16G     0    16G    0%  /sys/fs/cgroup
16:27:14 /dev/vda15   105M  4.4M   100M    5%  /boot/efi
16:27:14 tmpfs        3.2G     0   3.2G    0%  /run/user/1001
16:27:14
16:27:14 ---> free -m:
16:27:14        total   used   free   shared  buff/cache  available
16:27:14 Mem:   32167    849  25110        0        6206      30861
16:27:14 Swap:   1023      0   1023
16:27:14
16:27:14 ---> ip addr:
16:27:14 1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
16:27:14     link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
16:27:14     inet 127.0.0.1/8 scope host lo
16:27:14        valid_lft forever preferred_lft forever
16:27:14     inet6 ::1/128 scope host
16:27:14        valid_lft forever preferred_lft forever
16:27:14 2: ens3: mtu 1458 qdisc mq state UP group default qlen 1000
16:27:14     link/ether fa:16:3e:2b:9d:67 brd ff:ff:ff:ff:ff:ff
16:27:14     inet 10.30.106.94/23 brd 10.30.107.255 scope global dynamic ens3
16:27:14        valid_lft 85876sec preferred_lft 85876sec
16:27:14     inet6 fe80::f816:3eff:fe2b:9d67/64 scope link
16:27:14        valid_lft forever preferred_lft forever
16:27:14 3: docker0: mtu 1500 qdisc noqueue state DOWN group default
16:27:14     link/ether 02:42:f3:1a:ac:64 brd ff:ff:ff:ff:ff:ff
16:27:14     inet 10.250.0.254/24 brd 10.250.0.255 scope global docker0
16:27:14        valid_lft forever preferred_lft forever
16:27:14
16:27:14 ---> sar -b -r -n DEV:
16:27:14 Linux 4.15.0-192-generic (prd-ubuntu1804-docker-8c-8g-7437)  02/21/24  _x86_64_  (8 CPU)
16:27:14 16:18:33 LINUX RESTART (8 CPU)
16:27:14
16:27:14 16:19:01      tps     rtps     wtps  bread/s   bwrtn/s
16:27:14 16:20:01   130.56    51.06    79.50  2249.63  50968.04
16:27:14 16:21:01   107.20    13.71    93.49  1116.16  51024.99
16:27:14 16:22:01   127.58     9.57   118.02  1677.87  56238.13
16:27:14 16:23:01   159.42     0.08   159.33     4.80  98202.30
16:27:14 16:24:01   326.58    15.11   311.46   783.34  31239.38
16:27:14 16:25:01     6.80     0.07     6.73     3.07    172.17
16:27:14 16:26:01    11.00     0.03    10.96     3.87   1100.37
16:27:14 16:27:01    54.42     1.40    53.02   107.45   1821.73
16:27:14 Average:   115.45    11.38   104.07   743.20  36350.93
16:27:14
16:27:14 16:19:01 kbmemfree  kbavail  kbmemused  %memused  kbbuffers  kbcached  kbcommit  %commit  kbactive  kbinact  kbdirty
16:27:14 16:20:01  30168128 31724476    2771092      8.41      66204   1801084   1385256     4.08    842048  1638560   145348
16:27:14 16:21:01  29823428 31670548    3115792      9.46      82184   2062940   1449468     4.26    923180  1875384   204836
16:27:14 16:22:01  27168472 31635860    5770748     17.52     128676   4524400   1573580     4.63   1038984  4262500  2093804
16:27:14 16:23:01  25781984 31657420    7157236     21.73     140424   5861692   1456072     4.28   1028576  5598916   578168
16:27:14 16:24:01  23517412 29558196    9421808     28.60     156276   5991752   9020352    26.54   3309044  5507192     1308
16:27:14 16:25:01  23303052 29345132    9636168     29.25     156528   5992292   9082104    26.72   3523564  5503812      432
16:27:14 16:26:01  23342024 29410444    9597196     29.14     156852   6020348   8295988    24.41   3475852  5518076      308
16:27:14 16:27:01  25777200 31665064    7162020     21.74     159840   5852928   1440748     4.24   1251768  5365808    55348
16:27:14 Average:  26110212 30833392    6829008     20.73     130873   4763430   4212946    12.40   1924127  4408781   384944
16:27:14
16:27:14 16:19:01  IFACE            rxpck/s  txpck/s    rxkB/s  txkB/s  rxcmp/s  txcmp/s  rxmcst/s  %ifutil
16:27:14 16:20:01  lo                  1.27     1.27      0.13    0.13     0.00     0.00      0.00     0.00
16:27:14 16:20:01  docker0             0.00     0.00      0.00    0.00     0.00     0.00      0.00     0.00
16:27:14 16:20:01  ens3              403.53   246.54   1499.69   75.00     0.00     0.00      0.00     0.00
16:27:14 16:21:01  lo                  1.13     1.13      0.11    0.11     0.00     0.00      0.00     0.00
16:27:14 16:21:01  docker0             0.00     0.00      0.00    0.00     0.00     0.00      0.00     0.00
16:27:14 16:21:01  ens3               47.40    35.05    702.53    7.49     0.00     0.00      0.00     0.00
16:27:14 16:22:01  br-747168a8470e     0.00     0.00      0.00    0.00     0.00     0.00      0.00     0.00
16:27:14 16:22:01  lo                  8.87     8.87      0.87    0.87     0.00     0.00      0.00     0.00
16:27:14 16:22:01  docker0             0.00     0.00      0.00    0.00     0.00     0.00      0.00     0.00
16:27:14 16:22:01  ens3              778.75   406.63  17482.24   31.71     0.00     0.00      0.00     0.00
16:27:14 16:23:01  br-747168a8470e     0.00     0.00      0.00    0.00     0.00     0.00      0.00     0.00
16:27:14 16:23:01  lo                  4.13     4.13      0.39    0.39     0.00     0.00      0.00     0.00
16:27:14 16:23:01  docker0             0.00     0.00      0.00    0.00     0.00     0.00      0.00     0.00
16:27:14 16:23:01  ens3              456.34   229.88  14154.40   16.86     0.00     0.00      0.00     0.00
16:27:14 16:24:01  veth1bdbae0         0.50     0.85      0.05    0.31     0.00     0.00      0.00     0.00
16:27:14 16:24:01  br-747168a8470e     0.92     0.82      0.07    0.32     0.00     0.00      0.00     0.00
16:27:14 16:24:01  veth59ab104         0.00     0.40      0.00    0.02     0.00     0.00      0.00     0.00
16:27:14 16:24:01  lo                  1.50     1.50      0.12    0.12     0.00     0.00      0.00     0.00
16:27:14 16:25:01  veth1bdbae0         0.23     0.17      0.02    0.01     0.00     0.00      0.00     0.00
16:27:14 16:25:01  br-747168a8470e     2.05     2.30      1.81    1.73     0.00     0.00      0.00     0.00
16:27:14 16:25:01  veth59ab104         0.00     0.03      0.00    0.00     0.00     0.00      0.00     0.00
16:27:14 16:25:01  lo                  6.15     6.15      3.63    3.63     0.00     0.00      0.00     0.00
16:27:14 16:26:01  br-747168a8470e     1.22     1.50      0.10    0.14     0.00     0.00      0.00     0.00
16:27:14 16:26:01  veth59ab104         0.00     0.03      0.00    0.00     0.00     0.00      0.00     0.00
16:27:14 16:26:01  lo                  7.75     7.75      0.59    0.59     0.00     0.00      0.00     0.00
16:27:14 16:26:01  veth0a8ccb1       107.42   129.71     77.69   31.84     0.00     0.00      0.00     0.01
16:27:14 16:27:01  lo                  0.73     0.73      0.07    0.07     0.00     0.00      0.00     0.00
16:27:14 16:27:01  docker0             0.00     0.00      0.00    0.00     0.00     0.00      0.00     0.00
16:27:14 16:27:01  ens3             1765.44   983.40  34021.63  170.11     0.00     0.00      0.00     0.00
16:27:14 Average:  lo                  3.94     3.94      0.74    0.74     0.00     0.00      0.00     0.00
16:27:14 Average:  docker0             0.00     0.00      0.00    0.00     0.00     0.00      0.00     0.00
16:27:14 Average:  ens3              219.19   121.74   4243.87   21.10     0.00     0.00      0.00     0.00
16:27:14
16:27:14 ---> sar -P ALL:
16:27:14 Linux 4.15.0-192-generic (prd-ubuntu1804-docker-8c-8g-7437)  02/21/24  _x86_64_  (8 CPU)
16:27:14 16:18:33 LINUX RESTART (8 CPU)
16:27:14
16:27:14 16:19:01  CPU  %user  %nice  %system  %iowait  %steal  %idle
16:27:14 16:20:01  all  10.29   0.00     0.97     6.56    0.04  82.14
16:27:14 16:20:01    0  18.89   0.00     1.82     7.38    0.10  71.81
16:27:14 16:20:01    1   8.89   0.00     0.52     1.89    0.03  88.67
16:27:14 16:20:01    2  11.46   0.00     0.64     7.97    0.03  79.89
16:27:14 16:20:01    3   8.61   0.00     0.65     0.39    0.02  90.33
16:27:14 16:20:01    4   6.97   0.00     1.07     1.90    0.03  90.02
16:27:14 16:20:01    5  11.05   0.00     1.02     1.84    0.03  86.05
16:27:14 16:20:01    6  13.99   0.00     1.16     1.80    0.03  83.02
16:27:14 16:20:01    7   2.45   0.00     0.82    29.30    0.03  67.39
16:27:14 16:21:01  all   8.70   0.00     0.55     6.96    0.03  83.76
16:27:14 16:21:01    0   5.81   0.00     0.80    28.53    0.05  64.81
16:27:14 16:21:01    1  16.76   0.00     1.10     4.95    0.05  77.13
16:27:14 16:21:01    2   1.05   0.00     0.13     0.02    0.02  98.78
16:27:14 16:21:01    3   5.40   0.00     0.54    16.24    0.02  77.81
16:27:14 16:21:01    4  18.95   0.00     0.60     3.51    0.03  76.90
16:27:14 16:21:01    5  13.18   0.00     0.48     1.89    0.03  84.42
16:27:14 16:21:01    6   6.20   0.00     0.53     0.27    0.02  92.98
16:27:14 16:21:01    7   2.34   0.00     0.25     0.18    0.00  97.23
16:27:14 16:22:01  all  11.25   0.00     3.88     9.42    0.07  75.38
16:27:14 16:22:01    0   8.55   0.00     3.60     3.68    0.05  84.12
16:27:14 16:22:01    1  10.82   0.00     4.34     0.12    0.07  84.66
16:27:14 16:22:01    2   8.90   0.00     3.59    44.32    0.05  43.14
16:27:14 16:22:01    3  16.72   0.00     3.67     4.56    0.08  74.97
16:27:14 16:22:01    4  15.08   0.00     4.16     7.78    0.14  72.85
16:27:14 16:22:01    5  13.27   0.00     3.42     7.53    0.08  75.70
16:27:14 16:22:01    6   6.16   0.00     4.09     4.61    0.07  85.07
16:27:14 16:22:01    7  10.51   0.00     4.20     2.68    0.07  82.54
16:27:14 16:23:01  all   5.29   0.00     2.45    12.84    0.04  79.38
16:27:14 16:23:01    0   4.47   0.00     2.00     2.91    0.02  90.61
16:27:14 16:23:01    1   4.04   0.00     2.83     0.19    0.03  92.91
16:27:14 16:23:01    2   5.28   0.00     2.71    23.58    0.05  68.38
16:27:14 16:23:01    3   4.80   0.00     2.75     7.41    0.02  85.02
16:27:14 16:23:01    4   5.56   0.00     1.85     0.18    0.02  92.39
16:27:14 16:23:01    5   6.35   0.00     3.28    54.45    0.05  35.87
16:27:14 16:23:01    6   5.90   0.00     2.56    10.37    0.05  81.12
16:27:14 16:23:01    7   5.96   0.00     1.63     3.82    0.05  88.54
16:27:14 16:24:01  all  26.67   0.00     3.60     3.29    0.09  66.35
16:27:14 16:24:01    0  30.05   0.00     4.09     3.25    0.10  62.51
16:27:14 16:24:01    1  27.89   0.00     3.69     0.30    0.08  68.04
16:27:14 16:24:01    2  29.86   0.00     3.69     1.95    0.08  64.42
16:27:14 16:24:01    3  29.31   0.00     3.91     0.67    0.08  66.02
16:27:14 16:24:01    4  21.33   0.00     3.31     1.93    0.08  73.34
16:27:14 16:24:01    5  32.14   0.00     4.35     2.32    0.08  61.12
16:27:14 16:24:01    6  18.55   0.00     2.77    12.50    0.07  66.10
16:27:14 16:24:01    7  24.24   0.00     2.96     3.41    0.07  69.32
16:27:14 16:25:01  all   6.86   0.00     0.66     0.03    0.04  92.41
16:27:14 16:25:01    0   7.86   0.00     0.85     0.00    0.03  91.25
16:27:14 16:25:01    1   4.87   0.00     0.35     0.00    0.03  94.74
16:27:14 16:25:01    2   6.30   0.00     0.48     0.08    0.05  93.08
16:27:14 16:25:01    3   9.25   0.00     0.94     0.12    0.03  89.66
16:27:14 16:25:01    4   5.70   0.00     0.48     0.00    0.03  93.78
16:27:14 16:25:01    5   9.26   0.00     1.00     0.02    0.05  89.67
16:27:14 16:25:01    6   5.12   0.00     0.59     0.00    0.07  94.22
16:27:14 16:25:01    7   6.54   0.00     0.58     0.00    0.03  92.85
16:27:14 16:26:01  all   1.61   0.00     0.35     0.09    0.03  97.92
16:27:14 16:26:01    0   1.47   0.00     0.38     0.00    0.05  98.10
16:27:14 16:26:01    1   1.67   0.00     0.33     0.08    0.02  97.90
16:27:14 16:26:01    2   1.14   0.00     0.38     0.00    0.02  98.46
16:27:14 16:26:01    3   0.87   0.00     0.25     0.22    0.02  98.65
16:27:14 16:26:01    4   1.45   0.00     0.37     0.00    0.03  98.15
16:27:14 16:26:01    5   2.07   0.00     0.40     0.02    0.03  97.48
16:27:14 16:26:01    6   2.08   0.00     0.37     0.42    0.05  97.09
16:27:14 16:26:01    7   2.16   0.00     0.28     0.02    0.03  97.51
16:27:14 16:27:01  all   6.49   0.00     0.63     0.27    0.03  92.59
16:27:14 16:27:01    0   9.66   0.00     0.60     0.08    0.02  89.64
16:27:14 16:27:01    1   2.92   0.00     0.47     0.10    0.02  96.50
16:27:14 16:27:01    2   3.42   0.00     0.63     0.12    0.02  95.81
16:27:14 16:27:01    3  13.64   0.00     0.68     1.39    0.05  84.24
16:27:14 16:27:01    4   0.83   0.00     0.57     0.15    0.02  98.43
16:27:14 16:27:01    5  18.05   0.00     1.10     0.17    0.03  80.65
16:27:14 16:27:01    6   0.63   0.00     0.48     0.08    0.02  98.78
16:27:14 16:27:01    7   2.72   0.00     0.50     0.08    0.02  96.68
16:27:14 Average:  all   9.63   0.00     1.63     4.91    0.05  83.78
16:27:14 Average:    0  10.84   0.00     1.76     5.74    0.05  81.61
16:27:14 Average:    1   9.73   0.00     1.70     0.95    0.04  87.58
16:27:14 Average:    2   8.41   0.00     1.53     9.71    0.04  80.32
16:27:14 Average:    3  11.06   0.00     1.67     3.87    0.04  83.35
16:27:14 Average:    4   9.46   0.00     1.55     1.92    0.05  87.03
16:27:14 Average:    5  13.17   0.00     1.87     8.47    0.05  76.44
16:27:14 Average:    6   7.32   0.00     1.56     3.74    0.05  87.33
16:27:14 Average:    7   7.08   0.00     1.39     4.91    0.04  86.58
16:27:14
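The `sar -P ALL` table above is the usual place to spot I/O stalls during the test window (for example, CPU 5 sat at 54.45% iowait in the 16:23:01 interval). A small sketch for flagging such intervals from rows shaped like the ones above; the threshold of 20% is an arbitrary assumption, not anything the job enforces:

```python
def high_iowait(rows, threshold=20.0):
    """rows: tuples of (interval, cpu, %user, %nice, %system, %iowait,
    %steal, %idle) as printed by sar -P ALL.
    Return (interval, cpu, %iowait) for every row above the threshold."""
    return [(t, cpu, wait)
            for (t, cpu, _user, _nice, _sys, wait, _steal, _idle) in rows
            if wait > threshold]

# Sample rows copied from the sar output above.
rows = [
    ("16:23:01", "2", 5.28, 0.00, 2.71, 23.58, 0.05, 68.38),
    ("16:23:01", "4", 5.56, 0.00, 1.85, 0.18, 0.02, 92.39),
    ("16:23:01", "5", 6.35, 0.00, 3.28, 54.45, 0.05, 35.87),
]
print(high_iowait(rows))  # [('16:23:01', '2', 23.58), ('16:23:01', '5', 54.45)]
```

Run over the full table, this highlights the disk-bound intervals (container image pulls and database start-up) without reading every row by hand.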