14:17:21 Started by upstream project "policy-pap-master-merge-java" build number 351
14:17:21 originally caused by:
14:17:21 Triggered by Gerrit: https://gerrit.onap.org/r/c/policy/pap/+/137761
14:17:21 Running as SYSTEM
14:17:21 [EnvInject] - Loading node environment variables.
14:17:21 Building remotely on prd-ubuntu1804-docker-8c-8g-27901 (ubuntu1804-docker-8c-8g) in workspace /w/workspace/policy-pap-master-project-csit-pap
14:17:21 [ssh-agent] Looking for ssh-agent implementation...
14:17:21 [ssh-agent] Exec ssh-agent (binary ssh-agent on a remote machine)
14:17:21 $ ssh-agent
14:17:21 SSH_AUTH_SOCK=/tmp/ssh-sjmXBe4BcMzm/agent.2189
14:17:21 SSH_AGENT_PID=2190
14:17:21 [ssh-agent] Started.
14:17:21 Running ssh-add (command line suppressed)
14:17:21 Identity added: /w/workspace/policy-pap-master-project-csit-pap@tmp/private_key_9080305929047286160.key (/w/workspace/policy-pap-master-project-csit-pap@tmp/private_key_9080305929047286160.key)
14:17:21 [ssh-agent] Using credentials onap-jobbuiler (Gerrit user)
14:17:21 The recommended git tool is: NONE
14:17:23 using credential onap-jenkins-ssh
14:17:23 Wiping out workspace first.
14:17:23 Cloning the remote Git repository
14:17:23 Cloning repository git://cloud.onap.org/mirror/policy/docker.git
14:17:23 > git init /w/workspace/policy-pap-master-project-csit-pap # timeout=10
14:17:23 Fetching upstream changes from git://cloud.onap.org/mirror/policy/docker.git
14:17:23 > git --version # timeout=10
14:17:23 > git --version # 'git version 2.17.1'
14:17:23 using GIT_SSH to set credentials Gerrit user
14:17:23 Verifying host key using manually-configured host key entries
14:17:23 > git fetch --tags --progress -- git://cloud.onap.org/mirror/policy/docker.git +refs/heads/*:refs/remotes/origin/* # timeout=30
14:17:24 > git config remote.origin.url git://cloud.onap.org/mirror/policy/docker.git # timeout=10
14:17:24 > git config --add remote.origin.fetch +refs/heads/*:refs/remotes/origin/* # timeout=10
14:17:24 Avoid second fetch
14:17:24 > git rev-parse refs/remotes/origin/master^{commit} # timeout=10
14:17:24 Checking out Revision 0d7c8284756c9a15d526c2d282cfc1dfd1595ffb (refs/remotes/origin/master)
14:17:24 > git config core.sparsecheckout # timeout=10
14:17:24 > git checkout -f 0d7c8284756c9a15d526c2d282cfc1dfd1595ffb # timeout=30
14:17:24 Commit message: "Update snapshot and/or references of policy/docker to latest snapshots"
14:17:24 > git rev-list --no-walk 0d7c8284756c9a15d526c2d282cfc1dfd1595ffb # timeout=10
14:17:24 provisioning config files...
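The checkout trace above is the standard Jenkins git-plugin sequence: init an empty workspace, fetch all branch heads into `refs/remotes/origin/*`, resolve the branch tip to a commit, then force-checkout that commit. A minimal standalone sketch of the same sequence, using local throwaway repositories in place of the job's real `git://cloud.onap.org/mirror/policy/docker.git` remote (the repo contents and commit message here are illustrative):

```shell
set -eu
# Local stand-ins; the real job fetches git://cloud.onap.org/mirror/policy/docker.git
REMOTE=$(mktemp -d)
WORKSPACE=$(mktemp -d)
git -C "$REMOTE" init -q -b master
git -C "$REMOTE" -c user.email=ci@example.org -c user.name=ci \
    commit -q --allow-empty -m 'seed commit'
# Same sequence the plugin logs: init, fetch heads, resolve, checkout -f
git init -q -b master "$WORKSPACE"
git -C "$WORKSPACE" fetch -q "$REMOTE" '+refs/heads/*:refs/remotes/origin/*'
REV=$(git -C "$WORKSPACE" rev-parse 'refs/remotes/origin/master^{commit}')
git -C "$WORKSPACE" checkout -qf "$REV"
```

Checking out by commit hash rather than branch name (as the plugin does) leaves the workspace in detached-HEAD state, which is fine for a build that never pushes.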
14:17:24 copy managed file [npmrc] to file:/home/jenkins/.npmrc
14:17:24 copy managed file [pipconf] to file:/home/jenkins/.config/pip/pip.conf
14:17:24 [policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins1160451521055426948.sh
14:17:24 ---> python-tools-install.sh
14:17:24 Setup pyenv:
14:17:25 * system (set by /opt/pyenv/version)
14:17:25 * 3.8.13 (set by /opt/pyenv/version)
14:17:25 * 3.9.13 (set by /opt/pyenv/version)
14:17:25 * 3.10.6 (set by /opt/pyenv/version)
14:17:29 lf-activate-venv(): INFO: Creating python3 venv at /tmp/venv-cpZN
14:17:29 lf-activate-venv(): INFO: Save venv in file: /tmp/.os_lf_venv
14:17:33 lf-activate-venv(): INFO: Installing: lftools
14:18:11 lf-activate-venv(): INFO: Adding /tmp/venv-cpZN/bin to PATH
14:18:11 Generating Requirements File
14:19:00 Python 3.10.6
14:19:00 pip 24.0 from /tmp/venv-cpZN/lib/python3.10/site-packages/pip (python 3.10)
14:19:01 appdirs==1.4.4
14:19:01 argcomplete==3.3.0
14:19:01 aspy.yaml==1.3.0
14:19:01 attrs==23.2.0
14:19:01 autopage==0.5.2
14:19:01 beautifulsoup4==4.12.3
14:19:01 boto3==1.34.91
14:19:01 botocore==1.34.91
14:19:01 bs4==0.0.2
14:19:01 cachetools==5.3.3
14:19:01 certifi==2024.2.2
14:19:01 cffi==1.16.0
14:19:01 cfgv==3.4.0
14:19:01 chardet==5.2.0
14:19:01 charset-normalizer==3.3.2
14:19:01 click==8.1.7
14:19:01 cliff==4.6.0
14:19:01 cmd2==2.4.3
14:19:01 cryptography==3.3.2
14:19:01 debtcollector==3.0.0
14:19:01 decorator==5.1.1
14:19:01 defusedxml==0.7.1
14:19:01 Deprecated==1.2.14
14:19:01 distlib==0.3.8
14:19:01 dnspython==2.6.1
14:19:01 docker==4.2.2
14:19:01 dogpile.cache==1.3.2
14:19:01 email_validator==2.1.1
14:19:01 filelock==3.13.4
14:19:01 future==1.0.0
14:19:01 gitdb==4.0.11
14:19:01 GitPython==3.1.43
14:19:01 google-auth==2.29.0
14:19:01 httplib2==0.22.0
14:19:01 identify==2.5.36
14:19:01 idna==3.7
14:19:01 importlib-resources==1.5.0
14:19:01 iso8601==2.1.0
14:19:01 Jinja2==3.1.3
14:19:01 jmespath==1.0.1
14:19:01 jsonpatch==1.33
14:19:01 jsonpointer==2.4
14:19:01 jsonschema==4.21.1
14:19:01 jsonschema-specifications==2023.12.1
14:19:01 keystoneauth1==5.6.0
14:19:01 kubernetes==29.0.0
14:19:01 lftools==0.37.10
14:19:01 lxml==5.2.1
14:19:01 MarkupSafe==2.1.5
14:19:01 msgpack==1.0.8
14:19:01 multi_key_dict==2.0.3
14:19:01 munch==4.0.0
14:19:01 netaddr==1.2.1
14:19:01 netifaces==0.11.0
14:19:01 niet==1.4.2
14:19:01 nodeenv==1.8.0
14:19:01 oauth2client==4.1.3
14:19:01 oauthlib==3.2.2
14:19:01 openstacksdk==3.1.0
14:19:01 os-client-config==2.1.0
14:19:01 os-service-types==1.7.0
14:19:01 osc-lib==3.0.1
14:19:01 oslo.config==9.4.0
14:19:01 oslo.context==5.5.0
14:19:01 oslo.i18n==6.3.0
14:19:01 oslo.log==5.5.1
14:19:01 oslo.serialization==5.4.0
14:19:01 oslo.utils==7.1.0
14:19:01 packaging==24.0
14:19:01 pbr==6.0.0
14:19:01 platformdirs==4.2.1
14:19:01 prettytable==3.10.0
14:19:01 pyasn1==0.6.0
14:19:01 pyasn1_modules==0.4.0
14:19:01 pycparser==2.22
14:19:01 pygerrit2==2.0.15
14:19:01 PyGithub==2.3.0
14:19:01 pyinotify==0.9.6
14:19:01 PyJWT==2.8.0
14:19:01 PyNaCl==1.5.0
14:19:01 pyparsing==2.4.7
14:19:01 pyperclip==1.8.2
14:19:01 pyrsistent==0.20.0
14:19:01 python-cinderclient==9.5.0
14:19:01 python-dateutil==2.9.0.post0
14:19:01 python-heatclient==3.5.0
14:19:01 python-jenkins==1.8.2
14:19:01 python-keystoneclient==5.4.0
14:19:01 python-magnumclient==4.4.0
14:19:01 python-novaclient==18.6.0
14:19:01 python-openstackclient==6.6.0
14:19:01 python-swiftclient==4.5.0
14:19:01 PyYAML==6.0.1
14:19:01 referencing==0.35.0
14:19:01 requests==2.31.0
14:19:01 requests-oauthlib==2.0.0
14:19:01 requestsexceptions==1.4.0
14:19:01 rfc3986==2.0.0
14:19:01 rpds-py==0.18.0
14:19:01 rsa==4.9
14:19:01 ruamel.yaml==0.18.6
14:19:01 ruamel.yaml.clib==0.2.8
14:19:01 s3transfer==0.10.1
14:19:01 simplejson==3.19.2
14:19:01 six==1.16.0
14:19:01 smmap==5.0.1
14:19:01 soupsieve==2.5
14:19:01 stevedore==5.2.0
14:19:01 tabulate==0.9.0
14:19:01 toml==0.10.2
14:19:01 tomlkit==0.12.4
14:19:01 tqdm==4.66.2
14:19:01 typing_extensions==4.11.0
14:19:01 tzdata==2024.1
14:19:01 urllib3==1.26.18
14:19:01 virtualenv==20.26.0
14:19:01 wcwidth==0.2.13
14:19:01 websocket-client==1.8.0
14:19:01 wrapt==1.16.0
14:19:01 xdg==6.0.0
14:19:01 xmltodict==0.13.0
14:19:01 yq==3.4.1
14:19:01 [EnvInject] - Injecting environment variables from a build step.
14:19:01 [EnvInject] - Injecting as environment variables the properties content
14:19:01 SET_JDK_VERSION=openjdk17
14:19:01 GIT_URL="git://cloud.onap.org/mirror"
14:19:01
14:19:01 [EnvInject] - Variables injected successfully.
14:19:01 [policy-pap-master-project-csit-pap] $ /bin/sh /tmp/jenkins6333459876021950769.sh
14:19:01 ---> update-java-alternatives.sh
14:19:01 ---> Updating Java version
14:19:01 ---> Ubuntu/Debian system detected
14:19:01 update-alternatives: using /usr/lib/jvm/java-17-openjdk-amd64/bin/java to provide /usr/bin/java (java) in manual mode
14:19:01 update-alternatives: using /usr/lib/jvm/java-17-openjdk-amd64/bin/javac to provide /usr/bin/javac (javac) in manual mode
14:19:01 update-alternatives: using /usr/lib/jvm/java-17-openjdk-amd64 to provide /usr/lib/jvm/java-openjdk (java_sdk_openjdk) in manual mode
14:19:02 openjdk version "17.0.4" 2022-07-19
14:19:02 OpenJDK Runtime Environment (build 17.0.4+8-Ubuntu-118.04)
14:19:02 OpenJDK 64-Bit Server VM (build 17.0.4+8-Ubuntu-118.04, mixed mode, sharing)
14:19:02 JAVA_HOME=/usr/lib/jvm/java-17-openjdk-amd64
14:19:02 [EnvInject] - Injecting environment variables from a build step.
14:19:02 [EnvInject] - Injecting as environment variables the properties file path '/tmp/java.env'
14:19:02 [EnvInject] - Variables injected successfully.
14:19:02 [policy-pap-master-project-csit-pap] $ /bin/sh -xe /tmp/jenkins17700834290198915127.sh
14:19:02 + /w/workspace/policy-pap-master-project-csit-pap/csit/run-project-csit.sh pap
14:19:02 + set +u
14:19:02 + save_set
14:19:02 + RUN_CSIT_SAVE_SET=ehxB
14:19:02 + RUN_CSIT_SHELLOPTS=braceexpand:errexit:hashall:interactive-comments:pipefail:xtrace
14:19:02 + '[' 1 -eq 0 ']'
14:19:02 + '[' -z /w/workspace/policy-pap-master-project-csit-pap ']'
14:19:02 + export PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin
14:19:02 + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin
14:19:02 + export SCRIPTS=/w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts
14:19:02 + SCRIPTS=/w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts
14:19:02 + export ROBOT_VARIABLES=
14:19:02 + ROBOT_VARIABLES=
14:19:02 + export PROJECT=pap
14:19:02 + PROJECT=pap
14:19:02 + cd /w/workspace/policy-pap-master-project-csit-pap
14:19:02 + rm -rf /w/workspace/policy-pap-master-project-csit-pap/csit/archives/pap
14:19:02 + mkdir -p /w/workspace/policy-pap-master-project-csit-pap/csit/archives/pap
14:19:02 + source_safely /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/prepare-robot-env.sh
14:19:02 + '[' -z /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/prepare-robot-env.sh ']'
14:19:02 + relax_set
14:19:02 + set +e
14:19:02 + set +o pipefail
14:19:02 + . /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/prepare-robot-env.sh
14:19:02 ++ '[' -z /w/workspace/policy-pap-master-project-csit-pap ']'
14:19:02 +++ mktemp -d
14:19:02 ++ ROBOT_VENV=/tmp/tmp.OTxYuIrIme
14:19:02 ++ echo ROBOT_VENV=/tmp/tmp.OTxYuIrIme
14:19:02 +++ python3 --version
14:19:02 ++ echo 'Python version is: Python 3.6.9'
14:19:02 Python version is: Python 3.6.9
14:19:02 ++ python3 -m venv --clear /tmp/tmp.OTxYuIrIme
14:19:03 ++ source /tmp/tmp.OTxYuIrIme/bin/activate
14:19:03 +++ deactivate nondestructive
14:19:03 +++ '[' -n '' ']'
14:19:03 +++ '[' -n '' ']'
14:19:03 +++ '[' -n /bin/bash -o -n '' ']'
14:19:03 +++ hash -r
14:19:03 +++ '[' -n '' ']'
14:19:03 +++ unset VIRTUAL_ENV
14:19:03 +++ '[' '!' nondestructive = nondestructive ']'
14:19:03 +++ VIRTUAL_ENV=/tmp/tmp.OTxYuIrIme
14:19:03 +++ export VIRTUAL_ENV
14:19:03 +++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin
14:19:03 +++ PATH=/tmp/tmp.OTxYuIrIme/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin
14:19:03 +++ export PATH
14:19:03 +++ '[' -n '' ']'
14:19:03 +++ '[' -z '' ']'
14:19:03 +++ _OLD_VIRTUAL_PS1=
14:19:03 +++ '[' 'x(tmp.OTxYuIrIme) ' '!=' x ']'
14:19:03 +++ PS1='(tmp.OTxYuIrIme) '
14:19:03 +++ export PS1
14:19:03 +++ '[' -n /bin/bash -o -n '' ']'
14:19:03 +++ hash -r
14:19:03 ++ set -exu
14:19:03 ++ python3 -m pip install -qq --upgrade 'pip<=23.0' 'setuptools<=66.1.1'
14:19:07 ++ echo 'Installing Python Requirements'
14:19:07 Installing Python Requirements
14:19:07 ++ python3 -m pip install -qq -r /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/pylibs.txt
14:19:35 ++ python3 -m pip -qq freeze
14:19:35 bcrypt==4.0.1
14:19:35 beautifulsoup4==4.12.3
14:19:35 bitarray==2.9.2
14:19:35 certifi==2024.2.2
14:19:35 cffi==1.15.1
14:19:35 charset-normalizer==2.0.12
14:19:35 cryptography==40.0.2
14:19:35 decorator==5.1.1
14:19:35 elasticsearch==7.17.9
14:19:35 elasticsearch-dsl==7.4.1
14:19:35 enum34==1.1.10
14:19:35 idna==3.7
14:19:35 importlib-resources==5.4.0
14:19:35 ipaddr==2.2.0
14:19:35 isodate==0.6.1
14:19:35 jmespath==0.10.0
14:19:35 jsonpatch==1.32
14:19:35 jsonpath-rw==1.4.0
14:19:35 jsonpointer==2.3
14:19:35 lxml==5.2.1
14:19:35 netaddr==0.8.0
14:19:35 netifaces==0.11.0
14:19:35 odltools==0.1.28
14:19:35 paramiko==3.4.0
14:19:35 pkg_resources==0.0.0
14:19:35 ply==3.11
14:19:35 pyang==2.6.0
14:19:35 pyangbind==0.8.1
14:19:35 pycparser==2.21
14:19:35 pyhocon==0.3.60
14:19:35 PyNaCl==1.5.0
14:19:35 pyparsing==3.1.2
14:19:35 python-dateutil==2.9.0.post0
14:19:35 regex==2023.8.8
14:19:35 requests==2.27.1
14:19:35 robotframework==6.1.1
14:19:35 robotframework-httplibrary==0.4.2
14:19:35 robotframework-pythonlibcore==3.0.0
14:19:35 robotframework-requests==0.9.4
14:19:35 robotframework-selenium2library==3.0.0
14:19:35 robotframework-seleniumlibrary==5.1.3
14:19:35 robotframework-sshlibrary==3.8.0
14:19:35 scapy==2.5.0
14:19:35 scp==0.14.5
14:19:35 selenium==3.141.0
14:19:35 six==1.16.0
14:19:35 soupsieve==2.3.2.post1
14:19:35 urllib3==1.26.18
14:19:35 waitress==2.0.0
14:19:35 WebOb==1.8.7
14:19:35 WebTest==3.0.0
14:19:35 zipp==3.6.0
14:19:35 ++ mkdir -p /tmp/tmp.OTxYuIrIme/src/onap
14:19:35 ++ rm -rf /tmp/tmp.OTxYuIrIme/src/onap/testsuite
14:19:35 ++ python3 -m pip install -qq --upgrade --extra-index-url=https://nexus3.onap.org/repository/PyPi.staging/simple 'robotframework-onap==0.6.0.*' --pre
14:19:46 ++ echo 'Installing python confluent-kafka library'
14:19:46 Installing python confluent-kafka library
14:19:46 ++ python3 -m pip install -qq confluent-kafka
14:19:47 ++ echo 'Uninstall docker-py and reinstall docker.'
14:19:47 Uninstall docker-py and reinstall docker.
14:19:47 ++ python3 -m pip uninstall -y -qq docker
14:19:48 ++ python3 -m pip install -U -qq docker
14:19:50 ++ python3 -m pip -qq freeze
14:19:50 bcrypt==4.0.1
14:19:50 beautifulsoup4==4.12.3
14:19:50 bitarray==2.9.2
14:19:50 certifi==2024.2.2
14:19:50 cffi==1.15.1
14:19:50 charset-normalizer==2.0.12
14:19:50 confluent-kafka==2.3.0
14:19:50 cryptography==40.0.2
14:19:50 decorator==5.1.1
14:19:50 deepdiff==5.7.0
14:19:50 dnspython==2.2.1
14:19:50 docker==5.0.3
14:19:50 elasticsearch==7.17.9
14:19:50 elasticsearch-dsl==7.4.1
14:19:50 enum34==1.1.10
14:19:50 future==1.0.0
14:19:50 idna==3.7
14:19:50 importlib-resources==5.4.0
14:19:50 ipaddr==2.2.0
14:19:50 isodate==0.6.1
14:19:50 Jinja2==3.0.3
14:19:50 jmespath==0.10.0
14:19:50 jsonpatch==1.32
14:19:50 jsonpath-rw==1.4.0
14:19:50 jsonpointer==2.3
14:19:50 kafka-python==2.0.2
14:19:50 lxml==5.2.1
14:19:50 MarkupSafe==2.0.1
14:19:50 more-itertools==5.0.0
14:19:50 netaddr==0.8.0
14:19:50 netifaces==0.11.0
14:19:50 odltools==0.1.28
14:19:50 ordered-set==4.0.2
14:19:50 paramiko==3.4.0
14:19:50 pbr==6.0.0
14:19:50 pkg_resources==0.0.0
14:19:50 ply==3.11
14:19:50 protobuf==3.19.6
14:19:50 pyang==2.6.0
14:19:50 pyangbind==0.8.1
14:19:50 pycparser==2.21
14:19:50 pyhocon==0.3.60
14:19:50 PyNaCl==1.5.0
14:19:50 pyparsing==3.1.2
14:19:50 python-dateutil==2.9.0.post0
14:19:50 PyYAML==6.0.1
14:19:50 regex==2023.8.8
14:19:50 requests==2.27.1
14:19:50 robotframework==6.1.1
14:19:50 robotframework-httplibrary==0.4.2
14:19:50 robotframework-onap==0.6.0.dev105
14:19:50 robotframework-pythonlibcore==3.0.0
14:19:50 robotframework-requests==0.9.4
14:19:50 robotframework-selenium2library==3.0.0
14:19:50 robotframework-seleniumlibrary==5.1.3
14:19:50 robotframework-sshlibrary==3.8.0
14:19:50 robotlibcore-temp==1.0.2
14:19:50 scapy==2.5.0
14:19:50 scp==0.14.5
14:19:50 selenium==3.141.0
14:19:50 six==1.16.0
14:19:50 soupsieve==2.3.2.post1
14:19:50 urllib3==1.26.18
14:19:50 waitress==2.0.0
14:19:50 WebOb==1.8.7
14:19:50 websocket-client==1.3.1
14:19:50 WebTest==3.0.0
14:19:50 zipp==3.6.0
14:19:51 ++ uname
14:19:51 ++ grep -q Linux
14:19:51 ++ sudo apt-get -y -qq install libxml2-utils
14:19:51 + load_set
14:19:51 + _setopts=ehuxB
14:19:51 ++ echo braceexpand:hashall:interactive-comments:nounset:xtrace
14:19:51 ++ tr : ' '
14:19:51 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
14:19:51 + set +o braceexpand
14:19:51 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
14:19:51 + set +o hashall
14:19:51 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
14:19:51 + set +o interactive-comments
14:19:51 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
14:19:51 + set +o nounset
14:19:51 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
14:19:51 + set +o xtrace
14:19:51 ++ echo ehuxB
14:19:51 ++ sed 's/./& /g'
14:19:51 + for i in $(echo "$_setopts" | sed 's/./& /g')
14:19:51 + set +e
14:19:51 + for i in $(echo "$_setopts" | sed 's/./& /g')
14:19:51 + set +h
14:19:51 + for i in $(echo "$_setopts" | sed 's/./& /g')
14:19:51 + set +u
14:19:51 + for i in $(echo "$_setopts" | sed 's/./& /g')
14:19:51 + set +x
14:19:51 + source_safely /tmp/tmp.OTxYuIrIme/bin/activate
14:19:51 + '[' -z /tmp/tmp.OTxYuIrIme/bin/activate ']'
14:19:51 + relax_set
14:19:51 + set +e
14:19:51 + set +o pipefail
14:19:51 + . /tmp/tmp.OTxYuIrIme/bin/activate
14:19:51 ++ deactivate nondestructive
14:19:51 ++ '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin ']'
14:19:51 ++ PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin
14:19:51 ++ export PATH
14:19:51 ++ unset _OLD_VIRTUAL_PATH
14:19:51 ++ '[' -n '' ']'
14:19:51 ++ '[' -n /bin/bash -o -n '' ']'
14:19:51 ++ hash -r
14:19:51 ++ '[' -n '' ']'
14:19:51 ++ unset VIRTUAL_ENV
14:19:51 ++ '[' '!' nondestructive = nondestructive ']'
14:19:51 ++ VIRTUAL_ENV=/tmp/tmp.OTxYuIrIme
14:19:51 ++ export VIRTUAL_ENV
14:19:51 ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin
14:19:51 ++ PATH=/tmp/tmp.OTxYuIrIme/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin
14:19:51 ++ export PATH
14:19:51 ++ '[' -n '' ']'
14:19:51 ++ '[' -z '' ']'
14:19:51 ++ _OLD_VIRTUAL_PS1='(tmp.OTxYuIrIme) '
14:19:51 ++ '[' 'x(tmp.OTxYuIrIme) ' '!=' x ']'
14:19:51 ++ PS1='(tmp.OTxYuIrIme) (tmp.OTxYuIrIme) '
14:19:51 ++ export PS1
14:19:51 ++ '[' -n /bin/bash -o -n '' ']'
14:19:51 ++ hash -r
14:19:51 + load_set
14:19:51 + _setopts=hxB
14:19:51 ++ echo braceexpand:hashall:interactive-comments:xtrace
14:19:51 ++ tr : ' '
14:19:51 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
14:19:51 + set +o braceexpand
14:19:51 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
14:19:51 + set +o hashall
14:19:51 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
14:19:51 + set +o interactive-comments
14:19:51 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
14:19:51 + set +o xtrace
14:19:51 ++ echo hxB
14:19:51 ++ sed 's/./& /g'
14:19:51 + for i in $(echo "$_setopts" | sed 's/./& /g')
14:19:51 + set +h
14:19:51 + for i in $(echo "$_setopts" | sed 's/./& /g')
14:19:51 + set +x
14:19:51 + export TEST_PLAN_DIR=/w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests
14:19:51 + TEST_PLAN_DIR=/w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests
14:19:51 + export TEST_OPTIONS=
14:19:51 + TEST_OPTIONS=
14:19:51 ++ mktemp -d
14:19:51 + WORKDIR=/tmp/tmp.9uiB25C2Gx
14:19:51 + cd /tmp/tmp.9uiB25C2Gx
14:19:51 + docker login -u docker -p docker nexus3.onap.org:10001
14:19:53 WARNING! Using --password via the CLI is insecure. Use --password-stdin.
14:19:53 WARNING! Your password will be stored unencrypted in /home/jenkins/.docker/config.json.
14:19:53 Configure a credential helper to remove this warning. See
14:19:53 https://docs.docker.com/engine/reference/commandline/login/#credentials-store
14:19:53
14:19:53 Login Succeeded
14:19:53 + SETUP=/w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/setup-pap.sh
14:19:53 + '[' -f /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/setup-pap.sh ']'
14:19:53 + echo 'Running setup script /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/setup-pap.sh'
14:19:53 Running setup script /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/setup-pap.sh
14:19:53 + source_safely /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/setup-pap.sh
14:19:53 + '[' -z /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/setup-pap.sh ']'
14:19:53 + relax_set
14:19:53 + set +e
14:19:53 + set +o pipefail
14:19:53 + . /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/setup-pap.sh
14:19:53 ++ source /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/node-templates.sh
14:19:53 +++ '[' -z /w/workspace/policy-pap-master-project-csit-pap ']'
14:19:53 ++++ awk -F= '$1 == "defaultbranch" { print $2 }' /w/workspace/policy-pap-master-project-csit-pap/.gitreview
14:19:53 +++ GERRIT_BRANCH=master
14:19:53 +++ echo GERRIT_BRANCH=master
14:19:53 GERRIT_BRANCH=master
14:19:53 +++ rm -rf /w/workspace/policy-pap-master-project-csit-pap/models
14:19:53 +++ mkdir /w/workspace/policy-pap-master-project-csit-pap/models
14:19:53 +++ git clone -b master --single-branch https://github.com/onap/policy-models.git /w/workspace/policy-pap-master-project-csit-pap/models
14:19:53 Cloning into '/w/workspace/policy-pap-master-project-csit-pap/models'...
14:19:55 +++ export DATA=/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies
14:19:55 +++ DATA=/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies
14:19:55 +++ export NODETEMPLATES=/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/nodetemplates
14:19:55 +++ NODETEMPLATES=/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/nodetemplates
14:19:55 +++ sed -e 's!Measurement_vGMUX!ADifferentValue!' /w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies/vCPE.policy.monitoring.input.tosca.json
14:19:55 +++ sed -e 's!"version": "1.0.0"!"version": "2.0.0"!' -e 's!"policy-version": 1!"policy-version": 2!' /w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies/vCPE.policy.monitoring.input.tosca.json
14:19:55 ++ source /w/workspace/policy-pap-master-project-csit-pap/compose/start-compose.sh apex-pdp --grafana
14:19:55 +++ '[' -z /w/workspace/policy-pap-master-project-csit-pap ']'
14:19:55 +++ COMPOSE_FOLDER=/w/workspace/policy-pap-master-project-csit-pap/compose
14:19:55 +++ grafana=false
14:19:55 +++ gui=false
14:19:55 +++ [[ 2 -gt 0 ]]
14:19:55 +++ key=apex-pdp
14:19:55 +++ case $key in
14:19:55 +++ echo apex-pdp
14:19:55 apex-pdp
14:19:55 +++ component=apex-pdp
14:19:55 +++ shift
14:19:55 +++ [[ 1 -gt 0 ]]
14:19:55 +++ key=--grafana
14:19:55 +++ case $key in
14:19:55 +++ grafana=true
14:19:55 +++ shift
14:19:55 +++ [[ 0 -gt 0 ]]
14:19:55 +++ cd /w/workspace/policy-pap-master-project-csit-pap/compose
14:19:55 +++ echo 'Configuring docker compose...'
14:19:55 Configuring docker compose...
14:19:55 +++ source export-ports.sh
14:19:55 +++ source get-versions.sh
14:19:57 +++ '[' -z pap ']'
14:19:57 +++ '[' -n apex-pdp ']'
14:19:57 +++ '[' apex-pdp == logs ']'
14:19:57 +++ '[' true = true ']'
14:19:57 +++ echo 'Starting apex-pdp application with Grafana'
14:19:57 Starting apex-pdp application with Grafana
14:19:57 +++ docker-compose up -d apex-pdp grafana
14:19:59 Creating network "compose_default" with the default driver
14:19:59 Pulling prometheus (nexus3.onap.org:10001/prom/prometheus:latest)...
14:20:00 latest: Pulling from prom/prometheus
14:20:05 Digest: sha256:4f6c47e39a9064028766e8c95890ed15690c30f00c4ba14e7ce6ae1ded0295b1
14:20:05 Status: Downloaded newer image for nexus3.onap.org:10001/prom/prometheus:latest
14:20:05 Pulling grafana (nexus3.onap.org:10001/grafana/grafana:latest)...
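The start-compose.sh xtrace above shows a conventional while/shift/case argument loop setting `component`, `grafana`, and `gui` before composing services. A reconstructed sketch of that loop (the variable names come from the trace; the exact script body is an assumption, since only its trace appears in the log):

```shell
#!/bin/bash
set -eu
# Sketch of start-compose.sh's argument handling (structure assumed from the
# xtrace output; not the script's verbatim source)
component=""
grafana=false
gui=false
set -- apex-pdp --grafana   # same arguments the job passes in the log
while [ $# -gt 0 ]; do
  key="$1"
  case $key in
    --grafana) grafana=true ;;   # enable the Grafana/Prometheus stack
    --gui)     gui=true ;;       # enable the policy GUI
    *)         component="$key" ;;  # bare word selects the PDP component
  esac
  shift
done
echo "component=$component grafana=$grafana gui=$gui"
```

With `apex-pdp --grafana`, this leaves `component=apex-pdp` and `grafana=true`, matching the "Starting apex-pdp application with Grafana" branch taken later in the trace.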
14:20:06 latest: Pulling from grafana/grafana
14:20:11 Digest: sha256:7d5faae481a4c6f436c99e98af11534f7fd5e8d3e35213552dd1dd02bc393d2e
14:20:11 Status: Downloaded newer image for nexus3.onap.org:10001/grafana/grafana:latest
14:20:11 Pulling mariadb (nexus3.onap.org:10001/mariadb:10.10.2)...
14:20:11 10.10.2: Pulling from mariadb
14:20:16 Digest: sha256:bfc25a68e113de43d0d112f5a7126df8e278579c3224e3923359e1c1d8d5ce6e
14:20:16 Status: Downloaded newer image for nexus3.onap.org:10001/mariadb:10.10.2
14:20:16 Pulling simulator (nexus3.onap.org:10001/onap/policy-models-simulator:3.1.2-SNAPSHOT)...
14:20:16 3.1.2-SNAPSHOT: Pulling from onap/policy-models-simulator
14:20:20 Digest: sha256:8c393534de923b51cd2c2937210a65f4f06f457c0dff40569dd547e5429385c8
14:20:20 Status: Downloaded newer image for nexus3.onap.org:10001/onap/policy-models-simulator:3.1.2-SNAPSHOT
14:20:20 Pulling zookeeper (confluentinc/cp-zookeeper:latest)...
14:20:21 latest: Pulling from confluentinc/cp-zookeeper
14:21:30 Digest: sha256:4dc780642bfc5ec3a2d4901e2ff1f9ddef7f7c5c0b793e1e2911cbfb4e3a3214
14:21:30 Status: Downloaded newer image for confluentinc/cp-zookeeper:latest
14:21:30 Pulling kafka (confluentinc/cp-kafka:latest)...
14:21:30 latest: Pulling from confluentinc/cp-kafka
14:21:33 Digest: sha256:620734d9fc0bb1f9886932e5baf33806074469f40e3fe246a3fdbb59309535fa
14:21:33 Status: Downloaded newer image for confluentinc/cp-kafka:latest
14:21:33 Pulling policy-db-migrator (nexus3.onap.org:10001/onap/policy-db-migrator:3.1.2-SNAPSHOT)...
14:21:33 3.1.2-SNAPSHOT: Pulling from onap/policy-db-migrator
14:21:36 Digest: sha256:6c43c624b12507ad4db9e9629273366fa843a4406dbb129d263c111145911791
14:21:36 Status: Downloaded newer image for nexus3.onap.org:10001/onap/policy-db-migrator:3.1.2-SNAPSHOT
14:21:36 Pulling api (nexus3.onap.org:10001/onap/policy-api:3.1.2-SNAPSHOT)...
14:21:36 3.1.2-SNAPSHOT: Pulling from onap/policy-api
14:21:37 Digest: sha256:73236d56a7796996901511a1cb6c2fe3204e974356a78c9761a399b0c362efb6
14:21:37 Status: Downloaded newer image for nexus3.onap.org:10001/onap/policy-api:3.1.2-SNAPSHOT
14:21:37 Pulling pap (nexus3.onap.org:10001/onap/policy-pap:3.1.2-SNAPSHOT)...
14:21:38 3.1.2-SNAPSHOT: Pulling from onap/policy-pap
14:21:40 Digest: sha256:a6a581513619dfb88af12cb5f913059ca149fe42561b778b38baf001f8cfe10c
14:21:40 Status: Downloaded newer image for nexus3.onap.org:10001/onap/policy-pap:3.1.2-SNAPSHOT
14:21:40 Pulling apex-pdp (nexus3.onap.org:10001/onap/policy-apex-pdp:3.1.2-SNAPSHOT)...
14:21:41 3.1.2-SNAPSHOT: Pulling from onap/policy-apex-pdp
14:21:47 Digest: sha256:15db3ed25bc2c5fcac7635cebf8ee909afbd4fd846efff231410c6f1346614e7
14:21:47 Status: Downloaded newer image for nexus3.onap.org:10001/onap/policy-apex-pdp:3.1.2-SNAPSHOT
14:21:48 Creating prometheus ...
14:21:48 Creating zookeeper ...
14:21:48 Creating simulator ...
14:21:48 Creating mariadb ...
14:22:09 Creating mariadb ... done
14:22:09 Creating policy-db-migrator ...
14:22:10 Creating policy-db-migrator ... done
14:22:10 Creating policy-api ...
14:22:11 Creating policy-api ... done
14:22:12 Creating simulator ... done
14:22:13 Creating zookeeper ... done
14:22:13 Creating kafka ...
14:22:14 Creating prometheus ... done
14:22:14 Creating grafana ...
14:22:15 Creating grafana ... done
14:22:16 Creating kafka ... done
14:22:16 Creating policy-pap ...
14:22:17 Creating policy-pap ... done
14:22:17 Creating policy-apex-pdp ...
14:22:19 Creating policy-apex-pdp ... done
14:22:19 +++ echo 'Prometheus server: http://localhost:30259'
14:22:19 Prometheus server: http://localhost:30259
14:22:19 +++ echo 'Grafana server: http://localhost:30269'
14:22:19 Grafana server: http://localhost:30269
14:22:19 +++ cd /w/workspace/policy-pap-master-project-csit-pap
14:22:19 ++ sleep 10
14:22:29 ++ unset http_proxy https_proxy
14:22:29 ++ bash /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/wait_for_rest.sh localhost 30003
14:22:29 Waiting for REST to come up on localhost port 30003...
14:22:29 NAMES STATUS
14:22:29 policy-apex-pdp Up 10 seconds
14:22:29 policy-pap Up 11 seconds
14:22:29 grafana Up 13 seconds
14:22:29 kafka Up 12 seconds
14:22:29 policy-api Up 18 seconds
14:22:29 policy-db-migrator Up 19 seconds
14:22:29 mariadb Up 20 seconds
14:22:29 simulator Up 17 seconds
14:22:29 zookeeper Up 15 seconds
14:22:29 prometheus Up 14 seconds
14:22:34 NAMES STATUS
14:22:34 policy-apex-pdp Up 15 seconds
14:22:34 policy-pap Up 16 seconds
14:22:34 grafana Up 18 seconds
14:22:34 kafka Up 17 seconds
14:22:34 policy-api Up 23 seconds
14:22:34 policy-db-migrator Up 24 seconds
14:22:34 mariadb Up 25 seconds
14:22:34 simulator Up 22 seconds
14:22:34 zookeeper Up 21 seconds
14:22:34 prometheus Up 19 seconds
14:22:39 NAMES STATUS
14:22:39 policy-apex-pdp Up 20 seconds
14:22:39 policy-pap Up 21 seconds
14:22:39 grafana Up 23 seconds
14:22:39 kafka Up 22 seconds
14:22:39 policy-api Up 28 seconds
14:22:39 mariadb Up 30 seconds
14:22:39 simulator Up 27 seconds
14:22:39 zookeeper Up 26 seconds
14:22:39 prometheus Up 25 seconds
14:22:44 NAMES STATUS
14:22:44 policy-apex-pdp Up 25 seconds
14:22:44 policy-pap Up 26 seconds
14:22:44 grafana Up 28 seconds
14:22:44 kafka Up 27 seconds
14:22:44 policy-api Up 33 seconds
14:22:44 mariadb Up 35 seconds
14:22:44 simulator Up 32 seconds
14:22:44 zookeeper Up 31 seconds
14:22:44 prometheus Up 30 seconds
14:22:49 NAMES STATUS
14:22:49 policy-apex-pdp Up 30 seconds
14:22:49 policy-pap Up 31 seconds
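The contents of `wait_for_rest.sh` are not shown in the log; only its invocation (`wait_for_rest.sh localhost 30003`) and the periodic container-status dumps appear. As a hypothetical sketch, a helper of that kind usually polls a TCP port with a timeout, for example (function name, parameters, and use of bash's `/dev/tcp` redirection are all assumptions, not the real script):

```shell
#!/bin/bash
# Hypothetical port-polling helper in the spirit of wait_for_rest.sh;
# the actual script's source is not visible in the log.
wait_for_port() {
  local host=$1 port=$2 timeout=${3:-60} waited=0
  # (exec 3<>/dev/tcp/HOST/PORT) succeeds once something is listening
  while ! (exec 3<>"/dev/tcp/${host}/${port}") 2>/dev/null; do
    waited=$((waited + 1))
    if [ "$waited" -ge "$timeout" ]; then
      echo "Timed out waiting for ${host}:${port}" >&2
      return 1
    fi
    sleep 1
  done
  echo "${host}:${port} is up"
}
```

Usage matching the log's call would be something like `wait_for_port localhost 30003 120`. Opening the connection in a subshell keeps file descriptor 3 from leaking into the caller.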
14:22:49 grafana Up 33 seconds 14:22:49 kafka Up 32 seconds 14:22:49 policy-api Up 38 seconds 14:22:49 mariadb Up 40 seconds 14:22:49 simulator Up 37 seconds 14:22:49 zookeeper Up 36 seconds 14:22:49 prometheus Up 35 seconds 14:22:54 NAMES STATUS 14:22:54 policy-apex-pdp Up 35 seconds 14:22:54 policy-pap Up 36 seconds 14:22:54 grafana Up 38 seconds 14:22:54 kafka Up 37 seconds 14:22:54 policy-api Up 43 seconds 14:22:54 mariadb Up 45 seconds 14:22:54 simulator Up 42 seconds 14:22:54 zookeeper Up 41 seconds 14:22:54 prometheus Up 40 seconds 14:22:59 NAMES STATUS 14:22:59 policy-apex-pdp Up 40 seconds 14:22:59 policy-pap Up 41 seconds 14:22:59 grafana Up 43 seconds 14:22:59 kafka Up 42 seconds 14:22:59 policy-api Up 48 seconds 14:22:59 mariadb Up 50 seconds 14:22:59 simulator Up 47 seconds 14:22:59 zookeeper Up 46 seconds 14:22:59 prometheus Up 45 seconds 14:22:59 ++ export 'SUITES=pap-test.robot 14:22:59 pap-slas.robot' 14:22:59 ++ SUITES='pap-test.robot 14:22:59 pap-slas.robot' 14:22:59 ++ ROBOT_VARIABLES='-v POLICY_PAP_IP:localhost:30003 -v POLICY_API_IP:localhost:30002 14:22:59 -v PROMETHEUS_IP:localhost:30259 -v DATA:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies -v NODETEMPLATES:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/nodetemplates' 14:22:59 + load_set 14:22:59 + _setopts=hxB 14:22:59 ++ echo braceexpand:hashall:interactive-comments:xtrace 14:22:59 ++ tr : ' ' 14:22:59 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') 14:22:59 + set +o braceexpand 14:22:59 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') 14:22:59 + set +o hashall 14:22:59 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') 14:22:59 + set +o interactive-comments 14:22:59 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') 14:22:59 + set +o xtrace 14:22:59 ++ echo hxB 14:22:59 ++ sed 's/./& /g' 14:22:59 + for i in $(echo "$_setopts" | sed 's/./& /g') 14:22:59 + set +h 14:22:59 + for i in $(echo "$_setopts" | sed 
's/./& /g')
14:22:59 + set +x
14:22:59 + docker_stats
14:22:59 + tee /w/workspace/policy-pap-master-project-csit-pap/csit/archives/pap/_sysinfo-1-after-setup.txt
14:22:59 ++ uname -s
14:22:59 + '[' Linux == Darwin ']'
14:22:59 + sh -c 'top -bn1 | head -3'
14:22:59 top - 14:22:59 up 6 min, 0 users, load average: 3.58, 2.07, 0.92
14:22:59 Tasks: 202 total, 1 running, 131 sleeping, 0 stopped, 0 zombie
14:22:59 %Cpu(s): 8.8 us, 1.9 sy, 0.0 ni, 80.5 id, 8.8 wa, 0.0 hi, 0.0 si, 0.0 st
14:22:59 + echo
14:22:59 + sh -c 'free -h'
14:22:59 
14:22:59        total   used   free   shared   buff/cache   available
14:22:59 Mem:     31G   2.6G    22G     1.3M         6.0G         28G
14:22:59 Swap:   1.0G     0B   1.0G
14:22:59 + echo
14:22:59 + docker ps --format 'table {{ .Names }}\t{{ .Status }}'
14:22:59 
14:22:59 NAMES             STATUS
14:22:59 policy-apex-pdp   Up 40 seconds
14:22:59 policy-pap        Up 42 seconds
14:22:59 grafana           Up 44 seconds
14:22:59 kafka             Up 43 seconds
14:22:59 policy-api        Up 48 seconds
14:22:59 mariadb           Up 50 seconds
14:22:59 simulator         Up 47 seconds
14:22:59 zookeeper         Up 46 seconds
14:22:59 prometheus        Up 45 seconds
14:22:59 + echo
14:22:59 
14:22:59 + docker stats --no-stream
14:23:02 CONTAINER ID   NAME              CPU %   MEM USAGE / LIMIT     MEM %   NET I/O           BLOCK I/O       PIDS
14:23:02 1df5e839969b   policy-apex-pdp   1.56%   172.3MiB / 31.41GiB   0.54%   7.8kB / 7.61kB    0B / 0B         48
14:23:02 9cc97d748771   policy-pap        3.01%   559.9MiB / 31.41GiB   1.74%   34.2kB / 35.6kB   0B / 149MB      62
14:23:02 0d922879188a   grafana           0.03%   54.15MiB / 31.41GiB   0.17%   18.5kB / 3.18kB   0B / 24.8MB     19
14:23:02 09cfce7a987a   kafka             0.49%   390.6MiB / 31.41GiB   1.21%   70.2kB / 73.8kB   0B / 508kB      84
14:23:02 7f3faa87ecf9   policy-api        0.11%   465MiB / 31.41GiB     1.45%   989kB / 648kB     0B / 0B         52
14:23:02 bd0e7e07829a   mariadb           0.03%   102.4MiB / 31.41GiB   0.32%   935kB / 1.18MB    11MB / 67.9MB   36
14:23:02 259d80ebd636   simulator         0.07%   122.7MiB / 31.41GiB   0.38%   1.27kB / 0B       98.3kB / 0B     76
14:23:02 db21b226f583   zookeeper         0.10%   99.86MiB / 31.41GiB   0.31%   54.5kB / 47.7kB   0B / 389kB      60
14:23:02 742612bf9a64   prometheus        0.00%   19.14MiB / 31.41GiB   0.06%   1.52kB / 432B     0B / 0B         13
14:23:02 + echo
14:23:02 
14:23:02 + cd /tmp/tmp.9uiB25C2Gx
14:23:02 + echo 'Reading the testplan:'
14:23:02 Reading the testplan:
14:23:02 + echo 'pap-test.robot
14:23:02 pap-slas.robot'
14:23:02 + egrep -v '(^[[:space:]]*#|^[[:space:]]*$)'
14:23:02 + sed 's|^|/w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/|'
14:23:02 + cat testplan.txt
14:23:02 /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-test.robot
14:23:02 /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-slas.robot
14:23:02 ++ xargs
14:23:02 + SUITES='/w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-test.robot /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-slas.robot'
14:23:02 + echo 'ROBOT_VARIABLES=-v POLICY_PAP_IP:localhost:30003 -v POLICY_API_IP:localhost:30002
14:23:02 -v PROMETHEUS_IP:localhost:30259 -v DATA:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies -v NODETEMPLATES:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/nodetemplates'
14:23:02 ROBOT_VARIABLES=-v POLICY_PAP_IP:localhost:30003 -v POLICY_API_IP:localhost:30002
14:23:02 -v PROMETHEUS_IP:localhost:30259 -v DATA:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies -v NODETEMPLATES:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/nodetemplates
14:23:02 + echo 'Starting Robot test suites /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-test.robot /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-slas.robot ...'
14:23:02 Starting Robot test suites /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-test.robot /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-slas.robot ...
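The testplan expansion traced above (drop comment/blank lines, prefix each suite with the tests directory, flatten with xargs into SUITES) is a reusable pattern. A minimal standalone sketch, using a hypothetical /tmp/csit-tests directory rather than this job's workspace path:

```shell
#!/usr/bin/env bash
# Sketch of the testplan-to-SUITES expansion seen in the trace above.
# TESTS_DIR is an assumption for illustration, not this job's real path.
set -euo pipefail

TESTS_DIR="/tmp/csit-tests"

# Filter out comment and blank lines, prefix each remaining suite name
# with the tests directory, then join everything onto one line.
expand_testplan() {
    grep -Ev '(^[[:space:]]*#|^[[:space:]]*$)' \
      | sed "s|^|${TESTS_DIR}/|" \
      | xargs
}

printf '%s\n' '# suites to run' '' 'pap-test.robot' 'pap-slas.robot' | expand_testplan
# -> /tmp/csit-tests/pap-test.robot /tmp/csit-tests/pap-slas.robot
```

The resulting single-line string can be passed straight to `python3 -m robot.run` as the suite arguments, which is what the job does next.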
14:23:02 + relax_set
14:23:02 + set +e
14:23:02 + set +o pipefail
14:23:02 + python3 -m robot.run -N pap -v WORKSPACE:/tmp -v POLICY_PAP_IP:localhost:30003 -v POLICY_API_IP:localhost:30002 -v PROMETHEUS_IP:localhost:30259 -v DATA:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies -v NODETEMPLATES:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/nodetemplates /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-test.robot /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-slas.robot
14:23:02 ==============================================================================
14:23:02 pap
14:23:02 ==============================================================================
14:23:02 pap.Pap-Test
14:23:02 ==============================================================================
14:23:03 LoadPolicy :: Create a policy named 'onap.restart.tca' and version... | PASS |
14:23:03 ------------------------------------------------------------------------------
14:23:04 LoadPolicyWithMetadataSet :: Create a policy named 'operational.ap... | PASS |
14:23:04 ------------------------------------------------------------------------------
14:23:04 LoadNodeTemplates :: Create node templates in database using speci... | PASS |
14:23:04 ------------------------------------------------------------------------------
14:23:05 Healthcheck :: Verify policy pap health check | PASS |
14:23:05 ------------------------------------------------------------------------------
14:23:25 Consolidated Healthcheck :: Verify policy consolidated health check | PASS |
14:23:25 ------------------------------------------------------------------------------
14:23:25 Metrics :: Verify policy pap is exporting prometheus metrics | PASS |
14:23:25 ------------------------------------------------------------------------------
14:23:26 AddPdpGroup :: Add a new PdpGroup named 'testGroup' in the policy ... | PASS |
14:23:26 ------------------------------------------------------------------------------
14:23:26 QueryPdpGroupsBeforeActivation :: Verify PdpGroups before activation | PASS |
14:23:26 ------------------------------------------------------------------------------
14:23:26 ActivatePdpGroup :: Change the state of PdpGroup named 'testGroup'... | PASS |
14:23:26 ------------------------------------------------------------------------------
14:23:26 QueryPdpGroupsAfterActivation :: Verify PdpGroups after activation | PASS |
14:23:26 ------------------------------------------------------------------------------
14:23:27 DeployPdpGroups :: Deploy policies in PdpGroups | PASS |
14:23:27 ------------------------------------------------------------------------------
14:23:27 QueryPdpGroupsAfterDeploy :: Verify PdpGroups after undeploy | PASS |
14:23:27 ------------------------------------------------------------------------------
14:23:27 QueryPolicyAuditAfterDeploy :: Verify policy audit record after de... | PASS |
14:23:27 ------------------------------------------------------------------------------
14:23:27 QueryPolicyAuditWithMetadataSetAfterDeploy :: Verify policy audit ... | PASS |
14:23:27 ------------------------------------------------------------------------------
14:23:27 UndeployPolicy :: Undeploy a policy named 'onap.restart.tca' from ... | PASS |
14:23:27 ------------------------------------------------------------------------------
14:23:28 UndeployPolicyWithMetadataSet :: Undeploy a policy named 'operatio... | PASS |
14:23:28 ------------------------------------------------------------------------------
14:23:28 QueryPdpGroupsAfterUndeploy :: Verify PdpGroups after undeploy | PASS |
14:23:28 ------------------------------------------------------------------------------
14:23:48 QueryPolicyAuditAfterUnDeploy :: Verify policy audit record after ... | FAIL |
14:23:48 pdpTypeC != pdpTypeA
14:23:48 ------------------------------------------------------------------------------
14:23:48 QueryPolicyAuditWithMetadataSetAfterUnDeploy :: Verify policy audi... | PASS |
14:23:48 ------------------------------------------------------------------------------
14:23:48 DeactivatePdpGroup :: Change the state of PdpGroup named 'testGrou... | PASS |
14:23:48 ------------------------------------------------------------------------------
14:23:48 DeletePdpGroups :: Delete the PdpGroup named 'testGroup' from poli... | PASS |
14:23:48 ------------------------------------------------------------------------------
14:23:48 QueryPdpGroupsAfterDelete :: Verify PdpGroups after delete | PASS |
14:23:48 ------------------------------------------------------------------------------
14:23:48 pap.Pap-Test | FAIL |
14:23:48 22 tests, 21 passed, 1 failed
14:23:48 ==============================================================================
14:23:48 pap.Pap-Slas
14:23:48 ==============================================================================
14:24:48 WaitForPrometheusServer :: Wait for Prometheus server to gather al... | PASS |
14:24:48 ------------------------------------------------------------------------------
14:24:48 ValidateResponseTimeForHealthcheck :: Validate component healthche... | PASS |
14:24:48 ------------------------------------------------------------------------------
14:24:48 ValidateResponseTimeForSystemHealthcheck :: Validate if system hea... | PASS |
14:24:48 ------------------------------------------------------------------------------
14:24:48 ValidateResponseTimeQueryPolicyAudit :: Validate query audits resp... | PASS |
14:24:48 ------------------------------------------------------------------------------
14:24:48 ValidateResponseTimeUpdateGroup :: Validate pdps/group response time | PASS |
14:24:48 ------------------------------------------------------------------------------
14:24:49 ValidatePolicyDeploymentTime :: Check if deployment of policy is u... | PASS |
14:24:49 ------------------------------------------------------------------------------
14:24:49 ValidateResponseTimeDeletePolicy :: Check if undeployment of polic... | PASS |
14:24:49 ------------------------------------------------------------------------------
14:24:49 ValidateResponseTimeDeleteGroup :: Validate delete group response ... | PASS |
14:24:49 ------------------------------------------------------------------------------
14:24:49 pap.Pap-Slas | PASS |
14:24:49 8 tests, 8 passed, 0 failed
14:24:49 ==============================================================================
14:24:49 pap | FAIL |
14:24:49 30 tests, 29 passed, 1 failed
14:24:49 ==============================================================================
14:24:49 Output:  /tmp/tmp.9uiB25C2Gx/output.xml
14:24:49 Log:     /tmp/tmp.9uiB25C2Gx/log.html
14:24:49 Report:  /tmp/tmp.9uiB25C2Gx/report.html
14:24:49 + RESULT=1
14:24:49 + load_set
14:24:49 + _setopts=hxB
14:24:49 ++ echo braceexpand:hashall:interactive-comments:xtrace
14:24:49 ++ tr : ' '
14:24:49 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
14:24:49 + set +o braceexpand
14:24:49 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
14:24:49 + set +o hashall
14:24:49 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
14:24:49 + set +o interactive-comments
14:24:49 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
14:24:49 + set +o xtrace
14:24:49 ++ echo hxB
14:24:49 ++ sed 's/./& /g'
14:24:49 + for i in $(echo "$_setopts" | sed 's/./& /g')
14:24:49 + set +h
14:24:49 + for i in $(echo "$_setopts" | sed 's/./& /g')
14:24:49 + set +x
14:24:49 + echo 'RESULT: 1'
14:24:49 RESULT: 1
14:24:49 + exit 1
14:24:49 + on_exit
14:24:49 + rc=1
14:24:49 + [[ -n /w/workspace/policy-pap-master-project-csit-pap ]]
14:24:49 + docker ps --format 'table {{ .Names }}\t{{ .Status }}'
14:24:49 NAMES             STATUS
14:24:49 policy-apex-pdp   Up 2 minutes
14:24:49 policy-pap        Up 2 minutes
14:24:49 grafana           Up 2 minutes
14:24:49 kafka             Up 2 minutes
14:24:49 policy-api        Up 2 minutes
14:24:49 mariadb           Up 2 minutes
14:24:49 simulator         Up 2 minutes
14:24:49 zookeeper         Up 2 minutes
14:24:49 prometheus        Up 2 minutes
14:24:49 + docker_stats
14:24:49 ++ uname -s
14:24:49 + '[' Linux == Darwin ']'
14:24:49 + sh -c 'top -bn1 | head -3'
14:24:49 top - 14:24:49 up 8 min, 0 users, load average: 0.94, 1.65, 0.90
14:24:49 Tasks: 200 total, 1 running, 129 sleeping, 0 stopped, 0 zombie
14:24:49 %Cpu(s): 7.8 us, 1.6 sy, 0.0 ni, 83.3 id, 7.2 wa, 0.0 hi, 0.0 si, 0.0 st
14:24:49 + echo
14:24:49 
14:24:49 + sh -c 'free -h'
14:24:49        total   used   free   shared   buff/cache   available
14:24:49 Mem:     31G   2.7G    22G     1.3M         6.0G         28G
14:24:49 Swap:   1.0G     0B   1.0G
14:24:49 + echo
14:24:49 
14:24:49 + docker ps --format 'table {{ .Names }}\t{{ .Status }}'
14:24:49 NAMES             STATUS
14:24:49 policy-apex-pdp   Up 2 minutes
14:24:49 policy-pap        Up 2 minutes
14:24:49 grafana           Up 2 minutes
14:24:49 kafka             Up 2 minutes
14:24:49 policy-api        Up 2 minutes
14:24:49 mariadb           Up 2 minutes
14:24:49 simulator         Up 2 minutes
14:24:49 zookeeper         Up 2 minutes
14:24:49 prometheus        Up 2 minutes
14:24:49 + echo
14:24:49 
14:24:49 + docker stats --no-stream
14:24:52 CONTAINER ID   NAME              CPU %   MEM USAGE / LIMIT     MEM %   NET I/O           BLOCK I/O       PIDS
14:24:52 1df5e839969b   policy-apex-pdp   1.12%   177.9MiB / 31.41GiB   0.55%   55.9kB / 79.8kB   0B / 0B         52
14:24:52 9cc97d748771   policy-pap        0.54%   472.9MiB / 31.41GiB   1.47%   2.47MB / 1.05MB   0B / 149MB      66
14:24:52 0d922879188a   grafana           0.03%   55.11MiB / 31.41GiB   0.17%   21.7kB / 4.6kB    0B / 24.9MB     19
14:24:52 09cfce7a987a   kafka             1.01%   392.1MiB / 31.41GiB   1.22%   237kB / 214kB     0B / 606kB      85
14:24:52 7f3faa87ecf9   policy-api        0.11%   516.5MiB / 31.41GiB   1.61%   2.45MB / 1.1MB    0B / 0B         55
14:24:52 bd0e7e07829a   mariadb           0.02%   103.6MiB / 31.41GiB   0.32%   2.02MB / 4.88MB   11MB / 68.1MB   27
14:24:52 259d80ebd636   simulator         0.07%   122.9MiB / 31.41GiB   0.38%   1.5kB / 0B        98.3kB / 0B     78
14:24:52 db21b226f583   zookeeper         0.08%   98.07MiB / 31.41GiB   0.30%   57.3kB / 49.2kB   0B / 389kB      60
14:24:52 742612bf9a64   prometheus        0.00%   25.72MiB / 31.41GiB   0.08%   180kB / 10.3kB    0B / 0B         14
14:24:52 + echo
14:24:52 
14:24:52 + source_safely /w/workspace/policy-pap-master-project-csit-pap/compose/stop-compose.sh
14:24:52 + '[' -z /w/workspace/policy-pap-master-project-csit-pap/compose/stop-compose.sh ']'
14:24:52 + relax_set
14:24:52 + set +e
14:24:52 + set +o pipefail
14:24:52 + . /w/workspace/policy-pap-master-project-csit-pap/compose/stop-compose.sh
14:24:52 ++ echo 'Shut down started!'
14:24:52 Shut down started!
14:24:52 ++ '[' -z /w/workspace/policy-pap-master-project-csit-pap ']'
14:24:52 ++ COMPOSE_FOLDER=/w/workspace/policy-pap-master-project-csit-pap/compose
14:24:52 ++ cd /w/workspace/policy-pap-master-project-csit-pap/compose
14:24:52 ++ source export-ports.sh
14:24:52 ++ source get-versions.sh
14:24:55 ++ echo 'Collecting logs from docker compose containers...'
14:24:55 Collecting logs from docker compose containers...
14:24:55 ++ docker-compose logs
14:24:57 ++ cat docker_compose.log
14:24:57 Attaching to policy-apex-pdp, policy-pap, grafana, kafka, policy-api, policy-db-migrator, mariadb, simulator, zookeeper, prometheus
14:24:57 grafana | logger=settings t=2024-04-25T14:22:15.897653053Z level=info msg="Starting Grafana" version=10.4.2 commit=701c851be7a930e04fbc6ebb1cd4254da80edd4c branch=v10.4.x compiled=2024-04-25T14:22:15Z
14:24:57 grafana | logger=settings t=2024-04-25T14:22:15.897865477Z level=info msg="Config loaded from" file=/usr/share/grafana/conf/defaults.ini
14:24:57 grafana | logger=settings t=2024-04-25T14:22:15.897876017Z level=info msg="Config loaded from" file=/etc/grafana/grafana.ini
14:24:57 grafana | logger=settings t=2024-04-25T14:22:15.897879567Z level=info msg="Config overridden from command line" arg="default.paths.data=/var/lib/grafana"
14:24:57 grafana | logger=settings t=2024-04-25T14:22:15.897884117Z level=info msg="Config overridden from command line" arg="default.paths.logs=/var/log/grafana"
14:24:57 grafana | logger=settings t=2024-04-25T14:22:15.897887547Z level=info msg="Config overridden from command line" arg="default.paths.plugins=/var/lib/grafana/plugins"
14:24:57 grafana | logger=settings t=2024-04-25T14:22:15.897890817Z level=info msg="Config overridden from command line" arg="default.paths.provisioning=/etc/grafana/provisioning"
14:24:57 grafana | logger=settings t=2024-04-25T14:22:15.897894087Z level=info msg="Config overridden from command line" arg="default.log.mode=console"
14:24:57 grafana | logger=settings t=2024-04-25T14:22:15.897897567Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_DATA=/var/lib/grafana"
14:24:57 grafana | logger=settings t=2024-04-25T14:22:15.897901187Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_LOGS=/var/log/grafana"
14:24:57 grafana | logger=settings t=2024-04-25T14:22:15.897904427Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_PLUGINS=/var/lib/grafana/plugins"
14:24:57 grafana | logger=settings t=2024-04-25T14:22:15.897908217Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_PROVISIONING=/etc/grafana/provisioning"
14:24:57 grafana | logger=settings t=2024-04-25T14:22:15.897912627Z level=info msg=Target target=[all]
14:24:57 grafana | logger=settings t=2024-04-25T14:22:15.897925028Z level=info msg="Path Home" path=/usr/share/grafana
14:24:57 grafana | logger=settings t=2024-04-25T14:22:15.897927958Z level=info msg="Path Data" path=/var/lib/grafana
14:24:57 grafana | logger=settings t=2024-04-25T14:22:15.897930838Z level=info msg="Path Logs" path=/var/log/grafana
14:24:57 grafana | logger=settings t=2024-04-25T14:22:15.897933788Z level=info msg="Path Plugins" path=/var/lib/grafana/plugins
14:24:57 grafana | logger=settings t=2024-04-25T14:22:15.897936948Z level=info msg="Path Provisioning" path=/etc/grafana/provisioning
14:24:57 grafana | logger=settings t=2024-04-25T14:22:15.897940518Z level=info msg="App mode production"
14:24:57 grafana | logger=sqlstore t=2024-04-25T14:22:15.898212702Z level=info msg="Connecting to DB" dbtype=sqlite3
14:24:57 grafana | logger=sqlstore t=2024-04-25T14:22:15.898232752Z level=info msg="Creating SQLite database file" path=/var/lib/grafana/grafana.db
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:15.898822641Z level=info msg="Starting DB migrations"
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:15.899727076Z level=info msg="Executing migration" id="create migration_log table"
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:15.900544379Z level=info msg="Migration successfully executed" id="create migration_log table" duration=816.943µs
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:15.904687174Z level=info msg="Executing migration" id="create user table"
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:15.905256583Z level=info msg="Migration successfully executed" id="create user table" duration=566.169µs
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:15.908645836Z level=info msg="Executing migration" id="add unique index user.login"
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:15.909398708Z level=info msg="Migration successfully executed" id="add unique index user.login" duration=749.201µs
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:15.917371893Z level=info msg="Executing migration" id="add unique index user.email"
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:15.91848003Z level=info msg="Migration successfully executed" id="add unique index user.email" duration=1.107907ms
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:15.923206914Z level=info msg="Executing migration" id="drop index UQE_user_login - v1"
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:15.92422422Z level=info msg="Migration successfully executed" id="drop index UQE_user_login - v1" duration=1.017166ms
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:15.929770957Z level=info msg="Executing migration" id="drop index UQE_user_email - v1"
14:24:57 mariadb | 2024-04-25 14:22:09+00:00 [Note] [Entrypoint]: Entrypoint script for MariaDB Server 1:10.10.2+maria~ubu2204 started.
14:24:57 mariadb | 2024-04-25 14:22:09+00:00 [Note] [Entrypoint]: Switching to dedicated user 'mysql'
14:24:57 mariadb | 2024-04-25 14:22:09+00:00 [Note] [Entrypoint]: Entrypoint script for MariaDB Server 1:10.10.2+maria~ubu2204 started.
14:24:57 mariadb | 2024-04-25 14:22:09+00:00 [Note] [Entrypoint]: Initializing database files
14:24:57 mariadb | 2024-04-25 14:22:09 0 [Warning] mariadbd: io_uring_queue_init() failed with ENOSYS: check seccomp filters, and the kernel version (newer than 5.1 required)
14:24:57 mariadb | 2024-04-25 14:22:09 0 [Warning] InnoDB: liburing disabled: falling back to innodb_use_native_aio=OFF
14:24:57 mariadb | 2024-04-25 14:22:09 0 [Warning] 'default-authentication-plugin' is MySQL 5.6 / 5.7 compatible option. To be implemented in later versions.
14:24:57 mariadb | 
14:24:57 mariadb | 
14:24:57 mariadb | PLEASE REMEMBER TO SET A PASSWORD FOR THE MariaDB root USER !
14:24:57 mariadb | To do so, start the server, then issue the following command:
14:24:57 mariadb | 
14:24:57 mariadb | '/usr/bin/mysql_secure_installation'
14:24:57 mariadb | 
14:24:57 mariadb | which will also give you the option of removing the test
14:24:57 mariadb | databases and anonymous user created by default. This is
14:24:57 mariadb | strongly recommended for production servers.
14:24:57 mariadb | 
14:24:57 mariadb | See the MariaDB Knowledgebase at https://mariadb.com/kb
14:24:57 mariadb | 
14:24:57 mariadb | Please report any problems at https://mariadb.org/jira
14:24:57 mariadb | 
14:24:57 mariadb | The latest information about MariaDB is available at https://mariadb.org/.
14:24:57 mariadb | 
14:24:57 mariadb | Consider joining MariaDB's strong and vibrant community:
14:24:57 mariadb | https://mariadb.org/get-involved/
14:24:57 mariadb | 
14:24:57 mariadb | 2024-04-25 14:22:11+00:00 [Note] [Entrypoint]: Database files initialized
14:24:57 mariadb | 2024-04-25 14:22:11+00:00 [Note] [Entrypoint]: Starting temporary server
14:24:57 mariadb | 2024-04-25 14:22:11+00:00 [Note] [Entrypoint]: Waiting for server startup
14:24:57 mariadb | 2024-04-25 14:22:11 0 [Note] mariadbd (server 10.10.2-MariaDB-1:10.10.2+maria~ubu2204) starting as process 98 ...
14:24:57 mariadb | 2024-04-25 14:22:11 0 [Note] InnoDB: Compressed tables use zlib 1.2.11
14:24:57 mariadb | 2024-04-25 14:22:11 0 [Note] InnoDB: Number of transaction pools: 1
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:15.930383138Z level=info msg="Migration successfully executed" id="drop index UQE_user_email - v1" duration=608.101µs
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:15.939269887Z level=info msg="Executing migration" id="Rename table user to user_v1 - v1"
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:15.943112697Z level=info msg="Migration successfully executed" id="Rename table user to user_v1 - v1" duration=3.84171ms
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:15.94839054Z level=info msg="Executing migration" id="create user table v2"
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:15.949187022Z level=info msg="Migration successfully executed" id="create user table v2" duration=796.122µs
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:15.952346612Z level=info msg="Executing migration" id="create index UQE_user_login - v2"
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:15.95346196Z level=info msg="Migration successfully executed" id="create index UQE_user_login - v2" duration=1.115159ms
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:15.959707257Z level=info msg="Executing migration" id="create index UQE_user_email - v2"
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:15.960810064Z level=info msg="Migration successfully executed" id="create index UQE_user_email - v2" duration=1.102427ms
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:15.965488108Z level=info msg="Executing migration" id="copy data_source v1 to v2"
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:15.965881364Z level=info msg="Migration successfully executed" id="copy data_source v1 to v2" duration=393.356µs
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:15.97005605Z level=info msg="Executing migration" id="Drop old table user_v1"
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:15.970781251Z level=info msg="Migration successfully executed" id="Drop old table user_v1" duration=720.971µs
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:15.977684189Z level=info msg="Executing migration" id="Add column help_flags1 to user table"
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:15.979522348Z level=info msg="Migration successfully executed" id="Add column help_flags1 to user table" duration=1.842159ms
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:15.985893868Z level=info msg="Executing migration" id="Update user table charset"
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:15.985919938Z level=info msg="Migration successfully executed" id="Update user table charset" duration=26.81µs
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:15.990776215Z level=info msg="Executing migration" id="Add last_seen_at column to user"
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:15.992456382Z level=info msg="Migration successfully executed" id="Add last_seen_at column to user" duration=1.680397ms
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:15.998134031Z level=info msg="Executing migration" id="Add missing user data"
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:15.998369925Z level=info msg="Migration successfully executed" id="Add missing user data" duration=236.004µs
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:16.064084864Z level=info msg="Executing migration" id="Add is_disabled column to user"
14:24:57 mariadb | 2024-04-25 14:22:11 0 [Note] InnoDB: Using crc32 + pclmulqdq instructions
14:24:57 mariadb | 2024-04-25 14:22:11 0 [Note] mariadbd: O_TMPFILE is not supported on /tmp (disabling future attempts)
14:24:57 mariadb | 2024-04-25 14:22:11 0 [Warning] mariadbd: io_uring_queue_init() failed with ENOSYS: check seccomp filters, and the kernel version (newer than 5.1 required)
14:24:57 mariadb | 2024-04-25 14:22:11 0 [Warning] InnoDB: liburing disabled: falling back to innodb_use_native_aio=OFF
14:24:57 mariadb | 2024-04-25 14:22:11 0 [Note] InnoDB: Initializing buffer pool, total size = 128.000MiB, chunk size = 2.000MiB
14:24:57 mariadb | 2024-04-25 14:22:11 0 [Note] InnoDB: Completed initialization of buffer pool
14:24:57 mariadb | 2024-04-25 14:22:11 0 [Note] InnoDB: File system buffers for log disabled (block size=512 bytes)
14:24:57 mariadb | 2024-04-25 14:22:11 0 [Note] InnoDB: 128 rollback segments are active.
14:24:57 mariadb | 2024-04-25 14:22:11 0 [Note] InnoDB: Setting file './ibtmp1' size to 12.000MiB. Physically writing the file full; Please wait ...
14:24:57 mariadb | 2024-04-25 14:22:11 0 [Note] InnoDB: File './ibtmp1' size is now 12.000MiB.
14:24:57 mariadb | 2024-04-25 14:22:11 0 [Note] InnoDB: log sequence number 46590; transaction id 14
14:24:57 mariadb | 2024-04-25 14:22:11 0 [Note] Plugin 'FEEDBACK' is disabled.
14:24:57 mariadb | 2024-04-25 14:22:11 0 [Warning] 'default-authentication-plugin' is MySQL 5.6 / 5.7 compatible option. To be implemented in later versions.
14:24:57 mariadb | 2024-04-25 14:22:11 0 [Warning] 'user' entry 'root@mariadb' ignored in --skip-name-resolve mode.
14:24:57 mariadb | 2024-04-25 14:22:11 0 [Warning] 'proxies_priv' entry '@% root@mariadb' ignored in --skip-name-resolve mode.
14:24:57 mariadb | 2024-04-25 14:22:11 0 [Note] mariadbd: ready for connections.
14:24:57 mariadb | Version: '10.10.2-MariaDB-1:10.10.2+maria~ubu2204'  socket: '/run/mysqld/mysqld.sock'  port: 0  mariadb.org binary distribution
14:24:57 mariadb | 2024-04-25 14:22:12+00:00 [Note] [Entrypoint]: Temporary server started.
14:24:57 mariadb | 2024-04-25 14:22:14+00:00 [Note] [Entrypoint]: Creating user policy_user
14:24:57 mariadb | 2024-04-25 14:22:14+00:00 [Note] [Entrypoint]: Securing system users (equivalent to running mysql_secure_installation)
14:24:57 mariadb | 
14:24:57 mariadb | 
14:24:57 mariadb | 2024-04-25 14:22:14+00:00 [Note] [Entrypoint]: /usr/local/bin/docker-entrypoint.sh: running /docker-entrypoint-initdb.d/db.sh
14:24:57 mariadb | 2024-04-25 14:22:14+00:00 [Warn] [Entrypoint]: /usr/local/bin/docker-entrypoint.sh: ignoring /docker-entrypoint-initdb.d/db.conf
14:24:57 mariadb | #!/bin/bash -xv
14:24:57 mariadb | # Copyright 2019,2021 AT&T Intellectual Property. All rights reserved
14:24:57 mariadb | # Modifications Copyright (c) 2022 Nordix Foundation.
14:24:57 mariadb | #
14:24:57 mariadb | # Licensed under the Apache License, Version 2.0 (the "License");
14:24:57 mariadb | # you may not use this file except in compliance with the License.
14:24:57 mariadb | # You may obtain a copy of the License at
14:24:57 mariadb | #
14:24:57 mariadb | #       http://www.apache.org/licenses/LICENSE-2.0
14:24:57 mariadb | #
14:24:57 mariadb | # Unless required by applicable law or agreed to in writing, software
14:24:57 mariadb | # distributed under the License is distributed on an "AS IS" BASIS,
14:24:57 mariadb | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
14:24:57 mariadb | # See the License for the specific language governing permissions and
14:24:57 mariadb | # limitations under the License.
14:24:57 mariadb | 
14:24:57 mariadb | for db in migration pooling policyadmin operationshistory clampacm policyclamp
14:24:57 mariadb | do
14:24:57 mariadb |     mysql -uroot -p"${MYSQL_ROOT_PASSWORD}" --execute "CREATE DATABASE IF NOT EXISTS ${db};"
14:24:57 mariadb |     mysql -uroot -p"${MYSQL_ROOT_PASSWORD}" --execute "GRANT ALL PRIVILEGES ON \`${db}\`.* TO '${MYSQL_USER}'@'%' ;"
14:24:57 mariadb | done
14:24:57 mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp
14:24:57 mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS migration;'
14:24:57 mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `migration`.* TO '\''policy_user'\''@'\''%'\'' ;'
14:24:57 mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp
14:24:57 mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS pooling;'
14:24:57 mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `pooling`.* TO '\''policy_user'\''@'\''%'\'' ;'
14:24:57 mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp
14:24:57 mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS policyadmin;'
14:24:57 mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `policyadmin`.* TO '\''policy_user'\''@'\''%'\'' ;'
14:24:57 mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp
14:24:57 mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS operationshistory;'
14:24:57 mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `operationshistory`.* TO '\''policy_user'\''@'\''%'\'' ;'
14:24:57 mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp
14:24:57 mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS clampacm;'
14:24:57 mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `clampacm`.* TO '\''policy_user'\''@'\''%'\'' ;'
14:24:57 mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp
14:24:57 mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS policyclamp;'
14:24:57 mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `policyclamp`.* TO '\''policy_user'\''@'\''%'\'' ;'
14:24:57 mariadb | 
14:24:57 mariadb | mysql -uroot -p"${MYSQL_ROOT_PASSWORD}" --execute "FLUSH PRIVILEGES;"
14:24:57 mariadb | + mysql -uroot -psecret --execute 'FLUSH PRIVILEGES;'
14:24:57 mariadb | mysql -u"${MYSQL_USER}" -p"${MYSQL_PASSWORD}" -f policyclamp < /tmp/policy-clamp-create-tables.sql
14:24:57 mariadb | + mysql -upolicy_user -ppolicy_user -f policyclamp
14:24:57 mariadb | 
14:24:57 mariadb | 2024-04-25 14:22:15+00:00 [Note] [Entrypoint]: Stopping temporary server
14:24:57 mariadb | 2024-04-25 14:22:15 0 [Note] mariadbd (initiated by: unknown): Normal shutdown
14:24:57 mariadb | 2024-04-25 14:22:15 0 [Note] InnoDB: FTS optimize thread exiting.
14:24:57 mariadb | 2024-04-25 14:22:15 0 [Note] InnoDB: Starting shutdown...
14:24:57 mariadb | 2024-04-25 14:22:15 0 [Note] InnoDB: Dumping buffer pool(s) to /var/lib/mysql/ib_buffer_pool
14:24:57 mariadb | 2024-04-25 14:22:15 0 [Note] InnoDB: Buffer pool(s) dump completed at 240425 14:22:15
14:24:57 mariadb | 2024-04-25 14:22:15 0 [Note] InnoDB: Removed temporary tablespace data file: "./ibtmp1"
14:24:57 mariadb | 2024-04-25 14:22:15 0 [Note] InnoDB: Shutdown completed; log sequence number 328053; transaction id 298
14:24:57 mariadb | 2024-04-25 14:22:15 0 [Note] mariadbd: Shutdown complete
14:24:57 mariadb | 
14:24:57 mariadb | 2024-04-25 14:22:15+00:00 [Note] [Entrypoint]: Temporary server stopped
14:24:57 mariadb | 
14:24:57 mariadb | 2024-04-25 14:22:15+00:00 [Note] [Entrypoint]: MariaDB init process done. Ready for start up.
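The db.sh trace above runs one CREATE DATABASE and one GRANT per component database, then a final FLUSH PRIVILEGES. As a runnable sketch, the same loop can be written to emit the SQL instead of executing it (the user name is taken from the log; the real script pipes each statement through `mysql -uroot`):

```shell
#!/usr/bin/env bash
# Sketch of the per-database provisioning loop from db.sh. This version only
# prints the SQL; the real container script executes each statement via
# mysql -uroot -p"${MYSQL_ROOT_PASSWORD}" --execute "...".
MYSQL_USER="policy_user"

provision_sql() {
    local db
    for db in migration pooling policyadmin operationshistory clampacm policyclamp; do
        printf 'CREATE DATABASE IF NOT EXISTS %s;\n' "$db"
        printf "GRANT ALL PRIVILEGES ON \`%s\`.* TO '%s'@'%%';\n" "$db" "$MYSQL_USER"
    done
    printf 'FLUSH PRIVILEGES;\n'
}

provision_sql
```

Emitting the SQL separately also makes it easy to review or pipe into a single `mysql` invocation rather than spawning one client per statement.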
14:24:57 mariadb | 
14:24:57 mariadb | 2024-04-25 14:22:15 0 [Note] mariadbd (server 10.10.2-MariaDB-1:10.10.2+maria~ubu2204) starting as process 1 ...
14:24:57 mariadb | 2024-04-25 14:22:15 0 [Note] InnoDB: Compressed tables use zlib 1.2.11
14:24:57 mariadb | 2024-04-25 14:22:15 0 [Note] InnoDB: Number of transaction pools: 1
14:24:57 mariadb | 2024-04-25 14:22:15 0 [Note] InnoDB: Using crc32 + pclmulqdq instructions
14:24:57 mariadb | 2024-04-25 14:22:15 0 [Note] mariadbd: O_TMPFILE is not supported on /tmp (disabling future attempts)
14:24:57 mariadb | 2024-04-25 14:22:15 0 [Warning] mariadbd: io_uring_queue_init() failed with ENOSYS: check seccomp filters, and the kernel version (newer than 5.1 required)
14:24:57 mariadb | 2024-04-25 14:22:15 0 [Warning] InnoDB: liburing disabled: falling back to innodb_use_native_aio=OFF
14:24:57 mariadb | 2024-04-25 14:22:15 0 [Note] InnoDB: Initializing buffer pool, total size = 128.000MiB, chunk size = 2.000MiB
14:24:57 mariadb | 2024-04-25 14:22:15 0 [Note] InnoDB: Completed initialization of buffer pool
14:24:57 mariadb | 2024-04-25 14:22:15 0 [Note] InnoDB: File system buffers for log disabled (block size=512 bytes)
14:24:57 mariadb | 2024-04-25 14:22:16 0 [Note] InnoDB: 128 rollback segments are active.
14:24:57 mariadb | 2024-04-25 14:22:16 0 [Note] InnoDB: Setting file './ibtmp1' size to 12.000MiB. Physically writing the file full; Please wait ...
14:24:57 mariadb | 2024-04-25 14:22:16 0 [Note] InnoDB: File './ibtmp1' size is now 12.000MiB.
14:24:57 mariadb | 2024-04-25 14:22:16 0 [Note] InnoDB: log sequence number 328053; transaction id 299
14:24:57 mariadb | 2024-04-25 14:22:16 0 [Note] Plugin 'FEEDBACK' is disabled.
14:24:57 mariadb | 2024-04-25 14:22:16 0 [Note] InnoDB: Loading buffer pool(s) from /var/lib/mysql/ib_buffer_pool
14:24:57 mariadb | 2024-04-25 14:22:16 0 [Warning] 'default-authentication-plugin' is MySQL 5.6 / 5.7 compatible option. To be implemented in later versions.
14:24:57 mariadb | 2024-04-25 14:22:16 0 [Warning] You need to use --log-bin to make --expire-logs-days or --binlog-expire-logs-seconds work.
14:24:57 mariadb | 2024-04-25 14:22:16 0 [Note] Server socket created on IP: '0.0.0.0'.
14:24:57 mariadb | 2024-04-25 14:22:16 0 [Note] Server socket created on IP: '::'.
14:24:57 mariadb | 2024-04-25 14:22:16 0 [Note] mariadbd: ready for connections.
14:24:57 mariadb | Version: '10.10.2-MariaDB-1:10.10.2+maria~ubu2204'  socket: '/run/mysqld/mysqld.sock'  port: 3306  mariadb.org binary distribution
14:24:57 mariadb | 2024-04-25 14:22:16 0 [Note] InnoDB: Buffer pool(s) load completed at 240425 14:22:16
14:24:57 mariadb | 2024-04-25 14:22:17 3 [Warning] Aborted connection 3 to db: 'unconnected' user: 'unauthenticated' host: '172.17.0.7' (This connection closed normally without authentication)
14:24:57 mariadb | 2024-04-25 14:22:17 4 [Warning] Aborted connection 4 to db: 'unconnected' user: 'unauthenticated' host: '172.17.0.6' (This connection closed normally without authentication)
14:24:57 mariadb | 2024-04-25 14:22:17 41 [Warning] Aborted connection 41 to db: 'unconnected' user: 'unauthenticated' host: '172.17.0.10' (This connection closed normally without authentication)
14:24:57 mariadb | 2024-04-25 14:22:19 58 [Warning] Aborted connection 58 to db: 'unconnected' user: 'unauthenticated' host: '172.17.0.11' (This connection closed normally without authentication)
14:24:57 kafka | ===> User
14:24:57 kafka | uid=1000(appuser) gid=1000(appuser) groups=1000(appuser)
14:24:57 kafka | ===> Configuring ...
14:24:57 kafka | Running in Zookeeper mode...
14:24:57 kafka | ===> Running preflight checks ... 
14:24:57 kafka | ===> Check if /var/lib/kafka/data is writable ...
14:24:57 kafka | ===> Check if Zookeeper is healthy ...
14:24:57 kafka | [2024-04-25 14:22:20,970] INFO Client environment:zookeeper.version=3.8.3-6ad6d364c7c0bcf0de452d54ebefa3058098ab56, built on 2023-10-05 10:34 UTC (org.apache.zookeeper.ZooKeeper) 14:24:57 kafka | [2024-04-25 14:22:20,970] INFO Client environment:host.name=09cfce7a987a (org.apache.zookeeper.ZooKeeper) 14:24:57 kafka | [2024-04-25 14:22:20,970] INFO Client environment:java.version=11.0.22 (org.apache.zookeeper.ZooKeeper) 14:24:57 kafka | [2024-04-25 14:22:20,971] INFO Client environment:java.vendor=Azul Systems, Inc. (org.apache.zookeeper.ZooKeeper) 14:24:57 kafka | [2024-04-25 14:22:20,971] INFO Client environment:java.home=/usr/lib/jvm/java-11-zulu-openjdk-ca (org.apache.zookeeper.ZooKeeper) 14:24:57 kafka | [2024-04-25 14:22:20,971] INFO Client environment:java.class.path=/usr/share/java/cp-base-new/scala-logging_2.13-3.9.4.jar:/usr/share/java/cp-base-new/jackson-datatype-jdk8-2.14.2.jar:/usr/share/java/cp-base-new/kafka-storage-api-7.6.1-ccs.jar:/usr/share/java/cp-base-new/commons-validator-1.7.jar:/usr/share/java/cp-base-new/logredactor-1.0.12.jar:/usr/share/java/cp-base-new/jolokia-core-1.7.1.jar:/usr/share/java/cp-base-new/re2j-1.6.jar:/usr/share/java/cp-base-new/scala-library-2.13.10.jar:/usr/share/java/cp-base-new/commons-io-2.11.0.jar:/usr/share/java/cp-base-new/kafka-clients-7.6.1-ccs.jar:/usr/share/java/cp-base-new/utility-belt-7.6.1.jar:/usr/share/java/cp-base-new/commons-cli-1.4.jar:/usr/share/java/cp-base-new/slf4j-reload4j-1.7.36.jar:/usr/share/java/cp-base-new/kafka-server-common-7.6.1-ccs.jar:/usr/share/java/cp-base-new/kafka-group-coordinator-7.6.1-ccs.jar:/usr/share/java/cp-base-new/common-utils-7.6.1.jar:/usr/share/java/cp-base-new/jackson-annotations-2.14.2.jar:/usr/share/java/cp-base-new/json-simple-1.1.1.jar:/usr/share/java/cp-base-new/jackson-module-scala_2.13-2.14.2.jar:/usr/share/java/cp-base-new/scala-java8-compat_2.13-1.0.2.jar:/usr/share/java/cp-base-new/minimal-json-0.9.5.jar:/usr/share/java/cp-base-new/kafka-storage-7.6
.1-ccs.jar:/usr/share/java/cp-base-new/zstd-jni-1.5.5-1.jar:/usr/share/java/cp-base-new/jackson-dataformat-yaml-2.14.2.jar:/usr/share/java/cp-base-new/slf4j-api-1.7.36.jar:/usr/share/java/cp-base-new/paranamer-2.8.jar:/usr/share/java/cp-base-new/commons-beanutils-1.9.4.jar:/usr/share/java/cp-base-new/jmx_prometheus_javaagent-0.18.0.jar:/usr/share/java/cp-base-new/reload4j-1.2.25.jar:/usr/share/java/cp-base-new/jackson-core-2.14.2.jar:/usr/share/java/cp-base-new/disk-usage-agent-7.6.1.jar:/usr/share/java/cp-base-new/commons-digester-2.1.jar:/usr/share/java/cp-base-new/argparse4j-0.7.0.jar:/usr/share/java/cp-base-new/audience-annotations-0.12.0.jar:/usr/share/java/cp-base-new/kafka-raft-7.6.1-ccs.jar:/usr/share/java/cp-base-new/gson-2.9.0.jar:/usr/share/java/cp-base-new/snakeyaml-2.0.jar:/usr/share/java/cp-base-new/jopt-simple-5.0.4.jar:/usr/share/java/cp-base-new/error_prone_annotations-2.10.0.jar:/usr/share/java/cp-base-new/lz4-java-1.8.0.jar:/usr/share/java/cp-base-new/logredactor-metrics-1.0.12.jar:/usr/share/java/cp-base-new/checker-qual-3.19.0.jar:/usr/share/java/cp-base-new/kafka-metadata-7.6.1-ccs.jar:/usr/share/java/cp-base-new/pcollections-4.0.1.jar:/usr/share/java/cp-base-new/commons-logging-1.2.jar:/usr/share/java/cp-base-new/scala-reflect-2.13.10.jar:/usr/share/java/cp-base-new/scala-collection-compat_2.13-2.10.0.jar:/usr/share/java/cp-base-new/metrics-core-2.2.0.jar:/usr/share/java/cp-base-new/zookeeper-jute-3.8.3.jar:/usr/share/java/cp-base-new/jackson-dataformat-csv-2.14.2.jar:/usr/share/java/cp-base-new/commons-collections-3.2.2.jar:/usr/share/java/cp-base-new/caffeine-2.9.3.jar:/usr/share/java/cp-base-new/jolokia-jvm-1.7.1.jar:/usr/share/java/cp-base-new/kafka-tools-api-7.6.1-ccs.jar:/usr/share/java/cp-base-new/zookeeper-3.8.3.jar:/usr/share/java/cp-base-new/metrics-core-4.1.12.1.jar:/usr/share/java/cp-base-new/kafka_2.13-7.6.1-ccs.jar:/usr/share/java/cp-base-new/jackson-databind-2.14.2.jar:/usr/share/java/cp-base-new/snappy-java-1.1.10.5.jar:/usr/sh
are/java/cp-base-new/jose4j-0.9.5.jar (org.apache.zookeeper.ZooKeeper) 14:24:57 kafka | [2024-04-25 14:22:20,971] INFO Client environment:java.library.path=/usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.ZooKeeper) 14:24:57 kafka | [2024-04-25 14:22:20,971] INFO Client environment:java.io.tmpdir=/tmp (org.apache.zookeeper.ZooKeeper) 14:24:57 kafka | [2024-04-25 14:22:20,971] INFO Client environment:java.compiler= (org.apache.zookeeper.ZooKeeper) 14:24:57 kafka | [2024-04-25 14:22:20,971] INFO Client environment:os.name=Linux (org.apache.zookeeper.ZooKeeper) 14:24:57 kafka | [2024-04-25 14:22:20,971] INFO Client environment:os.arch=amd64 (org.apache.zookeeper.ZooKeeper) 14:24:57 kafka | [2024-04-25 14:22:20,971] INFO Client environment:os.version=4.15.0-192-generic (org.apache.zookeeper.ZooKeeper) 14:24:57 kafka | [2024-04-25 14:22:20,971] INFO Client environment:user.name=appuser (org.apache.zookeeper.ZooKeeper) 14:24:57 kafka | [2024-04-25 14:22:20,971] INFO Client environment:user.home=/home/appuser (org.apache.zookeeper.ZooKeeper) 14:24:57 kafka | [2024-04-25 14:22:20,971] INFO Client environment:user.dir=/home/appuser (org.apache.zookeeper.ZooKeeper) 14:24:57 kafka | [2024-04-25 14:22:20,971] INFO Client environment:os.memory.free=493MB (org.apache.zookeeper.ZooKeeper) 14:24:57 kafka | [2024-04-25 14:22:20,971] INFO Client environment:os.memory.max=8042MB (org.apache.zookeeper.ZooKeeper) 14:24:57 kafka | [2024-04-25 14:22:20,971] INFO Client environment:os.memory.total=504MB (org.apache.zookeeper.ZooKeeper) 14:24:57 kafka | [2024-04-25 14:22:20,974] INFO Initiating client connection, connectString=zookeeper:2181 sessionTimeout=40000 watcher=io.confluent.admin.utils.ZookeeperConnectionWatcher@b7f23d9 (org.apache.zookeeper.ZooKeeper) 14:24:57 kafka | [2024-04-25 14:22:20,977] INFO Setting -D jdk.tls.rejectClientInitiatedRenegotiation=true to disable client-initiated TLS renegotiation (org.apache.zookeeper.common.X509Util) 14:24:57 
kafka | [2024-04-25 14:22:20,980] INFO jute.maxbuffer value is 1048575 Bytes (org.apache.zookeeper.ClientCnxnSocket)
14:24:57 kafka | [2024-04-25 14:22:20,986] INFO zookeeper.request.timeout value is 0. feature enabled=false (org.apache.zookeeper.ClientCnxn)
14:24:57 kafka | [2024-04-25 14:22:21,006] INFO Opening socket connection to server zookeeper/172.17.0.4:2181. (org.apache.zookeeper.ClientCnxn)
14:24:57 kafka | [2024-04-25 14:22:21,007] INFO SASL config status: Will not attempt to authenticate using SASL (unknown error) (org.apache.zookeeper.ClientCnxn)
14:24:57 kafka | [2024-04-25 14:22:21,015] INFO Socket connection established, initiating session, client: /172.17.0.8:53854, server: zookeeper/172.17.0.4:2181 (org.apache.zookeeper.ClientCnxn)
14:24:57 kafka | [2024-04-25 14:22:21,114] INFO Session establishment complete on server zookeeper/172.17.0.4:2181, session id = 0x1000005a2800000, negotiated timeout = 40000 (org.apache.zookeeper.ClientCnxn)
14:24:57 kafka | [2024-04-25 14:22:21,252] INFO Session: 0x1000005a2800000 closed (org.apache.zookeeper.ZooKeeper)
14:24:57 kafka | [2024-04-25 14:22:21,252] INFO EventThread shut down for session: 0x1000005a2800000 (org.apache.zookeeper.ClientCnxn)
14:24:57 kafka | Using log4j config /etc/kafka/log4j.properties
14:24:57 kafka | ===> Launching ...
14:24:57 kafka | ===> Launching kafka ...
14:24:57 kafka | [2024-04-25 14:22:22,028] INFO Registered kafka:type=kafka.Log4jController MBean (kafka.utils.Log4jControllerRegistration$) 14:24:57 kafka | [2024-04-25 14:22:22,349] INFO Setting -D jdk.tls.rejectClientInitiatedRenegotiation=true to disable client-initiated TLS renegotiation (org.apache.zookeeper.common.X509Util) 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:16.065881588Z level=info msg="Migration successfully executed" id="Add is_disabled column to user" duration=1.796934ms 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:16.073115232Z level=info msg="Executing migration" id="Add index user.login/user.email" 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:16.073816991Z level=info msg="Migration successfully executed" id="Add index user.login/user.email" duration=701.559µs 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:16.078256938Z level=info msg="Executing migration" id="Add is_service_account column to user" 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:16.080014211Z level=info msg="Migration successfully executed" id="Add is_service_account column to user" duration=1.765173ms 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:16.08450312Z level=info msg="Executing migration" id="Update is_service_account column to nullable" 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:16.098214547Z level=info msg="Migration successfully executed" id="Update is_service_account column to nullable" duration=13.704117ms 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:16.102665305Z level=info msg="Executing migration" id="Add uid column to user" 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:16.103719398Z level=info msg="Migration successfully executed" id="Add uid column to user" duration=1.053663ms 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:16.231538767Z level=info msg="Executing migration" id="Update uid column values for users" 14:24:57 grafana | logger=migrator 
t=2024-04-25T14:22:16.231997293Z level=info msg="Migration successfully executed" id="Update uid column values for users" duration=372.325µs 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:16.24711217Z level=info msg="Executing migration" id="Add unique index user_uid" 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:16.248234844Z level=info msg="Migration successfully executed" id="Add unique index user_uid" duration=1.149465ms 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:16.259712693Z level=info msg="Executing migration" id="update login field with orgid to allow for multiple service accounts with same name across orgs" 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:16.260357211Z level=info msg="Migration successfully executed" id="update login field with orgid to allow for multiple service accounts with same name across orgs" duration=643.718µs 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:16.268897511Z level=info msg="Executing migration" id="create temp user table v1-7" 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:16.270145488Z level=info msg="Migration successfully executed" id="create temp user table v1-7" duration=1.247937ms 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:16.273858616Z level=info msg="Executing migration" id="create index IDX_temp_user_email - v1-7" 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:16.274950241Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_email - v1-7" duration=1.091465ms 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:16.279906115Z level=info msg="Executing migration" id="create index IDX_temp_user_org_id - v1-7" 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:16.280741296Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_org_id - v1-7" duration=836.421µs 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:16.286694003Z level=info msg="Executing migration" id="create index 
IDX_temp_user_code - v1-7" 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:16.287367722Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_code - v1-7" duration=673.569µs 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:16.290540833Z level=info msg="Executing migration" id="create index IDX_temp_user_status - v1-7" 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:16.291719038Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_status - v1-7" duration=1.177986ms 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:16.295054912Z level=info msg="Executing migration" id="Update temp_user table charset" 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:16.295087232Z level=info msg="Migration successfully executed" id="Update temp_user table charset" duration=33.65µs 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:16.301594706Z level=info msg="Executing migration" id="drop index IDX_temp_user_email - v1" 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:16.302221534Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_email - v1" duration=626.998µs 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:16.308871571Z level=info msg="Executing migration" id="drop index IDX_temp_user_org_id - v1" 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:16.309647791Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_org_id - v1" duration=774.71µs 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:16.314720407Z level=info msg="Executing migration" id="drop index IDX_temp_user_code - v1" 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:16.316028093Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_code - v1" duration=1.308506ms 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:16.32113311Z level=info msg="Executing migration" id="drop index IDX_temp_user_status - v1" 14:24:57 grafana | 
logger=migrator t=2024-04-25T14:22:16.322602599Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_status - v1" duration=1.468969ms 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:16.326716922Z level=info msg="Executing migration" id="Rename table temp_user to temp_user_tmp_qwerty - v1" 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:16.330520551Z level=info msg="Migration successfully executed" id="Rename table temp_user to temp_user_tmp_qwerty - v1" duration=3.803929ms 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:16.33422371Z level=info msg="Executing migration" id="create temp_user v2" 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:16.335116181Z level=info msg="Migration successfully executed" id="create temp_user v2" duration=891.801µs 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:16.339808583Z level=info msg="Executing migration" id="create index IDX_temp_user_email - v2" 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:16.340641053Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_email - v2" duration=832µs 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:16.34428319Z level=info msg="Executing migration" id="create index IDX_temp_user_org_id - v2" 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:16.345097921Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_org_id - v2" duration=814.421µs 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:16.348663587Z level=info msg="Executing migration" id="create index IDX_temp_user_code - v2" 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:16.350383399Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_code - v2" duration=1.719482ms 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:16.354690535Z level=info msg="Executing migration" id="create index IDX_temp_user_status - v2" 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:16.355516376Z 
level=info msg="Migration successfully executed" id="create index IDX_temp_user_status - v2" duration=825.781µs 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:16.360372949Z level=info msg="Executing migration" id="copy temp_user v1 to v2" 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:16.360782684Z level=info msg="Migration successfully executed" id="copy temp_user v1 to v2" duration=409.695µs 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:16.364179118Z level=info msg="Executing migration" id="drop temp_user_tmp_qwerty" 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:16.3650242Z level=info msg="Migration successfully executed" id="drop temp_user_tmp_qwerty" duration=844.582µs 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:16.368403093Z level=info msg="Executing migration" id="Set created for temp users that will otherwise prematurely expire" 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:16.368981261Z level=info msg="Migration successfully executed" id="Set created for temp users that will otherwise prematurely expire" duration=578.348µs 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:16.373914075Z level=info msg="Executing migration" id="create star table" 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:16.374664494Z level=info msg="Migration successfully executed" id="create star table" duration=747.539µs 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:16.378606046Z level=info msg="Executing migration" id="add unique index star.user_id_dashboard_id" 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:16.379467337Z level=info msg="Migration successfully executed" id="add unique index star.user_id_dashboard_id" duration=860.741µs 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:16.38279508Z level=info msg="Executing migration" id="create org table v1" 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:16.38361111Z level=info msg="Migration successfully executed" id="create org table v1" 
duration=814.5µs 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:16.386924144Z level=info msg="Executing migration" id="create index UQE_org_name - v1" 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:16.387682614Z level=info msg="Migration successfully executed" id="create index UQE_org_name - v1" duration=757.89µs 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:16.392306223Z level=info msg="Executing migration" id="create org_user table v1" 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:16.393056253Z level=info msg="Migration successfully executed" id="create org_user table v1" duration=750.02µs 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:16.396472047Z level=info msg="Executing migration" id="create index IDX_org_user_org_id - v1" 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:16.397310128Z level=info msg="Migration successfully executed" id="create index IDX_org_user_org_id - v1" duration=836.831µs 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:16.4006064Z level=info msg="Executing migration" id="create index UQE_org_user_org_id_user_id - v1" 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:16.402121181Z level=info msg="Migration successfully executed" id="create index UQE_org_user_org_id_user_id - v1" duration=1.51384ms 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:16.405798608Z level=info msg="Executing migration" id="create index IDX_org_user_user_id - v1" 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:16.406571469Z level=info msg="Migration successfully executed" id="create index IDX_org_user_user_id - v1" duration=772.641µs 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:16.411091867Z level=info msg="Executing migration" id="Update org table charset" 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:16.411136607Z level=info msg="Migration successfully executed" id="Update org table charset" duration=43.7µs 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:16.415476684Z 
level=info msg="Executing migration" id="Update org_user table charset" 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:16.415518685Z level=info msg="Migration successfully executed" id="Update org_user table charset" duration=43.111µs 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:16.418806127Z level=info msg="Executing migration" id="Migrate all Read Only Viewers to Viewers" 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:16.419064111Z level=info msg="Migration successfully executed" id="Migrate all Read Only Viewers to Viewers" duration=258.274µs 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:16.422617996Z level=info msg="Executing migration" id="create dashboard table" 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:16.423801862Z level=info msg="Migration successfully executed" id="create dashboard table" duration=1.183376ms 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:16.428261739Z level=info msg="Executing migration" id="add index dashboard.account_id" 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:16.429341074Z level=info msg="Migration successfully executed" id="add index dashboard.account_id" duration=1.079185ms 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:16.433016011Z level=info msg="Executing migration" id="add unique index dashboard_account_id_slug" 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:16.434313808Z level=info msg="Migration successfully executed" id="add unique index dashboard_account_id_slug" duration=1.297587ms 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:16.438794896Z level=info msg="Executing migration" id="create dashboard_tag table" 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:16.43986738Z level=info msg="Migration successfully executed" id="create dashboard_tag table" duration=1.072024ms 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:16.444614042Z level=info msg="Executing migration" id="add unique index dashboard_tag.dasboard_id_term" 14:24:57 
grafana | logger=migrator t=2024-04-25T14:22:16.445368272Z level=info msg="Migration successfully executed" id="add unique index dashboard_tag.dasboard_id_term" duration=753.669µs 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:16.448829077Z level=info msg="Executing migration" id="drop index UQE_dashboard_tag_dashboard_id_term - v1" 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:16.44984662Z level=info msg="Migration successfully executed" id="drop index UQE_dashboard_tag_dashboard_id_term - v1" duration=1.017273ms 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:16.507886343Z level=info msg="Executing migration" id="Rename table dashboard to dashboard_v1 - v1" 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:16.517474537Z level=info msg="Migration successfully executed" id="Rename table dashboard to dashboard_v1 - v1" duration=9.586404ms 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:16.522977209Z level=info msg="Executing migration" id="create dashboard v2" 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:16.524412068Z level=info msg="Migration successfully executed" id="create dashboard v2" duration=1.433659ms 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:16.52848311Z level=info msg="Executing migration" id="create index IDX_dashboard_org_id - v2" 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:16.529332241Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_org_id - v2" duration=848.651µs 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:16.533409414Z level=info msg="Executing migration" id="create index UQE_dashboard_org_id_slug - v2" 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:16.534543698Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_org_id_slug - v2" duration=1.134854ms 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:16.539765276Z level=info msg="Executing migration" id="copy dashboard v1 to v2" 14:24:57 grafana | 
logger=migrator t=2024-04-25T14:22:16.540124591Z level=info msg="Migration successfully executed" id="copy dashboard v1 to v2" duration=363.285µs 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:16.543746819Z level=info msg="Executing migration" id="drop table dashboard_v1" 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:16.544876533Z level=info msg="Migration successfully executed" id="drop table dashboard_v1" duration=1.128164ms 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:16.549706686Z level=info msg="Executing migration" id="alter dashboard.data to mediumtext v1" 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:16.549842138Z level=info msg="Migration successfully executed" id="alter dashboard.data to mediumtext v1" duration=135.952µs 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:16.553554416Z level=info msg="Executing migration" id="Add column updated_by in dashboard - v2" 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:16.555519611Z level=info msg="Migration successfully executed" id="Add column updated_by in dashboard - v2" duration=1.964335ms 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:16.559529153Z level=info msg="Executing migration" id="Add column created_by in dashboard - v2" 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:16.562631863Z level=info msg="Migration successfully executed" id="Add column created_by in dashboard - v2" duration=3.10183ms 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:16.567497696Z level=info msg="Executing migration" id="Add column gnetId in dashboard" 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:16.570408514Z level=info msg="Migration successfully executed" id="Add column gnetId in dashboard" duration=2.910018ms 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:16.575954516Z level=info msg="Executing migration" id="Add index for gnetId in dashboard" 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:16.576894788Z level=info msg="Migration 
successfully executed" id="Add index for gnetId in dashboard" duration=940.762µs 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:16.581139934Z level=info msg="Executing migration" id="Add column plugin_id in dashboard" 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:16.583020069Z level=info msg="Migration successfully executed" id="Add column plugin_id in dashboard" duration=1.879714ms 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:16.587397305Z level=info msg="Executing migration" id="Add index for plugin_id in dashboard" 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:16.588357247Z level=info msg="Migration successfully executed" id="Add index for plugin_id in dashboard" duration=959.552µs 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:16.592621542Z level=info msg="Executing migration" id="Add index for dashboard_id in dashboard_tag" 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:16.593430703Z level=info msg="Migration successfully executed" id="Add index for dashboard_id in dashboard_tag" duration=808.851µs 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:16.597477976Z level=info msg="Executing migration" id="Update dashboard table charset" 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:16.597505336Z level=info msg="Migration successfully executed" id="Update dashboard table charset" duration=26.3µs 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:16.601447277Z level=info msg="Executing migration" id="Update dashboard_tag table charset" 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:16.601474227Z level=info msg="Migration successfully executed" id="Update dashboard_tag table charset" duration=28.01µs 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:16.606192269Z level=info msg="Executing migration" id="Add column folder_id in dashboard" 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:16.609317279Z level=info msg="Migration successfully executed" id="Add column folder_id in dashboard" 
duration=3.12445ms
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:16.612950926Z level=info msg="Executing migration" id="Add column isFolder in dashboard"
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:16.615342097Z level=info msg="Migration successfully executed" id="Add column isFolder in dashboard" duration=2.376441ms
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:16.620380273Z level=info msg="Executing migration" id="Add column has_acl in dashboard"
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:16.622401079Z level=info msg="Migration successfully executed" id="Add column has_acl in dashboard" duration=2.020156ms
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:16.626907457Z level=info msg="Executing migration" id="Add column uid in dashboard"
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:16.628935834Z level=info msg="Migration successfully executed" id="Add column uid in dashboard" duration=2.027587ms
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:16.632146435Z level=info msg="Executing migration" id="Update uid column values in dashboard"
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:16.632392718Z level=info msg="Migration successfully executed" id="Update uid column values in dashboard" duration=246.723µs
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:16.635518819Z level=info msg="Executing migration" id="Add unique index dashboard_org_id_uid"
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:16.63631217Z level=info msg="Migration successfully executed" id="Add unique index dashboard_org_id_uid" duration=793.181µs
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:16.639696733Z level=info msg="Executing migration" id="Remove unique index org_id_slug"
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:16.641352674Z level=info msg="Migration successfully executed" id="Remove unique index org_id_slug" duration=1.656091ms
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:16.646011955Z level=info msg="Executing migration" id="Update dashboard title length"
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:16.646049426Z level=info msg="Migration successfully executed" id="Update dashboard title length" duration=38.35µs
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:16.650054657Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_title_folder_id"
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:16.650908099Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_title_folder_id" duration=853.222µs
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:16.653939898Z level=info msg="Executing migration" id="create dashboard_provisioning"
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:16.654677648Z level=info msg="Migration successfully executed" id="create dashboard_provisioning" duration=737.1µs
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:16.659040945Z level=info msg="Executing migration" id="Rename table dashboard_provisioning to dashboard_provisioning_tmp_qwerty - v1"
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:16.664538866Z level=info msg="Migration successfully executed" id="Rename table dashboard_provisioning to dashboard_provisioning_tmp_qwerty - v1" duration=5.497421ms
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:16.668370395Z level=info msg="Executing migration" id="create dashboard_provisioning v2"
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:16.669114925Z level=info msg="Migration successfully executed" id="create dashboard_provisioning v2" duration=745.179µs
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:16.672280536Z level=info msg="Executing migration" id="create index IDX_dashboard_provisioning_dashboard_id - v2"
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:16.673087127Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_provisioning_dashboard_id - v2" duration=806.422µs
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:16.678284144Z level=info msg="Executing migration" id="create index IDX_dashboard_provisioning_dashboard_id_name - v2"
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:16.679050254Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_provisioning_dashboard_id_name - v2" duration=765.99µs
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:16.683475171Z level=info msg="Executing migration" id="copy dashboard_provisioning v1 to v2"
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:16.683768226Z level=info msg="Migration successfully executed" id="copy dashboard_provisioning v1 to v2" duration=290.965µs
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:16.686568462Z level=info msg="Executing migration" id="drop dashboard_provisioning_tmp_qwerty"
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:16.687110559Z level=info msg="Migration successfully executed" id="drop dashboard_provisioning_tmp_qwerty" duration=541.987µs
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:16.691595027Z level=info msg="Executing migration" id="Add check_sum column"
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:16.693757765Z level=info msg="Migration successfully executed" id="Add check_sum column" duration=2.162308ms
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:16.698940892Z level=info msg="Executing migration" id="Add index for dashboard_title"
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:16.699746702Z level=info msg="Migration successfully executed" id="Add index for dashboard_title" duration=805.36µs
14:24:57 policy-api | Waiting for mariadb port 3306...
14:24:57 policy-api | mariadb (172.17.0.3:3306) open
14:24:57 policy-api | Waiting for policy-db-migrator port 6824...
14:24:57 policy-api | policy-db-migrator (172.17.0.6:6824) open
14:24:57 policy-api | Policy api config file: /opt/app/policy/api/etc/apiParameters.yaml
14:24:57 policy-api |
14:24:57 policy-api |   .   ____          _            __ _ _
14:24:57 policy-api |  /\\ / ___'_ __ _ _(_)_ __  __ _ \ \ \ \
14:24:57 policy-api | ( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \
14:24:57 policy-api |  \\/  ___)| |_)| | | | | || (_| |  ) ) ) )
14:24:57 policy-api |   '  |____| .__|_| |_|_| |_\__, | / / / /
14:24:57 policy-api |  =========|_|==============|___/=/_/_/_/
14:24:57 policy-api |  :: Spring Boot ::                (v3.1.10)
14:24:57 policy-api |
14:24:57 policy-api | [2024-04-25T14:22:36.612+00:00|INFO|Version|background-preinit] HV000001: Hibernate Validator 8.0.1.Final
14:24:57 policy-api | [2024-04-25T14:22:36.673+00:00|INFO|PolicyApiApplication|main] Starting PolicyApiApplication using Java 17.0.11 with PID 34 (/app/api.jar started by policy in /opt/app/policy/api/bin)
14:24:57 policy-api | [2024-04-25T14:22:36.674+00:00|INFO|PolicyApiApplication|main] No active profile set, falling back to 1 default profile: "default"
14:24:57 policy-api | [2024-04-25T14:22:38.523+00:00|INFO|RepositoryConfigurationDelegate|main] Bootstrapping Spring Data JPA repositories in DEFAULT mode.
14:24:57 policy-api | [2024-04-25T14:22:38.600+00:00|INFO|RepositoryConfigurationDelegate|main] Finished Spring Data repository scanning in 68 ms. Found 6 JPA repository interfaces.
14:24:57 policy-api | [2024-04-25T14:22:38.992+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.api.main.exception.ServiceExceptionHandler
14:24:57 policy-api | [2024-04-25T14:22:38.992+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.api.main.exception.ServiceExceptionHandler
14:24:57 policy-api | [2024-04-25T14:22:39.634+00:00|INFO|TomcatWebServer|main] Tomcat initialized with port(s): 6969 (http)
14:24:57 policy-api | [2024-04-25T14:22:39.644+00:00|INFO|Http11NioProtocol|main] Initializing ProtocolHandler ["http-nio-6969"]
14:24:57 policy-api | [2024-04-25T14:22:39.646+00:00|INFO|StandardService|main] Starting service [Tomcat]
14:24:57 policy-api | [2024-04-25T14:22:39.646+00:00|INFO|StandardEngine|main] Starting Servlet engine: [Apache Tomcat/10.1.19]
14:24:57 policy-api | [2024-04-25T14:22:39.743+00:00|INFO|[/policy/api/v1]|main] Initializing Spring embedded WebApplicationContext
14:24:57 policy-api | [2024-04-25T14:22:39.744+00:00|INFO|ServletWebServerApplicationContext|main] Root WebApplicationContext: initialization completed in 3004 ms
14:24:57 policy-api | [2024-04-25T14:22:40.170+00:00|INFO|LogHelper|main] HHH000204: Processing PersistenceUnitInfo [name: default]
14:24:57 policy-api | [2024-04-25T14:22:40.248+00:00|INFO|Version|main] HHH000412: Hibernate ORM core version 6.3.2.Final
14:24:57 policy-api | [2024-04-25T14:22:40.301+00:00|INFO|RegionFactoryInitiator|main] HHH000026: Second-level cache disabled
14:24:57 policy-api | [2024-04-25T14:22:40.600+00:00|INFO|SpringPersistenceUnitInfo|main] No LoadTimeWeaver setup: ignoring JPA class transformer
14:24:57 policy-api | [2024-04-25T14:22:40.630+00:00|INFO|HikariDataSource|main] HikariPool-1 - Starting...
14:24:57 policy-api | [2024-04-25T14:22:40.742+00:00|INFO|HikariPool|main] HikariPool-1 - Added connection org.mariadb.jdbc.Connection@7718a40f
14:24:57 policy-api | [2024-04-25T14:22:40.744+00:00|INFO|HikariDataSource|main] HikariPool-1 - Start completed.
14:24:57 policy-api | [2024-04-25T14:22:42.831+00:00|INFO|JtaPlatformInitiator|main] HHH000489: No JTA platform available (set 'hibernate.transaction.jta.platform' to enable JTA platform integration)
14:24:57 policy-api | [2024-04-25T14:22:42.835+00:00|INFO|LocalContainerEntityManagerFactoryBean|main] Initialized JPA EntityManagerFactory for persistence unit 'default'
14:24:57 policy-api | [2024-04-25T14:22:43.798+00:00|WARN|ApiDatabaseInitializer|main] Detected multi-versioned type: policytypes/onap.policies.monitoring.tcagen2.v2.yaml
14:24:57 policy-api | [2024-04-25T14:22:44.638+00:00|INFO|ApiDatabaseInitializer|main] Multi-versioned Service Template [onap.policies.Monitoring, onap.policies.monitoring.tcagen2]
14:24:57 policy-api | [2024-04-25T14:22:45.734+00:00|WARN|JpaBaseConfiguration$JpaWebConfiguration|main] spring.jpa.open-in-view is enabled by default. Therefore, database queries may be performed during view rendering. Explicitly configure spring.jpa.open-in-view to disable this warning
14:24:57 policy-api | [2024-04-25T14:22:45.953+00:00|INFO|DefaultSecurityFilterChain|main] Will secure any request with [org.springframework.security.web.session.DisableEncodeUrlFilter@9b43134, org.springframework.security.web.context.request.async.WebAsyncManagerIntegrationFilter@1ae2b0d0, org.springframework.security.web.context.SecurityContextHolderFilter@1e4cf0e5, org.springframework.security.web.header.HeaderWriterFilter@7f930614, org.springframework.security.web.authentication.logout.LogoutFilter@72e6e93, org.springframework.security.web.authentication.www.BasicAuthenticationFilter@1aef48f0, org.springframework.security.web.savedrequest.RequestCacheAwareFilter@12919b7b, org.springframework.security.web.servletapi.SecurityContextHolderAwareRequestFilter@3033e54c, org.springframework.security.web.authentication.AnonymousAuthenticationFilter@631c244c, org.springframework.security.web.access.ExceptionTranslationFilter@7d6d93f9, org.springframework.security.web.access.intercept.AuthorizationFilter@750190d0]
14:24:57 policy-api | [2024-04-25T14:22:46.770+00:00|INFO|EndpointLinksResolver|main] Exposing 3 endpoint(s) beneath base path ''
14:24:57 policy-api | [2024-04-25T14:22:46.860+00:00|INFO|Http11NioProtocol|main] Starting ProtocolHandler ["http-nio-6969"]
14:24:57 policy-api | [2024-04-25T14:22:46.887+00:00|INFO|TomcatWebServer|main] Tomcat started on port(s): 6969 (http) with context path '/policy/api/v1'
14:24:57 policy-api | [2024-04-25T14:22:46.909+00:00|INFO|PolicyApiApplication|main] Started PolicyApiApplication in 11.235 seconds (process running for 11.834)
14:24:57 policy-api | [2024-04-25T14:23:02.778+00:00|INFO|[/policy/api/v1]|http-nio-6969-exec-2] Initializing Spring DispatcherServlet 'dispatcherServlet'
14:24:57 policy-api | [2024-04-25T14:23:02.778+00:00|INFO|DispatcherServlet|http-nio-6969-exec-2] Initializing Servlet 'dispatcherServlet'
14:24:57 policy-api | [2024-04-25T14:23:02.779+00:00|INFO|DispatcherServlet|http-nio-6969-exec-2] Completed initialization in 1 ms
14:24:57 policy-api | [2024-04-25T14:23:03.143+00:00|INFO|OrderedServiceImpl|http-nio-6969-exec-2] ***** OrderedServiceImpl implementers:
14:24:57 policy-api | []
14:24:57 policy-db-migrator | Waiting for mariadb port 3306...
14:24:57 policy-db-migrator | nc: connect to mariadb (172.17.0.3) port 3306 (tcp) failed: Connection refused
14:24:57 policy-db-migrator | nc: connect to mariadb (172.17.0.3) port 3306 (tcp) failed: Connection refused
14:24:57 policy-db-migrator | nc: connect to mariadb (172.17.0.3) port 3306 (tcp) failed: Connection refused
14:24:57 policy-db-migrator | nc: connect to mariadb (172.17.0.3) port 3306 (tcp) failed: Connection refused
14:24:57 policy-db-migrator | nc: connect to mariadb (172.17.0.3) port 3306 (tcp) failed: Connection refused
14:24:57 policy-db-migrator | nc: connect to mariadb (172.17.0.3) port 3306 (tcp) failed: Connection refused
14:24:57 policy-db-migrator | nc: connect to mariadb (172.17.0.3) port 3306 (tcp) failed: Connection refused
14:24:57 policy-db-migrator | Connection to mariadb (172.17.0.3) 3306 port [tcp/mysql] succeeded!
14:24:57 policy-db-migrator | 321 blocks
14:24:57 policy-db-migrator | Preparing upgrade release version: 0800
14:24:57 policy-db-migrator | Preparing upgrade release version: 0900
14:24:57 policy-db-migrator | Preparing upgrade release version: 1000
14:24:57 policy-db-migrator | Preparing upgrade release version: 1100
14:24:57 policy-db-migrator | Preparing upgrade release version: 1200
14:24:57 policy-db-migrator | Preparing upgrade release version: 1300
14:24:57 policy-db-migrator | Done
14:24:57 policy-db-migrator | name version
14:24:57 policy-db-migrator | policyadmin 0
14:24:57 policy-db-migrator | policyadmin: upgrade available: 0 -> 1300
14:24:57 policy-db-migrator | upgrade: 0 -> 1300
14:24:57 policy-db-migrator |
14:24:57 policy-db-migrator | > upgrade 0100-jpapdpgroup_properties.sql
14:24:57 policy-db-migrator | --------------
14:24:57 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpgroup_properties (name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, PROPERTIES VARCHAR(255) DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) DEFAULT NULL)
14:24:57 policy-db-migrator | --------------
14:24:57 policy-db-migrator |
14:24:57 policy-db-migrator |
14:24:57 policy-db-migrator | > upgrade 0110-jpapdpstatistics_enginestats.sql
14:24:57 policy-db-migrator | --------------
14:24:57 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpstatistics_enginestats (AVERAGEEXECUTIONTIME DOUBLE DEFAULT NULL, ENGINEID VARCHAR(255) DEFAULT NULL, ENGINETIMESTAMP BIGINT DEFAULT NULL, ENGINEWORKERSTATE INT DEFAULT NULL, EVENTCOUNT BIGINT DEFAULT NULL, LASTENTERTIME BIGINT DEFAULT NULL, LASTEXECUTIONTIME BIGINT DEFAULT NULL, LASTSTART BIGINT DEFAULT NULL, UPTIME BIGINT DEFAULT NULL, timeStamp datetime DEFAULT NULL, name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL)
14:24:57 policy-db-migrator | --------------
14:24:57 policy-db-migrator |
14:24:57 policy-db-migrator |
14:24:57 policy-db-migrator | > upgrade 0120-jpapdpsubgroup_policies.sql
14:24:57 policy-db-migrator | --------------
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:16.703279568Z level=info msg="Executing migration" id="delete tags for deleted dashboards"
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:16.703479271Z level=info msg="Migration successfully executed" id="delete tags for deleted dashboards" duration=199.573µs
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:16.707794257Z level=info msg="Executing migration" id="delete stars for deleted dashboards"
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:16.707959049Z level=info msg="Migration successfully executed" id="delete stars for deleted dashboards" duration=164.632µs
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:16.712287176Z level=info msg="Executing migration" id="Add index for dashboard_is_folder"
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:16.713074846Z level=info msg="Migration successfully executed" id="Add index for dashboard_is_folder" duration=786.139µs
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:16.717502763Z level=info msg="Executing migration" id="Add isPublic for dashboard"
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:16.71961885Z level=info msg="Migration successfully executed" id="Add isPublic for dashboard" duration=2.115557ms
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:16.723089726Z level=info msg="Executing migration" id="create data_source table"
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:16.723993807Z level=info msg="Migration successfully executed" id="create data_source table" duration=904.541µs
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:16.783046213Z level=info msg="Executing migration" id="add index data_source.account_id"
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:16.784047626Z level=info msg="Migration successfully executed" id="add index data_source.account_id" duration=1.004433ms
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:16.816503568Z level=info msg="Executing migration" id="add unique index data_source.account_id_name"
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:16.817274897Z level=info msg="Migration successfully executed" id="add unique index data_source.account_id_name" duration=773.98µs
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:16.822269552Z level=info msg="Executing migration" id="drop index IDX_data_source_account_id - v1"
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:16.82290926Z level=info msg="Migration successfully executed" id="drop index IDX_data_source_account_id - v1" duration=639.818µs
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:16.828752596Z level=info msg="Executing migration" id="drop index UQE_data_source_account_id_name - v1"
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:16.830112934Z level=info msg="Migration successfully executed" id="drop index UQE_data_source_account_id_name - v1" duration=1.357958ms
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:16.837243276Z level=info msg="Executing migration" id="Rename table data_source to data_source_v1 - v1"
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:16.845410332Z level=info msg="Migration successfully executed" id="Rename table data_source to data_source_v1 - v1" duration=8.167506ms
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:16.85213011Z level=info msg="Executing migration" id="create data_source table v2"
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:16.853148883Z level=info msg="Migration successfully executed" id="create data_source table v2" duration=1.018323ms
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:16.857188565Z level=info msg="Executing migration" id="create index IDX_data_source_org_id - v2"
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:16.858158878Z level=info msg="Migration successfully executed" id="create index IDX_data_source_org_id - v2" duration=970.173µs
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:16.972147616Z level=info msg="Executing migration" id="create index UQE_data_source_org_id_name - v2"
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:16.973350532Z level=info msg="Migration successfully executed" id="create index UQE_data_source_org_id_name - v2" duration=1.202396ms
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:16.979738555Z level=info msg="Executing migration" id="Drop old table data_source_v1 #2"
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:16.980421484Z level=info msg="Migration successfully executed" id="Drop old table data_source_v1 #2" duration=682.949µs
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:16.984775671Z level=info msg="Executing migration" id="Add column with_credentials"
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:16.987601518Z level=info msg="Migration successfully executed" id="Add column with_credentials" duration=2.825207ms
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:16.994870772Z level=info msg="Executing migration" id="Add secure json data column"
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:16.997410404Z level=info msg="Migration successfully executed" id="Add secure json data column" duration=2.538812ms
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:17.006774287Z level=info msg="Executing migration" id="Update data_source table charset"
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:17.006918739Z level=info msg="Migration successfully executed" id="Update data_source table charset" duration=145.392µs
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:17.010634547Z level=info msg="Executing migration" id="Update initial version to 1"
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:17.010973241Z level=info msg="Migration successfully executed" id="Update initial version to 1" duration=337.494µs
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:17.015242478Z level=info msg="Executing migration" id="Add read_only data column"
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:17.019459812Z level=info msg="Migration successfully executed" id="Add read_only data column" duration=4.217334ms
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:17.026407824Z level=info msg="Executing migration" id="Migrate logging ds to loki ds"
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:17.026707858Z level=info msg="Migration successfully executed" id="Migrate logging ds to loki ds" duration=300.034µs
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:17.034050524Z level=info msg="Executing migration" id="Update json_data with nulls"
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:17.034298547Z level=info msg="Migration successfully executed" id="Update json_data with nulls" duration=248.423µs
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:17.03836547Z level=info msg="Executing migration" id="Add uid column"
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:17.044342568Z level=info msg="Migration successfully executed" id="Add uid column" duration=5.973848ms
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:17.048920329Z level=info msg="Executing migration" id="Update uid value"
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:17.049213812Z level=info msg="Migration successfully executed" id="Update uid value" duration=296.463µs
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:17.058755127Z level=info msg="Executing migration" id="Add unique index datasource_org_id_uid"
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:17.05966347Z level=info msg="Migration successfully executed" id="Add unique index datasource_org_id_uid" duration=908.423µs
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:17.065639837Z level=info msg="Executing migration" id="add unique index datasource_org_id_is_default"
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:17.066401267Z level=info msg="Migration successfully executed" id="add unique index datasource_org_id_is_default" duration=760.62µs
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:17.072889892Z level=info msg="Executing migration" id="create api_key table"
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:17.073709483Z level=info msg="Migration successfully executed" id="create api_key table" duration=819.591µs
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:17.078985123Z level=info msg="Executing migration" id="add index api_key.account_id"
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:17.08034349Z level=info msg="Migration successfully executed" id="add index api_key.account_id" duration=1.358327ms
14:24:57 kafka | [2024-04-25 14:22:22,416] INFO Registered signal handlers for TERM, INT, HUP (org.apache.kafka.common.utils.LoggingSignalHandler)
14:24:57 kafka | [2024-04-25 14:22:22,417] INFO starting (kafka.server.KafkaServer)
14:24:57 kafka | [2024-04-25 14:22:22,418] INFO Connecting to zookeeper on zookeeper:2181 (kafka.server.KafkaServer)
14:24:57 kafka | [2024-04-25 14:22:22,430] INFO [ZooKeeperClient Kafka server] Initializing a new session to zookeeper:2181. (kafka.zookeeper.ZooKeeperClient)
14:24:57 kafka | [2024-04-25 14:22:22,434] INFO Client environment:zookeeper.version=3.8.4-9316c2a7a97e1666d8f4593f34dd6fc36ecc436c, built on 2024-02-12 22:16 UTC (org.apache.zookeeper.ZooKeeper)
14:24:57 kafka | [2024-04-25 14:22:22,434] INFO Client environment:host.name=09cfce7a987a (org.apache.zookeeper.ZooKeeper)
14:24:57 kafka | [2024-04-25 14:22:22,434] INFO Client environment:java.version=11.0.22 (org.apache.zookeeper.ZooKeeper)
14:24:57 kafka | [2024-04-25 14:22:22,434] INFO Client environment:java.vendor=Azul Systems, Inc. (org.apache.zookeeper.ZooKeeper)
14:24:57 kafka | [2024-04-25 14:22:22,434] INFO Client environment:java.home=/usr/lib/jvm/java-11-zulu-openjdk-ca (org.apache.zookeeper.ZooKeeper)
14:24:57 kafka | [2024-04-25 14:22:22,434] INFO Client environment:java.class.path=/usr/bin/../share/java/kafka/scala-logging_2.13-3.9.4.jar:/usr/bin/../share/java/kafka/jersey-common-2.39.1.jar:/usr/bin/../share/java/kafka/swagger-annotations-2.2.8.jar:/usr/bin/../share/java/kafka/connect-json-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/kafka-storage-api-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/commons-validator-1.7.jar:/usr/bin/../share/java/kafka/javax.servlet-api-3.1.0.jar:/usr/bin/../share/java/kafka/aopalliance-repackaged-2.6.1.jar:/usr/bin/../share/java/kafka/netty-transport-4.1.100.Final.jar:/usr/bin/../share/java/kafka/rocksdbjni-7.9.2.jar:/usr/bin/../share/java/kafka/jetty-http-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/kafka-log4j-appender-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jackson-annotations-2.13.5.jar:/usr/bin/../share/java/kafka/commons-io-2.11.0.jar:/usr/bin/../share/java/kafka/kafka-clients-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/javax.activation-api-1.2.0.jar:/usr/bin/../share/java/kafka/jetty-security-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/commons-cli-1.4.jar:/usr/bin/../share/java/kafka/jetty-server-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/slf4j-reload4j-1.7.36.jar:/usr/bin/../share/java/kafka/scala-reflect-2.13.11.jar:/usr/bin/../share/java/kafka/kafka-server-common-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/kafka-group-coordinator-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/netty-buffer-4.1.100.Final.jar:/usr/bin/../share/java/kafka/connect-runtime-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/trogdor-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/connect-api-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jakarta.ws.rs-api-2.1.6.jar:/usr/bin/../share/java/kafka/jakarta.annotation-api-1.3.5.jar:/usr/bin/../share/java/kafka/kafka-streams-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/scala-java8-compat_2.13-1.0.2.jar:/usr/bin/../share/java/kafka/javax.ws.rs-api-2.1.1.jar:/usr/bin/../share/java/kafka/zookeeper-jute-3.8.4.jar:/usr/bin/../share/java/kafka/netty-resolver-4.1.100.Final.jar:/usr/bin/../share/java/kafka/hk2-api-2.6.1.jar:/usr/bin/../share/java/kafka/kafka-storage-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/zstd-jni-1.5.5-1.jar:/usr/bin/../share/java/kafka/jackson-dataformat-csv-2.13.5.jar:/usr/bin/../share/java/kafka/kafka.jar:/usr/bin/../share/java/kafka/scala-library-2.13.11.jar:/usr/bin/../share/java/kafka/jakarta.inject-2.6.1.jar:/usr/bin/../share/java/kafka/jakarta.xml.bind-api-2.3.3.jar:/usr/bin/../share/java/kafka/jetty-continuation-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/connect-transforms-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jose4j-0.9.4.jar:/usr/bin/../share/java/kafka/hk2-locator-2.6.1.jar:/usr/bin/../share/java/kafka/reflections-0.10.2.jar:/usr/bin/../share/java/kafka/slf4j-api-1.7.36.jar:/usr/bin/../share/java/kafka/paranamer-2.8.jar:/usr/bin/../share/java/kafka/commons-beanutils-1.9.4.jar:/usr/bin/../share/java/kafka/jaxb-api-2.3.1.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-2.39.1.jar:/usr/bin/../share/java/kafka/hk2-utils-2.6.1.jar:/usr/bin/../share/java/kafka/jackson-module-scala_2.13-2.13.5.jar:/usr/bin/../share/java/kafka/reload4j-1.2.25.jar:/usr/bin/../share/java/kafka/connect-mirror-client-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jetty-servlet-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/jackson-core-2.13.5.jar:/usr/bin/../share/java/kafka/jersey-hk2-2.39.1.jar:/usr/bin/../share/java/kafka/jackson-databind-2.13.5.jar:/usr/bin/../share/java/kafka/jetty-servlets-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/jersey-client-2.39.1.jar:/usr/bin/../share/java/kafka/osgi-resource-locator-1.0.3.jar:/usr/bin/../share/java/kafka/commons-digester-2.1.jar:/usr/bin/../share/java/kafka/netty-transport-native-epoll-4.1.100.Final.jar:/usr/bin/../share/java/kafka/argparse4j-0.7.0.jar:/usr/bin/../share/java/kafka/connect-mirror-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jackson-datatype-jdk8-2.13.5.jar:/usr/bin/../share/java/kafka/audience-annotations-0.12.0.jar:/usr/bin/../share/java/kafka/jackson-module-jaxb-annotations-2.13.5.jar:/usr/bin/../share/java/kafka/kafka-raft-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/javax.annotation-api-1.3.2.jar:/usr/bin/../share/java/kafka/maven-artifact-3.8.8.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-json-provider-2.13.5.jar:/usr/bin/../share/java/kafka/jakarta.validation-api-2.0.2.jar:/usr/bin/../share/java/kafka/zookeeper-3.8.4.jar:/usr/bin/../share/java/kafka/jersey-server-2.39.1.jar:/usr/bin/../share/java/kafka/commons-lang3-3.8.1.jar:/usr/bin/../share/java/kafka/jopt-simple-5.0.4.jar:/usr/bin/../share/java/kafka/error_prone_annotations-2.10.0.jar:/usr/bin/../share/java/kafka/lz4-java-1.8.0.jar:/usr/bin/../share/java/kafka/jetty-util-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/kafka-tools-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jakarta.activation-api-1.2.2.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-core-2.39.1.jar:/usr/bin/../share/java/kafka/checker-qual-3.19.0.jar:/usr/bin/../share/java/kafka/kafka-metadata-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/pcollections-4.0.1.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-base-2.13.5.jar:/usr/bin/../share/java/kafka/commons-logging-1.2.jar:/usr/bin/../share/java/kafka/jsr305-3.0.2.jar:/usr/bin/../share/java/kafka/jetty-io-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/netty-codec-4.1.100.Final.jar:/usr/bin/../share/java/kafka/netty-transport-native-unix-common-4.1.100.Final.jar:/usr/bin/../share/java/kafka/scala-collection-compat_2.13-2.10.0.jar:/usr/bin/../share/java/kafka/metrics-core-2.2.0.jar:/usr/bin/../share/java/kafka/kafka-streams-test-utils-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/kafka-streams-examples-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/netty-handler-4.1.100.Final.jar:/usr/bin/../share/java/kafka/commons-collections-3.2.2.jar:/usr/bin/../share/java/kafka/javassist-3.29.2-GA.jar:/usr/bin/../share/java/kafka/caffeine-2.9.3.jar:/usr/bin/../share/java/kafka/plexus-utils-3.3.1.jar:/usr/bin/../share/java/kafka/kafka-tools-api-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/activation-1.1.1.jar:/usr/bin/../share/java/kafka/netty-common-4.1.100.Final.jar:/usr/bin/../share/java/kafka/kafka-streams-scala_2.13-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/metrics-core-4.1.12.1.jar:/usr/bin/../share/java/kafka/jline-3.25.1.jar:/usr/bin/../share/java/kafka/jetty-client-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/kafka_2.13-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/connect-basic-auth-extension-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jetty-util-ajax-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/snappy-java-1.1.10.5.jar:/usr/bin/../share/java/kafka/kafka-shell-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/netty-transport-classes-epoll-4.1.100.Final.jar:/usr/bin/../share/java/confluent-telemetry/* (org.apache.zookeeper.ZooKeeper)
14:24:57 kafka | [2024-04-25 14:22:22,434] INFO Client environment:java.library.path=/usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.ZooKeeper)
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:17.087648906Z level=info msg="Executing migration" id="add index api_key.key"
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:17.088981253Z level=info msg="Migration successfully executed" id="add index api_key.key" duration=1.332007ms
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:17.093480523Z level=info msg="Executing migration" id="add index api_key.account_id_name"
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:17.094796459Z level=info msg="Migration successfully executed" id="add index api_key.account_id_name" duration=1.318496ms
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:17.099052825Z level=info msg="Executing migration" id="drop index IDX_api_key_account_id - v1"
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:17.100035838Z level=info msg="Migration successfully executed" id="drop index IDX_api_key_account_id - v1" duration=985.903µs
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:17.103864458Z level=info msg="Executing migration" id="drop index UQE_api_key_key - v1"
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:17.104615868Z level=info msg="Migration successfully executed" id="drop index UQE_api_key_key - v1" duration=751.41µs
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:17.112592573Z level=info msg="Executing migration" id="drop index UQE_api_key_account_id_name - v1"
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:17.113995921Z level=info msg="Migration successfully executed" id="drop index UQE_api_key_account_id_name - v1" duration=1.403258ms
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:17.123459485Z level=info msg="Executing migration" id="Rename table api_key to api_key_v1 - v1"
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:17.133520626Z level=info msg="Migration successfully executed" id="Rename table api_key to api_key_v1 - v1" duration=10.061151ms
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:17.140341745Z level=info msg="Executing migration" id="create api_key table v2"
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:17.140932814Z level=info msg="Migration successfully executed" id="create api_key table v2" duration=591.499µs
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:17.145680125Z level=info msg="Executing migration" id="create index IDX_api_key_org_id - v2"
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:17.146441286Z level=info msg="Migration successfully executed" id="create index IDX_api_key_org_id - v2" duration=758.351µs
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:17.153556739Z level=info msg="Executing migration" id="create index UQE_api_key_key - v2"
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:17.154719945Z level=info msg="Migration successfully executed" id="create index UQE_api_key_key - v2" duration=1.163036ms
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:17.159622138Z level=info msg="Executing migration" id="create index UQE_api_key_org_id_name - v2"
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:17.160594342Z level=info msg="Migration successfully executed" id="create index UQE_api_key_org_id_name - v2" duration=972.204µs
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:17.165041919Z level=info msg="Executing migration" id="copy api_key v1 to v2"
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:17.165410974Z level=info msg="Migration successfully executed" id="copy api_key v1 to v2" duration=369.095µs
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:17.171655946Z level=info msg="Executing migration" id="Drop old table api_key_v1"
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:17.172611649Z level=info msg="Migration successfully executed" id="Drop old table api_key_v1" duration=959.252µs
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:17.177827497Z level=info msg="Executing migration" id="Update api_key table charset"
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:17.177880358Z level=info msg="Migration successfully executed" id="Update api_key table charset" duration=49.551µs
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:17.183708234Z level=info msg="Executing migration" id="Add expires to api_key table"
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:17.186317478Z level=info msg="Migration successfully executed" id="Add expires to api_key table" duration=2.610344ms
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:17.191881141Z level=info msg="Executing migration" id="Add service account foreign key"
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:17.194332974Z level=info msg="Migration successfully executed" id="Add service account foreign key" duration=2.451513ms
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:17.197647737Z level=info msg="Executing migration" id="set service account foreign key to nil if 0"
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:17.197823919Z level=info msg="Migration successfully executed" id="set service account foreign key to nil if 0" duration=176.452µs
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:17.204177232Z level=info msg="Executing migration" id="Add last_used_at to api_key table"
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:17.207663588Z level=info msg="Migration successfully executed" id="Add last_used_at to api_key table" duration=3.486536ms
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:17.210993452Z level=info msg="Executing migration" id="Add is_revoked column to api_key table"
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:17.213607936Z level=info msg="Migration successfully executed" id="Add is_revoked column to api_key table" duration=2.614684ms
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:17.216653456Z level=info msg="Executing migration" id="create dashboard_snapshot table v4"
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:17.217420976Z level=info msg="Migration successfully executed" id="create dashboard_snapshot table v4" duration=767.31µs
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:17.222795336Z level=info msg="Executing migration" id="drop table dashboard_snapshot_v4 #1"
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:17.223478515Z level=info msg="Migration successfully executed" id="drop table dashboard_snapshot_v4 #1" duration=683.839µs
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:17.226979671Z level=info msg="Executing migration" id="create dashboard_snapshot table v5 #2"
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:17.22846685Z level=info msg="Migration successfully executed" id="create dashboard_snapshot table v5 #2" duration=1.48704ms
14:24:57 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpsubgroup_policies (name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, parentLocalName VARCHAR(120) DEFAULT NULL, localName VARCHAR(120) DEFAULT NULL, parentKeyVersion VARCHAR(15) DEFAULT NULL, parentKeyName VARCHAR(120) DEFAULT NULL)
14:24:57 policy-db-migrator | --------------
14:24:57 policy-db-migrator |
14:24:57 policy-db-migrator |
14:24:57 policy-db-migrator | > upgrade 0130-jpapdpsubgroup_properties.sql
14:24:57 policy-db-migrator | --------------
14:24:57 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpsubgroup_properties (parentLocalName VARCHAR(120) DEFAULT NULL, localName VARCHAR(120) DEFAULT NULL, parentKeyVersion VARCHAR(15) DEFAULT NULL, parentKeyName VARCHAR(120) DEFAULT NULL, PROPERTIES VARCHAR(255) DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) DEFAULT NULL)
14:24:57 policy-db-migrator | --------------
14:24:57 policy-db-migrator |
14:24:57 policy-db-migrator |
14:24:57 policy-db-migrator | > upgrade 0140-jpapdpsubgroup_supportedpolicytypes.sql
14:24:57 policy-db-migrator | --------------
14:24:57 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpsubgroup_supportedpolicytypes (name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, parentLocalName VARCHAR(120) DEFAULT NULL, localName VARCHAR(120) DEFAULT NULL, parentKeyVersion VARCHAR(15) DEFAULT NULL, parentKeyName VARCHAR(120) DEFAULT NULL)
14:24:57 policy-db-migrator | --------------
14:24:57 policy-db-migrator | 
14:24:57 policy-db-migrator | 
14:24:57 policy-db-migrator | > upgrade 0150-jpatoscacapabilityassignment_attributes.sql
14:24:57 policy-db-migrator | --------------
14:24:57 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilityassignment_attributes (name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, ATTRIBUTES LONGTEXT DEFAULT NULL, ATTRIBUTES_KEY VARCHAR(255) DEFAULT NULL)
14:24:57 policy-db-migrator | --------------
14:24:57 policy-db-migrator | 
14:24:57 policy-db-migrator | 
14:24:57 policy-db-migrator | > upgrade 0160-jpatoscacapabilityassignment_metadata.sql
14:24:57 policy-db-migrator | --------------
14:24:57 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilityassignment_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL)
14:24:57 policy-db-migrator | --------------
14:24:57 policy-db-migrator | 
14:24:57 policy-db-migrator | 
14:24:57 policy-db-migrator | > upgrade 0170-jpatoscacapabilityassignment_occurrences.sql
14:24:57 policy-db-migrator | --------------
14:24:57 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilityassignment_occurrences (name VARCHAR(120) NULL, version VARCHAR(20) NULL, OCCURRENCES INT DEFAULT NULL)
14:24:57 policy-db-migrator | --------------
14:24:57 policy-db-migrator | 
14:24:57 policy-db-migrator | 
14:24:57 policy-db-migrator | > upgrade 0180-jpatoscacapabilityassignment_properties.sql
14:24:57 policy-db-migrator | --------------
14:24:57 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilityassignment_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGTEXT NULL, PROPERTIES_KEY VARCHAR(255) NULL)
14:24:57 policy-db-migrator | --------------
14:24:57 policy-db-migrator | 
14:24:57 policy-db-migrator | 
14:24:57 policy-db-migrator | > upgrade 0190-jpatoscacapabilitytype_metadata.sql
14:24:57 policy-db-migrator | --------------
14:24:57 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilitytype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL)
14:24:57 policy-db-migrator | --------------
14:24:57 policy-db-migrator | 
14:24:57 policy-db-migrator | 
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:17.231884645Z level=info msg="Executing migration" id="create index UQE_dashboard_snapshot_key - v5"
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:17.233351635Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_snapshot_key - v5" duration=1.46655ms
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:17.239485405Z level=info msg="Executing migration" id="create index UQE_dashboard_snapshot_delete_key - v5"
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:17.240428867Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_snapshot_delete_key - v5" duration=943.382µs
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:17.243812321Z level=info msg="Executing migration" id="create index IDX_dashboard_snapshot_user_id - v5"
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:17.245085458Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_snapshot_user_id - v5" duration=1.271717ms
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:17.248471842Z level=info msg="Executing migration" id="alter dashboard_snapshot to mediumtext v2"
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:17.248668745Z level=info msg="Migration successfully executed" id="alter dashboard_snapshot to mediumtext v2" duration=196.573µs
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:17.254511101Z level=info msg="Executing migration" id="Update dashboard_snapshot table charset"
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:17.254539622Z level=info msg="Migration successfully executed" id="Update dashboard_snapshot table charset" duration=29.341µs
14:24:57 policy-apex-pdp | Waiting for mariadb port 3306...
14:24:57 policy-apex-pdp | mariadb (172.17.0.3:3306) open
14:24:57 policy-apex-pdp | Waiting for kafka port 9092...
14:24:57 policy-apex-pdp | kafka (172.17.0.8:9092) open
14:24:57 policy-apex-pdp | Waiting for pap port 6969...
14:24:57 policy-apex-pdp | pap (172.17.0.10:6969) open
14:24:57 policy-apex-pdp | apexApps.sh: running application 'onappf' with command 'java -Dlogback.configurationFile=/opt/app/policy/apex-pdp/etc/logback.xml -cp /opt/app/policy/apex-pdp/etc:/opt/app/policy/apex-pdp/etc/hazelcast:/opt/app/policy/apex-pdp/etc/infinispan:/opt/app/policy/apex-pdp/lib/* -Djavax.net.ssl.keyStore=/opt/app/policy/apex-pdp/etc/ssl/policy-keystore -Djavax.net.ssl.keyStorePassword=Pol1cy_0nap -Djavax.net.ssl.trustStore=/opt/app/policy/apex-pdp/etc/ssl/policy-truststore -Djavax.net.ssl.trustStorePassword=Pol1cy_0nap -Dlogback.configurationFile=/opt/app/policy/apex-pdp/etc/logback.xml -Dhazelcast.config=/opt/app/policy/apex-pdp/etc/hazelcast.xml -Dhazelcast.mancenter.enabled=false org.onap.policy.apex.services.onappf.ApexStarterMain -c /opt/app/policy/apex-pdp/etc/onappf/config/OnapPfConfig.json'
14:24:57 policy-apex-pdp | [2024-04-25T14:22:58.829+00:00|INFO|ApexStarterMain|main] In ApexStarter with parameters [-c, /opt/app/policy/apex-pdp/etc/onappf/config/OnapPfConfig.json]
14:24:57 policy-apex-pdp | [2024-04-25T14:22:59.037+00:00|INFO|ConsumerConfig|main] ConsumerConfig values:
14:24:57 policy-apex-pdp | 	allow.auto.create.topics = true
14:24:57 policy-apex-pdp | 	auto.commit.interval.ms = 5000
14:24:57 policy-apex-pdp | 	auto.include.jmx.reporter = true
14:24:57 policy-apex-pdp | 	auto.offset.reset = latest
14:24:57 policy-apex-pdp | 	bootstrap.servers = [kafka:9092]
14:24:57 policy-apex-pdp | 	check.crcs = true
14:24:57 policy-apex-pdp | 	client.dns.lookup = use_all_dns_ips
14:24:57 policy-apex-pdp | 	client.id = consumer-5f0ab5d6-63b3-4b5a-a200-3d330f0096ce-1
14:24:57 policy-apex-pdp | 	client.rack = 
14:24:57 policy-apex-pdp | 	connections.max.idle.ms = 540000
14:24:57 policy-apex-pdp | 	default.api.timeout.ms = 60000
14:24:57 policy-apex-pdp | 	enable.auto.commit = true
14:24:57 policy-apex-pdp | 	exclude.internal.topics = true
14:24:57 policy-apex-pdp | 	fetch.max.bytes = 52428800
14:24:57 policy-apex-pdp | 	fetch.max.wait.ms = 500
14:24:57 policy-apex-pdp | 	fetch.min.bytes = 1
14:24:57 policy-apex-pdp | 	group.id = 5f0ab5d6-63b3-4b5a-a200-3d330f0096ce
14:24:57 policy-apex-pdp | 	group.instance.id = null
14:24:57 policy-apex-pdp | 	heartbeat.interval.ms = 3000
14:24:57 policy-apex-pdp | 	interceptor.classes = []
14:24:57 policy-apex-pdp | 	internal.leave.group.on.close = true
14:24:57 policy-apex-pdp | 	internal.throw.on.fetch.stable.offset.unsupported = false
14:24:57 policy-apex-pdp | 	isolation.level = read_uncommitted
14:24:57 policy-apex-pdp | 	key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
14:24:57 policy-apex-pdp | 	max.partition.fetch.bytes = 1048576
14:24:57 policy-apex-pdp | 	max.poll.interval.ms = 300000
14:24:57 policy-apex-pdp | 	max.poll.records = 500
14:24:57 policy-apex-pdp | 	metadata.max.age.ms = 300000
14:24:57 policy-apex-pdp | 	metric.reporters = []
14:24:57 kafka | [2024-04-25 14:22:22,434] INFO Client environment:java.io.tmpdir=/tmp (org.apache.zookeeper.ZooKeeper)
14:24:57 kafka | [2024-04-25 14:22:22,434] INFO Client environment:java.compiler= (org.apache.zookeeper.ZooKeeper)
14:24:57 kafka | [2024-04-25 14:22:22,434] INFO Client environment:os.name=Linux (org.apache.zookeeper.ZooKeeper)
14:24:57 kafka | [2024-04-25 14:22:22,434] INFO Client environment:os.arch=amd64 (org.apache.zookeeper.ZooKeeper)
14:24:57 kafka | [2024-04-25 14:22:22,434] INFO Client environment:os.version=4.15.0-192-generic (org.apache.zookeeper.ZooKeeper)
14:24:57 kafka | [2024-04-25 14:22:22,434] INFO Client environment:user.name=appuser (org.apache.zookeeper.ZooKeeper)
14:24:57 kafka | [2024-04-25 14:22:22,434] INFO Client environment:user.home=/home/appuser (org.apache.zookeeper.ZooKeeper)
14:24:57 kafka | [2024-04-25 14:22:22,434] INFO Client environment:user.dir=/home/appuser (org.apache.zookeeper.ZooKeeper)
14:24:57 kafka | [2024-04-25 14:22:22,435] INFO Client environment:os.memory.free=1008MB (org.apache.zookeeper.ZooKeeper)
14:24:57 kafka | [2024-04-25 14:22:22,435] INFO Client environment:os.memory.max=1024MB (org.apache.zookeeper.ZooKeeper)
14:24:57 kafka | [2024-04-25 14:22:22,435] INFO Client environment:os.memory.total=1024MB (org.apache.zookeeper.ZooKeeper)
14:24:57 kafka | [2024-04-25 14:22:22,436] INFO Initiating client connection, connectString=zookeeper:2181 sessionTimeout=18000 watcher=kafka.zookeeper.ZooKeeperClient$ZooKeeperClientWatcher$@66746f57 (org.apache.zookeeper.ZooKeeper)
14:24:57 kafka | [2024-04-25 14:22:22,440] INFO jute.maxbuffer value is 4194304 Bytes (org.apache.zookeeper.ClientCnxnSocket)
14:24:57 kafka | [2024-04-25 14:22:22,445] INFO zookeeper.request.timeout value is 0. feature enabled=false (org.apache.zookeeper.ClientCnxn)
14:24:57 kafka | [2024-04-25 14:22:22,447] INFO [ZooKeeperClient Kafka server] Waiting until connected. (kafka.zookeeper.ZooKeeperClient)
14:24:57 kafka | [2024-04-25 14:22:22,452] INFO Opening socket connection to server zookeeper/172.17.0.4:2181. (org.apache.zookeeper.ClientCnxn)
14:24:57 kafka | [2024-04-25 14:22:22,458] INFO Socket connection established, initiating session, client: /172.17.0.8:53856, server: zookeeper/172.17.0.4:2181 (org.apache.zookeeper.ClientCnxn)
14:24:57 kafka | [2024-04-25 14:22:22,522] INFO Session establishment complete on server zookeeper/172.17.0.4:2181, session id = 0x1000005a2800001, negotiated timeout = 18000 (org.apache.zookeeper.ClientCnxn)
14:24:57 kafka | [2024-04-25 14:22:22,527] INFO [ZooKeeperClient Kafka server] Connected. (kafka.zookeeper.ZooKeeperClient)
14:24:57 kafka | [2024-04-25 14:22:23,481] INFO Cluster ID = lFyKLv7sTJO7XXtTZrPgZw (kafka.server.KafkaServer)
14:24:57 kafka | [2024-04-25 14:22:23,485] WARN No meta.properties file under dir /var/lib/kafka/data/meta.properties (kafka.server.BrokerMetadataCheckpoint)
14:24:57 kafka | [2024-04-25 14:22:23,532] INFO KafkaConfig values:
14:24:57 kafka | 	advertised.listeners = PLAINTEXT://kafka:9092,PLAINTEXT_HOST://localhost:29092
14:24:57 kafka | 	alter.config.policy.class.name = null
14:24:57 kafka | 	alter.log.dirs.replication.quota.window.num = 11
14:24:57 kafka | 	alter.log.dirs.replication.quota.window.size.seconds = 1
14:24:57 kafka | 	authorizer.class.name = 
14:24:57 kafka | 	auto.create.topics.enable = true
14:24:57 kafka | 	auto.include.jmx.reporter = true
14:24:57 kafka | 	auto.leader.rebalance.enable = true
14:24:57 kafka | 	background.threads = 10
14:24:57 kafka | 	broker.heartbeat.interval.ms = 2000
14:24:57 kafka | 	broker.id = 1
14:24:57 kafka | 	broker.id.generation.enable = true
14:24:57 kafka | 	broker.rack = null
14:24:57 kafka | 	broker.session.timeout.ms = 9000
14:24:57 kafka | 	client.quota.callback.class = null
14:24:57 kafka | 	compression.type = producer
14:24:57 kafka | 	connection.failed.authentication.delay.ms = 100
14:24:57 policy-pap | Waiting for mariadb port 3306...
14:24:57 policy-pap | mariadb (172.17.0.3:3306) open
14:24:57 policy-pap | Waiting for kafka port 9092...
14:24:57 policy-pap | kafka (172.17.0.8:9092) open
14:24:57 policy-pap | Waiting for api port 6969...
14:24:57 policy-pap | api (172.17.0.7:6969) open
14:24:57 policy-pap | Policy pap config file: /opt/app/policy/pap/etc/papParameters.yaml
14:24:57 policy-pap | PDP group configuration file: /opt/app/policy/pap/etc/mounted/groups.json
14:24:57 policy-pap | 
14:24:57 policy-pap | .   ____          _            __ _ _
14:24:57 policy-pap | /\\ / ___'_ __ _ _(_)_ __  __ _ \ \ \ \
14:24:57 policy-pap | ( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \
14:24:57 policy-pap | \\/  ___)| |_)| | | | | || (_| |  ) ) ) )
14:24:57 policy-pap | '  |____| .__|_| |_|_| |_\__, | / / / /
14:24:57 policy-pap | =========|_|==============|___/=/_/_/_/
14:24:57 policy-pap | :: Spring Boot ::                (v3.1.10)
14:24:57 policy-pap | 
14:24:57 policy-pap | [2024-04-25T14:22:49.260+00:00|INFO|Version|background-preinit] HV000001: Hibernate Validator 8.0.1.Final
14:24:57 policy-pap | [2024-04-25T14:22:49.312+00:00|INFO|PolicyPapApplication|main] Starting PolicyPapApplication using Java 17.0.11 with PID 41 (/app/pap.jar started by policy in /opt/app/policy/pap/bin)
14:24:57 policy-pap | [2024-04-25T14:22:49.313+00:00|INFO|PolicyPapApplication|main] No active profile set, falling back to 1 default profile: "default"
14:24:57 policy-pap | [2024-04-25T14:22:51.175+00:00|INFO|RepositoryConfigurationDelegate|main] Bootstrapping Spring Data JPA repositories in DEFAULT mode.
14:24:57 policy-pap | [2024-04-25T14:22:51.271+00:00|INFO|RepositoryConfigurationDelegate|main] Finished Spring Data repository scanning in 86 ms. Found 7 JPA repository interfaces.
14:24:57 policy-pap | [2024-04-25T14:22:51.702+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.exception.ServiceExceptionHandler
14:24:57 policy-pap | [2024-04-25T14:22:51.703+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.exception.ServiceExceptionHandler
14:24:57 policy-pap | [2024-04-25T14:22:52.272+00:00|INFO|TomcatWebServer|main] Tomcat initialized with port(s): 6969 (http)
14:24:57 policy-pap | [2024-04-25T14:22:52.281+00:00|INFO|Http11NioProtocol|main] Initializing ProtocolHandler ["http-nio-6969"]
14:24:57 policy-pap | [2024-04-25T14:22:52.283+00:00|INFO|StandardService|main] Starting service [Tomcat]
14:24:57 policy-pap | [2024-04-25T14:22:52.283+00:00|INFO|StandardEngine|main] Starting Servlet engine: [Apache Tomcat/10.1.19]
14:24:57 policy-pap | [2024-04-25T14:22:52.376+00:00|INFO|[/policy/pap/v1]|main] Initializing Spring embedded WebApplicationContext
14:24:57 policy-pap | [2024-04-25T14:22:52.377+00:00|INFO|ServletWebServerApplicationContext|main] Root WebApplicationContext: initialization completed in 3001 ms
14:24:57 policy-pap | [2024-04-25T14:22:52.775+00:00|INFO|LogHelper|main] HHH000204: Processing PersistenceUnitInfo [name: default]
14:24:57 policy-pap | [2024-04-25T14:22:52.827+00:00|INFO|Version|main] HHH000412: Hibernate ORM core version 5.6.15.Final
14:24:57 policy-pap | [2024-04-25T14:22:53.212+00:00|INFO|HikariDataSource|main] HikariPool-1 - Starting...
14:24:57 policy-pap | [2024-04-25T14:22:53.310+00:00|INFO|HikariPool|main] HikariPool-1 - Added connection org.mariadb.jdbc.Connection@fd9ebde
14:24:57 policy-pap | [2024-04-25T14:22:53.312+00:00|INFO|HikariDataSource|main] HikariPool-1 - Start completed.
14:24:57 policy-pap | [2024-04-25T14:22:53.341+00:00|INFO|Dialect|main] HHH000400: Using dialect: org.hibernate.dialect.MariaDB106Dialect
14:24:57 policy-pap | [2024-04-25T14:22:54.933+00:00|INFO|JtaPlatformInitiator|main] HHH000490: Using JtaPlatform implementation: [org.hibernate.engine.transaction.jta.platform.internal.NoJtaPlatform]
14:24:57 policy-pap | [2024-04-25T14:22:54.942+00:00|INFO|LocalContainerEntityManagerFactoryBean|main] Initialized JPA EntityManagerFactory for persistence unit 'default'
14:24:57 policy-pap | [2024-04-25T14:22:55.404+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.repository.PdpGroupRepository
14:24:57 policy-pap | [2024-04-25T14:22:55.848+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.repository.PolicyStatusRepository
14:24:57 policy-pap | [2024-04-25T14:22:55.999+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.repository.PolicyAuditRepository
14:24:57 kafka | 	connections.max.idle.ms = 600000
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:17.257237357Z level=info msg="Executing migration" id="Add column external_delete_url to dashboard_snapshots table"
14:24:57 policy-apex-pdp | 	metrics.num.samples = 2
14:24:57 policy-db-migrator | > upgrade 0200-jpatoscacapabilitytype_properties.sql
14:24:57 policy-pap | [2024-04-25T14:22:56.273+00:00|INFO|ConsumerConfig|main] ConsumerConfig values:
14:24:57 prometheus | ts=2024-04-25T14:22:14.321Z caller=main.go:573 level=info msg="No time or size retention was set so using the default time retention" duration=15d
14:24:57 kafka | 	connections.max.reauth.ms = 0
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:17.260189496Z level=info msg="Migration successfully executed" id="Add column external_delete_url to dashboard_snapshots table" duration=2.951639ms
14:24:57 policy-apex-pdp | 	metrics.recording.level = INFO
14:24:57 simulator | Policy simulator config file: /opt/app/policy/simulators/etc/mounted/simParameters.json
14:24:57 policy-db-migrator | --------------
14:24:57 policy-pap | 	allow.auto.create.topics = true
14:24:57 prometheus | ts=2024-04-25T14:22:14.321Z caller=main.go:617 level=info msg="Starting Prometheus Server" mode=server version="(version=2.51.2, branch=HEAD, revision=b4c0ab52c3e9b940ab803581ddae9b3d9a452337)"
14:24:57 kafka | 	control.plane.listener.name = null
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:17.263831974Z level=info msg="Executing migration" id="Add encrypted dashboard json column"
14:24:57 zookeeper | ===> User
14:24:57 policy-apex-pdp | 	metrics.sample.window.ms = 30000
14:24:57 simulator | overriding logback.xml
14:24:57 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilitytype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL)
14:24:57 policy-db-migrator | --------------
14:24:57 prometheus | ts=2024-04-25T14:22:14.321Z caller=main.go:622 level=info build_context="(go=go1.22.2, platform=linux/amd64, user=root@b63f02a423d9, date=20240410-14:05:54, tags=netgo,builtinassets,stringlabels)"
14:24:57 kafka | 	controlled.shutdown.enable = true
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:17.266501199Z level=info msg="Migration successfully executed" id="Add encrypted dashboard json column" duration=2.668905ms
14:24:57 zookeeper | uid=1000(appuser) gid=1000(appuser) groups=1000(appuser)
14:24:57 policy-apex-pdp | 	partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor]
14:24:57 simulator | 2024-04-25 14:22:12,670 INFO replacing 'HOST_NAME' with simulator in /opt/app/policy/simulators/etc/mounted/simParameters.json
14:24:57 policy-pap | 	auto.commit.interval.ms = 5000
14:24:57 policy-db-migrator | 
14:24:57 prometheus | ts=2024-04-25T14:22:14.321Z caller=main.go:623 level=info host_details="(Linux 4.15.0-192-generic #203-Ubuntu SMP Wed Aug 10 17:40:03 UTC 2022 x86_64 prometheus (none))"
14:24:57 kafka | 	controlled.shutdown.max.retries = 3
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:17.273637942Z level=info msg="Executing migration" id="Change dashboard_encrypted column to MEDIUMBLOB"
14:24:57 zookeeper | ===> Configuring ...
14:24:57 policy-apex-pdp | 	receive.buffer.bytes = 65536
14:24:57 simulator | 2024-04-25 14:22:12,729 INFO org.onap.policy.models.simulators starting
14:24:57 policy-pap | 	auto.include.jmx.reporter = true
14:24:57 policy-db-migrator | 
14:24:57 prometheus | ts=2024-04-25T14:22:14.321Z caller=main.go:624 level=info fd_limits="(soft=1048576, hard=1048576)"
14:24:57 kafka | 	controlled.shutdown.retry.backoff.ms = 5000
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:17.273753024Z level=info msg="Migration successfully executed" id="Change dashboard_encrypted column to MEDIUMBLOB" duration=116.001µs
14:24:57 zookeeper | ===> Running preflight checks ...
14:24:57 policy-apex-pdp | 	reconnect.backoff.max.ms = 1000
14:24:57 simulator | 2024-04-25 14:22:12,730 INFO org.onap.policy.models.simulators starting CDS gRPC Server Properties
14:24:57 policy-pap | 	auto.offset.reset = latest
14:24:57 policy-db-migrator | > upgrade 0210-jpatoscadatatype_constraints.sql
14:24:57 prometheus | ts=2024-04-25T14:22:14.321Z caller=main.go:625 level=info vm_limits="(soft=unlimited, hard=unlimited)"
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:17.277443421Z level=info msg="Executing migration" id="create quota table v1"
14:24:57 zookeeper | ===> Check if /var/lib/zookeeper/data is writable ...
14:24:57 policy-apex-pdp | 	reconnect.backoff.ms = 50
14:24:57 simulator | 2024-04-25 14:22:12,916 INFO org.onap.policy.models.simulators starting org.onap.policy.simulators.AaiSimulatorJaxRs_RESOURCE_LOCATION
14:24:57 policy-pap | 	bootstrap.servers = [kafka:9092]
14:24:57 policy-db-migrator | --------------
14:24:57 kafka | 	controller.listener.names = null
14:24:57 prometheus | ts=2024-04-25T14:22:14.324Z caller=web.go:568 level=info component=web msg="Start listening for connections" address=0.0.0.0:9090
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:17.278282163Z level=info msg="Migration successfully executed" id="create quota table v1" duration=838.502µs
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:17.283303559Z level=info msg="Executing migration" id="create index UQE_quota_org_id_user_id_target - v1"
14:24:57 policy-apex-pdp | 	request.timeout.ms = 30000
14:24:57 zookeeper | ===> Check if /var/lib/zookeeper/log is writable ...
14:24:57 policy-pap | 	check.crcs = true
14:24:57 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscadatatype_constraints (name VARCHAR(120) NULL, version VARCHAR(20) NULL, CONSTRAINTS VARCHAR(255) NULL)
14:24:57 kafka | 	controller.quorum.append.linger.ms = 25
14:24:57 kafka | 	controller.quorum.election.backoff.max.ms = 1000
14:24:57 simulator | 2024-04-25 14:22:12,917 INFO org.onap.policy.models.simulators starting A&AI simulator
14:24:57 zookeeper | ===> Launching ...
14:24:57 policy-pap | 	client.dns.lookup = use_all_dns_ips
14:24:57 policy-db-migrator | --------------
14:24:57 kafka | 	controller.quorum.election.timeout.ms = 1000
14:24:57 kafka | 	controller.quorum.fetch.timeout.ms = 2000
14:24:57 policy-apex-pdp | 	retry.backoff.ms = 100
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:17.284113609Z level=info msg="Migration successfully executed" id="create index UQE_quota_org_id_user_id_target - v1" duration=810.24µs
14:24:57 simulator | 2024-04-25 14:22:13,012 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-2e61d218==org.glassfish.jersey.servlet.ServletContainer@60d118f1{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=A&AI simulator, host=0.0.0.0, port=6666, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@64a8c844{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@3f6db3fb{/,null,STOPPED}, connector=A&AI simulator@6f152006{HTTP/1.1, (http/1.1)}{0.0.0.0:6666}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-2e61d218==org.glassfish.jersey.servlet.ServletContainer@60d118f1{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: WAITED-START
14:24:57 zookeeper | ===> Launching zookeeper ...
14:24:57 policy-pap | 	client.id = consumer-b957469a-2969-4bff-8555-1bfe3e4d4da0-1
14:24:57 policy-db-migrator | 
14:24:57 kafka | 	controller.quorum.request.timeout.ms = 2000
14:24:57 kafka | 	controller.quorum.retry.backoff.ms = 20
14:24:57 policy-apex-pdp | 	sasl.client.callback.handler.class = null
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:17.29028375Z level=info msg="Executing migration" id="Update quota table charset"
14:24:57 simulator | 2024-04-25 14:22:13,023 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-2e61d218==org.glassfish.jersey.servlet.ServletContainer@60d118f1{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=A&AI simulator, host=0.0.0.0, port=6666, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@64a8c844{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@3f6db3fb{/,null,STOPPED}, connector=A&AI simulator@6f152006{HTTP/1.1, (http/1.1)}{0.0.0.0:6666}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-2e61d218==org.glassfish.jersey.servlet.ServletContainer@60d118f1{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING
14:24:57 zookeeper | [2024-04-25 14:22:16,671] INFO Reading configuration from: /etc/kafka/zookeeper.properties (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
14:24:57 policy-pap | 	client.rack = 
14:24:57 policy-db-migrator | 
14:24:57 prometheus | ts=2024-04-25T14:22:14.325Z caller=main.go:1129 level=info msg="Starting TSDB ..."
14:24:57 kafka | 	controller.quorum.voters = []
14:24:57 policy-apex-pdp | 	sasl.jaas.config = null
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:17.29031391Z level=info msg="Migration successfully executed" id="Update quota table charset" duration=28.02µs
14:24:57 simulator | 2024-04-25 14:22:13,025 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-2e61d218==org.glassfish.jersey.servlet.ServletContainer@60d118f1{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=A&AI simulator, host=0.0.0.0, port=6666, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@64a8c844{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@3f6db3fb{/,null,STOPPED}, connector=A&AI simulator@6f152006{HTTP/1.1, (http/1.1)}{0.0.0.0:6666}, jettyThread=Thread[A&AI simulator-6666,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-2e61d218==org.glassfish.jersey.servlet.ServletContainer@60d118f1{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING
14:24:57 zookeeper | [2024-04-25 14:22:16,678] INFO clientPortAddress is 0.0.0.0:2181 (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
14:24:57 policy-pap | 	connections.max.idle.ms = 540000
14:24:57 policy-db-migrator | > upgrade 0220-jpatoscadatatype_metadata.sql
14:24:57 prometheus | ts=2024-04-25T14:22:14.327Z caller=tls_config.go:313 level=info component=web msg="Listening on" address=[::]:9090
14:24:57 kafka | 	controller.quota.window.num = 11
14:24:57 policy-apex-pdp | 	sasl.kerberos.kinit.cmd = /usr/bin/kinit
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:17.294423414Z level=info msg="Executing migration" id="create plugin_setting table"
14:24:57 simulator | 2024-04-25 14:22:13,029 INFO jetty-11.0.20; built: 2024-01-29T21:04:22.394Z; git: 922f8dc188f7011e60d0361de585fd4ac4d63064; jvm 17.0.11+9-alpine-r0
14:24:57 zookeeper | [2024-04-25 14:22:16,678] INFO secureClientPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
14:24:57 policy-pap | 	default.api.timeout.ms = 60000
14:24:57 policy-db-migrator | --------------
14:24:57 prometheus | ts=2024-04-25T14:22:14.327Z caller=tls_config.go:316 level=info component=web msg="TLS is disabled." http2=false address=[::]:9090
14:24:57 kafka | 	controller.quota.window.size.seconds = 1
14:24:57 policy-apex-pdp | 	sasl.kerberos.min.time.before.relogin = 60000
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:17.295029103Z level=info msg="Migration successfully executed" id="create plugin_setting table" duration=606.309µs
14:24:57 simulator | 2024-04-25 14:22:13,086 INFO Session workerName=node0
14:24:57 zookeeper | [2024-04-25 14:22:16,678] INFO observerMasterPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
14:24:57 policy-pap | 	enable.auto.commit = true
14:24:57 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscadatatype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL)
14:24:57 prometheus | ts=2024-04-25T14:22:14.331Z caller=head.go:616 level=info component=tsdb msg="Replaying on-disk memory mappable chunks if any"
14:24:57 kafka | 	controller.socket.timeout.ms = 30000
14:24:57 policy-apex-pdp | 	sasl.kerberos.service.name = null
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:17.299278998Z level=info msg="Executing migration" id="create index UQE_plugin_setting_org_id_plugin_id - v1"
14:24:57 simulator | 2024-04-25 14:22:13,599 INFO Using GSON for REST calls
14:24:57 zookeeper | [2024-04-25 14:22:16,678] INFO metricsProvider.className is org.apache.zookeeper.metrics.impl.DefaultMetricsProvider (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
14:24:57 policy-pap | 	exclude.internal.topics = true
14:24:57 policy-db-migrator | --------------
14:24:57 prometheus | ts=2024-04-25T14:22:14.331Z caller=head.go:698 level=info component=tsdb msg="On-disk memory mappable chunks replay completed" duration=1.85µs
14:24:57 kafka | 	create.topic.policy.class.name = null
14:24:57 policy-apex-pdp | 	sasl.kerberos.ticket.renew.jitter = 0.05
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:17.30013043Z level=info msg="Migration successfully executed" id="create index UQE_plugin_setting_org_id_plugin_id - v1" duration=845.502µs
14:24:57 simulator | 2024-04-25 14:22:13,716 INFO Started o.e.j.s.ServletContextHandler@3f6db3fb{/,null,AVAILABLE}
14:24:57 zookeeper | [2024-04-25 14:22:16,680] INFO autopurge.snapRetainCount set to 3 (org.apache.zookeeper.server.DatadirCleanupManager)
14:24:57 policy-pap | 	fetch.max.bytes = 52428800
14:24:57 policy-db-migrator | 
14:24:57 prometheus | ts=2024-04-25T14:22:14.331Z caller=head.go:706 level=info component=tsdb msg="Replaying WAL, this may take a while"
14:24:57 kafka | 	default.replication.factor = 1
14:24:57 policy-apex-pdp | 	sasl.kerberos.ticket.renew.window.factor = 0.8
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:17.305042283Z level=info msg="Executing migration" id="Add column plugin_version to plugin_settings"
14:24:57 simulator | 2024-04-25 14:22:13,729 INFO Started A&AI simulator@6f152006{HTTP/1.1, (http/1.1)}{0.0.0.0:6666}
14:24:57 zookeeper | [2024-04-25 14:22:16,680] INFO autopurge.purgeInterval set to 0 (org.apache.zookeeper.server.DatadirCleanupManager)
14:24:57 policy-pap | 	fetch.max.wait.ms = 500
14:24:57 policy-db-migrator | 
14:24:57 prometheus | ts=2024-04-25T14:22:14.331Z caller=head.go:778 level=info component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0
14:24:57 kafka | 	delegation.token.expiry.check.interval.ms = 3600000
14:24:57 policy-apex-pdp | 	sasl.login.callback.handler.class = null
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:17.31016343Z level=info msg="Migration successfully executed" id="Add column plugin_version to plugin_settings" duration=5.120597ms
14:24:57 simulator | 2024-04-25 14:22:13,737 INFO Started Server@64a8c844{STARTING}[11.0.20,sto=0] @1551ms
14:24:57 zookeeper | [2024-04-25 14:22:16,680] INFO Purge task is not scheduled. (org.apache.zookeeper.server.DatadirCleanupManager)
14:24:57 policy-pap | 	fetch.min.bytes = 1
14:24:57 policy-db-migrator | > upgrade 0230-jpatoscadatatype_properties.sql
14:24:57 prometheus | ts=2024-04-25T14:22:14.331Z caller=head.go:815 level=info component=tsdb msg="WAL replay completed" checkpoint_replay_duration=21.55µs wal_replay_duration=381.195µs wbl_replay_duration=190ns total_replay_duration=423.826µs
14:24:57 kafka | 	delegation.token.expiry.time.ms = 86400000
14:24:57 policy-apex-pdp | 	sasl.login.class = null
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:17.313815648Z level=info msg="Executing migration" id="Update plugin_setting table charset"
14:24:57 simulator | 2024-04-25 14:22:13,737 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-2e61d218==org.glassfish.jersey.servlet.ServletContainer@60d118f1{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}}, swaggerId=null, toString()=JettyServletServer(name=A&AI simulator, host=0.0.0.0, port=6666, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@64a8c844{STARTED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@3f6db3fb{/,null,AVAILABLE}, connector=A&AI simulator@6f152006{HTTP/1.1, (http/1.1)}{0.0.0.0:6666}, jettyThread=Thread[A&AI simulator-6666,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-2e61d218==org.glassfish.jersey.servlet.ServletContainer@60d118f1{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}})]: pending time is 4288 ms.
14:24:57 zookeeper | [2024-04-25 14:22:16,680] WARN Either no config or no quorum defined in config, running in standalone mode (org.apache.zookeeper.server.quorum.QuorumPeerMain)
14:24:57 policy-pap | group.id = b957469a-2969-4bff-8555-1bfe3e4d4da0
14:24:57 policy-db-migrator | --------------
14:24:57 prometheus | ts=2024-04-25T14:22:14.333Z caller=main.go:1150 level=info fs_type=EXT4_SUPER_MAGIC
14:24:57 kafka | delegation.token.master.key = null
14:24:57 policy-apex-pdp | sasl.login.connect.timeout.ms = null
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:17.313837638Z level=info msg="Migration successfully executed" id="Update plugin_setting table charset" duration=22.97µs
14:24:57 simulator | 2024-04-25 14:22:13,746 INFO org.onap.policy.models.simulators starting SDNC simulator
14:24:57 zookeeper | [2024-04-25 14:22:16,681] INFO Log4j 1.2 jmx support not found; jmx disabled. (org.apache.zookeeper.jmx.ManagedUtil)
14:24:57 policy-pap | group.instance.id = null
14:24:57 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscadatatype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL)
14:24:57 prometheus | ts=2024-04-25T14:22:14.333Z caller=main.go:1153 level=info msg="TSDB started"
14:24:57 kafka | delegation.token.max.lifetime.ms = 604800000
14:24:57 policy-apex-pdp | sasl.login.read.timeout.ms = null
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:17.31695937Z level=info msg="Executing migration" id="create session table"
14:24:57 simulator | 2024-04-25 14:22:13,749 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-bf1ec20==org.glassfish.jersey.servlet.ServletContainer@636d8afb{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SDNC simulator, host=0.0.0.0, port=6668, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@70efb718{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@b70da4c{/,null,STOPPED}, connector=SDNC simulator@c5ee75e{HTTP/1.1, (http/1.1)}{0.0.0.0:6668}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-bf1ec20==org.glassfish.jersey.servlet.ServletContainer@636d8afb{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: WAITED-START
14:24:57 zookeeper | [2024-04-25 14:22:16,682] INFO Reading configuration from: /etc/kafka/zookeeper.properties (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
14:24:57 policy-pap | heartbeat.interval.ms = 3000
14:24:57 policy-db-migrator | --------------
14:24:57 prometheus | ts=2024-04-25T14:22:14.333Z caller=main.go:1335 level=info msg="Loading configuration file" filename=/etc/prometheus/prometheus.yml
14:24:57 kafka | delegation.token.secret.key = null
14:24:57 policy-apex-pdp | sasl.login.refresh.buffer.seconds = 300
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:17.31776331Z level=info msg="Migration successfully executed" id="create session table" duration=803.51µs
14:24:57 simulator | 2024-04-25 14:22:13,750 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-bf1ec20==org.glassfish.jersey.servlet.ServletContainer@636d8afb{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SDNC simulator, host=0.0.0.0, port=6668, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@70efb718{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@b70da4c{/,null,STOPPED}, connector=SDNC simulator@c5ee75e{HTTP/1.1, (http/1.1)}{0.0.0.0:6668}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-bf1ec20==org.glassfish.jersey.servlet.ServletContainer@636d8afb{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING
14:24:57 zookeeper | [2024-04-25 14:22:16,682] INFO clientPortAddress is 0.0.0.0:2181 (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
14:24:57 policy-pap | interceptor.classes = []
14:24:57 policy-db-migrator |
14:24:57 prometheus | ts=2024-04-25T14:22:14.334Z caller=main.go:1372 level=info msg="Completed loading of configuration file" filename=/etc/prometheus/prometheus.yml totalDuration=738.919µs db_storage=1.18µs remote_storage=1.62µs web_handler=250ns query_engine=610ns scrape=208.233µs scrape_sd=114.681µs notify=20.15µs notify_sd=6.62µs rules=1.18µs tracing=4.43µs
14:24:57 kafka | delete.records.purgatory.purge.interval.requests = 1
14:24:57 policy-apex-pdp | sasl.login.refresh.min.period.seconds = 60
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:17.324039382Z level=info msg="Executing migration" id="Drop old table playlist table"
14:24:57 simulator | 2024-04-25 14:22:13,759 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-bf1ec20==org.glassfish.jersey.servlet.ServletContainer@636d8afb{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SDNC simulator, host=0.0.0.0, port=6668, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@70efb718{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@b70da4c{/,null,STOPPED}, connector=SDNC simulator@c5ee75e{HTTP/1.1, (http/1.1)}{0.0.0.0:6668}, jettyThread=Thread[SDNC simulator-6668,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-bf1ec20==org.glassfish.jersey.servlet.ServletContainer@636d8afb{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING
14:24:57 zookeeper | [2024-04-25 14:22:16,682] INFO secureClientPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
14:24:57 policy-pap | internal.leave.group.on.close = true
14:24:57 policy-db-migrator |
14:24:57 prometheus | ts=2024-04-25T14:22:14.334Z caller=main.go:1114 level=info msg="Server is ready to receive web requests."
14:24:57 kafka | delete.topic.enable = true
14:24:57 policy-apex-pdp | sasl.login.refresh.window.factor = 0.8
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:17.324169694Z level=info msg="Migration successfully executed" id="Drop old table playlist table" duration=129.222µs
14:24:57 simulator | 2024-04-25 14:22:13,760 INFO jetty-11.0.20; built: 2024-01-29T21:04:22.394Z; git: 922f8dc188f7011e60d0361de585fd4ac4d63064; jvm 17.0.11+9-alpine-r0
14:24:57 zookeeper | [2024-04-25 14:22:16,682] INFO observerMasterPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
14:24:57 policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false
14:24:57 policy-db-migrator | > upgrade 0240-jpatoscanodetemplate_metadata.sql
14:24:57 prometheus | ts=2024-04-25T14:22:14.334Z caller=manager.go:163 level=info component="rule manager" msg="Starting rule manager..."
14:24:57 kafka | early.start.listeners = null
14:24:57 policy-apex-pdp | sasl.login.refresh.window.jitter = 0.05
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:17.327633649Z level=info msg="Executing migration" id="Drop old table playlist_item table"
14:24:57 simulator | 2024-04-25 14:22:13,769 INFO Session workerName=node0
14:24:57 zookeeper | [2024-04-25 14:22:16,682] INFO metricsProvider.className is org.apache.zookeeper.metrics.impl.DefaultMetricsProvider (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
14:24:57 policy-pap | isolation.level = read_uncommitted
14:24:57 policy-db-migrator | --------------
14:24:57 kafka | fetch.max.bytes = 57671680
14:24:57 kafka | fetch.purgatory.purge.interval.requests = 1000
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:17.327763781Z level=info msg="Migration successfully executed" id="Drop old table playlist_item table" duration=130.832µs
14:24:57 simulator | 2024-04-25 14:22:13,851 INFO Using GSON for REST calls
14:24:57 zookeeper | [2024-04-25 14:22:16,682] INFO Starting server (org.apache.zookeeper.server.ZooKeeperServerMain)
14:24:57 policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
14:24:57 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscanodetemplate_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL)
14:24:57 policy-apex-pdp | sasl.login.retry.backoff.max.ms = 10000
14:24:57 kafka | group.consumer.assignors = [org.apache.kafka.coordinator.group.assignor.RangeAssignor]
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:17.331270297Z level=info msg="Executing migration" id="create playlist table v2"
14:24:57 simulator | 2024-04-25 14:22:13,864 INFO Started o.e.j.s.ServletContextHandler@b70da4c{/,null,AVAILABLE}
14:24:57 zookeeper | [2024-04-25 14:22:16,693] INFO ServerMetrics initialized with provider org.apache.zookeeper.metrics.impl.DefaultMetricsProvider@77eca502 (org.apache.zookeeper.server.ServerMetrics)
14:24:57 policy-pap | max.partition.fetch.bytes = 1048576
14:24:57 policy-db-migrator | --------------
14:24:57 policy-apex-pdp | sasl.login.retry.backoff.ms = 100
14:24:57 kafka | group.consumer.heartbeat.interval.ms = 5000
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:17.332582794Z level=info msg="Migration successfully executed" id="create playlist table v2" duration=1.316117ms
14:24:57 simulator | 2024-04-25 14:22:13,865 INFO Started SDNC simulator@c5ee75e{HTTP/1.1, (http/1.1)}{0.0.0.0:6668}
14:24:57 zookeeper | [2024-04-25 14:22:16,695] INFO ACL digest algorithm is: SHA1 (org.apache.zookeeper.server.auth.DigestAuthenticationProvider)
14:24:57 policy-pap | max.poll.interval.ms = 300000
14:24:57 policy-db-migrator |
14:24:57 policy-apex-pdp | sasl.mechanism = GSSAPI
14:24:57 kafka | group.consumer.max.heartbeat.interval.ms = 15000
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:17.481236071Z level=info msg="Executing migration" id="create playlist item table v2"
14:24:57 simulator | 2024-04-25 14:22:13,866 INFO Started Server@70efb718{STARTING}[11.0.20,sto=0] @1680ms
14:24:57 zookeeper | [2024-04-25 14:22:16,695] INFO zookeeper.DigestAuthenticationProvider.enabled = true (org.apache.zookeeper.server.auth.DigestAuthenticationProvider)
14:24:57 policy-pap | max.poll.records = 500
14:24:57 policy-db-migrator |
14:24:57 policy-apex-pdp | sasl.oauthbearer.clock.skew.seconds = 30
14:24:57 kafka | group.consumer.max.session.timeout.ms = 60000
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:17.4826573Z level=info msg="Migration successfully executed" id="create playlist item table v2" duration=1.421689ms
14:24:57 simulator | 2024-04-25 14:22:13,866 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-bf1ec20==org.glassfish.jersey.servlet.ServletContainer@636d8afb{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}}, swaggerId=null, toString()=JettyServletServer(name=SDNC simulator, host=0.0.0.0, port=6668, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@70efb718{STARTED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@b70da4c{/,null,AVAILABLE}, connector=SDNC simulator@c5ee75e{HTTP/1.1, (http/1.1)}{0.0.0.0:6668}, jettyThread=Thread[SDNC simulator-6668,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-bf1ec20==org.glassfish.jersey.servlet.ServletContainer@636d8afb{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}})]: pending time is 4893 ms.
14:24:57 zookeeper | [2024-04-25 14:22:16,697] INFO zookeeper.snapshot.trust.empty : false (org.apache.zookeeper.server.persistence.FileTxnSnapLog)
14:24:57 policy-pap | metadata.max.age.ms = 300000
14:24:57 policy-db-migrator | > upgrade 0250-jpatoscanodetemplate_properties.sql
14:24:57 policy-apex-pdp | sasl.oauthbearer.expected.audience = null
14:24:57 kafka | group.consumer.max.size = 2147483647
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:17.494011439Z level=info msg="Executing migration" id="Update playlist table charset"
14:24:57 simulator | 2024-04-25 14:22:13,867 INFO org.onap.policy.models.simulators starting SO simulator
14:24:57 zookeeper | [2024-04-25 14:22:16,706] INFO (org.apache.zookeeper.server.ZooKeeperServer)
14:24:57 policy-pap | metric.reporters = []
14:24:57 policy-db-migrator | --------------
14:24:57 policy-apex-pdp | sasl.oauthbearer.expected.issuer = null
14:24:57 kafka | group.consumer.min.heartbeat.interval.ms = 5000
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:17.494050839Z level=info msg="Migration successfully executed" id="Update playlist table charset" duration=40.73µs
14:24:57 simulator | 2024-04-25 14:22:13,878 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-4e70a728==org.glassfish.jersey.servlet.ServletContainer@3238e994{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SO simulator, host=0.0.0.0, port=6669, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@b7838a9{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@696f0212{/,null,STOPPED}, connector=SO simulator@4e858e0a{HTTP/1.1, (http/1.1)}{0.0.0.0:6669}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-4e70a728==org.glassfish.jersey.servlet.ServletContainer@3238e994{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: WAITED-START
14:24:57 zookeeper | [2024-04-25 14:22:16,706] INFO ______ _ (org.apache.zookeeper.server.ZooKeeperServer)
14:24:57 policy-pap | metrics.num.samples = 2
14:24:57 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscanodetemplate_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGTEXT NULL, PROPERTIES_KEY VARCHAR(255) NULL)
14:24:57 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
14:24:57 kafka | group.consumer.min.session.timeout.ms = 45000
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:17.498897192Z level=info msg="Executing migration" id="Update playlist_item table charset"
14:24:57 simulator | 2024-04-25 14:22:13,879 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-4e70a728==org.glassfish.jersey.servlet.ServletContainer@3238e994{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SO simulator, host=0.0.0.0, port=6669, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@b7838a9{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@696f0212{/,null,STOPPED}, connector=SO simulator@4e858e0a{HTTP/1.1, (http/1.1)}{0.0.0.0:6669}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-4e70a728==org.glassfish.jersey.servlet.ServletContainer@3238e994{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING
14:24:57 zookeeper | [2024-04-25 14:22:16,706] INFO |___ / | | (org.apache.zookeeper.server.ZooKeeperServer)
14:24:57 policy-pap | metrics.recording.level = INFO
14:24:57 policy-db-migrator | --------------
14:24:57 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
14:24:57 kafka | group.consumer.session.timeout.ms = 45000
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:17.498936633Z level=info msg="Migration successfully executed" id="Update playlist_item table charset" duration=40.911µs
14:24:57 simulator | 2024-04-25 14:22:13,880 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-4e70a728==org.glassfish.jersey.servlet.ServletContainer@3238e994{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SO simulator, host=0.0.0.0, port=6669, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@b7838a9{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@696f0212{/,null,STOPPED}, connector=SO simulator@4e858e0a{HTTP/1.1, (http/1.1)}{0.0.0.0:6669}, jettyThread=Thread[SO simulator-6669,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-4e70a728==org.glassfish.jersey.servlet.ServletContainer@3238e994{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING
14:24:57 zookeeper | [2024-04-25 14:22:16,706] INFO / / ___ ___ | | __ ___ ___ _ __ ___ _ __ (org.apache.zookeeper.server.ZooKeeperServer)
14:24:57 policy-pap | metrics.sample.window.ms = 30000
14:24:57 policy-db-migrator |
14:24:57 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
14:24:57 kafka | group.coordinator.new.enable = false
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:17.503506693Z level=info msg="Executing migration" id="Add playlist column created_at"
14:24:57 simulator | 2024-04-25 14:22:13,880 INFO jetty-11.0.20; built: 2024-01-29T21:04:22.394Z; git: 922f8dc188f7011e60d0361de585fd4ac4d63064; jvm 17.0.11+9-alpine-r0
14:24:57 zookeeper | [2024-04-25 14:22:16,706] INFO / / / _ \ / _ \ | |/ / / _ \ / _ \ | '_ \ / _ \ | '__| (org.apache.zookeeper.server.ZooKeeperServer)
14:24:57 policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor]
14:24:57 policy-db-migrator |
14:24:57 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.url = null
14:24:57 kafka | group.coordinator.threads = 1
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:17.508535029Z level=info msg="Migration successfully executed" id="Add playlist column created_at" duration=5.029166ms
14:24:57 simulator | 2024-04-25 14:22:13,921 INFO Session workerName=node0
14:24:57 zookeeper | [2024-04-25 14:22:16,706] INFO / /__ | (_) | | (_) | | < | __/ | __/ | |_) | | __/ | | (org.apache.zookeeper.server.ZooKeeperServer)
14:24:57 policy-pap | receive.buffer.bytes = 65536
14:24:57 policy-db-migrator | > upgrade 0260-jpatoscanodetype_metadata.sql
14:24:57 policy-apex-pdp | sasl.oauthbearer.scope.claim.name = scope
14:24:57 kafka | group.initial.rebalance.delay.ms = 3000
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:17.513402123Z level=info msg="Executing migration" id="Add playlist column updated_at"
14:24:57 simulator | 2024-04-25 14:22:13,990 INFO Using GSON for REST calls
14:24:57 zookeeper | [2024-04-25 14:22:16,706] INFO /_____| \___/ \___/ |_|\_\ \___| \___| | .__/ \___| |_| (org.apache.zookeeper.server.ZooKeeperServer)
14:24:57 policy-pap | reconnect.backoff.max.ms = 1000
14:24:57 policy-db-migrator | --------------
14:24:57 policy-apex-pdp | sasl.oauthbearer.sub.claim.name = sub
14:24:57 kafka | group.max.session.timeout.ms = 1800000
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:17.516756587Z level=info msg="Migration successfully executed" id="Add playlist column updated_at" duration=3.354044ms
14:24:57 simulator | 2024-04-25 14:22:14,002 INFO Started o.e.j.s.ServletContextHandler@696f0212{/,null,AVAILABLE}
14:24:57 zookeeper | [2024-04-25 14:22:16,707] INFO | | (org.apache.zookeeper.server.ZooKeeperServer)
14:24:57 policy-pap | reconnect.backoff.ms = 50
14:24:57 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscanodetype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL)
14:24:57 policy-apex-pdp | sasl.oauthbearer.token.endpoint.url = null
14:24:57 kafka | group.max.size = 2147483647
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:17.527402516Z level=info msg="Executing migration" id="drop preferences table v2"
14:24:57 simulator | 2024-04-25 14:22:14,003 INFO Started SO simulator@4e858e0a{HTTP/1.1, (http/1.1)}{0.0.0.0:6669}
14:24:57 zookeeper | [2024-04-25 14:22:16,707] INFO |_| (org.apache.zookeeper.server.ZooKeeperServer)
14:24:57 policy-pap | request.timeout.ms = 30000
14:24:57 policy-db-migrator | --------------
14:24:57 policy-apex-pdp | security.protocol = PLAINTEXT
14:24:57 kafka | group.min.session.timeout.ms = 6000
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:17.52772091Z level=info msg="Migration successfully executed" id="drop preferences table v2" duration=317.314µs
14:24:57 simulator | 2024-04-25 14:22:14,004 INFO Started Server@b7838a9{STARTING}[11.0.20,sto=0] @1818ms
14:24:57 zookeeper | [2024-04-25 14:22:16,707] INFO (org.apache.zookeeper.server.ZooKeeperServer)
14:24:57 policy-pap | retry.backoff.ms = 100
14:24:57 policy-db-migrator |
14:24:57 policy-apex-pdp | security.providers = null
14:24:57 kafka | initial.broker.registration.timeout.ms = 60000
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:17.535233728Z level=info msg="Executing migration" id="drop preferences table v3"
14:24:57 simulator | 2024-04-25 14:22:14,004 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-4e70a728==org.glassfish.jersey.servlet.ServletContainer@3238e994{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}}, swaggerId=null, toString()=JettyServletServer(name=SO simulator, host=0.0.0.0, port=6669, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@b7838a9{STARTED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@696f0212{/,null,AVAILABLE}, connector=SO simulator@4e858e0a{HTTP/1.1, (http/1.1)}{0.0.0.0:6669}, jettyThread=Thread[SO simulator-6669,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-4e70a728==org.glassfish.jersey.servlet.ServletContainer@3238e994{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}})]: pending time is 4875 ms.
14:24:57 zookeeper | [2024-04-25 14:22:16,708] INFO Server environment:zookeeper.version=3.8.4-9316c2a7a97e1666d8f4593f34dd6fc36ecc436c, built on 2024-02-12 22:16 UTC (org.apache.zookeeper.server.ZooKeeperServer)
14:24:57 policy-pap | sasl.client.callback.handler.class = null
14:24:57 policy-db-migrator |
14:24:57 policy-apex-pdp | send.buffer.bytes = 131072
14:24:57 kafka | inter.broker.listener.name = PLAINTEXT
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:17.535519823Z level=info msg="Migration successfully executed" id="drop preferences table v3" duration=286.095µs
14:24:57 simulator | 2024-04-25 14:22:14,018 INFO org.onap.policy.models.simulators starting VFC simulator
14:24:57 zookeeper | [2024-04-25 14:22:16,708] INFO Server environment:host.name=db21b226f583 (org.apache.zookeeper.server.ZooKeeperServer)
14:24:57 policy-pap | sasl.jaas.config = null
14:24:57 policy-db-migrator | > upgrade 0270-jpatoscanodetype_properties.sql
14:24:57 policy-apex-pdp | session.timeout.ms = 45000
14:24:57 kafka | inter.broker.protocol.version = 3.6-IV2
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:17.539561175Z level=info msg="Executing migration" id="create preferences table v3"
14:24:57 simulator | 2024-04-25 14:22:14,020 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-47a86fbb==org.glassfish.jersey.servlet.ServletContainer@f8e3b478{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=VFC simulator, host=0.0.0.0, port=6670, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@f478a81{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@19553973{/,null,STOPPED}, connector=VFC simulator@1c3146bc{HTTP/1.1, (http/1.1)}{0.0.0.0:6670}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-47a86fbb==org.glassfish.jersey.servlet.ServletContainer@f8e3b478{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: WAITED-START
14:24:57 zookeeper | [2024-04-25 14:22:16,708] INFO Server environment:java.version=11.0.22 (org.apache.zookeeper.server.ZooKeeperServer)
14:24:57 policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit
14:24:57 policy-db-migrator | --------------
14:24:57 policy-apex-pdp | socket.connection.setup.timeout.max.ms = 30000
14:24:57 kafka | kafka.metrics.polling.interval.secs = 10
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:17.540959883Z level=info msg="Migration successfully executed" id="create preferences table v3" duration=1.399818ms
14:24:57 simulator | 2024-04-25 14:22:14,020 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-47a86fbb==org.glassfish.jersey.servlet.ServletContainer@f8e3b478{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=VFC simulator, host=0.0.0.0, port=6670, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@f478a81{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@19553973{/,null,STOPPED}, connector=VFC simulator@1c3146bc{HTTP/1.1, (http/1.1)}{0.0.0.0:6670}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-47a86fbb==org.glassfish.jersey.servlet.ServletContainer@f8e3b478{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING
14:24:57 zookeeper | [2024-04-25 14:22:16,708] INFO Server environment:java.vendor=Azul Systems, Inc. (org.apache.zookeeper.server.ZooKeeperServer)
14:24:57 policy-pap | sasl.kerberos.min.time.before.relogin = 60000
14:24:57 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscanodetype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL)
14:24:57 policy-apex-pdp | socket.connection.setup.timeout.ms = 10000
14:24:57 kafka | kafka.metrics.reporters = []
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:17.545566494Z level=info msg="Executing migration" id="Update preferences table charset"
14:24:57 simulator | 2024-04-25 14:22:14,021 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-47a86fbb==org.glassfish.jersey.servlet.ServletContainer@f8e3b478{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=VFC simulator, host=0.0.0.0, port=6670, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@f478a81{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@19553973{/,null,STOPPED}, connector=VFC simulator@1c3146bc{HTTP/1.1, (http/1.1)}{0.0.0.0:6670}, jettyThread=Thread[VFC simulator-6670,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-47a86fbb==org.glassfish.jersey.servlet.ServletContainer@f8e3b478{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING
14:24:57 zookeeper | [2024-04-25 14:22:16,708] INFO Server environment:java.home=/usr/lib/jvm/java-11-zulu-openjdk-ca (org.apache.zookeeper.server.ZooKeeperServer)
14:24:57 policy-pap | sasl.kerberos.service.name = null
14:24:57 policy-db-migrator | --------------
14:24:57 policy-apex-pdp | ssl.cipher.suites = null
14:24:57 kafka | leader.imbalance.check.interval.seconds = 300
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:17.545594824Z level=info msg="Migration successfully executed" id="Update preferences table charset" duration=30.23µs
14:24:57 simulator | 2024-04-25 14:22:14,022 INFO jetty-11.0.20; built: 2024-01-29T21:04:22.394Z; git: 922f8dc188f7011e60d0361de585fd4ac4d63064; jvm 17.0.11+9-alpine-r0
14:24:57 zookeeper | [2024-04-25 14:22:16,709] INFO Server environment:java.class.path=/usr/bin/../share/java/kafka/scala-logging_2.13-3.9.4.jar:/usr/bin/../share/java/kafka/jersey-common-2.39.1.jar:/usr/bin/../share/java/kafka/swagger-annotations-2.2.8.jar:/usr/bin/../share/java/kafka/connect-json-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/kafka-storage-api-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/commons-validator-1.7.jar:/usr/bin/../share/java/kafka/javax.servlet-api-3.1.0.jar:/usr/bin/../share/java/kafka/aopalliance-repackaged-2.6.1.jar:/usr/bin/../share/java/kafka/netty-transport-4.1.100.Final.jar:/usr/bin/../share/java/kafka/rocksdbjni-7.9.2.jar:/usr/bin/../share/java/kafka/jetty-http-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/kafka-log4j-appender-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jackson-annotations-2.13.5.jar:/usr/bin/../share/java/kafka/commons-io-2.11.0.jar:/usr/bin/../share/java/kafka/kafka-clients-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/javax.activation-api-1.2.0.jar:/usr/bin/../share/java/kafka/jetty-security-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/commons-cli-1.4.jar:/usr/bin/../share/java/kafka/jetty-server-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/slf4j-reload4j-1.7.36.jar:/usr/bin/../share/java/kafka/scala-reflect-2.13.11.jar:/usr/bin/../share/java/kafka/kafka-server-common-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/kafka-group-coordinator-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/netty-buffer-4.1.100.Final.jar:/usr/bin/../share/java/kafka/connect-runtime-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/trogdor-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/connect-api-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jakarta.ws.rs-api-2.1.6.jar:/usr/bin/../share/java/kafka/jakarta.annotation-api-1.3.5.jar:/usr/bin/../share/java/kafka/kafka-streams-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/scala-java8-compat_2.13-1.0.2.jar:/usr/bin/../share/java/kafka/javax.ws.rs-api-2.1.1.jar:/usr/bin/../share/java/kafka/zookeeper-jute-3.8.4.jar:/usr/bin/../share/java/kafka/netty-resolver-4.1.100.Final.jar:/usr/bin/../share/java/kafka/hk2-api-2.6.1.jar:/usr/bin/../share/java/kafka/kafka-storage-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/zstd-jni-1.5.5-1.jar:/usr/bin/../share/java/kafka/jackson-dataformat-csv-2.13.5.jar:/usr/bin/../share/java/kafka/kafka.jar:/usr/bin/../share/java/kafka/scala-library-2.13.11.jar:/usr/bin/../share/java/kafka/jakarta.inject-2.6.1.jar:/usr/bin/../share/java/kafka/jakarta.xml.bind-api-2.3.3.jar:/usr/bin/../share/java/kafka/jetty-continuation-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/connect-transforms-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jose4j-0.9.4.jar:/usr/bin/../share/java/kafka/hk2-locator-2.6.1.jar:/usr/bin/../share/java/kafka/reflections-0.10.2.jar:/usr/bin/../share/java/kafka/slf4j-api-1.7.36.jar:/usr/bin/../share/java/kafka/paranamer-2.8.jar:/usr/bin/../share/java/kafka/commons-beanutils-1.9.4.jar:/usr/bin/../share/java/kafka/jaxb-api-2.3.1.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-2.39.1.jar:/usr/bin/../share/java/kafka/hk2-utils-2.6.1.jar:/usr/bin/../share/java/kafka/jackson-module-scala_2.13-2.13.5.jar:/usr/bin/../share/java/kafka/reload4j-1.2.25.jar:/usr/bin/../share/java/kafka/connect-mirror-client-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jetty-servlet-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/jackson-core-2.13.5.jar:/usr/bin/../share/java/kafka/jersey-hk2-2.39.1.jar:/usr/bin/../share/java/kafka/jackson-databind-2.13.5.jar:/usr/bin/../share/java/kafka/jetty-servlets-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/jersey-client-2.39.1.jar:/usr/bin/../share/java/kafka/osgi-resource-locator-1.0.3.jar:/usr/bin/../share/java/kafka/commons-digester-2.1.jar:/usr/bin/../share/java/kafka/netty-transport-native-epoll-4.1.100.Final.jar:/usr/bin/../share/java/kafka/argparse
4j-0.7.0.jar:/usr/bin/../share/java/kafka/connect-mirror-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jackson-datatype-jdk8-2.13.5.jar:/usr/bin/../share/java/kafka/audience-annotations-0.12.0.jar:/usr/bin/../share/java/kafka/jackson-module-jaxb-annotations-2.13.5.jar:/usr/bin/../share/java/kafka/kafka-raft-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/javax.annotation-api-1.3.2.jar:/usr/bin/../share/java/kafka/maven-artifact-3.8.8.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-json-provider-2.13.5.jar:/usr/bin/../share/java/kafka/jakarta.validation-api-2.0.2.jar:/usr/bin/../share/java/kafka/zookeeper-3.8.4.jar:/usr/bin/../share/java/kafka/jersey-server-2.39.1.jar:/usr/bin/../share/java/kafka/commons-lang3-3.8.1.jar:/usr/bin/../share/java/kafka/jopt-simple-5.0.4.jar:/usr/bin/../share/java/kafka/error_prone_annotations-2.10.0.jar:/usr/bin/../share/java/kafka/lz4-java-1.8.0.jar:/usr/bin/../share/java/kafka/jetty-util-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/kafka-tools-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jakarta.activation-api-1.2.2.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-core-2.39.1.jar:/usr/bin/../share/java/kafka/checker-qual-3.19.0.jar:/usr/bin/../share/java/kafka/kafka-metadata-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/pcollections-4.0.1.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-base-2.13.5.jar:/usr/bin/../share/java/kafka/commons-logging-1.2.jar:/usr/bin/../share/java/kafka/jsr305-3.0.2.jar:/usr/bin/../share/java/kafka/jetty-io-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/netty-codec-4.1.100.Final.jar:/usr/bin/../share/java/kafka/netty-transport-native-unix-common-4.1.100.Final.jar:/usr/bin/../share/java/kafka/scala-collection-compat_2.13-2.10.0.jar:/usr/bin/../share/java/kafka/metrics-core-2.2.0.jar:/usr/bin/../share/java/kafka/kafka-streams-test-utils-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/kafka-streams-examples-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/netty-handler-4.1.100.Final.jar:/usr/bin/../share/java/kafka/com
mons-collections-3.2.2.jar:/usr/bin/../share/java/kafka/javassist-3.29.2-GA.jar:/usr/bin/../share/java/kafka/caffeine-2.9.3.jar:/usr/bin/../share/java/kafka/plexus-utils-3.3.1.jar:/usr/bin/../share/java/kafka/kafka-tools-api-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/activation-1.1.1.jar:/usr/bin/../share/java/kafka/netty-common-4.1.100.Final.jar:/usr/bin/../share/java/kafka/kafka-streams-scala_2.13-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/metrics-core-4.1.12.1.jar:/usr/bin/../share/java/kafka/jline-3.25.1.jar:/usr/bin/../share/java/kafka/jetty-client-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/kafka_2.13-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/connect-basic-auth-extension-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jetty-util-ajax-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/snappy-java-1.1.10.5.jar:/usr/bin/../share/java/kafka/kafka-shell-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/netty-transport-classes-epoll-4.1.100.Final.jar:/usr/bin/../share/java/confluent-telemetry/* (org.apache.zookeeper.server.ZooKeeperServer) 14:24:57 policy-db-migrator | 14:24:57 policy-apex-pdp | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 14:24:57 kafka | leader.imbalance.per.broker.percentage = 10 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:17.550340897Z level=info msg="Executing migration" id="Add column team_id in preferences" 14:24:57 simulator | 2024-04-25 14:22:14,025 INFO Session workerName=node0 14:24:57 zookeeper | [2024-04-25 14:22:16,709] INFO Server environment:java.library.path=/usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.server.ZooKeeperServer) 14:24:57 policy-apex-pdp | ssl.endpoint.identification.algorithm = https 14:24:57 kafka | listener.security.protocol.map = PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT 14:24:57 policy-db-migrator | 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:17.55517615Z level=info msg="Migration successfully executed" id="Add column team_id in preferences" duration=4.832473ms 
14:24:57 simulator | 2024-04-25 14:22:14,089 INFO Using GSON for REST calls 14:24:57 zookeeper | [2024-04-25 14:22:16,709] INFO Server environment:java.io.tmpdir=/tmp (org.apache.zookeeper.server.ZooKeeperServer) 14:24:57 policy-apex-pdp | ssl.engine.factory.class = null 14:24:57 kafka | listeners = PLAINTEXT://0.0.0.0:9092,PLAINTEXT_HOST://0.0.0.0:29092 14:24:57 policy-db-migrator | > upgrade 0280-jpatoscapolicy_metadata.sql 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:17.55898167Z level=info msg="Executing migration" id="Update team_id column values in preferences" 14:24:57 simulator | 2024-04-25 14:22:14,097 INFO Started o.e.j.s.ServletContextHandler@19553973{/,null,AVAILABLE} 14:24:57 zookeeper | [2024-04-25 14:22:16,709] INFO Server environment:java.compiler= (org.apache.zookeeper.server.ZooKeeperServer) 14:24:57 policy-apex-pdp | ssl.key.password = null 14:24:57 kafka | log.cleaner.backoff.ms = 15000 14:24:57 policy-db-migrator | -------------- 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:17.559316064Z level=info msg="Migration successfully executed" id="Update team_id column values in preferences" duration=335.284µs 14:24:57 simulator | 2024-04-25 14:22:14,098 INFO Started VFC simulator@1c3146bc{HTTP/1.1, (http/1.1)}{0.0.0.0:6670} 14:24:57 zookeeper | [2024-04-25 14:22:16,709] INFO Server environment:os.name=Linux (org.apache.zookeeper.server.ZooKeeperServer) 14:24:57 policy-apex-pdp | ssl.keymanager.algorithm = SunX509 14:24:57 kafka | log.cleaner.dedupe.buffer.size = 134217728 14:24:57 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicy_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:17.562798209Z level=info msg="Executing migration" id="Add column week_start in preferences" 14:24:57 simulator | 2024-04-25 14:22:14,098 INFO Started Server@f478a81{STARTING}[11.0.20,sto=0] @1912ms 14:24:57 zookeeper | 
[2024-04-25 14:22:16,709] INFO Server environment:os.arch=amd64 (org.apache.zookeeper.server.ZooKeeperServer) 14:24:57 policy-apex-pdp | ssl.keystore.certificate.chain = null 14:24:57 kafka | log.cleaner.delete.retention.ms = 86400000 14:24:57 policy-db-migrator | -------------- 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:17.566088482Z level=info msg="Migration successfully executed" id="Add column week_start in preferences" duration=3.289573ms 14:24:57 zookeeper | [2024-04-25 14:22:16,709] INFO Server environment:os.version=4.15.0-192-generic (org.apache.zookeeper.server.ZooKeeperServer) 14:24:57 simulator | 2024-04-25 14:22:14,098 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-47a86fbb==org.glassfish.jersey.servlet.ServletContainer@f8e3b478{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}}, swaggerId=null, toString()=JettyServletServer(name=VFC simulator, host=0.0.0.0, port=6670, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@f478a81{STARTED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@19553973{/,null,AVAILABLE}, connector=VFC simulator@1c3146bc{HTTP/1.1, (http/1.1)}{0.0.0.0:6670}, jettyThread=Thread[VFC simulator-6670,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-47a86fbb==org.glassfish.jersey.servlet.ServletContainer@f8e3b478{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}})]: pending time is 4923 ms. 
14:24:57 policy-apex-pdp | ssl.keystore.key = null 14:24:57 kafka | log.cleaner.enable = true 14:24:57 policy-db-migrator | 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:17.57500286Z level=info msg="Executing migration" id="Add column preferences.json_data" 14:24:57 zookeeper | [2024-04-25 14:22:16,709] INFO Server environment:user.name=appuser (org.apache.zookeeper.server.ZooKeeperServer) 14:24:57 simulator | 2024-04-25 14:22:14,099 INFO org.onap.policy.models.simulators started 14:24:57 policy-apex-pdp | ssl.keystore.location = null 14:24:57 kafka | log.cleaner.io.buffer.load.factor = 0.9 14:24:57 policy-db-migrator | 14:24:57 policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:17.579579369Z level=info msg="Migration successfully executed" id="Add column preferences.json_data" duration=4.574199ms 14:24:57 zookeeper | [2024-04-25 14:22:16,709] INFO Server environment:user.home=/home/appuser (org.apache.zookeeper.server.ZooKeeperServer) 14:24:57 policy-apex-pdp | ssl.keystore.password = null 14:24:57 kafka | log.cleaner.io.buffer.size = 524288 14:24:57 policy-db-migrator | > upgrade 0290-jpatoscapolicy_properties.sql 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:17.585067281Z level=info msg="Executing migration" id="alter preferences.json_data to mediumtext v1" 14:24:57 zookeeper | [2024-04-25 14:22:16,709] INFO Server environment:user.dir=/home/appuser (org.apache.zookeeper.server.ZooKeeperServer) 14:24:57 policy-apex-pdp | ssl.keystore.type = JKS 14:24:57 kafka | log.cleaner.io.max.bytes.per.second = 1.7976931348623157E308 14:24:57 policy-db-migrator | -------------- 14:24:57 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:17.585135402Z level=info msg="Migration successfully executed" id="alter preferences.json_data to mediumtext v1" duration=68.641µs 14:24:57 zookeeper | [2024-04-25 14:22:16,709] INFO Server 
environment:os.memory.free=491MB (org.apache.zookeeper.server.ZooKeeperServer) 14:24:57 policy-apex-pdp | ssl.protocol = TLSv1.3 14:24:57 kafka | log.cleaner.max.compaction.lag.ms = 9223372036854775807 14:24:57 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicy_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGTEXT NULL, PROPERTIES_KEY VARCHAR(255) NULL) 14:24:57 policy-pap | sasl.login.callback.handler.class = null 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:17.589558111Z level=info msg="Executing migration" id="Add preferences index org_id" 14:24:57 zookeeper | [2024-04-25 14:22:16,710] INFO Server environment:os.memory.max=512MB (org.apache.zookeeper.server.ZooKeeperServer) 14:24:57 policy-apex-pdp | ssl.provider = null 14:24:57 kafka | log.cleaner.min.cleanable.ratio = 0.5 14:24:57 policy-db-migrator | -------------- 14:24:57 policy-pap | sasl.login.class = null 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:17.590591824Z level=info msg="Migration successfully executed" id="Add preferences index org_id" duration=1.033784ms 14:24:57 zookeeper | [2024-04-25 14:22:16,710] INFO Server environment:os.memory.total=512MB (org.apache.zookeeper.server.ZooKeeperServer) 14:24:57 policy-apex-pdp | ssl.secure.random.implementation = null 14:24:57 kafka | log.cleaner.min.compaction.lag.ms = 0 14:24:57 policy-db-migrator | 14:24:57 policy-pap | sasl.login.connect.timeout.ms = null 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:17.597001538Z level=info msg="Executing migration" id="Add preferences index user_id" 14:24:57 zookeeper | [2024-04-25 14:22:16,710] INFO zookeeper.enableEagerACLCheck = false (org.apache.zookeeper.server.ZooKeeperServer) 14:24:57 policy-apex-pdp | ssl.trustmanager.algorithm = PKIX 14:24:57 kafka | log.cleaner.threads = 1 14:24:57 policy-db-migrator | 14:24:57 policy-pap | sasl.login.read.timeout.ms = null 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:17.598679399Z level=info 
msg="Migration successfully executed" id="Add preferences index user_id" duration=1.676751ms 14:24:57 zookeeper | [2024-04-25 14:22:16,710] INFO zookeeper.digest.enabled = true (org.apache.zookeeper.server.ZooKeeperServer) 14:24:57 policy-apex-pdp | ssl.truststore.certificates = null 14:24:57 kafka | log.cleanup.policy = [delete] 14:24:57 policy-db-migrator | > upgrade 0300-jpatoscapolicy_targets.sql 14:24:57 policy-pap | sasl.login.refresh.buffer.seconds = 300 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:17.603380591Z level=info msg="Executing migration" id="create alert table v1" 14:24:57 zookeeper | [2024-04-25 14:22:16,710] INFO zookeeper.closeSessionTxn.enabled = true (org.apache.zookeeper.server.ZooKeeperServer) 14:24:57 policy-apex-pdp | ssl.truststore.location = null 14:24:57 kafka | log.dir = /tmp/kafka-logs 14:24:57 policy-db-migrator | -------------- 14:24:57 policy-pap | sasl.login.refresh.min.period.seconds = 60 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:17.60482428Z level=info msg="Migration successfully executed" id="create alert table v1" duration=1.441559ms 14:24:57 zookeeper | [2024-04-25 14:22:16,710] INFO zookeeper.flushDelay = 0 ms (org.apache.zookeeper.server.ZooKeeperServer) 14:24:57 policy-apex-pdp | ssl.truststore.password = null 14:24:57 kafka | log.dirs = /var/lib/kafka/data 14:24:57 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicy_targets (name VARCHAR(120) NULL, version VARCHAR(20) NULL) 14:24:57 policy-pap | sasl.login.refresh.window.factor = 0.8 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:17.609332189Z level=info msg="Executing migration" id="add index alert org_id & id " 14:24:57 zookeeper | [2024-04-25 14:22:16,710] INFO zookeeper.maxWriteQueuePollTime = 0 ms (org.apache.zookeeper.server.ZooKeeperServer) 14:24:57 policy-apex-pdp | ssl.truststore.type = JKS 14:24:57 kafka | log.flush.interval.messages = 9223372036854775807 14:24:57 policy-db-migrator | -------------- 14:24:57 policy-pap | 
sasl.login.refresh.window.jitter = 0.05 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:17.610481474Z level=info msg="Migration successfully executed" id="add index alert org_id & id " duration=1.146995ms 14:24:57 zookeeper | [2024-04-25 14:22:16,710] INFO zookeeper.maxBatchSize=1000 (org.apache.zookeeper.server.ZooKeeperServer) 14:24:57 policy-apex-pdp | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 14:24:57 kafka | log.flush.interval.ms = null 14:24:57 policy-db-migrator | 14:24:57 policy-pap | sasl.login.retry.backoff.max.ms = 10000 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:17.618351277Z level=info msg="Executing migration" id="add index alert state" 14:24:57 zookeeper | [2024-04-25 14:22:16,711] INFO zookeeper.intBufferStartingSizeBytes = 1024 (org.apache.zookeeper.server.ZooKeeperServer) 14:24:57 policy-apex-pdp | 14:24:57 kafka | log.flush.offset.checkpoint.interval.ms = 60000 14:24:57 policy-db-migrator | 14:24:57 policy-pap | sasl.login.retry.backoff.ms = 100 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:17.62007403Z level=info msg="Migration successfully executed" id="add index alert state" duration=1.722653ms 14:24:57 zookeeper | [2024-04-25 14:22:16,711] INFO Weighed connection throttling is disabled (org.apache.zookeeper.server.BlueThrottle) 14:24:57 policy-apex-pdp | [2024-04-25T14:22:59.265+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 14:24:57 kafka | log.flush.scheduler.interval.ms = 9223372036854775807 14:24:57 policy-db-migrator | > upgrade 0310-jpatoscapolicytype_metadata.sql 14:24:57 policy-pap | sasl.mechanism = GSSAPI 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:17.625952457Z level=info msg="Executing migration" id="add index alert dashboard_id" 14:24:57 zookeeper | [2024-04-25 14:22:16,712] INFO minSessionTimeout set to 4000 ms (org.apache.zookeeper.server.ZooKeeperServer) 14:24:57 policy-apex-pdp | [2024-04-25T14:22:59.266+00:00|INFO|AppInfoParser|main] Kafka 
commitId: 5e3c2b738d253ff5 14:24:57 kafka | log.flush.start.offset.checkpoint.interval.ms = 60000 14:24:57 policy-db-migrator | -------------- 14:24:57 policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:17.626738817Z level=info msg="Migration successfully executed" id="add index alert dashboard_id" duration=786.44µs 14:24:57 zookeeper | [2024-04-25 14:22:16,713] INFO maxSessionTimeout set to 40000 ms (org.apache.zookeeper.server.ZooKeeperServer) 14:24:57 policy-apex-pdp | [2024-04-25T14:22:59.266+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1714054979264 14:24:57 kafka | log.index.interval.bytes = 4096 14:24:57 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicytype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) 14:24:57 policy-pap | sasl.oauthbearer.expected.audience = null 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:17.630682219Z level=info msg="Executing migration" id="Create alert_rule_tag table v1" 14:24:57 zookeeper | [2024-04-25 14:22:16,714] INFO getData response cache size is initialized with value 400. (org.apache.zookeeper.server.ResponseCache) 14:24:57 policy-apex-pdp | [2024-04-25T14:22:59.268+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-5f0ab5d6-63b3-4b5a-a200-3d330f0096ce-1, groupId=5f0ab5d6-63b3-4b5a-a200-3d330f0096ce] Subscribed to topic(s): policy-pdp-pap 14:24:57 kafka | log.index.size.max.bytes = 10485760 14:24:57 policy-db-migrator | -------------- 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:17.631330788Z level=info msg="Migration successfully executed" id="Create alert_rule_tag table v1" duration=648.599µs 14:24:57 zookeeper | [2024-04-25 14:22:16,714] INFO getChildren response cache size is initialized with value 400. 
(org.apache.zookeeper.server.ResponseCache) 14:24:57 policy-apex-pdp | [2024-04-25T14:22:59.279+00:00|INFO|ServiceManager|main] service manager starting 14:24:57 kafka | log.local.retention.bytes = -2 14:24:57 policy-db-migrator | 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:17.63688785Z level=info msg="Executing migration" id="Add unique index alert_rule_tag.alert_id_tag_id" 14:24:57 zookeeper | [2024-04-25 14:22:16,714] INFO zookeeper.pathStats.slotCapacity = 60 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) 14:24:57 policy-apex-pdp | [2024-04-25T14:22:59.280+00:00|INFO|ServiceManager|main] service manager starting topics 14:24:57 kafka | log.local.retention.ms = -2 14:24:57 policy-db-migrator | 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:17.638559052Z level=info msg="Migration successfully executed" id="Add unique index alert_rule_tag.alert_id_tag_id" duration=1.666511ms 14:24:57 zookeeper | [2024-04-25 14:22:16,715] INFO zookeeper.pathStats.slotDuration = 15 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) 14:24:57 policy-apex-pdp | [2024-04-25T14:22:59.281+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=5f0ab5d6-63b3-4b5a-a200-3d330f0096ce, consumerInstance=policy-apex-pdp, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: starting 14:24:57 kafka | log.message.downconversion.enable = true 14:24:57 policy-db-migrator | > upgrade 0320-jpatoscapolicytype_properties.sql 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:17.643226923Z level=info 
msg="Executing migration" id="drop index UQE_alert_rule_tag_alert_id_tag_id - v1" 14:24:57 zookeeper | [2024-04-25 14:22:16,715] INFO zookeeper.pathStats.maxDepth = 6 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) 14:24:57 policy-pap | sasl.oauthbearer.expected.issuer = null 14:24:57 policy-apex-pdp | [2024-04-25T14:22:59.301+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: 14:24:57 kafka | log.message.format.version = 3.0-IV1 14:24:57 policy-db-migrator | -------------- 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:17.644454909Z level=info msg="Migration successfully executed" id="drop index UQE_alert_rule_tag_alert_id_tag_id - v1" duration=1.229376ms 14:24:57 zookeeper | [2024-04-25 14:22:16,715] INFO zookeeper.pathStats.initialDelay = 5 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) 14:24:57 policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 14:24:57 policy-apex-pdp | allow.auto.create.topics = true 14:24:57 kafka | log.message.timestamp.after.max.ms = 9223372036854775807 14:24:57 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicytype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL) 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:17.648495992Z level=info msg="Executing migration" id="Rename table alert_rule_tag to alert_rule_tag_v1 - v1" 14:24:57 zookeeper | [2024-04-25 14:22:16,715] INFO zookeeper.pathStats.delay = 5 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) 14:24:57 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 14:24:57 policy-apex-pdp | auto.commit.interval.ms = 5000 14:24:57 kafka | log.message.timestamp.before.max.ms = 9223372036854775807 14:24:57 policy-db-migrator | -------------- 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:17.661499443Z level=info msg="Migration successfully executed" id="Rename table alert_rule_tag to alert_rule_tag_v1 - 
v1" duration=13.002711ms 14:24:57 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 14:24:57 zookeeper | [2024-04-25 14:22:16,715] INFO zookeeper.pathStats.enabled = false (org.apache.zookeeper.server.util.RequestPathMetricsCollector) 14:24:57 policy-apex-pdp | auto.include.jmx.reporter = true 14:24:57 policy-db-migrator | 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:17.667069626Z level=info msg="Executing migration" id="Create alert_rule_tag table v2" 14:24:57 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null 14:24:57 kafka | log.message.timestamp.difference.max.ms = 9223372036854775807 14:24:57 zookeeper | [2024-04-25 14:22:16,717] INFO The max bytes for all large requests are set to 104857600 (org.apache.zookeeper.server.ZooKeeperServer) 14:24:57 policy-apex-pdp | auto.offset.reset = latest 14:24:57 policy-db-migrator | 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:17.667799675Z level=info msg="Migration successfully executed" id="Create alert_rule_tag table v2" duration=729.529µs 14:24:57 policy-pap | sasl.oauthbearer.scope.claim.name = scope 14:24:57 kafka | log.message.timestamp.type = CreateTime 14:24:57 zookeeper | [2024-04-25 14:22:16,717] INFO The large request threshold is set to -1 (org.apache.zookeeper.server.ZooKeeperServer) 14:24:57 policy-apex-pdp | bootstrap.servers = [kafka:9092] 14:24:57 policy-db-migrator | > upgrade 0330-jpatoscapolicytype_targets.sql 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:17.671041147Z level=info msg="Executing migration" id="create index UQE_alert_rule_tag_alert_id_tag_id - Add unique index alert_rule_tag.alert_id_tag_id V2" 14:24:57 kafka | log.preallocate = false 14:24:57 zookeeper | [2024-04-25 14:22:16,718] INFO zookeeper.enforce.auth.enabled = false (org.apache.zookeeper.server.AuthenticationHelper) 14:24:57 policy-apex-pdp | check.crcs = true 14:24:57 policy-db-migrator | -------------- 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:17.67199321Z level=info 
msg="Migration successfully executed" id="create index UQE_alert_rule_tag_alert_id_tag_id - Add unique index alert_rule_tag.alert_id_tag_id V2" duration=951.753µs 14:24:57 policy-pap | sasl.oauthbearer.sub.claim.name = sub 14:24:57 kafka | log.retention.bytes = -1 14:24:57 zookeeper | [2024-04-25 14:22:16,718] INFO zookeeper.enforce.auth.schemes = [] (org.apache.zookeeper.server.AuthenticationHelper) 14:24:57 policy-apex-pdp | client.dns.lookup = use_all_dns_ips 14:24:57 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicytype_targets (name VARCHAR(120) NULL, version VARCHAR(20) NULL) 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:17.676159275Z level=info msg="Executing migration" id="copy alert_rule_tag v1 to v2" 14:24:57 policy-pap | sasl.oauthbearer.token.endpoint.url = null 14:24:57 kafka | log.retention.check.interval.ms = 300000 14:24:57 zookeeper | [2024-04-25 14:22:16,718] INFO Created server with tickTime 2000 ms minSessionTimeout 4000 ms maxSessionTimeout 40000 ms clientPortListenBacklog -1 datadir /var/lib/zookeeper/log/version-2 snapdir /var/lib/zookeeper/data/version-2 (org.apache.zookeeper.server.ZooKeeperServer) 14:24:57 policy-apex-pdp | client.id = consumer-5f0ab5d6-63b3-4b5a-a200-3d330f0096ce-2 14:24:57 policy-db-migrator | -------------- 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:17.676669361Z level=info msg="Migration successfully executed" id="copy alert_rule_tag v1 to v2" duration=509.276µs 14:24:57 policy-pap | security.protocol = PLAINTEXT 14:24:57 kafka | log.retention.hours = 168 14:24:57 zookeeper | [2024-04-25 14:22:16,736] INFO Logging initialized @471ms to org.eclipse.jetty.util.log.Slf4jLog (org.eclipse.jetty.util.log) 14:24:57 policy-apex-pdp | client.rack = 14:24:57 policy-db-migrator | 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:17.683224307Z level=info msg="Executing migration" id="drop table alert_rule_tag_v1" 14:24:57 policy-pap | security.providers = null 14:24:57 kafka | 
log.retention.minutes = null 14:24:57 zookeeper | [2024-04-25 14:22:16,821] WARN o.e.j.s.ServletContextHandler@6d5620ce{/,null,STOPPED} contextPath ends with /* (org.eclipse.jetty.server.handler.ContextHandler) 14:24:57 policy-db-migrator | 14:24:57 policy-apex-pdp | connections.max.idle.ms = 540000 14:24:57 policy-apex-pdp | default.api.timeout.ms = 60000 14:24:57 policy-pap | send.buffer.bytes = 131072 14:24:57 kafka | log.retention.ms = null 14:24:57 zookeeper | [2024-04-25 14:22:16,822] WARN Empty contextPath (org.eclipse.jetty.server.handler.ContextHandler) 14:24:57 policy-db-migrator | > upgrade 0340-jpatoscapolicytype_triggers.sql 14:24:57 policy-apex-pdp | enable.auto.commit = true 14:24:57 policy-apex-pdp | exclude.internal.topics = true 14:24:57 policy-pap | session.timeout.ms = 45000 14:24:57 kafka | log.roll.hours = 168 14:24:57 zookeeper | [2024-04-25 14:22:16,839] INFO jetty-9.4.54.v20240208; built: 2024-02-08T19:42:39.027Z; git: cef3fbd6d736a21e7d541a5db490381d95a2047d; jvm 11.0.22+7-LTS (org.eclipse.jetty.server.Server) 14:24:57 policy-db-migrator | -------------- 14:24:57 policy-apex-pdp | fetch.max.bytes = 52428800 14:24:57 policy-apex-pdp | fetch.max.wait.ms = 500 14:24:57 policy-pap | socket.connection.setup.timeout.max.ms = 30000 14:24:57 kafka | log.roll.jitter.hours = 0 14:24:57 zookeeper | [2024-04-25 14:22:16,869] INFO DefaultSessionIdManager workerName=node0 (org.eclipse.jetty.server.session) 14:24:57 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicytype_triggers (name VARCHAR(120) NULL, version VARCHAR(20) NULL, TRIGGERS VARCHAR(255) NULL) 14:24:57 policy-apex-pdp | fetch.min.bytes = 1 14:24:57 policy-apex-pdp | group.id = 5f0ab5d6-63b3-4b5a-a200-3d330f0096ce 14:24:57 policy-pap | socket.connection.setup.timeout.ms = 10000 14:24:57 kafka | log.roll.jitter.ms = null 14:24:57 zookeeper | [2024-04-25 14:22:16,869] INFO No SessionScavenger set, using defaults (org.eclipse.jetty.server.session) 14:24:57 policy-db-migrator | 
--------------
14:24:57 policy-apex-pdp | group.instance.id = null
14:24:57 policy-apex-pdp | heartbeat.interval.ms = 3000
14:24:57 policy-pap | ssl.cipher.suites = null
14:24:57 kafka | log.roll.ms = null
14:24:57 zookeeper | [2024-04-25 14:22:16,870] INFO node0 Scavenging every 600000ms (org.eclipse.jetty.server.session)
14:24:57 policy-db-migrator |
14:24:57 policy-apex-pdp | interceptor.classes = []
14:24:57 policy-apex-pdp | internal.leave.group.on.close = true
14:24:57 policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
14:24:57 kafka | log.segment.bytes = 1073741824
14:24:57 zookeeper | [2024-04-25 14:22:16,872] WARN ServletContext@o.e.j.s.ServletContextHandler@6d5620ce{/,null,STARTING} has uncovered http methods for path: /* (org.eclipse.jetty.security.SecurityHandler)
14:24:57 policy-db-migrator |
14:24:57 policy-apex-pdp | internal.throw.on.fetch.stable.offset.unsupported = false
14:24:57 policy-apex-pdp | isolation.level = read_uncommitted
14:24:57 policy-pap | ssl.endpoint.identification.algorithm = https
14:24:57 kafka | log.segment.delete.delay.ms = 60000
14:24:57 zookeeper | [2024-04-25 14:22:16,879] INFO Started o.e.j.s.ServletContextHandler@6d5620ce{/,null,AVAILABLE} (org.eclipse.jetty.server.handler.ContextHandler)
14:24:57 policy-db-migrator | > upgrade 0350-jpatoscaproperty_constraints.sql
14:24:57 policy-apex-pdp | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
14:24:57 policy-apex-pdp | max.partition.fetch.bytes = 1048576
14:24:57 policy-pap | ssl.engine.factory.class = null
14:24:57 kafka | max.connection.creation.rate = 2147483647
14:24:57 zookeeper | [2024-04-25 14:22:16,890] INFO Started ServerConnector@4d1bf319{HTTP/1.1, (http/1.1)}{0.0.0.0:8080} (org.eclipse.jetty.server.AbstractConnector)
14:24:57 policy-db-migrator | --------------
14:24:57 policy-apex-pdp | max.poll.interval.ms = 300000
14:24:57 policy-apex-pdp | max.poll.records = 500
14:24:57 policy-pap | ssl.key.password = null
14:24:57 kafka | max.connections = 2147483647
14:24:57 zookeeper | [2024-04-25 14:22:16,890] INFO Started @626ms (org.eclipse.jetty.server.Server)
14:24:57 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscaproperty_constraints (parentLocalName VARCHAR(120) NULL, localName VARCHAR(120) NULL, parentKeyVersion VARCHAR(15) NULL, parentKeyName VARCHAR(120) NULL, CONSTRAINTS VARCHAR(255) NULL)
14:24:57 policy-apex-pdp | metadata.max.age.ms = 300000
14:24:57 policy-apex-pdp | metric.reporters = []
14:24:57 policy-pap | ssl.keymanager.algorithm = SunX509
14:24:57 kafka | max.connections.per.ip = 2147483647
14:24:57 zookeeper | [2024-04-25 14:22:16,890] INFO Started AdminServer on address 0.0.0.0, port 8080 and command URL /commands (org.apache.zookeeper.server.admin.JettyAdminServer)
14:24:57 policy-db-migrator | --------------
14:24:57 policy-apex-pdp | metrics.num.samples = 2
14:24:57 policy-apex-pdp | metrics.recording.level = INFO
14:24:57 kafka | max.connections.per.ip.overrides =
14:24:57 zookeeper | [2024-04-25 14:22:16,895] INFO Using org.apache.zookeeper.server.NIOServerCnxnFactory as server connection factory (org.apache.zookeeper.server.ServerCnxnFactory)
14:24:57 policy-db-migrator |
14:24:57 policy-apex-pdp | metrics.sample.window.ms = 30000
14:24:57 policy-apex-pdp | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor]
14:24:57 policy-pap | ssl.keystore.certificate.chain = null
14:24:57 kafka | max.incremental.fetch.session.cache.slots = 1000
14:24:57 zookeeper | [2024-04-25 14:22:16,896] WARN maxCnxns is not configured, using default value 0. (org.apache.zookeeper.server.ServerCnxnFactory)
14:24:57 policy-db-migrator |
14:24:57 policy-apex-pdp | receive.buffer.bytes = 65536
14:24:57 policy-apex-pdp | reconnect.backoff.max.ms = 1000
14:24:57 policy-pap | ssl.keystore.key = null
14:24:57 kafka | message.max.bytes = 1048588
14:24:57 zookeeper | [2024-04-25 14:22:16,898] INFO Configuring NIO connection handler with 10s sessionless connection timeout, 2 selector thread(s), 16 worker threads, and 64 kB direct buffers. (org.apache.zookeeper.server.NIOServerCnxnFactory)
14:24:57 policy-db-migrator | > upgrade 0360-jpatoscaproperty_metadata.sql
14:24:57 policy-apex-pdp | reconnect.backoff.ms = 50
14:24:57 policy-apex-pdp | request.timeout.ms = 30000
14:24:57 policy-pap | ssl.keystore.location = null
14:24:57 kafka | metadata.log.dir = null
14:24:57 zookeeper | [2024-04-25 14:22:16,899] INFO binding to port 0.0.0.0/0.0.0.0:2181 (org.apache.zookeeper.server.NIOServerCnxnFactory)
14:24:57 policy-db-migrator | --------------
14:24:57 policy-apex-pdp | retry.backoff.ms = 100
14:24:57 policy-apex-pdp | sasl.client.callback.handler.class = null
14:24:57 policy-pap | ssl.keystore.password = null
14:24:57 kafka | metadata.log.max.record.bytes.between.snapshots = 20971520
14:24:57 zookeeper | [2024-04-25 14:22:16,915] INFO Using org.apache.zookeeper.server.watch.WatchManager as watch manager (org.apache.zookeeper.server.watch.WatchManagerFactory)
14:24:57 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscaproperty_metadata (parentLocalName VARCHAR(120) NULL, localName VARCHAR(120) NULL, parentKeyVersion VARCHAR(15) NULL, parentKeyName VARCHAR(120) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL)
14:24:57 policy-apex-pdp | sasl.jaas.config = null
14:24:57 policy-apex-pdp | sasl.kerberos.kinit.cmd = /usr/bin/kinit
14:24:57 policy-pap | ssl.keystore.type = JKS
14:24:57 kafka | metadata.log.max.snapshot.interval.ms = 3600000
14:24:57 zookeeper | [2024-04-25 14:22:16,916] INFO Using org.apache.zookeeper.server.watch.WatchManager as watch manager (org.apache.zookeeper.server.watch.WatchManagerFactory)
14:24:57 policy-db-migrator | --------------
14:24:57 policy-apex-pdp | sasl.kerberos.min.time.before.relogin = 60000
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:17.683923197Z level=info msg="Migration successfully executed" id="drop table alert_rule_tag_v1" duration=698.28µs
14:24:57 policy-pap | ssl.protocol = TLSv1.3
14:24:57 kafka | metadata.log.segment.bytes = 1073741824
14:24:57 zookeeper | [2024-04-25 14:22:16,917] INFO zookeeper.snapshotSizeFactor = 0.33 (org.apache.zookeeper.server.ZKDatabase)
14:24:57 policy-db-migrator |
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:17.687762857Z level=info msg="Executing migration" id="create alert_notification table v1"
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:17.688602027Z level=info msg="Migration successfully executed" id="create alert_notification table v1" duration=838.76µs
14:24:57 policy-apex-pdp | sasl.kerberos.service.name = null
14:24:57 kafka | metadata.log.segment.min.bytes = 8388608
14:24:57 zookeeper | [2024-04-25 14:22:16,917] INFO zookeeper.commitLogCount=500 (org.apache.zookeeper.server.ZKDatabase)
14:24:57 policy-db-migrator |
14:24:57 policy-pap | ssl.provider = null
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:17.695501028Z level=info msg="Executing migration" id="Add column is_default"
14:24:57 policy-apex-pdp | sasl.kerberos.ticket.renew.jitter = 0.05
14:24:57 kafka | metadata.log.segment.ms = 604800000
14:24:57 zookeeper | [2024-04-25 14:22:16,961] INFO zookeeper.snapshot.compression.method = CHECKED (org.apache.zookeeper.server.persistence.SnapStream)
14:24:57 policy-db-migrator | > upgrade 0370-jpatoscarelationshiptype_metadata.sql
14:24:57 policy-pap | ssl.secure.random.implementation = null
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:17.699526061Z level=info msg="Migration successfully executed" id="Add column is_default" duration=4.025703ms
14:24:57 policy-apex-pdp | sasl.kerberos.ticket.renew.window.factor = 0.8
14:24:57 kafka | metadata.max.idle.interval.ms = 500
14:24:57 zookeeper | [2024-04-25 14:22:16,961] INFO Snapshotting: 0x0 to /var/lib/zookeeper/data/version-2/snapshot.0 (org.apache.zookeeper.server.persistence.FileTxnSnapLog)
14:24:57 policy-db-migrator | --------------
14:24:57 policy-pap | ssl.trustmanager.algorithm = PKIX
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:17.705370537Z level=info msg="Executing migration" id="Add column frequency"
14:24:57 policy-apex-pdp | sasl.login.callback.handler.class = null
14:24:57 kafka | metadata.max.retention.bytes = 104857600
14:24:57 zookeeper | [2024-04-25 14:22:16,966] INFO Snapshot loaded in 48 ms, highest zxid is 0x0, digest is 1371985504 (org.apache.zookeeper.server.ZKDatabase)
14:24:57 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarelationshiptype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL)
14:24:57 policy-pap | ssl.truststore.certificates = null
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:17.708937403Z level=info msg="Migration successfully executed" id="Add column frequency" duration=3.566316ms
14:24:57 policy-apex-pdp | sasl.login.class = null
14:24:57 kafka | metadata.max.retention.ms = 604800000
14:24:57 zookeeper | [2024-04-25 14:22:16,967] INFO Snapshotting: 0x0 to /var/lib/zookeeper/data/version-2/snapshot.0 (org.apache.zookeeper.server.persistence.FileTxnSnapLog)
14:24:57 policy-db-migrator | --------------
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:17.712492951Z level=info msg="Executing migration" id="Add column send_reminder"
14:24:57 kafka | metric.reporters = []
14:24:57 zookeeper | [2024-04-25 14:22:16,967] INFO Snapshot taken in 1 ms (org.apache.zookeeper.server.ZooKeeperServer)
14:24:57 policy-db-migrator |
14:24:57 policy-apex-pdp | sasl.login.connect.timeout.ms = null
14:24:57 policy-pap | ssl.truststore.location = null
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:17.716141298Z level=info msg="Migration successfully executed" id="Add column send_reminder" duration=3.647627ms
14:24:57 kafka | metrics.num.samples = 2
14:24:57 zookeeper | [2024-04-25 14:22:16,984] INFO PrepRequestProcessor (sid:0) started, reconfigEnabled=false (org.apache.zookeeper.server.PrepRequestProcessor)
14:24:57 policy-db-migrator |
14:24:57 policy-apex-pdp | sasl.login.read.timeout.ms = null
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:17.719739065Z level=info msg="Executing migration" id="Add column disable_resolve_message"
14:24:57 zookeeper | [2024-04-25 14:22:16,985] INFO zookeeper.request_throttler.shutdownTimeout = 10000 ms (org.apache.zookeeper.server.RequestThrottler)
14:24:57 policy-db-migrator | > upgrade 0380-jpatoscarelationshiptype_properties.sql
14:24:57 policy-apex-pdp | sasl.login.refresh.buffer.seconds = 300
14:24:57 kafka | metrics.recording.level = INFO
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:17.723324393Z level=info msg="Migration successfully executed" id="Add column disable_resolve_message" duration=3.582318ms
14:24:57 policy-db-migrator | --------------
14:24:57 policy-apex-pdp | sasl.login.refresh.min.period.seconds = 60
14:24:57 kafka | metrics.sample.window.ms = 30000
14:24:57 zookeeper | [2024-04-25 14:22:17,002] INFO Using checkIntervalMs=60000 maxPerMinute=10000 maxNeverUsedIntervalMs=0 (org.apache.zookeeper.server.ContainerManager)
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:17.728004504Z level=info msg="Executing migration" id="add index alert_notification org_id & name"
14:24:57 policy-apex-pdp | sasl.login.refresh.window.factor = 0.8
14:24:57 kafka | min.insync.replicas = 1
14:24:57 zookeeper | [2024-04-25 14:22:17,003] INFO ZooKeeper audit is disabled. (org.apache.zookeeper.audit.ZKAuditProvider)
14:24:57 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarelationshiptype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL)
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:17.728965107Z level=info msg="Migration successfully executed" id="add index alert_notification org_id & name" duration=960.044µs
14:24:57 policy-apex-pdp | sasl.login.refresh.window.jitter = 0.05
14:24:57 kafka | node.id = 1
14:24:57 zookeeper | [2024-04-25 14:22:21,025] INFO Creating new log file: log.1 (org.apache.zookeeper.server.persistence.FileTxnLog)
14:24:57 policy-db-migrator | --------------
14:24:57 policy-apex-pdp | sasl.login.retry.backoff.max.ms = 10000
14:24:57 kafka | num.io.threads = 8
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:17.735873337Z level=info msg="Executing migration" id="Update alert table charset"
14:24:57 policy-pap | ssl.truststore.password = null
14:24:57 policy-db-migrator |
14:24:57 policy-db-migrator |
14:24:57 kafka | num.network.threads = 3
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:17.735915867Z level=info msg="Migration successfully executed" id="Update alert table charset" duration=41.35µs
14:24:57 policy-pap | ssl.truststore.type = JKS
14:24:57 policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
14:24:57 kafka | num.partitions = 1
14:24:57 policy-apex-pdp | sasl.login.retry.backoff.ms = 100
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:17.742858679Z level=info msg="Executing migration" id="Update alert_notification table charset"
14:24:57 policy-db-migrator | > upgrade 0390-jpatoscarequirement_metadata.sql
14:24:57 policy-pap |
14:24:57 kafka | num.recovery.threads.per.data.dir = 1
14:24:57 policy-apex-pdp | sasl.mechanism = GSSAPI
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:17.742899459Z level=info msg="Migration successfully executed" id="Update alert_notification table charset" duration=42.391µs
14:24:57 policy-db-migrator | --------------
14:24:57 policy-pap | [2024-04-25T14:22:56.439+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1
14:24:57 kafka | num.replica.alter.log.dirs.threads = null
14:24:57 policy-apex-pdp | sasl.oauthbearer.clock.skew.seconds = 30
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:17.748567493Z level=info msg="Executing migration" id="create notification_journal table v1"
14:24:57 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarequirement_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL)
14:24:57 policy-pap | [2024-04-25T14:22:56.439+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5
14:24:57 kafka | num.replica.fetchers = 1
14:24:57 policy-apex-pdp | sasl.oauthbearer.expected.audience = null
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:17.74986554Z level=info msg="Migration successfully executed" id="create notification_journal table v1" duration=1.297327ms
14:24:57 policy-db-migrator | --------------
14:24:57 policy-pap | [2024-04-25T14:22:56.439+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1714054976437
14:24:57 kafka | offset.metadata.max.bytes = 4096
14:24:57 policy-apex-pdp | sasl.oauthbearer.expected.issuer = null
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:17.756141012Z level=info msg="Executing migration" id="add index notification_journal org_id & alert_id & notifier_id"
14:24:57 policy-db-migrator |
14:24:57 policy-pap | [2024-04-25T14:22:56.441+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-b957469a-2969-4bff-8555-1bfe3e4d4da0-1, groupId=b957469a-2969-4bff-8555-1bfe3e4d4da0] Subscribed to topic(s): policy-pdp-pap
14:24:57 kafka | offsets.commit.required.acks = -1
14:24:57 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:17.757737423Z level=info msg="Migration successfully executed" id="add index notification_journal org_id & alert_id & notifier_id" duration=1.596481ms
14:24:57 policy-db-migrator |
14:24:57 policy-pap | [2024-04-25T14:22:56.442+00:00|INFO|ConsumerConfig|main] ConsumerConfig values:
14:24:57 kafka | offsets.commit.timeout.ms = 5000
14:24:57 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:17.762852641Z level=info msg="Executing migration" id="drop alert_notification_journal"
14:24:57 policy-db-migrator | > upgrade 0400-jpatoscarequirement_occurrences.sql
14:24:57 policy-pap | allow.auto.create.topics = true
14:24:57 kafka | offsets.load.buffer.size = 5242880
14:24:57 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
14:24:57 policy-db-migrator | --------------
14:24:57 policy-pap | auto.commit.interval.ms = 5000
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:17.763799003Z level=info msg="Migration successfully executed" id="drop alert_notification_journal" duration=945.762µs
14:24:57 kafka | offsets.retention.check.interval.ms = 600000
14:24:57 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.url = null
14:24:57 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarequirement_occurrences (name VARCHAR(120) NULL, version VARCHAR(20) NULL, OCCURRENCES INT DEFAULT NULL)
14:24:57 policy-pap | auto.include.jmx.reporter = true
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:17.767284308Z level=info msg="Executing migration" id="create alert_notification_state table v1"
14:24:57 kafka | offsets.retention.minutes = 10080
14:24:57 policy-apex-pdp | sasl.oauthbearer.scope.claim.name = scope
14:24:57 policy-db-migrator | --------------
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:17.768331862Z level=info msg="Migration successfully executed" id="create alert_notification_state table v1" duration=1.047574ms
14:24:57 policy-apex-pdp | sasl.oauthbearer.sub.claim.name = sub
14:24:57 policy-db-migrator |
14:24:57 kafka | offsets.topic.compression.codec = 0
14:24:57 kafka | offsets.topic.num.partitions = 50
14:24:57 kafka | offsets.topic.replication.factor = 1
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:17.776418238Z level=info msg="Executing migration" id="add index alert_notification_state org_id & alert_id & notifier_id"
14:24:57 policy-apex-pdp | sasl.oauthbearer.token.endpoint.url = null
14:24:57 policy-pap | auto.offset.reset = latest
14:24:57 policy-pap | bootstrap.servers = [kafka:9092]
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:17.778028599Z level=info msg="Migration successfully executed" id="add index alert_notification_state org_id & alert_id & notifier_id" duration=1.608981ms
14:24:57 policy-db-migrator |
14:24:57 policy-apex-pdp | security.protocol = PLAINTEXT
14:24:57 kafka | offsets.topic.segment.bytes = 104857600
14:24:57 policy-pap | check.crcs = true
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:17.78428757Z level=info msg="Executing migration" id="Add for to alert table"
14:24:57 policy-db-migrator | > upgrade 0410-jpatoscarequirement_properties.sql
14:24:57 policy-apex-pdp | security.providers = null
14:24:57 kafka | password.encoder.cipher.algorithm = AES/CBC/PKCS5Padding
14:24:57 policy-pap | client.dns.lookup = use_all_dns_ips
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:17.788113311Z level=info msg="Migration successfully executed" id="Add for to alert table" duration=3.825451ms
14:24:57 policy-db-migrator | --------------
14:24:57 policy-apex-pdp | send.buffer.bytes = 131072
14:24:57 kafka | password.encoder.iterations = 4096
14:24:57 policy-pap | client.id = consumer-policy-pap-2
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:17.792571779Z level=info msg="Executing migration" id="Add column uid in alert_notification"
14:24:57 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarequirement_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGTEXT NULL, PROPERTIES_KEY VARCHAR(255) NULL)
14:24:57 policy-apex-pdp | session.timeout.ms = 45000
14:24:57 kafka | password.encoder.key.length = 128
14:24:57 policy-pap | client.rack =
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:17.796324229Z level=info msg="Migration successfully executed" id="Add column uid in alert_notification" duration=3.7503ms
14:24:57 policy-db-migrator | --------------
14:24:57 policy-apex-pdp | socket.connection.setup.timeout.max.ms = 30000
14:24:57 kafka | password.encoder.keyfactory.algorithm = null
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:17.801949193Z level=info msg="Executing migration" id="Update uid column values in alert_notification"
14:24:57 policy-apex-pdp | socket.connection.setup.timeout.ms = 10000
14:24:57 kafka | password.encoder.old.secret = null
14:24:57 policy-db-migrator |
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:17.802321017Z level=info msg="Migration successfully executed" id="Update uid column values in alert_notification" duration=371.654µs
14:24:57 kafka | password.encoder.secret = null
14:24:57 policy-db-migrator |
14:24:57 policy-apex-pdp | ssl.cipher.suites = null
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:17.80636075Z level=info msg="Executing migration" id="Add unique index alert_notification_org_id_uid"
14:24:57 policy-db-migrator | > upgrade 0420-jpatoscaservicetemplate_metadata.sql
14:24:57 policy-apex-pdp | ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
14:24:57 kafka | principal.builder.class = class org.apache.kafka.common.security.authenticator.DefaultKafkaPrincipalBuilder
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:17.807997282Z level=info msg="Migration successfully executed" id="Add unique index alert_notification_org_id_uid" duration=1.636272ms
14:24:57 policy-apex-pdp | ssl.endpoint.identification.algorithm = https
14:24:57 kafka | process.roles = []
14:24:57 policy-db-migrator | --------------
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:17.815281916Z level=info msg="Executing migration" id="Remove unique index org_id_name"
14:24:57 kafka | producer.id.expiration.check.interval.ms = 600000
14:24:57 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscaservicetemplate_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL)
14:24:57 policy-apex-pdp | ssl.engine.factory.class = null
14:24:57 policy-pap | connections.max.idle.ms = 540000
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:17.817154871Z level=info msg="Migration successfully executed" id="Remove unique index org_id_name" duration=1.872405ms
14:24:57 kafka | producer.id.expiration.ms = 86400000
14:24:57 policy-db-migrator | --------------
14:24:57 policy-apex-pdp | ssl.key.password = null
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:17.824015812Z level=info msg="Executing migration" id="Add column secure_settings in alert_notification"
14:24:57 policy-db-migrator |
14:24:57 policy-apex-pdp | ssl.keymanager.algorithm = SunX509
14:24:57 kafka | producer.purgatory.purge.interval.requests = 1000
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:17.827815901Z level=info msg="Migration successfully executed" id="Add column secure_settings in alert_notification" duration=3.800319ms
14:24:57 policy-apex-pdp | ssl.keystore.certificate.chain = null
14:24:57 kafka | queued.max.request.bytes = -1
14:24:57 policy-db-migrator |
14:24:57 policy-pap | default.api.timeout.ms = 60000
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:17.910022778Z level=info msg="Executing migration" id="alter alert.settings to mediumtext"
14:24:57 policy-apex-pdp | ssl.keystore.key = null
14:24:57 kafka | queued.max.requests = 500
14:24:57 policy-db-migrator | > upgrade 0430-jpatoscatopologytemplate_inputs.sql
14:24:57 policy-pap | enable.auto.commit = true
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:17.910102929Z level=info msg="Migration successfully executed" id="alter alert.settings to mediumtext" duration=81.591µs
14:24:57 policy-apex-pdp | ssl.keystore.location = null
14:24:57 kafka | quota.window.num = 11
14:24:57 policy-db-migrator | --------------
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:17.918493569Z level=info msg="Executing migration" id="Add non-unique index alert_notification_state_alert_id"
14:24:57 kafka | quota.window.size.seconds = 1
14:24:57 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscatopologytemplate_inputs (parentLocalName VARCHAR(120) NULL, localName VARCHAR(120) NULL, parentKeyVersion VARCHAR(15) NULL, parentKeyName VARCHAR(120) NULL, INPUTS LONGBLOB DEFAULT NULL, INPUTS_KEY VARCHAR(255) NULL)
14:24:57 policy-apex-pdp | ssl.keystore.password = null
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:17.919876857Z level=info msg="Migration successfully executed" id="Add non-unique index alert_notification_state_alert_id" duration=1.397449ms
14:24:57 kafka | remote.log.index.file.cache.total.size.bytes = 1073741824
14:24:57 policy-db-migrator | --------------
14:24:57 policy-apex-pdp | ssl.keystore.type = JKS
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:17.927621939Z level=info msg="Executing migration" id="Add non-unique index alert_rule_tag_alert_id"
14:24:57 kafka | remote.log.manager.task.interval.ms = 30000
14:24:57 policy-db-migrator |
14:24:57 policy-apex-pdp | ssl.protocol = TLSv1.3
14:24:57 policy-pap | exclude.internal.topics = true
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:17.928588971Z level=info msg="Migration successfully executed" id="Add non-unique index alert_rule_tag_alert_id" duration=962.622µs
14:24:57 kafka | remote.log.manager.task.retry.backoff.max.ms = 30000
14:24:57 policy-db-migrator |
14:24:57 policy-apex-pdp | ssl.provider = null
14:24:57 policy-pap | fetch.max.bytes = 52428800
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:17.936666347Z level=info msg="Executing migration" id="Drop old annotation table v4"
14:24:57 kafka | remote.log.manager.task.retry.backoff.ms = 500
14:24:57 policy-db-migrator | > upgrade 0440-pdpgroup_pdpsubgroup.sql
14:24:57 policy-apex-pdp | ssl.secure.random.implementation = null
14:24:57 policy-pap | fetch.max.wait.ms = 500
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:17.937011622Z level=info msg="Migration successfully executed" id="Drop old annotation table v4" duration=345.145µs
14:24:57 kafka | remote.log.manager.task.retry.jitter = 0.2
14:24:57 policy-db-migrator | --------------
14:24:57 policy-apex-pdp | ssl.trustmanager.algorithm = PKIX
14:24:57 policy-pap | fetch.min.bytes = 1
14:24:57 policy-pap | group.id = policy-pap
14:24:57 kafka | remote.log.manager.thread.pool.size = 10
14:24:57 policy-pap | group.instance.id = null
14:24:57 policy-apex-pdp | ssl.truststore.certificates = null
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:17.942014697Z level=info msg="Executing migration" id="create annotation table v5"
14:24:57 kafka | remote.log.metadata.custom.metadata.max.bytes = 128
14:24:57 policy-apex-pdp | ssl.truststore.location = null
14:24:57 policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpgroup_pdpsubgroup (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_PDPGROUP_PDPSUBGROUP (name, version, parentLocalName, localName, parentKeyVersion, parentKeyName))
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:17.943085061Z level=info msg="Migration successfully executed" id="create annotation table v5" duration=1.069824ms
14:24:57 kafka | remote.log.metadata.manager.class.name = org.apache.kafka.server.log.remote.metadata.storage.TopicBasedRemoteLogMetadataManager
14:24:57 policy-apex-pdp | ssl.truststore.password = null
14:24:57 policy-db-migrator | --------------
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:17.949769408Z level=info msg="Executing migration" id="add index annotation 0 v3"
14:24:57 kafka | remote.log.metadata.manager.class.path = null
14:24:57 policy-pap | heartbeat.interval.ms = 3000
14:24:57 policy-apex-pdp | ssl.truststore.type = JKS
14:24:57 policy-db-migrator |
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:17.951226928Z level=info msg="Migration successfully executed" id="add index annotation 0 v3" duration=1.45996ms
14:24:57 kafka | remote.log.metadata.manager.impl.prefix = rlmm.config.
14:24:57 policy-pap | interceptor.classes = []
14:24:57 policy-apex-pdp | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
14:24:57 policy-db-migrator |
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:17.957407888Z level=info msg="Executing migration" id="add index annotation 1 v3"
14:24:57 kafka | remote.log.metadata.manager.listener.name = null
14:24:57 policy-pap | internal.leave.group.on.close = true
14:24:57 policy-apex-pdp |
14:24:57 policy-db-migrator | > upgrade 0450-pdpgroup.sql
14:24:57 kafka | remote.log.reader.max.pending.tasks = 100
14:24:57 policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false
14:24:57 policy-apex-pdp | [2024-04-25T14:22:59.311+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:17.95904102Z level=info msg="Migration successfully executed" id="add index annotation 1 v3" duration=1.630602ms
14:24:57 policy-db-migrator | --------------
14:24:57 kafka | remote.log.reader.threads = 10
14:24:57 policy-pap | isolation.level = read_uncommitted
14:24:57 policy-apex-pdp | [2024-04-25T14:22:59.311+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:17.965890999Z level=info msg="Executing migration" id="add index annotation 2 v3"
14:24:57 policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpgroup (`DESCRIPTION` VARCHAR(255) NULL, PDPGROUPSTATE INT DEFAULT NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_PDPGROUP (name, version))
14:24:57 kafka | remote.log.storage.manager.class.name = null
14:24:57 policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
14:24:57 policy-apex-pdp | [2024-04-25T14:22:59.311+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1714054979310
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:17.971381021Z level=info msg="Migration successfully executed" id="add index annotation 2 v3" duration=5.486192ms
14:24:57 policy-db-migrator | --------------
14:24:57 policy-pap | max.partition.fetch.bytes = 1048576
14:24:57 policy-apex-pdp | [2024-04-25T14:22:59.311+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-5f0ab5d6-63b3-4b5a-a200-3d330f0096ce-2, groupId=5f0ab5d6-63b3-4b5a-a200-3d330f0096ce] Subscribed to topic(s): policy-pdp-pap
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:17.982072492Z level=info msg="Executing migration" id="add index annotation 3 v3"
14:24:57 kafka | remote.log.storage.manager.class.path = null
14:24:57 policy-db-migrator |
14:24:57 policy-pap | max.poll.interval.ms = 300000
14:24:57 policy-apex-pdp | [2024-04-25T14:22:59.313+00:00|INFO|InlineBusTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=a70cbd7d-fac1-4b6c-9376-616c76b1b351, alive=false, publisher=null]]: starting
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:17.983183636Z level=info msg="Migration successfully executed" id="add index annotation 3 v3" duration=1.114764ms
14:24:57 kafka | remote.log.storage.manager.impl.prefix = rsm.config.
14:24:57 policy-db-migrator |
14:24:57 policy-pap | max.poll.records = 500
14:24:57 policy-apex-pdp | [2024-04-25T14:22:59.326+00:00|INFO|ProducerConfig|main] ProducerConfig values:
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:17.98806602Z level=info msg="Executing migration" id="add index annotation 4 v3"
14:24:57 kafka | remote.log.storage.system.enable = false
14:24:57 policy-db-migrator | > upgrade 0460-pdppolicystatus.sql
14:24:57 policy-pap | metadata.max.age.ms = 300000
14:24:57 policy-apex-pdp | acks = -1
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:17.989727122Z level=info msg="Migration successfully executed" id="add index annotation 4 v3" duration=1.660242ms
14:24:57 kafka | replica.fetch.backoff.ms = 1000
14:24:57 policy-db-migrator | --------------
14:24:57 policy-pap | metric.reporters = []
14:24:57 policy-apex-pdp | auto.include.jmx.reporter = true
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:17.996005764Z level=info msg="Executing migration" id="Update annotation table charset"
14:24:57 kafka | replica.fetch.max.bytes = 1048576
14:24:57 policy-db-migrator | CREATE TABLE IF NOT EXISTS pdppolicystatus (DEPLOY BOOLEAN DEFAULT 0, PDPGROUP VARCHAR(255) DEFAULT NULL, PDPTYPE VARCHAR(255) DEFAULT NULL, STATE INT DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, PRIMARY KEY PK_PDPPOLICYSTATUS (parentLocalName, localName, parentKeyVersion, parentKeyName))
14:24:57 policy-pap | metrics.num.samples = 2
14:24:57 policy-apex-pdp | batch.size = 16384
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:17.996035434Z level=info msg="Migration successfully executed" id="Update annotation table charset" duration=38.1µs
14:24:57 kafka | replica.fetch.min.bytes = 1
14:24:57 policy-db-migrator | --------------
14:24:57 policy-apex-pdp | bootstrap.servers = [kafka:9092]
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:17.999256317Z level=info msg="Executing migration" id="Add column region_id to annotation table"
14:24:57 kafka | replica.fetch.response.max.bytes = 10485760
14:24:57 policy-db-migrator |
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:18.0032603Z level=info msg="Migration successfully executed" id="Add column region_id to annotation table" duration=4.003723ms
14:24:57 kafka | replica.fetch.wait.max.ms = 500
14:24:57 policy-apex-pdp | buffer.memory = 33554432
14:24:57 policy-db-migrator |
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:18.009804177Z level=info msg="Executing migration" id="Drop category_id index"
14:24:57 policy-apex-pdp | client.dns.lookup = use_all_dns_ips
14:24:57 kafka | replica.high.watermark.checkpoint.interval.ms = 5000
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:18.011107844Z level=info msg="Migration successfully executed" id="Drop category_id index" duration=1.304987ms
14:24:57 policy-apex-pdp | client.id = producer-1
14:24:57 policy-db-migrator | > upgrade 0470-pdp.sql
14:24:57 policy-pap | metrics.recording.level = INFO
14:24:57 kafka | replica.lag.time.max.ms = 30000
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:18.015878337Z level=info msg="Executing migration" id="Add column tags to annotation table"
14:24:57 policy-apex-pdp | compression.type = none
14:24:57 policy-db-migrator | --------------
14:24:57 policy-pap | metrics.sample.window.ms = 30000
14:24:57 kafka | replica.selector.class = null
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:18.021999678Z level=info msg="Migration successfully executed" id="Add column tags to annotation table" duration=6.120991ms
14:24:57 policy-apex-pdp | connections.max.idle.ms = 540000
14:24:57 policy-db-migrator | CREATE TABLE IF NOT EXISTS pdp (HEALTHY INT DEFAULT NULL, MESSAGE VARCHAR(255) DEFAULT NULL, PDPSTATE INT DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_PDP (parentLocalName, localName, parentKeyVersion, parentKeyName))
14:24:57 policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor]
14:24:57 kafka | replica.socket.receive.buffer.bytes = 65536
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:18.028651537Z level=info msg="Executing migration" id="Create annotation_tag table v2"
14:24:57 policy-apex-pdp | delivery.timeout.ms = 120000
14:24:57 policy-db-migrator | --------------
14:24:57 policy-pap | receive.buffer.bytes = 65536
14:24:57 kafka | replica.socket.timeout.ms = 30000
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:18.029716251Z level=info msg="Migration successfully executed" id="Create annotation_tag table v2" duration=1.064003ms
14:24:57 policy-apex-pdp | enable.idempotence = true
14:24:57 policy-db-migrator |
14:24:57 policy-pap | reconnect.backoff.max.ms = 1000
14:24:57 kafka | replication.quota.window.num = 11
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:18.032759831Z level=info msg="Executing migration" id="Add unique index annotation_tag.annotation_id_tag_id"
14:24:57 policy-apex-pdp | interceptor.classes = []
14:24:57 policy-db-migrator |
14:24:57 policy-pap | reconnect.backoff.ms = 50
14:24:57 kafka | replication.quota.window.size.seconds = 1
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:18.033625762Z level=info msg="Migration successfully executed" id="Add unique index annotation_tag.annotation_id_tag_id" duration=866.121µs
14:24:57 policy-apex-pdp | key.serializer = class org.apache.kafka.common.serialization.StringSerializer
14:24:57 policy-db-migrator | > upgrade 0480-pdpstatistics.sql
14:24:57 policy-pap | request.timeout.ms = 30000
14:24:57 kafka | request.timeout.ms = 30000
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:18.168294207Z level=info msg="Executing migration" id="drop index UQE_annotation_tag_annotation_id_tag_id - v2"
14:24:57 policy-apex-pdp | linger.ms = 0
14:24:57 policy-db-migrator | --------------
14:24:57 policy-pap | retry.backoff.ms = 100
14:24:57 kafka | reserved.broker.max.id = 1000
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:18.169658016Z level=info msg="Migration successfully executed" id="drop index UQE_annotation_tag_annotation_id_tag_id - v2" duration=1.367678ms
14:24:57 policy-apex-pdp | max.block.ms = 60000
14:24:57 policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpstatistics (PDPGROUPNAME VARCHAR(120) NULL, PDPSUBGROUPNAME VARCHAR(120) NULL, POLICYDEPLOYCOUNT BIGINT DEFAULT NULL, POLICYDEPLOYFAILCOUNT BIGINT DEFAULT NULL, POLICYDEPLOYSUCCESSCOUNT BIGINT DEFAULT NULL, POLICYEXECUTEDCOUNT BIGINT DEFAULT NULL, POLICYEXECUTEDFAILCOUNT BIGINT DEFAULT NULL, POLICYEXECUTEDSUCCESSCOUNT BIGINT DEFAULT NULL, timeStamp datetime NOT NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_PDPSTATISTICS (timeStamp, name, version))
14:24:57 policy-pap | sasl.client.callback.handler.class = null
14:24:57 kafka | sasl.client.callback.handler.class = null
14:24:57 policy-apex-pdp | max.in.flight.requests.per.connection = 5
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:18.256572738Z level=info msg="Executing migration" id="Rename table annotation_tag to annotation_tag_v2 - v2"
14:24:57 policy-db-migrator | --------------
14:24:57 kafka | sasl.enabled.mechanisms = [GSSAPI]
14:24:57 policy-apex-pdp | max.request.size = 1048576
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:18.271208362Z level=info msg="Migration successfully executed" id="Rename table annotation_tag to annotation_tag_v2 - v2" duration=14.637284ms
14:24:57 policy-db-migrator |
14:24:57 kafka | sasl.jaas.config = null
14:24:57 policy-apex-pdp | metadata.max.age.ms = 300000
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:18.274491055Z level=info msg="Executing migration" id="Create annotation_tag table v3"
14:24:57 policy-db-migrator |
14:24:57 kafka | sasl.kerberos.kinit.cmd = /usr/bin/kinit
14:24:57 policy-apex-pdp | metadata.max.idle.ms = 300000
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:18.274980792Z level=info msg="Migration successfully executed" id="Create annotation_tag table v3" duration=489.667µs
14:24:57 policy-db-migrator | > upgrade 0490-pdpsubgroup_pdp.sql
14:24:57 kafka | sasl.kerberos.min.time.before.relogin = 60000
14:24:57 policy-apex-pdp | metric.reporters = []
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:18.326777679Z level=info msg="Executing migration" id="create index UQE_annotation_tag_annotation_id_tag_id - Add unique index annotation_tag.annotation_id_tag_id V3"
14:24:57 policy-db-migrator | --------------
14:24:57 kafka | sasl.kerberos.principal.to.local.rules = [DEFAULT]
14:24:57 policy-apex-pdp | metrics.num.samples = 2
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:18.327382636Z level=info msg="Migration successfully executed" id="create index UQE_annotation_tag_annotation_id_tag_id - Add unique index annotation_tag.annotation_id_tag_id V3" duration=606.477µs
14:24:57 kafka | sasl.kerberos.service.name = null
14:24:57 policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpsubgroup_pdp (pdpParentKeyName VARCHAR(120) NOT NULL, pdpParentKeyVersion VARCHAR(15) NOT NULL, pdpParentLocalName VARCHAR(120) NOT NULL, pdpLocalName VARCHAR(120) NOT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_PDPSUBGROUP_PDP (pdpParentKeyName, pdpParentKeyVersion, pdpParentLocalName, pdpLocalName, parentLocalName, localName, parentKeyVersion, parentKeyName))
14:24:57 policy-apex-pdp | metrics.recording.level = INFO
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:18.33600357Z level=info msg="Executing migration" id="copy annotation_tag v2 to v3"
14:24:57 kafka |
sasl.kerberos.ticket.renew.jitter = 0.05 14:24:57 policy-db-migrator | -------------- 14:24:57 policy-apex-pdp | metrics.sample.window.ms = 30000 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:18.336455046Z level=info msg="Migration successfully executed" id="copy annotation_tag v2 to v3" duration=451.866µs 14:24:57 kafka | sasl.kerberos.ticket.renew.window.factor = 0.8 14:24:57 policy-db-migrator | 14:24:57 policy-apex-pdp | partitioner.adaptive.partitioning.enable = true 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:18.405059806Z level=info msg="Executing migration" id="drop table annotation_tag_v2" 14:24:57 kafka | sasl.login.callback.handler.class = null 14:24:57 policy-db-migrator | 14:24:57 policy-apex-pdp | partitioner.availability.timeout.ms = 0 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:18.405545903Z level=info msg="Migration successfully executed" id="drop table annotation_tag_v2" duration=489.227µs 14:24:57 kafka | sasl.login.class = null 14:24:57 policy-db-migrator | > upgrade 0500-pdpsubgroup.sql 14:24:57 policy-apex-pdp | partitioner.class = null 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:18.475039614Z level=info msg="Executing migration" id="Update alert annotations and set TEXT to empty" 14:24:57 kafka | sasl.login.connect.timeout.ms = null 14:24:57 policy-db-migrator | -------------- 14:24:57 policy-apex-pdp | partitioner.ignore.keys = false 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:18.47546397Z level=info msg="Migration successfully executed" id="Update alert annotations and set TEXT to empty" duration=426.756µs 14:24:57 kafka | sasl.login.read.timeout.ms = null 14:24:57 policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpsubgroup (CURRENTINSTANCECOUNT INT DEFAULT NULL, DESIREDINSTANCECOUNT INT DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_PDPSUBGROUP (parentLocalName, 
localName, parentKeyVersion, parentKeyName)) 14:24:57 policy-apex-pdp | receive.buffer.bytes = 32768 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:18.6459479Z level=info msg="Executing migration" id="Add created time to annotation table" 14:24:57 policy-pap | sasl.jaas.config = null 14:24:57 kafka | sasl.login.refresh.buffer.seconds = 300 14:24:57 policy-db-migrator | -------------- 14:24:57 policy-apex-pdp | reconnect.backoff.max.ms = 1000 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:18.652470306Z level=info msg="Migration successfully executed" id="Add created time to annotation table" duration=6.526376ms 14:24:57 policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit 14:24:57 kafka | sasl.login.refresh.min.period.seconds = 60 14:24:57 policy-db-migrator | 14:24:57 policy-apex-pdp | reconnect.backoff.ms = 50 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:18.666099137Z level=info msg="Executing migration" id="Add updated time to annotation table" 14:24:57 policy-pap | sasl.kerberos.min.time.before.relogin = 60000 14:24:57 kafka | sasl.login.refresh.window.factor = 0.8 14:24:57 policy-db-migrator | 14:24:57 policy-apex-pdp | request.timeout.ms = 30000 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:18.673197392Z level=info msg="Migration successfully executed" id="Add updated time to annotation table" duration=7.097285ms 14:24:57 policy-pap | sasl.kerberos.service.name = null 14:24:57 kafka | sasl.login.refresh.window.jitter = 0.05 14:24:57 policy-db-migrator | > upgrade 0510-toscacapabilityassignment.sql 14:24:57 policy-apex-pdp | retries = 2147483647 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:18.937590497Z level=info msg="Executing migration" id="Add index for created in annotation table" 14:24:57 policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 14:24:57 kafka | sasl.login.retry.backoff.max.ms = 10000 14:24:57 policy-db-migrator | -------------- 14:24:57 policy-apex-pdp | retry.backoff.ms = 100 14:24:57 grafana | 
logger=migrator t=2024-04-25T14:22:18.939515612Z level=info msg="Migration successfully executed" id="Add index for created in annotation table" duration=1.924375ms 14:24:57 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 14:24:57 kafka | sasl.login.retry.backoff.ms = 100 14:24:57 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilityassignment (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, type_name VARCHAR(255) NULL, type_version VARCHAR(255) NULL, PRIMARY KEY PK_TOSCACAPABILITYASSIGNMENT(name, version)) 14:24:57 policy-apex-pdp | sasl.client.callback.handler.class = null 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:19.033146107Z level=info msg="Executing migration" id="Add index for updated in annotation table" 14:24:57 policy-pap | sasl.login.callback.handler.class = null 14:24:57 kafka | sasl.mechanism.controller.protocol = GSSAPI 14:24:57 policy-db-migrator | -------------- 14:24:57 policy-apex-pdp | sasl.jaas.config = null 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:19.034433745Z level=info msg="Migration successfully executed" id="Add index for updated in annotation table" duration=1.290078ms 14:24:57 policy-pap | sasl.login.class = null 14:24:57 kafka | sasl.mechanism.inter.broker.protocol = GSSAPI 14:24:57 policy-db-migrator | 14:24:57 policy-apex-pdp | sasl.kerberos.kinit.cmd = /usr/bin/kinit 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:19.129061077Z level=info msg="Executing migration" id="Convert existing annotations from seconds to milliseconds" 14:24:57 policy-pap | sasl.login.connect.timeout.ms = null 14:24:57 kafka | sasl.oauthbearer.clock.skew.seconds = 30 14:24:57 policy-db-migrator | 14:24:57 policy-apex-pdp | sasl.kerberos.min.time.before.relogin = 60000 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:19.129372612Z level=info msg="Migration successfully executed" 
id="Convert existing annotations from seconds to milliseconds" duration=313.095µs 14:24:57 policy-pap | sasl.login.read.timeout.ms = null 14:24:57 kafka | sasl.oauthbearer.expected.audience = null 14:24:57 policy-db-migrator | > upgrade 0520-toscacapabilityassignments.sql 14:24:57 policy-apex-pdp | sasl.kerberos.service.name = null 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:19.165349412Z level=info msg="Executing migration" id="Add epoch_end column" 14:24:57 kafka | sasl.oauthbearer.expected.issuer = null 14:24:57 policy-db-migrator | -------------- 14:24:57 policy-apex-pdp | sasl.kerberos.ticket.renew.jitter = 0.05 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:19.168540004Z level=info msg="Migration successfully executed" id="Add epoch_end column" duration=3.192942ms 14:24:57 policy-pap | sasl.login.refresh.buffer.seconds = 300 14:24:57 kafka | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 14:24:57 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilityassignments (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCACAPABILITYASSIGNMENTS (name, version)) 14:24:57 policy-apex-pdp | sasl.kerberos.ticket.renew.window.factor = 0.8 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:19.275608805Z level=info msg="Executing migration" id="Add index for epoch_end" 14:24:57 policy-pap | sasl.login.refresh.min.period.seconds = 60 14:24:57 kafka | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 14:24:57 policy-db-migrator | -------------- 14:24:57 policy-apex-pdp | sasl.login.callback.handler.class = null 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:19.276831791Z level=info msg="Migration successfully executed" id="Add index for epoch_end" duration=1.225196ms 14:24:57 policy-pap | sasl.login.refresh.window.factor = 0.8 14:24:57 kafka | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 14:24:57 policy-db-migrator | 14:24:57 policy-apex-pdp | sasl.login.class = null 14:24:57 grafana | 
logger=migrator t=2024-04-25T14:22:19.310283147Z level=info msg="Executing migration" id="Make epoch_end the same as epoch" 14:24:57 policy-pap | sasl.login.refresh.window.jitter = 0.05 14:24:57 kafka | sasl.oauthbearer.jwks.endpoint.url = null 14:24:57 policy-db-migrator | 14:24:57 policy-apex-pdp | sasl.login.connect.timeout.ms = null 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:19.310427509Z level=info msg="Migration successfully executed" id="Make epoch_end the same as epoch" duration=144.932µs 14:24:57 policy-pap | sasl.login.retry.backoff.max.ms = 10000 14:24:57 kafka | sasl.oauthbearer.scope.claim.name = scope 14:24:57 policy-db-migrator | > upgrade 0530-toscacapabilityassignments_toscacapabilityassignment.sql 14:24:57 policy-apex-pdp | sasl.login.read.timeout.ms = null 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:19.314483504Z level=info msg="Executing migration" id="Move region to single row" 14:24:57 policy-pap | sasl.login.retry.backoff.ms = 100 14:24:57 kafka | sasl.oauthbearer.sub.claim.name = sub 14:24:57 policy-db-migrator | -------------- 14:24:57 policy-apex-pdp | sasl.login.refresh.buffer.seconds = 300 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:19.315047921Z level=info msg="Migration successfully executed" id="Move region to single row" duration=564.797µs 14:24:57 kafka | sasl.oauthbearer.token.endpoint.url = null 14:24:57 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilityassignments_toscacapabilityassignment (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCACAPABILITYASSIGNMENTS_TOSCACAPABILITYASSIGNMENT (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) 14:24:57 policy-apex-pdp | sasl.login.refresh.min.period.seconds = 60 14:24:57 grafana | 
logger=migrator t=2024-04-25T14:22:19.343227917Z level=info msg="Executing migration" id="Remove index org_id_epoch from annotation table" 14:24:57 kafka | sasl.server.callback.handler.class = null 14:24:57 policy-db-migrator | -------------- 14:24:57 policy-apex-pdp | sasl.login.refresh.window.factor = 0.8 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:19.344583075Z level=info msg="Migration successfully executed" id="Remove index org_id_epoch from annotation table" duration=1.355788ms 14:24:57 policy-pap | sasl.mechanism = GSSAPI 14:24:57 kafka | sasl.server.max.receive.size = 524288 14:24:57 policy-apex-pdp | sasl.login.refresh.window.jitter = 0.05 14:24:57 policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 14:24:57 kafka | security.inter.broker.protocol = PLAINTEXT 14:24:57 policy-db-migrator | 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:19.351473798Z level=info msg="Executing migration" id="Remove index org_id_dashboard_id_panel_id_epoch from annotation table" 14:24:57 policy-apex-pdp | sasl.login.retry.backoff.max.ms = 10000 14:24:57 policy-pap | sasl.oauthbearer.expected.audience = null 14:24:57 kafka | security.providers = null 14:24:57 policy-db-migrator | 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:19.352095336Z level=info msg="Migration successfully executed" id="Remove index org_id_dashboard_id_panel_id_epoch from annotation table" duration=621.868µs 14:24:57 policy-apex-pdp | sasl.login.retry.backoff.ms = 100 14:24:57 policy-pap | sasl.oauthbearer.expected.issuer = null 14:24:57 kafka | server.max.startup.time.ms = 9223372036854775807 14:24:57 policy-db-migrator | > upgrade 0540-toscacapabilitytype.sql 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:19.501669813Z level=info msg="Executing migration" id="Add index for org_id_dashboard_id_epoch_end_epoch on annotation table" 14:24:57 policy-apex-pdp | sasl.mechanism = GSSAPI 14:24:57 policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 14:24:57 kafka | 
socket.connection.setup.timeout.max.ms = 30000 14:24:57 policy-db-migrator | -------------- 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:19.503240314Z level=info msg="Migration successfully executed" id="Add index for org_id_dashboard_id_epoch_end_epoch on annotation table" duration=1.571901ms 14:24:57 policy-apex-pdp | sasl.oauthbearer.clock.skew.seconds = 30 14:24:57 kafka | socket.connection.setup.timeout.ms = 10000 14:24:57 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilitytype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCACAPABILITYTYPE (name, version)) 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:19.58828865Z level=info msg="Executing migration" id="Add index for org_id_epoch_end_epoch on annotation table" 14:24:57 policy-apex-pdp | sasl.oauthbearer.expected.audience = null 14:24:57 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 14:24:57 kafka | socket.listen.backlog.size = 50 14:24:57 policy-db-migrator | -------------- 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:19.58978241Z level=info msg="Migration successfully executed" id="Add index for org_id_epoch_end_epoch on annotation table" duration=1.49404ms 14:24:57 policy-apex-pdp | sasl.oauthbearer.expected.issuer = null 14:24:57 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 14:24:57 kafka | socket.receive.buffer.bytes = 102400 14:24:57 policy-db-migrator | 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:19.754128605Z level=info msg="Executing migration" id="Remove index org_id_epoch_epoch_end from annotation table" 14:24:57 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 14:24:57 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null 14:24:57 kafka | socket.request.max.bytes = 104857600 14:24:57 policy-db-migrator | 14:24:57 grafana | logger=migrator 
t=2024-04-25T14:22:19.755666115Z level=info msg="Migration successfully executed" id="Remove index org_id_epoch_epoch_end from annotation table" duration=1.53671ms 14:24:57 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 14:24:57 policy-pap | sasl.oauthbearer.scope.claim.name = scope 14:24:57 kafka | socket.send.buffer.bytes = 102400 14:24:57 policy-db-migrator | > upgrade 0550-toscacapabilitytypes.sql 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:19.945855405Z level=info msg="Executing migration" id="Add index for alert_id on annotation table" 14:24:57 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 14:24:57 kafka | ssl.cipher.suites = [] 14:24:57 policy-db-migrator | -------------- 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:19.946672885Z level=info msg="Migration successfully executed" id="Add index for alert_id on annotation table" duration=819.38µs 14:24:57 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.url = null 14:24:57 kafka | ssl.client.auth = none 14:24:57 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilitytypes (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCACAPABILITYTYPES (name, version)) 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:20.294906396Z level=info msg="Executing migration" id="Increase tags column to length 4096" 14:24:57 policy-apex-pdp | sasl.oauthbearer.scope.claim.name = scope 14:24:57 policy-pap | sasl.oauthbearer.sub.claim.name = sub 14:24:57 kafka | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 14:24:57 policy-db-migrator | -------------- 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:20.295115049Z level=info msg="Migration successfully executed" id="Increase tags column to length 4096" duration=212.433µs 14:24:57 policy-apex-pdp | sasl.oauthbearer.sub.claim.name = sub 14:24:57 policy-pap | sasl.oauthbearer.token.endpoint.url = null 14:24:57 kafka | ssl.endpoint.identification.algorithm = https 14:24:57 
policy-db-migrator | 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:20.383367017Z level=info msg="Executing migration" id="create test_data table" 14:24:57 policy-apex-pdp | sasl.oauthbearer.token.endpoint.url = null 14:24:57 policy-pap | security.protocol = PLAINTEXT 14:24:57 kafka | ssl.engine.factory.class = null 14:24:57 policy-db-migrator | 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:20.384966529Z level=info msg="Migration successfully executed" id="create test_data table" duration=1.600632ms 14:24:57 kafka | ssl.key.password = null 14:24:57 policy-apex-pdp | security.protocol = PLAINTEXT 14:24:57 policy-db-migrator | > upgrade 0560-toscacapabilitytypes_toscacapabilitytype.sql 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:20.402019856Z level=info msg="Executing migration" id="create dashboard_version table v1" 14:24:57 policy-pap | security.providers = null 14:24:57 kafka | ssl.keymanager.algorithm = SunX509 14:24:57 policy-apex-pdp | security.providers = null 14:24:57 policy-db-migrator | -------------- 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:20.403349624Z level=info msg="Migration successfully executed" id="create dashboard_version table v1" duration=1.329178ms 14:24:57 policy-pap | send.buffer.bytes = 131072 14:24:57 kafka | ssl.keystore.certificate.chain = null 14:24:57 policy-apex-pdp | send.buffer.bytes = 131072 14:24:57 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilitytypes_toscacapabilitytype (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCACAPABILITYTYPES_TOSCACAPABILITYTYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:20.411905728Z level=info msg="Executing migration" id="add index 
dashboard_version.dashboard_id" 14:24:57 policy-pap | session.timeout.ms = 45000 14:24:57 kafka | ssl.keystore.key = null 14:24:57 policy-apex-pdp | socket.connection.setup.timeout.max.ms = 30000 14:24:57 policy-db-migrator | -------------- 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:20.413388368Z level=info msg="Migration successfully executed" id="add index dashboard_version.dashboard_id" duration=1.47934ms 14:24:57 policy-pap | socket.connection.setup.timeout.max.ms = 30000 14:24:57 kafka | ssl.keystore.location = null 14:24:57 policy-apex-pdp | socket.connection.setup.timeout.ms = 10000 14:24:57 policy-db-migrator | 14:24:57 kafka | ssl.keystore.password = null 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:20.420397441Z level=info msg="Executing migration" id="add unique index dashboard_version.dashboard_id and dashboard_version.version" 14:24:57 policy-apex-pdp | ssl.cipher.suites = null 14:24:57 policy-db-migrator | 14:24:57 policy-pap | socket.connection.setup.timeout.ms = 10000 14:24:57 kafka | ssl.keystore.type = JKS 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:20.421879651Z level=info msg="Migration successfully executed" id="add unique index dashboard_version.dashboard_id and dashboard_version.version" duration=1.48422ms 14:24:57 policy-apex-pdp | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 14:24:57 policy-db-migrator | > upgrade 0570-toscadatatype.sql 14:24:57 policy-pap | ssl.cipher.suites = null 14:24:57 kafka | ssl.principal.mapping.rules = DEFAULT 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:20.514334686Z level=info msg="Executing migration" id="Set dashboard version to 1 where 0" 14:24:57 policy-apex-pdp | ssl.endpoint.identification.algorithm = https 14:24:57 policy-db-migrator | -------------- 14:24:57 policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 14:24:57 kafka | ssl.protocol = TLSv1.3 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:20.514846312Z level=info msg="Migration successfully executed" 
id="Set dashboard version to 1 where 0" duration=507.006µs 14:24:57 policy-apex-pdp | ssl.engine.factory.class = null 14:24:57 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscadatatype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCADATATYPE (name, version)) 14:24:57 policy-pap | ssl.endpoint.identification.algorithm = https 14:24:57 kafka | ssl.provider = null 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:20.616054114Z level=info msg="Executing migration" id="save existing dashboard data in dashboard_version table v1" 14:24:57 policy-apex-pdp | ssl.key.password = null 14:24:57 policy-db-migrator | -------------- 14:24:57 kafka | ssl.secure.random.implementation = null 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:20.616875986Z level=info msg="Migration successfully executed" id="save existing dashboard data in dashboard_version table v1" duration=825.342µs 14:24:57 policy-apex-pdp | ssl.keymanager.algorithm = SunX509 14:24:57 policy-db-migrator | 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:20.934975852Z level=info msg="Executing migration" id="alter dashboard_version.data to mediumtext v1" 14:24:57 policy-apex-pdp | ssl.keystore.certificate.chain = null 14:24:57 kafka | ssl.trustmanager.algorithm = PKIX 14:24:57 policy-db-migrator | 14:24:57 policy-apex-pdp | ssl.keystore.key = null 14:24:57 kafka | ssl.truststore.certificates = null 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:20.935145614Z level=info msg="Migration successfully executed" id="alter dashboard_version.data to mediumtext v1" duration=172.742µs 14:24:57 policy-db-migrator | > upgrade 0580-toscadatatypes.sql 14:24:57 kafka | ssl.truststore.location = null 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:20.986205936Z level=info msg="Executing migration" id="create team table" 14:24:57 kafka | ssl.truststore.password = 
null 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:20.987080469Z level=info msg="Migration successfully executed" id="create team table" duration=877.123µs 14:24:57 policy-apex-pdp | ssl.keystore.location = null 14:24:57 policy-db-migrator | -------------- 14:24:57 policy-pap | ssl.engine.factory.class = null 14:24:57 kafka | ssl.truststore.type = JKS 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:21.14848923Z level=info msg="Executing migration" id="add index team.org_id" 14:24:57 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscadatatypes (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCADATATYPES (name, version)) 14:24:57 policy-pap | ssl.key.password = null 14:24:57 kafka | transaction.abort.timed.out.transaction.cleanup.interval.ms = 10000 14:24:57 policy-apex-pdp | ssl.keystore.password = null 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:21.150365946Z level=info msg="Migration successfully executed" id="add index team.org_id" duration=1.879926ms 14:24:57 kafka | transaction.max.timeout.ms = 900000 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:21.374630527Z level=info msg="Executing migration" id="add unique index team_org_id_name" 14:24:57 policy-db-migrator | -------------- 14:24:57 policy-apex-pdp | ssl.keystore.type = JKS 14:24:57 kafka | transaction.partition.verification.enable = true 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:21.376715705Z level=info msg="Migration successfully executed" id="add unique index team_org_id_name" duration=2.088588ms 14:24:57 policy-db-migrator | 14:24:57 policy-pap | ssl.keymanager.algorithm = SunX509 14:24:57 policy-apex-pdp | ssl.protocol = TLSv1.3 14:24:57 kafka | transaction.remove.expired.transaction.cleanup.interval.ms = 3600000 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:21.569643621Z level=info msg="Executing migration" id="Add column uid in team" 14:24:57 policy-db-migrator | 14:24:57 policy-pap | ssl.keystore.certificate.chain = 
null 14:24:57 policy-apex-pdp | ssl.provider = null 14:24:57 kafka | transaction.state.log.load.buffer.size = 5242880 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:21.573349971Z level=info msg="Migration successfully executed" id="Add column uid in team" duration=3.70972ms 14:24:57 policy-db-migrator | > upgrade 0590-toscadatatypes_toscadatatype.sql 14:24:57 policy-pap | ssl.keystore.key = null 14:24:57 policy-apex-pdp | ssl.secure.random.implementation = null 14:24:57 kafka | transaction.state.log.min.isr = 2 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:21.736205567Z level=info msg="Executing migration" id="Update uid column values in team" 14:24:57 policy-db-migrator | -------------- 14:24:57 policy-pap | ssl.keystore.location = null 14:24:57 policy-apex-pdp | ssl.trustmanager.algorithm = PKIX 14:24:57 kafka | transaction.state.log.num.partitions = 50 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:21.736491111Z level=info msg="Migration successfully executed" id="Update uid column values in team" duration=288.934µs 14:24:57 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscadatatypes_toscadatatype (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCADATATYPES_TOSCADATATYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) 14:24:57 policy-pap | ssl.keystore.password = null 14:24:57 policy-apex-pdp | ssl.truststore.certificates = null 14:24:57 kafka | transaction.state.log.replication.factor = 3 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:21.782949893Z level=info msg="Executing migration" id="Add unique index team_org_id_uid" 14:24:57 policy-db-migrator | -------------- 14:24:57 policy-pap | ssl.keystore.type = JKS 14:24:57 policy-apex-pdp | ssl.truststore.location = null 
14:24:57 kafka | transaction.state.log.segment.bytes = 104857600 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:21.783918026Z level=info msg="Migration successfully executed" id="Add unique index team_org_id_uid" duration=974.463µs 14:24:57 policy-db-migrator | 14:24:57 policy-pap | ssl.protocol = TLSv1.3 14:24:57 policy-apex-pdp | ssl.truststore.password = null 14:24:57 kafka | transactional.id.expiration.ms = 604800000 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:21.919521862Z level=info msg="Executing migration" id="create team member table" 14:24:57 policy-db-migrator | 14:24:57 policy-pap | ssl.provider = null 14:24:57 policy-apex-pdp | ssl.truststore.type = JKS 14:24:57 kafka | unclean.leader.election.enable = false 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:21.920569096Z level=info msg="Migration successfully executed" id="create team member table" duration=1.049544ms 14:24:57 policy-db-migrator | > upgrade 0600-toscanodetemplate.sql 14:24:57 policy-pap | ssl.secure.random.implementation = null 14:24:57 policy-apex-pdp | transaction.timeout.ms = 60000 14:24:57 kafka | unstable.api.versions.enable = false 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:21.992652896Z level=info msg="Executing migration" id="add index team_member.org_id" 14:24:57 policy-db-migrator | -------------- 14:24:57 policy-pap | ssl.trustmanager.algorithm = PKIX 14:24:57 policy-apex-pdp | transactional.id = null 14:24:57 kafka | zookeeper.clientCnxnSocket = null 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:21.993797072Z level=info msg="Migration successfully executed" id="add index team_member.org_id" duration=1.144326ms 14:24:57 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetemplate (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, type_name VARCHAR(255) NULL, type_version VARCHAR(255) NULL, capabilitiesName 
VARCHAR(120) NULL, capabilitiesVersion VARCHAR(20) NULL, requirementsVersion VARCHAR(20) NULL, requirementsName VARCHAR(120) NULL, PRIMARY KEY PK_TOSCANODETEMPLATE (name, version)) 14:24:57 policy-pap | ssl.truststore.certificates = null 14:24:57 policy-apex-pdp | value.serializer = class org.apache.kafka.common.serialization.StringSerializer 14:24:57 kafka | zookeeper.connect = zookeeper:2181 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:22.280481552Z level=info msg="Executing migration" id="add unique index team_member_org_id_team_id_user_id" 14:24:57 policy-db-migrator | -------------- 14:24:57 policy-pap | ssl.truststore.location = null 14:24:57 policy-apex-pdp | 14:24:57 kafka | zookeeper.connection.timeout.ms = null 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:22.282138355Z level=info msg="Migration successfully executed" id="add unique index team_member_org_id_team_id_user_id" duration=1.659503ms 14:24:57 policy-db-migrator | 14:24:57 policy-pap | ssl.truststore.password = null 14:24:57 policy-apex-pdp | [2024-04-25T14:22:59.335+00:00|INFO|KafkaProducer|main] [Producer clientId=producer-1] Instantiated an idempotent producer. 
14:24:57 kafka | zookeeper.max.in.flight.requests = 10
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:22.409369795Z level=info msg="Executing migration" id="add index team_member.team_id"
14:24:57 policy-db-migrator | 
14:24:57 policy-pap | ssl.truststore.type = JKS
14:24:57 policy-apex-pdp | [2024-04-25T14:22:59.351+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1
14:24:57 kafka | zookeeper.metadata.migration.enable = false
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:22.411007818Z level=info msg="Migration successfully executed" id="add index team_member.team_id" duration=1.590392ms
14:24:57 policy-db-migrator | > upgrade 0610-toscanodetemplates.sql
14:24:57 policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
14:24:57 policy-apex-pdp | [2024-04-25T14:22:59.351+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5
14:24:57 kafka | zookeeper.metadata.migration.min.batch.size = 200
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:22.43394493Z level=info msg="Executing migration" id="Add column email to team table"
14:24:57 policy-db-migrator | --------------
14:24:57 policy-pap | 
14:24:57 policy-apex-pdp | [2024-04-25T14:22:59.351+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1714054979351
14:24:57 kafka | zookeeper.session.timeout.ms = 18000
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:22.441721096Z level=info msg="Migration successfully executed" id="Add column email to team table" duration=7.774256ms
14:24:57 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetemplates (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCANODETEMPLATES (name, version))
14:24:57 policy-pap | [2024-04-25T14:22:56.448+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1
14:24:57 policy-apex-pdp | [2024-04-25T14:22:59.352+00:00|INFO|InlineKafkaTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=a70cbd7d-fac1-4b6c-9376-616c76b1b351, alive=false, publisher=KafkaPublisherWrapper []]]: KAFKA SINK created
14:24:57 kafka | zookeeper.set.acl = false
14:24:57 policy-db-migrator | --------------
14:24:57 policy-pap | [2024-04-25T14:22:56.448+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5
14:24:57 policy-apex-pdp | [2024-04-25T14:22:59.352+00:00|INFO|ServiceManager|main] service manager starting set alive
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:22.546012275Z level=info msg="Executing migration" id="Add column external to team_member table"
14:24:57 kafka | zookeeper.ssl.cipher.suites = null
14:24:57 policy-db-migrator | 
14:24:57 policy-pap | [2024-04-25T14:22:56.448+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1714054976448
14:24:57 policy-apex-pdp | [2024-04-25T14:22:59.352+00:00|INFO|ServiceManager|main] service manager starting register pdp status context object
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:22.553680899Z level=info msg="Migration successfully executed" id="Add column external to team_member table" duration=7.669314ms
14:24:57 kafka | zookeeper.ssl.client.enable = false
14:24:57 policy-db-migrator | 
14:24:57 policy-pap | [2024-04-25T14:22:56.449+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-policy-pap-2, groupId=policy-pap] Subscribed to topic(s): policy-pdp-pap
14:24:57 policy-apex-pdp | [2024-04-25T14:22:59.354+00:00|INFO|ServiceManager|main] service manager starting topic sinks
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:22.621281089Z level=info msg="Executing migration" id="Add column permission to team_member table"
14:24:57 kafka | zookeeper.ssl.crl.enable = false
14:24:57 policy-pap | [2024-04-25T14:22:56.768+00:00|INFO|PapDatabaseInitializer|main] Created initial pdpGroup in DB - PdpGroups(groups=[PdpGroup(name=defaultGroup, description=The default group that registers all supported policy types and pdps., pdpGroupState=ACTIVE, properties=null, pdpSubgroups=[PdpSubGroup(pdpType=apex, supportedPolicyTypes=[onap.policies.controlloop.operational.common.Apex 1.0.0, onap.policies.native.Apex 1.0.0], policies=[], currentInstanceCount=0, desiredInstanceCount=1, properties=null, pdpInstances=null)])]) from /opt/app/policy/pap/etc/mounted/groups.json
14:24:57 policy-apex-pdp | [2024-04-25T14:22:59.354+00:00|INFO|ServiceManager|main] service manager starting Pdp Status publisher
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:22.628527768Z level=info msg="Migration successfully executed" id="Add column permission to team_member table" duration=7.247089ms
14:24:57 policy-db-migrator | > upgrade 0620-toscanodetemplates_toscanodetemplate.sql
14:24:57 kafka | zookeeper.ssl.enabled.protocols = null
14:24:57 policy-apex-pdp | [2024-04-25T14:22:59.364+00:00|INFO|ServiceManager|main] service manager starting Register pdp update listener
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:22.857564503Z level=info msg="Executing migration" id="create dashboard acl table"
14:24:57 policy-pap | [2024-04-25T14:22:56.921+00:00|WARN|JpaBaseConfiguration$JpaWebConfiguration|main] spring.jpa.open-in-view is enabled by default. Therefore, database queries may be performed during view rendering. Explicitly configure spring.jpa.open-in-view to disable this warning
14:24:57 policy-db-migrator | --------------
14:24:57 kafka | zookeeper.ssl.endpoint.identification.algorithm = HTTPS
14:24:57 policy-apex-pdp | [2024-04-25T14:22:59.365+00:00|INFO|ServiceManager|main] service manager starting Register pdp state change request dispatcher
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:22.859239566Z level=info msg="Migration successfully executed" id="create dashboard acl table" duration=1.678113ms
14:24:57 policy-pap | [2024-04-25T14:22:57.157+00:00|INFO|DefaultSecurityFilterChain|main] Will secure any request with [org.springframework.security.web.session.DisableEncodeUrlFilter@6a3a56de, org.springframework.security.web.context.request.async.WebAsyncManagerIntegrationFilter@2ed84be9, org.springframework.security.web.context.SecurityContextHolderFilter@23d23d98, org.springframework.security.web.header.HeaderWriterFilter@7d483ebe, org.springframework.security.web.authentication.logout.LogoutFilter@762f8ff6, org.springframework.security.web.authentication.www.BasicAuthenticationFilter@5e34a84b, org.springframework.security.web.savedrequest.RequestCacheAwareFilter@40db6136, org.springframework.security.web.servletapi.SecurityContextHolderAwareRequestFilter@6ee1ddcf, org.springframework.security.web.authentication.AnonymousAuthenticationFilter@400e741, org.springframework.security.web.access.ExceptionTranslationFilter@21ba0d33, org.springframework.security.web.access.intercept.AuthorizationFilter@522f0bb8]
14:24:57 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetemplates_toscanodetemplate (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCANODETEMPLATES_TOSCANODETEMPLATE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion))
14:24:57 kafka | zookeeper.ssl.keystore.location = null
14:24:57 policy-apex-pdp | [2024-04-25T14:22:59.365+00:00|INFO|ServiceManager|main] service manager starting Message Dispatcher
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:22.964660521Z level=info msg="Executing migration" id="add index dashboard_acl_dashboard_id"
14:24:57 policy-pap | [2024-04-25T14:22:57.920+00:00|INFO|EndpointLinksResolver|main] Exposing 3 endpoint(s) beneath base path ''
14:24:57 policy-db-migrator | --------------
14:24:57 kafka | zookeeper.ssl.keystore.password = null
14:24:57 policy-apex-pdp | [2024-04-25T14:22:59.365+00:00|INFO|TopicBase|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=5f0ab5d6-63b3-4b5a-a200-3d330f0096ce, consumerInstance=policy-apex-pdp, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-pdp-pap,5,main], topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: registering org.onap.policy.common.endpoints.listeners.MessageTypeDispatcher@60a2630a
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:22.96612212Z level=info msg="Migration successfully executed" id="add index dashboard_acl_dashboard_id" duration=1.462359ms
14:24:57 policy-pap | [2024-04-25T14:22:58.040+00:00|INFO|Http11NioProtocol|main] Starting ProtocolHandler ["http-nio-6969"]
14:24:57 policy-db-migrator | 
14:24:57 kafka | zookeeper.ssl.keystore.type = null
14:24:57 policy-apex-pdp | [2024-04-25T14:22:59.365+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=5f0ab5d6-63b3-4b5a-a200-3d330f0096ce, consumerInstance=policy-apex-pdp, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-pdp-pap,5,main], topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: register: start not attempted
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:23.187095517Z level=info msg="Executing migration" id="add unique index dashboard_acl_dashboard_id_user_id"
14:24:57 policy-pap | [2024-04-25T14:22:58.068+00:00|INFO|TomcatWebServer|main] Tomcat started on port(s): 6969 (http) with context path '/policy/pap/v1'
14:24:57 policy-db-migrator | 
14:24:57 kafka | zookeeper.ssl.ocsp.enable = false
14:24:57 policy-apex-pdp | [2024-04-25T14:22:59.365+00:00|INFO|ServiceManager|main] service manager starting Create REST server
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:23.188647088Z level=info msg="Migration successfully executed" id="add unique index dashboard_acl_dashboard_id_user_id" duration=1.551661ms
14:24:57 policy-pap | [2024-04-25T14:22:58.084+00:00|INFO|ServiceManager|main] Policy PAP starting
14:24:57 policy-db-migrator | > upgrade 0630-toscanodetype.sql
14:24:57 kafka | zookeeper.ssl.protocol = TLSv1.2
14:24:57 policy-apex-pdp | [2024-04-25T14:22:59.395+00:00|INFO|OrderedServiceImpl|Timer-0] ***** OrderedServiceImpl implementers:
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:23.305546197Z level=info msg="Executing migration" id="add unique index dashboard_acl_dashboard_id_team_id"
14:24:57 policy-pap | [2024-04-25T14:22:58.084+00:00|INFO|ServiceManager|main] Policy PAP starting Meter Registry
14:24:57 policy-db-migrator | --------------
14:24:57 kafka | zookeeper.ssl.truststore.location = null
14:24:57 policy-apex-pdp | []
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:23.30643788Z level=info msg="Migration successfully executed" id="add unique index dashboard_acl_dashboard_id_team_id" duration=894.293µs
14:24:57 policy-pap | [2024-04-25T14:22:58.084+00:00|INFO|ServiceManager|main] Policy PAP starting PAP parameters
14:24:57 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, requirementsVersion VARCHAR(20) NULL, requirementsName VARCHAR(120) NULL, PRIMARY KEY PK_TOSCANODETYPE (name, version))
14:24:57 kafka | zookeeper.ssl.truststore.password = null
14:24:57 policy-apex-pdp | [2024-04-25T14:22:59.397+00:00|INFO|network|Timer-0] [OUT|KAFKA|policy-pdp-pap]
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:23.427460176Z level=info msg="Executing migration" id="add index dashboard_acl_user_id"
14:24:57 policy-pap | [2024-04-25T14:22:58.085+00:00|INFO|ServiceManager|main] Policy PAP starting Pdp Heartbeat Listener
14:24:57 policy-db-migrator | --------------
14:24:57 kafka | zookeeper.ssl.truststore.type = null
14:24:57 policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"4fafdadb-f031-4653-ad75-cc11e2020b8b","timestampMs":1714054979366,"name":"apex-80274cf2-35d0-404d-a495-e62b89ee6834","pdpGroup":"defaultGroup"}
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:23.428402738Z level=info msg="Migration successfully executed" id="add index dashboard_acl_user_id" duration=948.722µs
14:24:57 policy-pap | [2024-04-25T14:22:58.085+00:00|INFO|ServiceManager|main] Policy PAP starting Response Request ID Dispatcher
14:24:57 policy-db-migrator | 
14:24:57 kafka | (kafka.server.KafkaConfig)
14:24:57 policy-apex-pdp | [2024-04-25T14:22:59.652+00:00|INFO|ServiceManager|main] service manager starting Rest Server
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:23.487689935Z level=info msg="Executing migration" id="add index dashboard_acl_team_id"
14:24:57 policy-pap | [2024-04-25T14:22:58.085+00:00|INFO|ServiceManager|main] Policy PAP starting Heartbeat Request ID Dispatcher
14:24:57 policy-db-migrator | 
14:24:57 kafka | [2024-04-25 14:22:23,560] INFO [ThrottledChannelReaper-Fetch]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
14:24:57 policy-apex-pdp | [2024-04-25T14:22:59.653+00:00|INFO|ServiceManager|main] service manager starting
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:23.489328748Z level=info msg="Migration successfully executed" id="add index dashboard_acl_team_id" duration=1.641973ms
14:24:57 policy-pap | [2024-04-25T14:22:58.085+00:00|INFO|ServiceManager|main] Policy PAP starting Response Message Dispatcher
14:24:57 policy-db-migrator | > upgrade 0640-toscanodetypes.sql
14:24:57 kafka | [2024-04-25 14:22:23,561] INFO [ThrottledChannelReaper-Produce]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
14:24:57 policy-apex-pdp | [2024-04-25T14:22:59.653+00:00|INFO|ServiceManager|main] service manager starting REST RestServerParameters
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:23.832055249Z level=info msg="Executing migration" id="add index dashboard_acl_org_id_role"
14:24:57 policy-pap | [2024-04-25T14:22:58.087+00:00|INFO|TopicBase|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=b957469a-2969-4bff-8555-1bfe3e4d4da0, consumerInstance=policy-pap, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: registering org.onap.policy.common.endpoints.listeners.MessageTypeDispatcher@206b959c
14:24:57 policy-db-migrator | --------------
14:24:57 kafka | [2024-04-25 14:22:23,562] INFO [ThrottledChannelReaper-Request]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
14:24:57 policy-apex-pdp | [2024-04-25T14:22:59.653+00:00|INFO|JettyServletServer|main] JettyJerseyServer [Jerseyservlets={/metrics=io.prometheus.client.servlet.jakarta.exporter.MetricsServlet-72b16078==io.prometheus.client.servlet.jakarta.exporter.MetricsServlet@aa16c20f{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:null,STOPPED}, /*=org.glassfish.jersey.servlet.ServletContainer-d78795==org.glassfish.jersey.servlet.ServletContainer@b1764d3c{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=swagger-6969, toString()=JettyServletServer(name=RestServerParameters, host=0.0.0.0, port=6969, sniHostCheck=false, user=policyadmin, password=zb!XztG34, contextPath=/, jettyServer=Server@72c927f1{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@1ac85b0c{/,null,STOPPED}, connector=RestServerParameters@63c5efee{HTTP/1.1, (http/1.1)}{0.0.0.0:6969}, jettyThread=null, servlets={/metrics=io.prometheus.client.servlet.jakarta.exporter.MetricsServlet-72b16078==io.prometheus.client.servlet.jakarta.exporter.MetricsServlet@aa16c20f{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:null,STOPPED}, /*=org.glassfish.jersey.servlet.ServletContainer-d78795==org.glassfish.jersey.servlet.ServletContainer@b1764d3c{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:23.833287187Z level=info msg="Migration successfully executed" id="add index dashboard_acl_org_id_role" duration=1.234638ms
14:24:57 policy-pap | [2024-04-25T14:22:58.098+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=b957469a-2969-4bff-8555-1bfe3e4d4da0, consumerInstance=policy-pap, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting
14:24:57 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetypes (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCANODETYPES (name, version))
14:24:57 kafka | [2024-04-25 14:22:23,566] INFO [ThrottledChannelReaper-ControllerMutation]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
14:24:57 policy-apex-pdp | [2024-04-25T14:22:59.662+00:00|INFO|ServiceManager|main] service manager started
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:23.884575504Z level=info msg="Executing migration" id="add index dashboard_permission"
14:24:57 policy-pap | [2024-04-25T14:22:58.098+00:00|INFO|ConsumerConfig|main] ConsumerConfig values:
14:24:57 policy-db-migrator | --------------
14:24:57 kafka | [2024-04-25 14:22:23,592] INFO Loading logs from log dirs ArraySeq(/var/lib/kafka/data) (kafka.log.LogManager)
14:24:57 policy-apex-pdp | [2024-04-25T14:22:59.662+00:00|INFO|ServiceManager|main] service manager started
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:23.885467836Z level=info msg="Migration successfully executed" id="add index dashboard_permission" duration=895.772µs
14:24:57 policy-pap | allow.auto.create.topics = true
14:24:57 policy-db-migrator | 
14:24:57 kafka | [2024-04-25 14:22:23,597] INFO No logs found to be loaded in /var/lib/kafka/data (kafka.log.LogManager)
14:24:57 policy-apex-pdp | [2024-04-25T14:22:59.662+00:00|INFO|ApexStarterMain|main] Started policy-apex-pdp service successfully.
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:24.208060224Z level=info msg="Executing migration" id="save default acl rules in dashboard_acl table"
14:24:57 policy-pap | auto.commit.interval.ms = 5000
14:24:57 policy-db-migrator | 
14:24:57 kafka | [2024-04-25 14:22:23,604] INFO Loaded 0 logs in 12ms (kafka.log.LogManager)
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:24.208975627Z level=info msg="Migration successfully executed" id="save default acl rules in dashboard_acl table" duration=919.493µs
14:24:57 policy-pap | auto.include.jmx.reporter = true
14:24:57 policy-apex-pdp | [2024-04-25T14:22:59.662+00:00|INFO|JettyServletServer|RestServerParameters-6969] JettyJerseyServer [Jerseyservlets={/metrics=io.prometheus.client.servlet.jakarta.exporter.MetricsServlet-72b16078==io.prometheus.client.servlet.jakarta.exporter.MetricsServlet@aa16c20f{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:null,STOPPED}, /*=org.glassfish.jersey.servlet.ServletContainer-d78795==org.glassfish.jersey.servlet.ServletContainer@b1764d3c{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=swagger-6969, toString()=JettyServletServer(name=RestServerParameters, host=0.0.0.0, port=6969, sniHostCheck=false, user=policyadmin, password=zb!XztG34, contextPath=/, jettyServer=Server@72c927f1{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@1ac85b0c{/,null,STOPPED}, connector=RestServerParameters@63c5efee{HTTP/1.1, (http/1.1)}{0.0.0.0:6969}, jettyThread=Thread[RestServerParameters-6969,5,main], servlets={/metrics=io.prometheus.client.servlet.jakarta.exporter.MetricsServlet-72b16078==io.prometheus.client.servlet.jakarta.exporter.MetricsServlet@aa16c20f{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:null,STOPPED}, /*=org.glassfish.jersey.servlet.ServletContainer-d78795==org.glassfish.jersey.servlet.ServletContainer@b1764d3c{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING
14:24:57 policy-db-migrator | > upgrade 0650-toscanodetypes_toscanodetype.sql
14:24:57 kafka | [2024-04-25 14:22:23,605] INFO Starting log cleanup with a period of 300000 ms. (kafka.log.LogManager)
14:24:57 policy-pap | auto.offset.reset = latest
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:24.349454657Z level=info msg="Executing migration" id="delete acl rules for deleted dashboards and folders"
14:24:57 policy-apex-pdp | [2024-04-25T14:22:59.851+00:00|INFO|Metadata|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Cluster ID: lFyKLv7sTJO7XXtTZrPgZw
14:24:57 policy-db-migrator | --------------
14:24:57 kafka | [2024-04-25 14:22:23,606] INFO Starting log flusher with a default period of 9223372036854775807 ms. (kafka.log.LogManager)
14:24:57 policy-pap | bootstrap.servers = [kafka:9092]
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:24.349932754Z level=info msg="Migration successfully executed" id="delete acl rules for deleted dashboards and folders" duration=481.787µs
14:24:57 policy-apex-pdp | [2024-04-25T14:22:59.851+00:00|INFO|Metadata|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-5f0ab5d6-63b3-4b5a-a200-3d330f0096ce-2, groupId=5f0ab5d6-63b3-4b5a-a200-3d330f0096ce] Cluster ID: lFyKLv7sTJO7XXtTZrPgZw
14:24:57 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetypes_toscanodetype (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCANODETYPES_TOSCANODETYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion))
14:24:57 kafka | [2024-04-25 14:22:23,616] INFO Starting the log cleaner (kafka.log.LogCleaner)
14:24:57 policy-pap | check.crcs = true
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:24.629822061Z level=info msg="Executing migration" id="create tag table"
14:24:57 policy-apex-pdp | [2024-04-25T14:22:59.853+00:00|INFO|TransactionManager|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] ProducerId set to 2 with epoch 0
14:24:57 policy-db-migrator | --------------
14:24:57 kafka | [2024-04-25 14:22:23,657] INFO [kafka-log-cleaner-thread-0]: Starting (kafka.log.LogCleaner$CleanerThread)
14:24:57 policy-pap | client.dns.lookup = use_all_dns_ips
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:24.631077298Z level=info msg="Migration successfully executed" id="create tag table" duration=1.258166ms
14:24:57 policy-apex-pdp | [2024-04-25T14:22:59.853+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-5f0ab5d6-63b3-4b5a-a200-3d330f0096ce-2, groupId=5f0ab5d6-63b3-4b5a-a200-3d330f0096ce] Discovered group coordinator kafka:9092 (id: 2147483646 rack: null)
14:24:57 policy-db-migrator | 
14:24:57 kafka | [2024-04-25 14:22:23,672] INFO [feature-zk-node-event-process-thread]: Starting (kafka.server.FinalizedFeatureChangeListener$ChangeNotificationProcessorThread)
14:24:57 policy-pap | client.id = consumer-b957469a-2969-4bff-8555-1bfe3e4d4da0-3
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:24.810730572Z level=info msg="Executing migration" id="add index tag.key_value"
14:24:57 policy-apex-pdp | [2024-04-25T14:22:59.861+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-5f0ab5d6-63b3-4b5a-a200-3d330f0096ce-2, groupId=5f0ab5d6-63b3-4b5a-a200-3d330f0096ce] (Re-)joining group
14:24:57 policy-db-migrator | 
14:24:57 kafka | [2024-04-25 14:22:23,681] INFO Feature ZK node at path: /feature does not exist (kafka.server.FinalizedFeatureChangeListener)
14:24:57 policy-pap | client.rack = 
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:24.812406334Z level=info msg="Migration successfully executed" id="add index tag.key_value" duration=1.677592ms
14:24:57 policy-apex-pdp | [2024-04-25T14:22:59.880+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-5f0ab5d6-63b3-4b5a-a200-3d330f0096ce-2, groupId=5f0ab5d6-63b3-4b5a-a200-3d330f0096ce] Request joining group due to: need to re-join with the given member-id: consumer-5f0ab5d6-63b3-4b5a-a200-3d330f0096ce-2-35a6a127-a715-4075-8b68-3fa09af1055b
14:24:57 policy-db-migrator | > upgrade 0660-toscaparameter.sql
14:24:57 kafka | [2024-04-25 14:22:23,713] INFO [zk-broker-1-to-controller-forwarding-channel-manager]: Starting (kafka.server.BrokerToControllerRequestThread)
14:24:57 policy-pap | connections.max.idle.ms = 540000
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:24.888649441Z level=info msg="Executing migration" id="create login attempt table"
14:24:57 policy-apex-pdp | [2024-04-25T14:22:59.880+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-5f0ab5d6-63b3-4b5a-a200-3d330f0096ce-2, groupId=5f0ab5d6-63b3-4b5a-a200-3d330f0096ce] Request joining group due to: rebalance failed due to 'The group member needs to have a valid member id before actually entering a consumer group.' (MemberIdRequiredException)
14:24:57 policy-db-migrator | --------------
14:24:57 kafka | [2024-04-25 14:22:24,139] INFO Updated connection-accept-rate max connection creation rate to 2147483647 (kafka.network.ConnectionQuotas)
14:24:57 policy-pap | default.api.timeout.ms = 60000
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:24.889956049Z level=info msg="Migration successfully executed" id="create login attempt table" duration=1.309448ms
14:24:57 policy-apex-pdp | [2024-04-25T14:22:59.880+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-5f0ab5d6-63b3-4b5a-a200-3d330f0096ce-2, groupId=5f0ab5d6-63b3-4b5a-a200-3d330f0096ce] (Re-)joining group
14:24:57 kafka | [2024-04-25 14:22:24,157] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1] Created data-plane acceptor and processors for endpoint : ListenerName(PLAINTEXT) (kafka.network.SocketServer)
14:24:57 policy-pap | enable.auto.commit = true
14:24:57 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscaparameter (VALUE VARCHAR(255) NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCAPARAMETER (parentLocalName, localName, parentKeyVersion, parentKeyName))
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:24.9400011Z level=info msg="Executing migration" id="add index login_attempt.username"
14:24:57 policy-apex-pdp | [2024-04-25T14:23:00.311+00:00|INFO|GsonMessageBodyHandler|RestServerParameters-6969] Using GSON for REST calls
14:24:57 kafka | [2024-04-25 14:22:24,157] INFO Updated connection-accept-rate max connection creation rate to 2147483647 (kafka.network.ConnectionQuotas)
14:24:57 policy-pap | exclude.internal.topics = true
14:24:57 policy-db-migrator | --------------
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:24.941134695Z level=info msg="Migration successfully executed" id="add index login_attempt.username" duration=1.159145ms
14:24:57 policy-apex-pdp | [2024-04-25T14:23:00.313+00:00|INFO|YamlMessageBodyHandler|RestServerParameters-6969] Accepting YAML for REST calls
14:24:57 policy-pap | fetch.max.bytes = 52428800
14:24:57 kafka | [2024-04-25 14:22:24,162] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1] Created data-plane acceptor and processors for endpoint : ListenerName(PLAINTEXT_HOST) (kafka.network.SocketServer)
14:24:57 policy-db-migrator | 
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:25.073182731Z level=info msg="Executing migration" id="drop index IDX_login_attempt_username - v1"
14:24:57 policy-apex-pdp | [2024-04-25T14:23:02.908+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-5f0ab5d6-63b3-4b5a-a200-3d330f0096ce-2, groupId=5f0ab5d6-63b3-4b5a-a200-3d330f0096ce] Successfully joined group with generation Generation{generationId=1, memberId='consumer-5f0ab5d6-63b3-4b5a-a200-3d330f0096ce-2-35a6a127-a715-4075-8b68-3fa09af1055b', protocol='range'}
14:24:57 policy-pap | fetch.max.wait.ms = 500
14:24:57 kafka | [2024-04-25 14:22:24,177] INFO [zk-broker-1-to-controller-alter-partition-channel-manager]: Starting (kafka.server.BrokerToControllerRequestThread)
14:24:57 policy-db-migrator | 
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:25.074638741Z level=info msg="Migration successfully executed" id="drop index IDX_login_attempt_username - v1" duration=1.45923ms
14:24:57 policy-apex-pdp | [2024-04-25T14:23:02.918+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-5f0ab5d6-63b3-4b5a-a200-3d330f0096ce-2, groupId=5f0ab5d6-63b3-4b5a-a200-3d330f0096ce] Finished assignment for group at generation 1: {consumer-5f0ab5d6-63b3-4b5a-a200-3d330f0096ce-2-35a6a127-a715-4075-8b68-3fa09af1055b=Assignment(partitions=[policy-pdp-pap-0])}
14:24:57 policy-pap | fetch.min.bytes = 1
14:24:57 kafka | [2024-04-25 14:22:24,198] INFO [ExpirationReaper-1-Produce]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
14:24:57 policy-db-migrator | > upgrade 0670-toscapolicies.sql
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:25.119058185Z level=info msg="Executing migration" id="Rename table login_attempt to login_attempt_tmp_qwerty - v1"
14:24:57 policy-apex-pdp | [2024-04-25T14:23:02.947+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-5f0ab5d6-63b3-4b5a-a200-3d330f0096ce-2, groupId=5f0ab5d6-63b3-4b5a-a200-3d330f0096ce] Successfully synced group in generation Generation{generationId=1, memberId='consumer-5f0ab5d6-63b3-4b5a-a200-3d330f0096ce-2-35a6a127-a715-4075-8b68-3fa09af1055b', protocol='range'}
14:24:57 kafka | [2024-04-25 14:22:24,199] INFO [ExpirationReaper-1-Fetch]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
14:24:57 policy-db-migrator | --------------
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:25.137471405Z level=info msg="Migration successfully executed" id="Rename table login_attempt to login_attempt_tmp_qwerty - v1" duration=18.41676ms
14:24:57 policy-pap | group.id = b957469a-2969-4bff-8555-1bfe3e4d4da0
14:24:57 policy-apex-pdp | [2024-04-25T14:23:02.948+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-5f0ab5d6-63b3-4b5a-a200-3d330f0096ce-2, groupId=5f0ab5d6-63b3-4b5a-a200-3d330f0096ce] Notifying assignor about the new Assignment(partitions=[policy-pdp-pap-0])
14:24:57 kafka | [2024-04-25 14:22:24,200] INFO [ExpirationReaper-1-DeleteRecords]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
14:24:57 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicies (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCAPOLICIES (name, version))
14:24:57 policy-db-migrator | --------------
14:24:57 policy-pap | group.instance.id = null
14:24:57 kafka | [2024-04-25 14:22:24,200] INFO [ExpirationReaper-1-ElectLeader]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
14:24:57 policy-db-migrator | 
14:24:57 policy-apex-pdp | [2024-04-25T14:23:02.951+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-5f0ab5d6-63b3-4b5a-a200-3d330f0096ce-2, groupId=5f0ab5d6-63b3-4b5a-a200-3d330f0096ce] Adding newly assigned partitions: policy-pdp-pap-0
14:24:57 policy-pap | heartbeat.interval.ms = 3000
14:24:57 kafka | [2024-04-25 14:22:24,201] INFO [ExpirationReaper-1-RemoteFetch]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
14:24:57 kafka | [2024-04-25 14:22:24,212] INFO [LogDirFailureHandler]: Starting (kafka.server.ReplicaManager$LogDirFailureHandler)
14:24:57 policy-apex-pdp | [2024-04-25T14:23:02.993+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-5f0ab5d6-63b3-4b5a-a200-3d330f0096ce-2, groupId=5f0ab5d6-63b3-4b5a-a200-3d330f0096ce] Found no committed offset for partition policy-pdp-pap-0
14:24:57 policy-pap | interceptor.classes = []
14:24:57 kafka | [2024-04-25 14:22:24,213] INFO [AddPartitionsToTxnSenderThread-1]: Starting (kafka.server.AddPartitionsToTxnManager)
14:24:57 policy-apex-pdp | [2024-04-25T14:23:03.034+00:00|INFO|SubscriptionState|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-5f0ab5d6-63b3-4b5a-a200-3d330f0096ce-2, groupId=5f0ab5d6-63b3-4b5a-a200-3d330f0096ce] Resetting offset for partition policy-pdp-pap-0 to position FetchPosition{offset=1, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[kafka:9092 (id: 1 rack: null)], epoch=0}}.
14:24:57 policy-pap | internal.leave.group.on.close = true
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:25.333286838Z level=info msg="Executing migration" id="create login_attempt v2"
14:24:57 policy-apex-pdp | [2024-04-25T14:23:19.364+00:00|INFO|network|Timer-0] [OUT|KAFKA|policy-pdp-pap]
14:24:57 policy-db-migrator |
14:24:57 kafka | [2024-04-25 14:22:24,245] INFO Creating /brokers/ids/1 (is it secure? false) (kafka.zk.KafkaZkClient)
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:25.334505855Z level=info msg="Migration successfully executed" id="create login_attempt v2" duration=1.219667ms
14:24:57 policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"550bf7bf-5870-4f3b-a328-6d7a1c64d750","timestampMs":1714054999364,"name":"apex-80274cf2-35d0-404d-a495-e62b89ee6834","pdpGroup":"defaultGroup"}
14:24:57 policy-db-migrator | > upgrade 0680-toscapolicies_toscapolicy.sql
14:24:57 policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false
14:24:57 kafka | [2024-04-25 14:22:24,282] INFO Stat of the created znode at /brokers/ids/1 is: 27,27,1714054944255,1714054944255,1,0,0,72057618239062017,258,0,27
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:25.415761071Z level=info msg="Executing migration" id="create index IDX_login_attempt_username - v2"
14:24:57 policy-apex-pdp | [2024-04-25T14:23:19.386+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
14:24:57 policy-db-migrator | --------------
14:24:57 policy-pap | isolation.level = read_uncommitted
14:24:57 kafka | (kafka.zk.KafkaZkClient)
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:25.41718971Z level=info msg="Migration successfully executed" id="create index IDX_login_attempt_username - v2" duration=1.428979ms
14:24:57 policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"550bf7bf-5870-4f3b-a328-6d7a1c64d750","timestampMs":1714054999364,"name":"apex-80274cf2-35d0-404d-a495-e62b89ee6834","pdpGroup":"defaultGroup"}
14:24:57 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicies_toscapolicy (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCAPOLICIES_TOSCAPOLICY (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion))
14:24:57 policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
14:24:57 kafka | [2024-04-25 14:22:24,284] INFO Registered broker 1 at path /brokers/ids/1 with addresses: PLAINTEXT://kafka:9092,PLAINTEXT_HOST://localhost:29092, czxid (broker epoch): 27 (kafka.zk.KafkaZkClient)
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:25.574774232Z level=info msg="Executing migration" id="copy login_attempt v1 to v2"
14:24:57 policy-apex-pdp | [2024-04-25T14:23:19.389+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS
14:24:57 policy-db-migrator | --------------
14:24:57 policy-pap | max.partition.fetch.bytes = 1048576
14:24:57 kafka | [2024-04-25 14:22:24,547] INFO [ControllerEventThread controllerId=1] Starting (kafka.controller.ControllerEventManager$ControllerEventThread)
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:25.57524884Z level=info msg="Migration successfully executed" id="copy login_attempt v1 to v2" duration=475.688µs
14:24:57 policy-apex-pdp | [2024-04-25T14:23:19.540+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
14:24:57 policy-db-migrator |
14:24:57 policy-pap | max.poll.interval.ms = 300000
14:24:57 kafka | [2024-04-25 14:22:24,553] INFO [ExpirationReaper-1-topic]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:25.644076065Z level=info msg="Executing migration" id="drop login_attempt_tmp_qwerty"
14:24:57 policy-apex-pdp | {"source":"pap-b43ecee3-a99c-4739-8071-6199e9c3e680","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"messageName":"PDP_UPDATE","requestId":"f38f5279-b344-4d66-86a2-21ebfb9d4e55","timestampMs":1714054999480,"name":"apex-80274cf2-35d0-404d-a495-e62b89ee6834","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
14:24:57 policy-db-migrator |
14:24:57 policy-pap | max.poll.records = 500
14:24:57 kafka | [2024-04-25 14:22:24,563] INFO [ExpirationReaper-1-Heartbeat]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:25.645133609Z level=info msg="Migration successfully executed" id="drop login_attempt_tmp_qwerty" duration=1.057914ms
14:24:57 policy-apex-pdp | [2024-04-25T14:23:19.558+00:00|WARN|Registry|KAFKA-source-policy-pdp-pap] replacing previously registered: object:pdp/status/publisher
14:24:57 policy-db-migrator | > upgrade 0690-toscapolicy.sql
14:24:57 policy-pap | metadata.max.age.ms = 300000
14:24:57 kafka | [2024-04-25 14:22:24,564] INFO [ExpirationReaper-1-Rebalance]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:25.785680131Z level=info msg="Executing migration" id="create user auth table"
14:24:57 policy-apex-pdp | [2024-04-25T14:23:19.559+00:00|INFO|network|Timer-1] [OUT|KAFKA|policy-pdp-pap]
14:24:57 policy-db-migrator | --------------
14:24:57 policy-pap | metric.reporters = []
14:24:57 kafka | [2024-04-25 14:22:24,580] INFO [GroupCoordinator 1]: Starting up. (kafka.coordinator.group.GroupCoordinator)
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:25.78708183Z level=info msg="Migration successfully executed" id="create user auth table" duration=1.403769ms
14:24:57 policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"e4357cfc-69c4-4da1-be8c-522d64c2326f","timestampMs":1714054999558,"name":"apex-80274cf2-35d0-404d-a495-e62b89ee6834","pdpGroup":"defaultGroup"}
14:24:57 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicy (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, type_name VARCHAR(255) NULL, type_version VARCHAR(255) NULL, PRIMARY KEY PK_TOSCAPOLICY (name, version))
14:24:57 policy-pap | metrics.num.samples = 2
14:24:57 kafka | [2024-04-25 14:22:24,636] INFO [GroupCoordinator 1]: Startup complete. (kafka.coordinator.group.GroupCoordinator)
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:25.847737355Z level=info msg="Executing migration" id="create index IDX_user_auth_auth_module_auth_id - v1"
14:24:57 policy-apex-pdp | [2024-04-25T14:23:19.559+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [OUT|KAFKA|policy-pdp-pap]
14:24:57 policy-db-migrator | --------------
14:24:57 policy-pap | metrics.recording.level = INFO
14:24:57 kafka | [2024-04-25 14:22:24,633] INFO Successfully created /controller_epoch with initial epoch 0 (kafka.zk.KafkaZkClient)
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:25.849242735Z level=info msg="Migration successfully executed" id="create index IDX_user_auth_auth_module_auth_id - v1" duration=1.50427ms
14:24:57 policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"f38f5279-b344-4d66-86a2-21ebfb9d4e55","responseStatus":"SUCCESS","responseMessage":"Pdp update successful."},"messageName":"PDP_STATUS","requestId":"c12a1405-928e-4481-a978-984303d383c8","timestampMs":1714054999559,"name":"apex-80274cf2-35d0-404d-a495-e62b89ee6834","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
14:24:57 policy-db-migrator |
14:24:57 policy-pap | metrics.sample.window.ms = 30000
14:24:57 kafka | [2024-04-25 14:22:24,658] INFO [TransactionCoordinator id=1] Starting up. (kafka.coordinator.transaction.TransactionCoordinator)
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:26.072863256Z level=info msg="Executing migration" id="alter user_auth.auth_id to length 190"
14:24:57 policy-apex-pdp | [2024-04-25T14:23:19.568+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
14:24:57 policy-db-migrator |
14:24:57 policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor]
14:24:57 kafka | [2024-04-25 14:22:24,748] INFO [Controller id=1] 1 successfully elected as the controller.
Epoch incremented to 1 and epoch zk version is now 1 (kafka.controller.KafkaController)
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:26.072978298Z level=info msg="Migration successfully executed" id="alter user_auth.auth_id to length 190" duration=118.462µs
14:24:57 policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"e4357cfc-69c4-4da1-be8c-522d64c2326f","timestampMs":1714054999558,"name":"apex-80274cf2-35d0-404d-a495-e62b89ee6834","pdpGroup":"defaultGroup"}
14:24:57 policy-db-migrator | > upgrade 0700-toscapolicytype.sql
14:24:57 policy-pap | receive.buffer.bytes = 65536
14:24:57 kafka | [2024-04-25 14:22:24,750] INFO [TxnMarkerSenderThread-1]: Starting (kafka.coordinator.transaction.TransactionMarkerChannelManager)
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:26.214608074Z level=info msg="Executing migration" id="Add OAuth access token to user_auth"
14:24:57 policy-apex-pdp | [2024-04-25T14:23:19.568+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS
14:24:57 policy-db-migrator | --------------
14:24:57 policy-pap | reconnect.backoff.max.ms = 1000
14:24:57 kafka | [2024-04-25 14:22:24,750] INFO [TransactionCoordinator id=1] Startup complete. (kafka.coordinator.transaction.TransactionCoordinator)
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:26.222962568Z level=info msg="Migration successfully executed" id="Add OAuth access token to user_auth" duration=8.359554ms
14:24:57 policy-apex-pdp | [2024-04-25T14:23:19.568+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
14:24:57 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicytype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCAPOLICYTYPE (name, version))
14:24:57 policy-pap | reconnect.backoff.ms = 50
14:24:57 kafka | [2024-04-25 14:22:24,753] INFO [Controller id=1] Creating FeatureZNode at path: /feature with contents: FeatureZNode(2,Enabled,Map()) (kafka.controller.KafkaController)
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:26.300958099Z level=info msg="Executing migration" id="Add OAuth refresh token to user_auth"
14:24:57 policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"f38f5279-b344-4d66-86a2-21ebfb9d4e55","responseStatus":"SUCCESS","responseMessage":"Pdp update successful."},"messageName":"PDP_STATUS","requestId":"c12a1405-928e-4481-a978-984303d383c8","timestampMs":1714054999559,"name":"apex-80274cf2-35d0-404d-a495-e62b89ee6834","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
14:24:57 policy-pap | request.timeout.ms = 30000
14:24:57 kafka | [2024-04-25 14:22:24,774] INFO Feature ZK node created at path: /feature (kafka.server.FinalizedFeatureChangeListener)
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:26.310057202Z level=info msg="Migration successfully executed" id="Add OAuth refresh token to user_auth" duration=9.103793ms
14:24:57 policy-db-migrator | --------------
14:24:57 policy-pap | retry.backoff.ms = 100
14:24:57 kafka | [2024-04-25 14:22:24,797] INFO [ExpirationReaper-1-AlterAcls]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:26.511610983Z level=info msg="Executing migration" id="Add OAuth token type to user_auth"
14:24:57 policy-apex-pdp | [2024-04-25T14:23:19.569+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS
14:24:57 policy-db-migrator |
14:24:57 policy-pap | sasl.client.callback.handler.class = null
14:24:57 kafka | [2024-04-25 14:22:24,800] INFO [MetadataCache brokerId=1] Updated cache from existing None to latest Features(version=3.6-IV2, finalizedFeatures={}, finalizedFeaturesEpoch=0). (kafka.server.metadata.ZkMetadataCache)
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:26.521191013Z level=info msg="Migration successfully executed" id="Add OAuth token type to user_auth" duration=9.57982ms
14:24:57 policy-apex-pdp | [2024-04-25T14:23:19.593+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
14:24:57 policy-db-migrator |
14:24:57 policy-pap | sasl.jaas.config = null
14:24:57 policy-db-migrator | > upgrade 0710-toscapolicytypes.sql
14:24:57 kafka | [2024-04-25 14:22:24,803] INFO [Controller id=1] Registering handlers (kafka.controller.KafkaController)
14:24:57 policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit
14:24:57 policy-apex-pdp | {"source":"pap-b43ecee3-a99c-4739-8071-6199e9c3e680","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"ecd1b796-0fe4-44b0-a7d5-d9c405fda44a","timestampMs":1714054999481,"name":"apex-80274cf2-35d0-404d-a495-e62b89ee6834","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:26.603621404Z level=info msg="Executing migration" id="Add OAuth expiry to user_auth"
14:24:57 policy-db-migrator | --------------
14:24:57 kafka | [2024-04-25 14:22:24,808] INFO [Controller id=1] Deleting log dir event notifications (kafka.controller.KafkaController)
14:24:57 policy-apex-pdp | [2024-04-25T14:23:19.595+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [OUT|KAFKA|policy-pdp-pap]
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:26.612753449Z level=info msg="Migration successfully executed" id="Add OAuth expiry to user_auth" duration=9.137105ms
14:24:57 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicytypes (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCAPOLICYTYPES (name, version))
14:24:57 policy-pap | sasl.kerberos.min.time.before.relogin = 60000
14:24:57 kafka | [2024-04-25 14:22:24,813] INFO [Controller id=1] Deleting isr change notifications (kafka.controller.KafkaController)
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:26.835530297Z level=info msg="Executing migration" id="Add index to user_id column in user_auth"
14:24:57 policy-db-migrator | --------------
14:24:57 policy-pap | sasl.kerberos.service.name = null
14:24:57 kafka | [2024-04-25 14:22:24,817] INFO [Controller id=1] Initializing controller context (kafka.controller.KafkaController)
14:24:57 policy-apex-pdp | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpStateChange","policies":[],"response":{"responseTo":"ecd1b796-0fe4-44b0-a7d5-d9c405fda44a","responseStatus":"SUCCESS","responseMessage":"State changed to active. No policies found."},"messageName":"PDP_STATUS","requestId":"43df1ced-174a-4736-a982-b481c53f90de","timestampMs":1714054999595,"name":"apex-80274cf2-35d0-404d-a495-e62b89ee6834","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:26.837352552Z level=info msg="Migration successfully executed" id="Add index to user_id column in user_auth" duration=1.826385ms
14:24:57 policy-db-migrator |
14:24:57 policy-apex-pdp | [2024-04-25T14:23:19.605+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:27.028648263Z level=info msg="Executing migration" id="Add OAuth ID token to user_auth"
14:24:57 policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05
14:24:57 policy-db-migrator |
14:24:57 policy-apex-pdp | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpStateChange","policies":[],"response":{"responseTo":"ecd1b796-0fe4-44b0-a7d5-d9c405fda44a","responseStatus":"SUCCESS","responseMessage":"State changed to active.
No policies found."},"messageName":"PDP_STATUS","requestId":"43df1ced-174a-4736-a982-b481c53f90de","timestampMs":1714054999595,"name":"apex-80274cf2-35d0-404d-a495-e62b89ee6834","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
14:24:57 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8
14:24:57 kafka | [2024-04-25 14:22:24,830] INFO [Controller id=1] Initialized broker epochs cache: HashMap(1 -> 27) (kafka.controller.KafkaController)
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:27.034808998Z level=info msg="Migration successfully executed" id="Add OAuth ID token to user_auth" duration=6.164644ms
14:24:57 policy-db-migrator | > upgrade 0720-toscapolicytypes_toscapolicytype.sql
14:24:57 policy-apex-pdp | [2024-04-25T14:23:19.606+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS
14:24:57 kafka | [2024-04-25 14:22:24,830] INFO [/config/changes-event-process-thread]: Starting (kafka.common.ZkNodeChangeNotificationListener$ChangeEventProcessThread)
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:27.218843439Z level=info msg="Executing migration" id="create server_lock table"
14:24:57 policy-db-migrator | --------------
14:24:57 policy-pap | sasl.login.callback.handler.class = null
14:24:57 policy-apex-pdp | [2024-04-25T14:23:19.703+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
14:24:57 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicytypes_toscapolicytype (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCAPOLICYTYPES_TOSCAPOLICYTYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion))
14:24:57 policy-apex-pdp | {"source":"pap-b43ecee3-a99c-4739-8071-6199e9c3e680","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"92ed1daf-00dc-46f3-a934-a5b206758853","timestampMs":1714054999658,"name":"apex-80274cf2-35d0-404d-a495-e62b89ee6834","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
14:24:57 kafka | [2024-04-25 14:22:24,838] DEBUG [Controller id=1] Register BrokerModifications handler for Set(1) (kafka.controller.KafkaController)
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:27.220508122Z level=info msg="Migration successfully executed" id="create server_lock table" duration=1.663043ms
14:24:57 policy-pap | sasl.login.class = null
14:24:57 policy-db-migrator | --------------
14:24:57 policy-apex-pdp | [2024-04-25T14:23:19.708+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [OUT|KAFKA|policy-pdp-pap]
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:27.348288059Z level=info msg="Executing migration" id="add index server_lock.operation_uid"
14:24:57 policy-pap | sasl.login.connect.timeout.ms = null
14:24:57 policy-db-migrator |
14:24:57 kafka | [2024-04-25 14:22:24,842] DEBUG [Channel manager on controller 1]: Controller 1 trying to connect to broker 1 (kafka.controller.ControllerChannelManager)
14:24:57 policy-apex-pdp | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"92ed1daf-00dc-46f3-a934-a5b206758853","responseStatus":"SUCCESS","responseMessage":"Pdp already updated"},"messageName":"PDP_STATUS","requestId":"9306b898-b6a4-4c63-98db-745073d13a5b","timestampMs":1714054999708,"name":"apex-80274cf2-35d0-404d-a495-e62b89ee6834","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:27.349867681Z level=info msg="Migration successfully executed" id="add index server_lock.operation_uid" duration=1.580802ms
14:24:57 policy-pap | sasl.login.read.timeout.ms = null
14:24:57 policy-db-migrator |
14:24:57 kafka | [2024-04-25 14:22:24,850] INFO [Controller id=1] Currently active brokers in the cluster: Set(1) (kafka.controller.KafkaController)
14:24:57 policy-apex-pdp | [2024-04-25T14:23:19.718+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:27.434073585Z level=info msg="Executing migration" id="create user auth token table"
14:24:57 policy-pap | sasl.login.refresh.buffer.seconds = 300
14:24:57 policy-db-migrator | > upgrade 0730-toscaproperty.sql
14:24:57 kafka | [2024-04-25 14:22:24,850] INFO [Controller id=1] Currently shutting brokers in the cluster: HashSet() (kafka.controller.KafkaController)
14:24:57 policy-apex-pdp | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"92ed1daf-00dc-46f3-a934-a5b206758853","responseStatus":"SUCCESS","responseMessage":"Pdp already updated"},"messageName":"PDP_STATUS","requestId":"9306b898-b6a4-4c63-98db-745073d13a5b","timestampMs":1714054999708,"name":"apex-80274cf2-35d0-404d-a495-e62b89ee6834","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
14:24:57 policy-pap | sasl.login.refresh.min.period.seconds = 60
14:24:57 policy-db-migrator | --------------
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:27.435641837Z level=info msg="Migration successfully executed" id="create user auth token table" duration=1.568152ms
14:24:57 kafka | [2024-04-25 14:22:24,850] INFO [Controller id=1] Current list of topics in the cluster: HashSet() (kafka.controller.KafkaController)
14:24:57 policy-apex-pdp | [2024-04-25T14:23:19.718+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS
14:24:57 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscaproperty (DEFAULTVALUE VARCHAR(255) DEFAULT NULL, `DESCRIPTION` VARCHAR(255) DEFAULT NULL, ENTRYSCHEMA LONGBLOB DEFAULT NULL, REQUIRED BOOLEAN DEFAULT 0, STATUS INT DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, PRIMARY KEY PK_TOSCAPROPERTY (parentLocalName, localName, parentKeyVersion, parentKeyName))
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:27.627546906Z level=info msg="Executing migration" id="add unique index user_auth_token.auth_token"
14:24:57 kafka | [2024-04-25 14:22:24,850] INFO [Controller id=1] Fetching topic deletions in progress (kafka.controller.KafkaController)
14:24:57 policy-pap | sasl.login.refresh.window.factor = 0.8
14:24:57 policy-db-migrator | --------------
14:24:57 kafka | [2024-04-25 14:22:24,851] INFO [RequestSendThread controllerId=1] Starting (kafka.controller.RequestSendThread)
14:24:57 policy-pap | sasl.login.refresh.window.jitter = 0.05
14:24:57 policy-apex-pdp | [2024-04-25T14:23:56.157+00:00|INFO|RequestLog|qtp739264372-33] 172.17.0.5 - policyadmin [25/Apr/2024:14:23:56 +0000] "GET /metrics HTTP/1.1" 200 10649 "-" "Prometheus/2.51.2"
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:27.629710045Z level=info msg="Migration successfully executed" id="add unique index user_auth_token.auth_token" duration=2.164709ms
14:24:57 policy-db-migrator |
14:24:57 policy-pap | sasl.login.retry.backoff.max.ms = 10000
14:24:57 policy-apex-pdp | [2024-04-25T14:24:56.079+00:00|INFO|RequestLog|qtp739264372-28] 172.17.0.5 - policyadmin [25/Apr/2024:14:24:56 +0000] "GET /metrics HTTP/1.1" 200 10652 "-" "Prometheus/2.51.2"
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:27.754748186Z level=info msg="Executing migration" id="add unique index user_auth_token.prev_auth_token"
14:24:57 kafka | [2024-04-25 14:22:24,856] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1] Enabling request processing. (kafka.network.SocketServer)
14:24:57 policy-pap | sasl.login.retry.backoff.ms = 100
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:27.755996263Z level=info msg="Migration successfully executed" id="add unique index user_auth_token.prev_auth_token" duration=1.250237ms
14:24:57 policy-db-migrator |
14:24:57 kafka | [2024-04-25 14:22:24,857] INFO [Controller id=1] List of topics to be deleted: (kafka.controller.KafkaController)
14:24:57 policy-pap | sasl.mechanism = GSSAPI
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:27.768220878Z level=info msg="Executing migration" id="add index user_auth_token.user_id"
14:24:57 policy-db-migrator | > upgrade 0740-toscarelationshiptype.sql
14:24:57 kafka | [2024-04-25 14:22:24,858] INFO [Controller id=1] List of topics ineligible for deletion: (kafka.controller.KafkaController)
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:27.769904732Z level=info msg="Migration successfully executed" id="add index user_auth_token.user_id" duration=1.683594ms
14:24:57 policy-db-migrator | --------------
14:24:57 policy-pap | sasl.oauthbearer.clock.skew.seconds = 30
14:24:57 kafka | [2024-04-25 14:22:24,858] INFO [Controller id=1] Initializing topic deletion manager (kafka.controller.KafkaController)
14:24:57 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarelationshiptype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCARELATIONSHIPTYPE (name, version))
14:24:57 policy-pap | sasl.oauthbearer.expected.audience = null
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:27.778862783Z level=info msg="Executing migration" id="Add revoked_at to the user auth token"
14:24:57 kafka | [2024-04-25 14:22:24,858] INFO [Topic Deletion Manager 1] Initializing manager with initial deletions: Set(), initial ineligible deletions: HashSet() (kafka.controller.TopicDeletionManager)
14:24:57 policy-pap | sasl.oauthbearer.expected.issuer = null
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:27.784301257Z level=info msg="Migration successfully executed" id="Add revoked_at to the user auth token" duration=5.437954ms
14:24:57 kafka | [2024-04-25 14:22:24,859] INFO [Controller id=1] Sending update metadata request (kafka.controller.KafkaController)
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:27.787742644Z level=info msg="Executing migration" id="add index user_auth_token.revoked_at"
14:24:57 policy-db-migrator | --------------
14:24:57 kafka | [2024-04-25 14:22:24,862] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet(1) for 0 partitions (state.change.logger)
14:24:57 policy-db-migrator |
14:24:57 policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:27.788754888Z level=info msg="Migration successfully executed" id="add index user_auth_token.revoked_at" duration=1.012254ms
14:24:57 kafka | [2024-04-25 14:22:24,862] INFO Awaiting socket connections on 0.0.0.0:29092. (kafka.network.DataPlaneAcceptor)
14:24:57 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:27.792796483Z level=info msg="Executing migration" id="create cache_data table"
14:24:57 policy-db-migrator |
14:24:57 kafka | [2024-04-25 14:22:24,869] INFO Awaiting socket connections on 0.0.0.0:9092.
(kafka.network.DataPlaneAcceptor)
14:24:57 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:27.793644374Z level=info msg="Migration successfully executed" id="create cache_data table" duration=847.141µs
14:24:57 policy-db-migrator | > upgrade 0750-toscarelationshiptypes.sql
14:24:57 kafka | [2024-04-25 14:22:24,869] INFO [ReplicaStateMachine controllerId=1] Initializing replica state (kafka.controller.ZkReplicaStateMachine)
14:24:57 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:27.886386716Z level=info msg="Executing migration" id="add unique index cache_data.cache_key"
14:24:57 policy-db-migrator | --------------
14:24:57 kafka | [2024-04-25 14:22:24,870] INFO [ReplicaStateMachine controllerId=1] Triggering online replica state changes (kafka.controller.ZkReplicaStateMachine)
14:24:57 policy-pap | sasl.oauthbearer.scope.claim.name = scope
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:27.88895844Z level=info msg="Migration successfully executed" id="add unique index cache_data.cache_key" duration=2.570954ms
14:24:57 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarelationshiptypes (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCARELATIONSHIPTYPES (name, version))
14:24:57 kafka | [2024-04-25 14:22:24,872] INFO [ReplicaStateMachine controllerId=1] Triggering offline replica state changes (kafka.controller.ZkReplicaStateMachine)
14:24:57 policy-pap | sasl.oauthbearer.sub.claim.name = sub
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:27.898112135Z level=info msg="Executing migration" id="create short_url table v1"
14:24:57 policy-db-migrator | --------------
14:24:57 kafka | [2024-04-25 14:22:24,872] DEBUG [ReplicaStateMachine controllerId=1] Started replica state machine with initial state -> HashMap() (kafka.controller.ZkReplicaStateMachine)
14:24:57 policy-pap | sasl.oauthbearer.token.endpoint.url = null
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:27.898999887Z level=info msg="Migration successfully executed" id="create short_url table v1" duration=887.922µs
14:24:57 policy-db-migrator |
14:24:57 kafka | [2024-04-25 14:22:24,872] INFO [PartitionStateMachine controllerId=1] Initializing partition state (kafka.controller.ZkPartitionStateMachine)
14:24:57 policy-pap | security.protocol = PLAINTEXT
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:27.904465581Z level=info msg="Executing migration" id="add index short_url.org_id-uid"
14:24:57 policy-db-migrator |
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:27.906075633Z level=info msg="Migration successfully executed" id="add index short_url.org_id-uid" duration=1.609752ms
14:24:57 kafka | [2024-04-25 14:22:24,878] INFO [RequestSendThread controllerId=1] Controller 1 connected to kafka:9092 (id: 1 rack: null) for sending state change requests (kafka.controller.RequestSendThread)
14:24:57 policy-pap | security.providers = null
14:24:57 policy-db-migrator | > upgrade 0760-toscarelationshiptypes_toscarelationshiptype.sql
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:27.910566504Z level=info msg="Executing migration" id="alter table short_url alter column created_by type to bigint"
14:24:57 kafka | [2024-04-25 14:22:24,879] INFO [PartitionStateMachine controllerId=1] Triggering online partition state changes (kafka.controller.ZkPartitionStateMachine)
14:24:57 policy-pap | send.buffer.bytes = 131072
14:24:57 policy-db-migrator | --------------
14:24:57 kafka | [2024-04-25 14:22:24,884] DEBUG [PartitionStateMachine controllerId=1] Started partition state machine with initial state -> HashMap() (kafka.controller.ZkPartitionStateMachine)
14:24:57 policy-pap | session.timeout.ms = 45000
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:27.910627635Z level=info msg="Migration successfully executed" id="alter table short_url alter column created_by type to bigint" duration=63.401µs
14:24:57 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarelationshiptypes_toscarelationshiptype (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCARELATIONSHIPTYPES_TOSCARELATIONSHIPTYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion))
14:24:57 kafka | [2024-04-25 14:22:24,885] INFO [Controller id=1] Ready to serve as the new controller with epoch 1 (kafka.controller.KafkaController)
14:24:57 policy-pap | socket.connection.setup.timeout.max.ms = 30000
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:27.914977504Z level=info msg="Executing migration" id="delete alert_definition table"
14:24:57 policy-db-migrator | --------------
14:24:57 kafka | [2024-04-25 14:22:24,892] INFO Kafka version: 7.6.1-ccs (org.apache.kafka.common.utils.AppInfoParser)
14:24:57 policy-pap | socket.connection.setup.timeout.ms = 10000
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:27.915054145Z level=info msg="Migration successfully executed" id="delete alert_definition table" duration=76.761µs
14:24:57 policy-db-migrator |
14:24:57 kafka | [2024-04-25 14:22:24,892] INFO Kafka commitId: 11e81ad2a49db00b1d2b8c731409cd09e563de67 (org.apache.kafka.common.utils.AppInfoParser)
14:24:57 policy-pap | ssl.cipher.suites = null
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:27.91911313Z level=info msg="Executing migration" id="recreate alert_definition table"
14:24:57 policy-db-migrator |
14:24:57 kafka | [2024-04-25 14:22:24,892] INFO Kafka startTimeMs: 1714054944883 (org.apache.kafka.common.utils.AppInfoParser)
14:24:57 policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:27.92052011Z level=info msg="Migration successfully executed" id="recreate alert_definition table" duration=1.403619ms
14:24:57 policy-db-migrator | > upgrade 0770-toscarequirement.sql
14:24:57 kafka | [2024-04-25 14:22:24,894] INFO [Controller id=1] Partitions undergoing preferred replica election: (kafka.controller.KafkaController)
14:24:57 policy-pap | ssl.endpoint.identification.algorithm = https
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:27.925827042Z level=info msg="Executing migration" id="add index in alert_definition on org_id and title columns"
14:24:57 policy-db-migrator | --------------
14:24:57 kafka | [2024-04-25 14:22:24,894] INFO [KafkaServer id=1] started (kafka.server.KafkaServer)
14:24:57 policy-pap | ssl.engine.factory.class = null
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:27.927392072Z level=info msg="Migration successfully executed" id="add index in alert_definition on org_id and title columns" duration=1.5656ms
14:24:57 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarequirement (CAPABILITY VARCHAR(255) NULL, `DESCRIPTION` VARCHAR(255) NULL, NODE VARCHAR(255) NULL, RELATIONSHIP VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, type_name VARCHAR(255) NULL, type_version VARCHAR(255) NULL, PRIMARY KEY PK_TOSCAREQUIREMENT (name, version))
14:24:57 kafka | [2024-04-25 14:22:24,894] INFO [Controller id=1] Partitions that completed preferred replica election: (kafka.controller.KafkaController)
14:24:57 policy-pap | ssl.key.password = null
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:27.937281717Z level=info msg="Executing migration" id="add index in alert_definition on org_id and uid columns"
14:24:57 policy-db-migrator | --------------
14:24:57 kafka | [2024-04-25 14:22:24,894] INFO [Controller id=1] Skipping preferred replica election for partitions due to topic deletion: (kafka.controller.KafkaController)
14:24:57 policy-pap | ssl.keymanager.algorithm = SunX509
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:27.938321371Z level=info msg="Migration successfully executed" id="add index in alert_definition on org_id and uid columns" duration=1.039174ms
14:24:57 policy-db-migrator | 
14:24:57 kafka | [2024-04-25 14:22:24,895] INFO [Controller id=1] Resuming preferred replica election for partitions: (kafka.controller.KafkaController)
14:24:57 policy-pap | ssl.keystore.certificate.chain = null
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:27.943863246Z level=info msg="Executing migration" id="alter alert_definition table data column to mediumtext in mysql"
14:24:57 policy-db-migrator | 
14:24:57 kafka | [2024-04-25 14:22:24,896] INFO [Controller id=1] Starting replica leader election (PREFERRED) for partitions triggered by ZkTriggered (kafka.controller.KafkaController)
14:24:57 policy-pap | ssl.keystore.key = null
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:27.943954908Z level=info msg="Migration successfully executed" id="alter alert_definition table data column to mediumtext in mysql" duration=92.052µs
14:24:57 policy-db-migrator | > upgrade 0780-toscarequirements.sql
14:24:57 kafka | [2024-04-25 14:22:24,916] INFO [Controller id=1] Starting the controller scheduler (kafka.controller.KafkaController)
14:24:57 policy-pap | ssl.keystore.location = null
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:27.948000694Z level=info msg="Executing migration" id="drop index in alert_definition on org_id and title columns"
14:24:57 policy-db-migrator | --------------
14:24:57 kafka | [2024-04-25 14:22:24,926] INFO [zk-broker-1-to-controller-forwarding-channel-manager]: Recorded new controller, from now on will use node kafka:9092 (id: 1 rack: null) (kafka.server.BrokerToControllerRequestThread)
14:24:57 policy-pap | ssl.keystore.password = null
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:27.949504044Z level=info msg="Migration successfully executed" id="drop index in alert_definition on org_id and title columns" duration=1.50572ms
14:24:57 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarequirements (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCAREQUIREMENTS (name, version))
14:24:57 kafka | [2024-04-25 14:22:24,933] TRACE [Controller id=1 epoch=1] Received response UpdateMetadataResponseData(errorCode=0) for request UPDATE_METADATA with correlation id 0 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger)
14:24:57 policy-pap | ssl.keystore.type = JKS
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:27.953831963Z level=info msg="Executing migration" id="drop index in alert_definition on org_id and uid columns"
14:24:57 policy-db-migrator | --------------
14:24:57 kafka | [2024-04-25 14:22:24,981] INFO [zk-broker-1-to-controller-alter-partition-channel-manager]: Recorded new controller, from now on will use node kafka:9092 (id: 1 rack: null) (kafka.server.BrokerToControllerRequestThread)
14:24:57 policy-pap | ssl.protocol = TLSv1.3
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:27.954751505Z level=info msg="Migration successfully executed" id="drop index in alert_definition on org_id and uid columns" duration=919.542µs
14:24:57 policy-db-migrator | 
14:24:57 kafka | [2024-04-25 14:22:29,917] INFO [Controller id=1] Processing automatic preferred replica leader election (kafka.controller.KafkaController)
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:27.959093273Z level=info msg="Executing migration" id="add unique index in alert_definition on org_id and title columns"
14:24:57 policy-pap | ssl.provider = null
14:24:57 policy-db-migrator | 
14:24:57 kafka | [2024-04-25 14:22:29,918] TRACE [Controller id=1] Checking need to trigger auto leader balancing (kafka.controller.KafkaController)
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:27.960143288Z level=info msg="Migration successfully executed" id="add unique index in alert_definition on org_id and title columns" duration=1.014645ms
14:24:57 policy-pap | ssl.secure.random.implementation = null
14:24:57 policy-db-migrator | > upgrade 0790-toscarequirements_toscarequirement.sql
14:24:57 kafka | [2024-04-25 14:22:58,591] INFO Creating topic policy-pdp-pap with configuration {} and initial partition assignment HashMap(0 -> ArrayBuffer(1)) (kafka.zk.AdminZkClient)
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:27.96398401Z level=info msg="Executing migration" id="add unique index in alert_definition on org_id and uid columns"
14:24:57 policy-pap | ssl.trustmanager.algorithm = PKIX
14:24:57 policy-db-migrator | --------------
14:24:57 kafka | [2024-04-25 14:22:58,598] INFO Creating topic __consumer_offsets with configuration {compression.type=producer, cleanup.policy=compact, segment.bytes=104857600} and initial partition assignment HashMap(0 -> ArrayBuffer(1), 1 -> ArrayBuffer(1), 2 -> ArrayBuffer(1), 3 -> ArrayBuffer(1), 4 -> ArrayBuffer(1), 5 -> ArrayBuffer(1), 6 -> ArrayBuffer(1), 7 -> ArrayBuffer(1), 8 -> ArrayBuffer(1), 9 -> ArrayBuffer(1), 10 -> ArrayBuffer(1), 11 -> ArrayBuffer(1), 12 -> ArrayBuffer(1), 13 -> ArrayBuffer(1), 14 -> ArrayBuffer(1), 15 -> ArrayBuffer(1), 16 -> ArrayBuffer(1), 17 -> ArrayBuffer(1), 18 -> ArrayBuffer(1), 19 -> ArrayBuffer(1), 20 -> ArrayBuffer(1), 21 -> ArrayBuffer(1), 22 -> ArrayBuffer(1), 23 -> ArrayBuffer(1), 24 -> ArrayBuffer(1), 25 -> ArrayBuffer(1), 26 -> ArrayBuffer(1), 27 -> ArrayBuffer(1), 28 -> ArrayBuffer(1), 29 -> ArrayBuffer(1), 30 -> ArrayBuffer(1), 31 -> ArrayBuffer(1), 32 -> ArrayBuffer(1), 33 -> ArrayBuffer(1), 34 -> ArrayBuffer(1), 35 -> ArrayBuffer(1), 36 -> ArrayBuffer(1), 37 -> ArrayBuffer(1), 38 -> ArrayBuffer(1), 39 -> ArrayBuffer(1), 40 -> ArrayBuffer(1), 41 -> ArrayBuffer(1), 42 -> ArrayBuffer(1), 43 -> ArrayBuffer(1), 44 -> ArrayBuffer(1), 45 -> ArrayBuffer(1), 46 -> ArrayBuffer(1), 47 -> ArrayBuffer(1), 48 -> ArrayBuffer(1), 49 -> ArrayBuffer(1)) (kafka.zk.AdminZkClient)
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:27.965032375Z level=info msg="Migration successfully executed" id="add unique index in alert_definition on org_id and uid columns" duration=1.045985ms
14:24:57 policy-pap | ssl.truststore.certificates = null
14:24:57 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarequirements_toscarequirement (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCAREQUIREMENTS_TOSCAREQUIREMENT (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion))
14:24:57 kafka | [2024-04-25 14:22:58,639] DEBUG [Controller id=1] There is no producerId block yet (Zk path version 0), creating the first block (kafka.controller.KafkaController)
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:27.971138258Z level=info msg="Executing migration" id="Add column paused in alert_definition"
14:24:57 policy-pap | ssl.truststore.location = null
14:24:57 policy-db-migrator | --------------
14:24:57 kafka | [2024-04-25 14:22:58,648] INFO [Controller id=1] Acquired new producerId block ProducerIdsBlock(assignedBrokerId=1, firstProducerId=0, size=1000) by writing to Zk with path version 1 (kafka.controller.KafkaController)
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:27.978145233Z level=info msg="Migration successfully executed" id="Add column paused in alert_definition" duration=7.006975ms
14:24:57 policy-pap | ssl.truststore.password = null
14:24:57 policy-db-migrator | 
14:24:57 kafka | [2024-04-25 14:22:58,665] INFO [Controller id=1] New topics: [Set(policy-pdp-pap)], deleted topics: [HashSet()], new partition replica assignment [Set(TopicIdReplicaAssignment(policy-pdp-pap,Some(UDjaTEkFR6iaxHll2hUQXA),Map(policy-pdp-pap-0 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=))))] (kafka.controller.KafkaController)
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:27.982664914Z level=info msg="Executing migration" id="drop alert_definition table"
14:24:57 policy-pap | ssl.truststore.type = JKS
14:24:57 policy-db-migrator | 
14:24:57 kafka | [2024-04-25 14:22:58,666] INFO [Controller id=1] New partition creation callback for policy-pdp-pap-0 (kafka.controller.KafkaController)
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:27.983350413Z level=info msg="Migration successfully executed" id="drop alert_definition table" duration=683.259µs
14:24:57 policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
14:24:57 policy-db-migrator | > upgrade 0800-toscaservicetemplate.sql
14:24:57 kafka | [2024-04-25 14:22:58,667] INFO [Controller id=1 epoch=1] Changed partition policy-pdp-pap-0 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:27.98676158Z level=info msg="Executing migration" id="delete alert_definition_version table"
14:24:57 policy-pap | 
14:24:57 policy-db-migrator | --------------
14:24:57 kafka | [2024-04-25 14:22:58,668] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger)
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:27.986836971Z level=info msg="Migration successfully executed" id="delete alert_definition_version table" duration=75.641µs
14:24:57 policy-pap | [2024-04-25T14:22:58.104+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1
14:24:57 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscaservicetemplate (`DESCRIPTION` VARCHAR(255) NULL, TOSCADEFINITIONSVERSION VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, capabilityTypesVersion VARCHAR(20) NULL, capabilityTypesName VARCHAR(120) NULL, dataTypesName VARCHAR(120) NULL, dataTypesVersion VARCHAR(20) NULL, nodeTypesVersion VARCHAR(20) NULL, nodeTypesName VARCHAR(120) NULL, policyTypesName VARCHAR(120) NULL, policyTypesVersion VARCHAR(20) NULL, relationshipTypesVersion VARCHAR(20) NULL, relationshipTypesName VARCHAR(120) NULL, topologyTemplateLocalName VARCHAR(120) NULL, topologyTemplateParentKeyName VARCHAR(120) NULL, topologyTemplateParentKeyVersion VARCHAR(15) NULL, topologyTemplateParentLocalName VARCHAR(120) NULL, PRIMARY KEY PK_TOSCASERVICETEMPLATE (name, version))
14:24:57 kafka | [2024-04-25 14:22:58,671] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition policy-pdp-pap-0 from NonExistentReplica to NewReplica (state.change.logger)
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:27.989676839Z level=info msg="Executing migration" id="recreate alert_definition_version table"
14:24:57 policy-pap | [2024-04-25T14:22:58.104+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5
14:24:57 policy-db-migrator | --------------
14:24:57 kafka | [2024-04-25 14:22:58,671] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger)
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:27.99115574Z level=info msg="Migration successfully executed" id="recreate alert_definition_version table" duration=1.478431ms
14:24:57 policy-pap | [2024-04-25T14:22:58.104+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1714054978104
14:24:57 policy-db-migrator | 
14:24:57 kafka | [2024-04-25 14:22:58,704] INFO [Controller id=1 epoch=1] Changed partition policy-pdp-pap-0 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:27.996104847Z level=info msg="Executing migration" id="add index in alert_definition_version table on alert_definition_id and version columns"
14:24:57 policy-pap | [2024-04-25T14:22:58.104+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-b957469a-2969-4bff-8555-1bfe3e4d4da0-3, groupId=b957469a-2969-4bff-8555-1bfe3e4d4da0] Subscribed to topic(s): policy-pdp-pap
14:24:57 policy-db-migrator | 
14:24:57 kafka | [2024-04-25 14:22:58,706] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='policy-pdp-pap', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition policy-pdp-pap-0 (state.change.logger)
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:27.997275333Z level=info msg="Migration successfully executed" id="add index in alert_definition_version table on alert_definition_id and version columns" duration=1.170886ms
14:24:57 policy-pap | [2024-04-25T14:22:58.105+00:00|INFO|ServiceManager|main] Policy PAP starting Heartbeat Message Dispatcher
14:24:57 policy-db-migrator | > upgrade 0810-toscatopologytemplate.sql
14:24:57 kafka | [2024-04-25 14:22:58,707] INFO [Controller id=1 epoch=1] Sending LeaderAndIsr request to broker 1 with 1 become-leader and 0 become-follower partitions (state.change.logger)
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:28.000823551Z level=info msg="Executing migration" id="add index in alert_definition_version table on alert_definition_uid and version columns"
14:24:57 policy-pap | [2024-04-25T14:22:58.105+00:00|INFO|TopicBase|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=policy-pap, consumerInstance=9514907f-d028-45fc-9240-ae8706efbfe3, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-heartbeat, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: registering org.onap.policy.common.endpoints.listeners.MessageTypeDispatcher@54b35809
14:24:57 policy-db-migrator | --------------
14:24:57 kafka | [2024-04-25 14:22:58,709] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet(1) for 1 partitions (state.change.logger)
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:28.001849855Z level=info msg="Migration successfully executed" id="add index in alert_definition_version table on alert_definition_uid and version columns" duration=1.026104ms
14:24:57 policy-pap | [2024-04-25T14:22:58.105+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=policy-pap, consumerInstance=9514907f-d028-45fc-9240-ae8706efbfe3, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-heartbeat, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting
14:24:57 kafka | [2024-04-25 14:22:58,710] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition policy-pdp-pap-0 from NewReplica to OnlineReplica (state.change.logger)
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:28.008656347Z level=info msg="Executing migration" id="alter alert_definition_version table data column to mediumtext in mysql"
14:24:57 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscatopologytemplate (`description` VARCHAR(255) NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, nodeTemplatessVersion VARCHAR(20) NULL, nodeTemplatesName VARCHAR(120) NULL, policyVersion VARCHAR(20) NULL, policyName VARCHAR(120) NULL, PRIMARY KEY PK_TOSCATOPOLOGYTEMPLATE (parentLocalName, localName, parentKeyVersion, parentKeyName))
14:24:57 policy-pap | [2024-04-25T14:22:58.105+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: 
14:24:57 kafka | [2024-04-25 14:22:58,710] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger)
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:28.00886936Z level=info msg="Migration successfully executed" id="alter alert_definition_version table data column to mediumtext in mysql" duration=212.973µs
14:24:57 policy-db-migrator | --------------
14:24:57 policy-pap | allow.auto.create.topics = true
14:24:57 kafka | [2024-04-25 14:22:58,714] INFO [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 for 1 partitions (state.change.logger)
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:28.015941287Z level=info msg="Executing migration" id="drop alert_definition_version table"
14:24:57 policy-db-migrator | 
14:24:57 policy-pap | auto.commit.interval.ms = 5000
14:24:57 kafka | [2024-04-25 14:22:58,715] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='policy-pdp-pap', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:28.017111793Z level=info msg="Migration successfully executed" id="drop alert_definition_version table" duration=1.169926ms
14:24:57 policy-db-migrator | 
14:24:57 policy-pap | auto.include.jmx.reporter = true
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:28.02060381Z level=info msg="Executing migration" id="create alert_instance table"
14:24:57 kafka | [2024-04-25 14:22:58,727] INFO [Controller id=1] New topics: [Set(__consumer_offsets)], deleted topics: [HashSet()], new partition replica assignment [Set(TopicIdReplicaAssignment(__consumer_offsets,Some(Z-ljZKLXR-y1QhXAaAKdbg),HashMap(__consumer_offsets-22 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-30 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-25 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-35 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-37 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-38 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-13 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-8 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-21 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-4 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-27 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-7 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-9 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-46 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-41 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-33 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-23 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-49 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-47 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-16 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-28 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-31 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-36 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-42 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-3 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-18 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-15 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-24 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-17 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-48 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-19 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-11 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-2 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-43 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-6 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-14 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-20 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-0 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-44 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-39 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-12 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-45 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-1 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-5 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-26 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-29 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-34 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-10 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-32 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-40 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=))))] (kafka.controller.KafkaController)
14:24:57 policy-db-migrator | > upgrade 0820-toscatrigger.sql
14:24:57 policy-pap | auto.offset.reset = latest
14:24:57 kafka | [2024-04-25 14:22:58,728] INFO [Controller id=1] New partition creation callback for __consumer_offsets-22,__consumer_offsets-30,__consumer_offsets-25,__consumer_offsets-35,__consumer_offsets-37,__consumer_offsets-38,__consumer_offsets-13,__consumer_offsets-8,__consumer_offsets-21,__consumer_offsets-4,__consumer_offsets-27,__consumer_offsets-7,__consumer_offsets-9,__consumer_offsets-46,__consumer_offsets-41,__consumer_offsets-33,__consumer_offsets-23,__consumer_offsets-49,__consumer_offsets-47,__consumer_offsets-16,__consumer_offsets-28,__consumer_offsets-31,__consumer_offsets-36,__consumer_offsets-42,__consumer_offsets-3,__consumer_offsets-18,__consumer_offsets-15,__consumer_offsets-24,__consumer_offsets-17,__consumer_offsets-48,__consumer_offsets-19,__consumer_offsets-11,__consumer_offsets-2,__consumer_offsets-43,__consumer_offsets-6,__consumer_offsets-14,__consumer_offsets-20,__consumer_offsets-0,__consumer_offsets-44,__consumer_offsets-39,__consumer_offsets-12,__consumer_offsets-45,__consumer_offsets-1,__consumer_offsets-5,__consumer_offsets-26,__consumer_offsets-29,__consumer_offsets-34,__consumer_offsets-10,__consumer_offsets-32,__consumer_offsets-40 (kafka.controller.KafkaController)
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:28.021584704Z level=info msg="Migration successfully executed" id="create alert_instance table" duration=978.954µs
14:24:57 policy-db-migrator | --------------
14:24:57 policy-pap | bootstrap.servers = [kafka:9092]
14:24:57 kafka | [2024-04-25 14:22:58,729] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition policy-pdp-pap-0 (state.change.logger)
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:28.025659419Z level=info msg="Executing migration" id="add index in alert_instance table on def_org_id, def_uid and current_state columns"
14:24:57 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscatrigger (ACTION VARCHAR(255) NULL, toscaCondition LONGBLOB DEFAULT NULL, toscaConstraint LONGBLOB DEFAULT NULL, `DESCRIPTION` VARCHAR(255) NULL, EVALUATIONS INT DEFAULT NULL, EVENTTYPE VARCHAR(255) NULL, METHOD VARCHAR(255) NULL, `PERIOD` LONGBLOB DEFAULT NULL, SCHEDULE LONGBLOB DEFAULT NULL, TARGETFILTER LONGBLOB DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_TOSCATRIGGER (parentLocalName, localName, parentKeyVersion, parentKeyName))
14:24:57 policy-pap | check.crcs = true
14:24:57 kafka | [2024-04-25 14:22:58,729] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-22 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:28.026696743Z level=info msg="Migration successfully executed" id="add index in alert_instance table on def_org_id, def_uid and current_state columns" duration=1.036304ms
14:24:57 policy-db-migrator | --------------
14:24:57 policy-pap | client.dns.lookup = use_all_dns_ips
14:24:57 kafka | [2024-04-25 14:22:58,732] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-30 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:28.029968408Z level=info msg="Executing migration" id="add index in alert_instance table on def_org_id, current_state columns"
14:24:57 policy-db-migrator | 
14:24:57 policy-pap | client.id = consumer-policy-pap-4
14:24:57 kafka | [2024-04-25 14:22:58,732] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-25 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:28.031017832Z level=info msg="Migration successfully executed" id="add index in alert_instance table on def_org_id, current_state columns" duration=1.048814ms
14:24:57 policy-db-migrator | 
14:24:57 policy-pap | client.rack = 
14:24:57 kafka | [2024-04-25 14:22:58,732] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-35 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:28.034948846Z level=info msg="Executing migration" id="add column current_state_end to alert_instance"
14:24:57 policy-db-migrator | > upgrade 0830-FK_ToscaNodeTemplate_capabilitiesName.sql
14:24:57 policy-pap | connections.max.idle.ms = 540000
14:24:57 kafka | [2024-04-25 14:22:58,732] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-37 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:28.040855166Z level=info msg="Migration successfully executed" id="add column current_state_end to alert_instance" duration=5.90632ms
14:24:57 policy-db-migrator | --------------
14:24:57 policy-pap | default.api.timeout.ms = 60000
14:24:57 kafka | [2024-04-25 14:22:58,732] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-38 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:28.047251002Z level=info msg="Executing migration" id="remove index def_org_id, def_uid, current_state on alert_instance"
14:24:57 policy-db-migrator | CREATE INDEX FK_ToscaNodeTemplate_capabilitiesName ON toscanodetemplate(capabilitiesName, capabilitiesVersion)
14:24:57 policy-pap | enable.auto.commit = true
14:24:57 kafka | [2024-04-25 14:22:58,732] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-13 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:28.048673521Z level=info msg="Migration successfully executed" id="remove index def_org_id, def_uid, current_state on alert_instance" duration=1.422349ms
14:24:57 policy-db-migrator | -------------- 14:24:57 policy-pap | exclude.internal.topics = true 14:24:57 kafka | [2024-04-25 14:22:58,732] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-8 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:28.056421827Z level=info msg="Executing migration" id="remove index def_org_id, current_state on alert_instance" 14:24:57 policy-db-migrator | 14:24:57 policy-pap | fetch.max.bytes = 52428800 14:24:57 kafka | [2024-04-25 14:22:58,733] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-21 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:28.05733569Z level=info msg="Migration successfully executed" id="remove index def_org_id, current_state on alert_instance" duration=916.303µs 14:24:57 policy-db-migrator | 14:24:57 policy-pap | fetch.max.wait.ms = 500 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:28.062027123Z level=info msg="Executing migration" id="rename def_org_id to rule_org_id in alert_instance" 14:24:57 kafka | [2024-04-25 14:22:58,733] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-4 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 14:24:57 policy-db-migrator | > upgrade 0840-FK_ToscaNodeTemplate_requirementsName.sql 14:24:57 policy-pap | fetch.min.bytes = 1 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:28.091286801Z level=info msg="Migration successfully executed" id="rename def_org_id to rule_org_id in alert_instance" duration=29.259398ms 14:24:57 kafka | [2024-04-25 14:22:58,733] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-27 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 14:24:57 policy-db-migrator | -------------- 14:24:57 policy-pap 
| group.id = policy-pap 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:28.095510808Z level=info msg="Executing migration" id="rename def_uid to rule_uid in alert_instance" 14:24:57 kafka | [2024-04-25 14:22:58,733] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-7 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 14:24:57 policy-db-migrator | CREATE INDEX FK_ToscaNodeTemplate_requirementsName ON toscanodetemplate(requirementsName, requirementsVersion) 14:24:57 policy-pap | group.instance.id = null 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:28.121830377Z level=info msg="Migration successfully executed" id="rename def_uid to rule_uid in alert_instance" duration=26.317579ms 14:24:57 kafka | [2024-04-25 14:22:58,733] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-9 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 14:24:57 policy-db-migrator | -------------- 14:24:57 policy-pap | heartbeat.interval.ms = 3000 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:28.125681029Z level=info msg="Executing migration" id="add index rule_org_id, rule_uid, current_state on alert_instance" 14:24:57 kafka | [2024-04-25 14:22:58,732] INFO [ReplicaFetcherManager on broker 1] Removed fetcher for partitions Set(policy-pdp-pap-0) (kafka.server.ReplicaFetcherManager) 14:24:57 policy-db-migrator | 14:24:57 policy-pap | interceptor.classes = [] 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:28.126355648Z level=info msg="Migration successfully executed" id="add index rule_org_id, rule_uid, current_state on alert_instance" duration=673.699µs 14:24:57 kafka | [2024-04-25 14:22:58,733] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-46 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 14:24:57 policy-db-migrator | 14:24:57 policy-pap | internal.leave.group.on.close = true 
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:28.130073159Z level=info msg="Executing migration" id="add index rule_org_id, current_state on alert_instance" 14:24:57 kafka | [2024-04-25 14:22:58,733] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-41 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 14:24:57 policy-db-migrator | > upgrade 0850-FK_ToscaNodeType_requirementsName.sql 14:24:57 policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:28.130803018Z level=info msg="Migration successfully executed" id="add index rule_org_id, current_state on alert_instance" duration=728.629µs 14:24:57 kafka | [2024-04-25 14:22:58,733] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-33 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 14:24:57 policy-db-migrator | -------------- 14:24:57 policy-pap | isolation.level = read_uncommitted 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:28.136920581Z level=info msg="Executing migration" id="add current_reason column related to current_state" 14:24:57 kafka | [2024-04-25 14:22:58,733] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-23 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 14:24:57 policy-db-migrator | CREATE INDEX FK_ToscaNodeType_requirementsName ON toscanodetype(requirementsName, requirementsVersion) 14:24:57 policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:28.142546098Z level=info msg="Migration successfully executed" id="add current_reason column related to current_state" duration=5.625167ms 14:24:57 kafka | [2024-04-25 14:22:58,733] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-49 state from NonExistentPartition to 
NewPartition with assigned replicas 1 (state.change.logger) 14:24:57 policy-db-migrator | -------------- 14:24:57 policy-pap | max.partition.fetch.bytes = 1048576 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:28.146042685Z level=info msg="Executing migration" id="add result_fingerprint column to alert_instance" 14:24:57 kafka | [2024-04-25 14:22:58,733] INFO [Broker id=1] Stopped fetchers as part of LeaderAndIsr request correlationId 1 from controller 1 epoch 1 as part of the become-leader transition for 1 partitions (state.change.logger) 14:24:57 policy-db-migrator | 14:24:57 policy-pap | max.poll.interval.ms = 300000 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:28.152510344Z level=info msg="Migration successfully executed" id="add result_fingerprint column to alert_instance" duration=6.467319ms 14:24:57 kafka | [2024-04-25 14:22:58,735] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-47 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 14:24:57 policy-db-migrator | 14:24:57 policy-pap | max.poll.records = 500 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:28.159099093Z level=info msg="Executing migration" id="create alert_rule table" 14:24:57 kafka | [2024-04-25 14:22:58,735] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-16 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 14:24:57 policy-db-migrator | > upgrade 0860-FK_ToscaServiceTemplate_capabilityTypesName.sql 14:24:57 policy-pap | metadata.max.age.ms = 300000 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:28.159978914Z level=info msg="Migration successfully executed" id="create alert_rule table" duration=878.841µs 14:24:57 kafka | [2024-04-25 14:22:58,735] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-28 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 14:24:57 policy-db-migrator | 
-------------- 14:24:57 policy-pap | metric.reporters = [] 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:28.164842201Z level=info msg="Executing migration" id="add index in alert_rule on org_id and title columns" 14:24:57 kafka | [2024-04-25 14:22:58,735] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-31 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 14:24:57 policy-db-migrator | CREATE INDEX FK_ToscaServiceTemplate_capabilityTypesName ON toscaservicetemplate(capabilityTypesName, capabilityTypesVersion) 14:24:57 policy-pap | metrics.num.samples = 2 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:28.165784463Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id and title columns" duration=936.962µs 14:24:57 kafka | [2024-04-25 14:22:58,736] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-36 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 14:24:57 policy-db-migrator | -------------- 14:24:57 policy-pap | metrics.recording.level = INFO 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:28.170937824Z level=info msg="Executing migration" id="add index in alert_rule on org_id and uid columns" 14:24:57 kafka | [2024-04-25 14:22:58,736] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-42 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 14:24:57 policy-db-migrator | 14:24:57 policy-pap | metrics.sample.window.ms = 30000 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:28.172630497Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id and uid columns" duration=1.689994ms 14:24:57 kafka | [2024-04-25 14:22:58,736] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-3 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 14:24:57 
policy-db-migrator | 14:24:57 policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:28.208038578Z level=info msg="Executing migration" id="add index in alert_rule on org_id, namespace_uid, group_uid columns" 14:24:57 kafka | [2024-04-25 14:22:58,738] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-18 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 14:24:57 policy-db-migrator | > upgrade 0870-FK_ToscaServiceTemplate_dataTypesName.sql 14:24:57 policy-pap | receive.buffer.bytes = 65536 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:28.209934734Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, namespace_uid, group_uid columns" duration=1.893896ms 14:24:57 kafka | [2024-04-25 14:22:58,738] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-15 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 14:24:57 policy-db-migrator | -------------- 14:24:57 policy-pap | reconnect.backoff.max.ms = 1000 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:28.215590421Z level=info msg="Executing migration" id="alter alert_rule table data column to mediumtext in mysql" 14:24:57 kafka | [2024-04-25 14:22:58,738] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-24 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 14:24:57 policy-db-migrator | CREATE INDEX FK_ToscaServiceTemplate_dataTypesName ON toscaservicetemplate(dataTypesName, dataTypesVersion) 14:24:57 policy-pap | reconnect.backoff.ms = 50 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:28.215657502Z level=info msg="Migration successfully executed" id="alter alert_rule table data column to mediumtext in mysql" 
duration=67.661µs 14:24:57 kafka | [2024-04-25 14:22:58,738] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-17 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 14:24:57 policy-db-migrator | -------------- 14:24:57 policy-pap | request.timeout.ms = 30000 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:28.222461454Z level=info msg="Executing migration" id="add column for to alert_rule" 14:24:57 kafka | [2024-04-25 14:22:58,738] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-48 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 14:24:57 policy-db-migrator | 14:24:57 policy-pap | retry.backoff.ms = 100 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:28.230732686Z level=info msg="Migration successfully executed" id="add column for to alert_rule" duration=8.275432ms 14:24:57 kafka | [2024-04-25 14:22:58,738] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-19 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 14:24:57 policy-db-migrator | 14:24:57 policy-pap | sasl.client.callback.handler.class = null 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:28.235485361Z level=info msg="Executing migration" id="add column annotations to alert_rule" 14:24:57 kafka | [2024-04-25 14:22:58,738] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-11 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 14:24:57 policy-db-migrator | > upgrade 0880-FK_ToscaServiceTemplate_nodeTypesName.sql 14:24:57 policy-pap | sasl.jaas.config = null 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:28.239705409Z level=info msg="Migration successfully executed" id="add column annotations to alert_rule" duration=4.219368ms 14:24:57 kafka | [2024-04-25 14:22:58,738] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-2 
state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 14:24:57 policy-db-migrator | -------------- 14:24:57 policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:28.242954933Z level=info msg="Executing migration" id="add column labels to alert_rule" 14:24:57 kafka | [2024-04-25 14:22:58,738] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-43 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 14:24:57 policy-db-migrator | CREATE INDEX FK_ToscaServiceTemplate_nodeTypesName ON toscaservicetemplate(nodeTypesName, nodeTypesVersion) 14:24:57 policy-pap | sasl.kerberos.min.time.before.relogin = 60000 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:28.252974999Z level=info msg="Migration successfully executed" id="add column labels to alert_rule" duration=10.013335ms 14:24:57 kafka | [2024-04-25 14:22:58,738] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-6 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 14:24:57 policy-db-migrator | -------------- 14:24:57 policy-pap | sasl.kerberos.service.name = null 14:24:57 policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 14:24:57 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 14:24:57 kafka | [2024-04-25 14:22:58,738] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-14 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 14:24:57 policy-db-migrator | 14:24:57 policy-pap | sasl.login.callback.handler.class = null 14:24:57 policy-pap | sasl.login.class = null 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:28.261503744Z level=info msg="Executing migration" id="remove unique index from alert_rule on org_id, title columns" 14:24:57 kafka | [2024-04-25 14:22:58,739] INFO [Controller id=1 epoch=1] Changed partition 
__consumer_offsets-20 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 14:24:57 policy-db-migrator | 14:24:57 policy-pap | sasl.login.connect.timeout.ms = null 14:24:57 policy-pap | sasl.login.read.timeout.ms = null 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:28.262716351Z level=info msg="Migration successfully executed" id="remove unique index from alert_rule on org_id, title columns" duration=1.209787ms 14:24:57 kafka | [2024-04-25 14:22:58,739] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-0 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 14:24:57 policy-db-migrator | > upgrade 0890-FK_ToscaServiceTemplate_policyTypesName.sql 14:24:57 policy-pap | sasl.login.refresh.buffer.seconds = 300 14:24:57 policy-pap | sasl.login.refresh.min.period.seconds = 60 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:28.309485178Z level=info msg="Executing migration" id="add index in alert_rule on org_id, namespase_uid and title columns" 14:24:57 kafka | [2024-04-25 14:22:58,739] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-44 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 14:24:57 policy-db-migrator | -------------- 14:24:57 policy-pap | sasl.login.refresh.window.factor = 0.8 14:24:57 policy-pap | sasl.login.refresh.window.jitter = 0.05 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:28.311085569Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, namespase_uid and title columns" duration=1.600381ms 14:24:57 kafka | [2024-04-25 14:22:58,739] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-39 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 14:24:57 policy-db-migrator | CREATE INDEX FK_ToscaServiceTemplate_policyTypesName ON toscaservicetemplate(policyTypesName, 
policyTypesVersion) 14:24:57 policy-pap | sasl.login.retry.backoff.max.ms = 10000 14:24:57 policy-pap | sasl.login.retry.backoff.ms = 100 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:28.315509589Z level=info msg="Executing migration" id="add dashboard_uid column to alert_rule" 14:24:57 kafka | [2024-04-25 14:22:58,739] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-12 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 14:24:57 policy-db-migrator | -------------- 14:24:57 policy-pap | sasl.mechanism = GSSAPI 14:24:57 policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:28.32593819Z level=info msg="Migration successfully executed" id="add dashboard_uid column to alert_rule" duration=10.429361ms 14:24:57 kafka | [2024-04-25 14:22:58,739] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-45 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 14:24:57 policy-db-migrator | 14:24:57 policy-pap | sasl.oauthbearer.expected.audience = null 14:24:57 policy-pap | sasl.oauthbearer.expected.issuer = null 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:28.336770668Z level=info msg="Executing migration" id="add panel_id column to alert_rule" 14:24:57 kafka | [2024-04-25 14:22:58,739] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-1 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 14:24:57 policy-db-migrator | 14:24:57 policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 14:24:57 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:28.343809114Z level=info msg="Migration successfully executed" id="add panel_id column to alert_rule" duration=7.038186ms 14:24:57 kafka | [2024-04-25 14:22:58,739] INFO [Controller id=1 epoch=1] Changed 
partition __consumer_offsets-5 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 14:24:57 policy-db-migrator | > upgrade 0900-FK_ToscaServiceTemplate_relationshipTypesName.sql 14:24:57 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 14:24:57 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null 14:24:57 kafka | [2024-04-25 14:22:58,739] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-26 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:28.348935814Z level=info msg="Executing migration" id="add index in alert_rule on org_id, dashboard_uid and panel_id columns" 14:24:57 policy-db-migrator | -------------- 14:24:57 policy-pap | sasl.oauthbearer.scope.claim.name = scope 14:24:57 policy-pap | sasl.oauthbearer.sub.claim.name = sub 14:24:57 kafka | [2024-04-25 14:22:58,739] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-29 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:28.349927837Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, dashboard_uid and panel_id columns" duration=991.793µs 14:24:57 policy-db-migrator | CREATE INDEX FK_ToscaServiceTemplate_relationshipTypesName ON toscaservicetemplate(relationshipTypesName, relationshipTypesVersion) 14:24:57 policy-pap | sasl.oauthbearer.token.endpoint.url = null 14:24:57 policy-pap | security.protocol = PLAINTEXT 14:24:57 kafka | [2024-04-25 14:22:58,739] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-34 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:28.355508323Z level=info msg="Executing migration" id="add rule_group_idx column to alert_rule" 14:24:57 
policy-db-migrator | -------------- 14:24:57 policy-pap | security.providers = null 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:28.362291035Z level=info msg="Migration successfully executed" id="add rule_group_idx column to alert_rule" duration=6.782742ms 14:24:57 kafka | [2024-04-25 14:22:58,740] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-10 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 14:24:57 policy-db-migrator | 14:24:57 policy-pap | send.buffer.bytes = 131072 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:28.369672396Z level=info msg="Executing migration" id="add is_paused column to alert_rule table" 14:24:57 kafka | [2024-04-25 14:22:58,740] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-32 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 14:24:57 policy-db-migrator | 14:24:57 policy-pap | session.timeout.ms = 45000 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:28.376057472Z level=info msg="Migration successfully executed" id="add is_paused column to alert_rule table" duration=6.384226ms 14:24:57 kafka | [2024-04-25 14:22:58,740] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-40 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 14:24:57 policy-db-migrator | > upgrade 0910-FK_ToscaTopologyTemplate_nodeTemplatesName.sql 14:24:57 policy-pap | socket.connection.setup.timeout.max.ms = 30000 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:28.43542156Z level=info msg="Executing migration" id="fix is_paused column for alert_rule table" 14:24:57 kafka | [2024-04-25 14:22:58,740] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger) 14:24:57 policy-db-migrator | -------------- 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:28.435706613Z level=info 
msg="Migration successfully executed" id="fix is_paused column for alert_rule table" duration=286.413µs 14:24:57 kafka | [2024-04-25 14:22:58,743] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-32 from NonExistentReplica to NewReplica (state.change.logger) 14:24:57 policy-db-migrator | CREATE INDEX FK_ToscaTopologyTemplate_nodeTemplatesName ON toscatopologytemplate(nodeTemplatesName, nodeTemplatessVersion) 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:28.441962678Z level=info msg="Executing migration" id="create alert_rule_version table" 14:24:57 kafka | [2024-04-25 14:22:58,743] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-5 from NonExistentReplica to NewReplica (state.change.logger) 14:24:57 policy-db-migrator | -------------- 14:24:57 policy-pap | socket.connection.setup.timeout.ms = 10000 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:28.443930536Z level=info msg="Migration successfully executed" id="create alert_rule_version table" duration=1.963967ms 14:24:57 kafka | [2024-04-25 14:22:58,743] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-44 from NonExistentReplica to NewReplica (state.change.logger) 14:24:57 policy-db-migrator | 14:24:57 policy-pap | ssl.cipher.suites = null 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:28.450675597Z level=info msg="Executing migration" id="add index in alert_rule_version table on rule_org_id, rule_uid and version columns" 14:24:57 kafka | [2024-04-25 14:22:58,743] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-48 from NonExistentReplica to NewReplica (state.change.logger) 14:24:57 policy-db-migrator | 14:24:57 policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:28.451828412Z level=info msg="Migration successfully executed" id="add index in alert_rule_version table 
on rule_org_id, rule_uid and version columns" duration=1.152675ms 14:24:57 kafka | [2024-04-25 14:22:58,743] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-46 from NonExistentReplica to NewReplica (state.change.logger) 14:24:57 policy-db-migrator | > upgrade 0920-FK_ToscaTopologyTemplate_policyName.sql 14:24:57 policy-pap | ssl.endpoint.identification.algorithm = https 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:28.455800107Z level=info msg="Executing migration" id="add index in alert_rule_version table on rule_org_id, rule_namespace_uid and rule_group columns" 14:24:57 kafka | [2024-04-25 14:22:58,743] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-20 from NonExistentReplica to NewReplica (state.change.logger) 14:24:57 policy-db-migrator | -------------- 14:24:57 kafka | [2024-04-25 14:22:58,743] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-43 from NonExistentReplica to NewReplica (state.change.logger) 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:28.456923472Z level=info msg="Migration successfully executed" id="add index in alert_rule_version table on rule_org_id, rule_namespace_uid and rule_group columns" duration=1.123405ms 14:24:57 policy-db-migrator | CREATE INDEX FK_ToscaTopologyTemplate_policyName ON toscatopologytemplate(policyName, policyVersion) 14:24:57 kafka | [2024-04-25 14:22:58,743] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-24 from NonExistentReplica to NewReplica (state.change.logger) 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:28.461750597Z level=info msg="Executing migration" id="alter alert_rule_version table data column to mediumtext in mysql" 14:24:57 policy-db-migrator | -------------- 14:24:57 policy-pap | ssl.engine.factory.class = null 14:24:57 kafka | [2024-04-25 14:22:58,743] TRACE [Controller id=1 epoch=1] Changed state of 
replica 1 for partition __consumer_offsets-6 from NonExistentReplica to NewReplica (state.change.logger) 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:28.462024161Z level=info msg="Migration successfully executed" id="alter alert_rule_version table data column to mediumtext in mysql" duration=273.304µs 14:24:57 policy-pap | ssl.key.password = null 14:24:57 policy-db-migrator | 14:24:57 kafka | [2024-04-25 14:22:58,743] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-18 from NonExistentReplica to NewReplica (state.change.logger) 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:28.470554487Z level=info msg="Executing migration" id="add column for to alert_rule_version" 14:24:57 policy-pap | ssl.keymanager.algorithm = SunX509 14:24:57 policy-db-migrator | 14:24:57 kafka | [2024-04-25 14:22:58,743] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-21 from NonExistentReplica to NewReplica (state.change.logger) 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:28.479436347Z level=info msg="Migration successfully executed" id="add column for to alert_rule_version" duration=8.88255ms 14:24:57 policy-pap | ssl.keystore.certificate.chain = null 14:24:57 kafka | [2024-04-25 14:22:58,744] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-1 from NonExistentReplica to NewReplica (state.change.logger) 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:28.485433369Z level=info msg="Executing migration" id="add column annotations to alert_rule_version" 14:24:57 policy-db-migrator | > upgrade 0940-PdpPolicyStatus_PdpGroup.sql 14:24:57 policy-pap | ssl.keystore.key = null 14:24:57 kafka | [2024-04-25 14:22:58,744] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-14 from NonExistentReplica to NewReplica (state.change.logger) 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:28.491823746Z 
level=info msg="Migration successfully executed" id="add column annotations to alert_rule_version" duration=6.376647ms 14:24:57 policy-db-migrator | -------------- 14:24:57 policy-pap | ssl.keystore.location = null 14:24:57 kafka | [2024-04-25 14:22:58,744] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-34 from NonExistentReplica to NewReplica (state.change.logger) 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:28.494876428Z level=info msg="Executing migration" id="add column labels to alert_rule_version" 14:24:57 policy-db-migrator | CREATE INDEX PdpPolicyStatus_PdpGroup ON pdppolicystatus(PDPGROUP) 14:24:57 policy-pap | ssl.keystore.password = null 14:24:57 kafka | [2024-04-25 14:22:58,744] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-16 from NonExistentReplica to NewReplica (state.change.logger) 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:28.502574782Z level=info msg="Migration successfully executed" id="add column labels to alert_rule_version" duration=7.697834ms 14:24:57 policy-db-migrator | -------------- 14:24:57 policy-pap | ssl.keystore.type = JKS 14:24:57 kafka | [2024-04-25 14:22:58,744] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-29 from NonExistentReplica to NewReplica (state.change.logger) 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:28.508764057Z level=info msg="Executing migration" id="add rule_group_idx column to alert_rule_version" 14:24:57 policy-db-migrator | 14:24:57 policy-pap | ssl.protocol = TLSv1.3 14:24:57 kafka | [2024-04-25 14:22:58,744] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-11 from NonExistentReplica to NewReplica (state.change.logger) 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:28.515256724Z level=info msg="Migration successfully executed" id="add rule_group_idx column to alert_rule_version" 
duration=6.488597ms 14:24:57 policy-db-migrator | 14:24:57 policy-pap | ssl.provider = null 14:24:57 kafka | [2024-04-25 14:22:58,744] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-0 from NonExistentReplica to NewReplica (state.change.logger) 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:28.518877523Z level=info msg="Executing migration" id="add is_paused column to alert_rule_versions table" 14:24:57 policy-db-migrator | > upgrade 0950-TscaServiceTemplatetopologyTemplateParentLocalName.sql 14:24:57 policy-pap | ssl.secure.random.implementation = null 14:24:57 kafka | [2024-04-25 14:22:58,744] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-22 from NonExistentReplica to NewReplica (state.change.logger) 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:28.524992486Z level=info msg="Migration successfully executed" id="add is_paused column to alert_rule_versions table" duration=6.110773ms 14:24:57 policy-db-migrator | -------------- 14:24:57 policy-pap | ssl.trustmanager.algorithm = PKIX 14:24:57 kafka | [2024-04-25 14:22:58,744] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-47 from NonExistentReplica to NewReplica (state.change.logger) 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:28.528626297Z level=info msg="Executing migration" id="fix is_paused column for alert_rule_version table" 14:24:57 policy-db-migrator | CREATE INDEX TscaServiceTemplatetopologyTemplateParentLocalName ON toscaservicetemplate(topologyTemplateParentLocalName, topologyTemplateLocalName, topologyTemplateParentKeyVersion, topologyTemplateParentKeyName) 14:24:57 policy-pap | ssl.truststore.certificates = null 14:24:57 kafka | [2024-04-25 14:22:58,744] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-36 from NonExistentReplica to NewReplica (state.change.logger) 14:24:57 grafana | logger=migrator 
t=2024-04-25T14:22:28.528686308Z level=info msg="Migration successfully executed" id="fix is_paused column for alert_rule_version table" duration=60.161µs 14:24:57 policy-db-migrator | -------------- 14:24:57 policy-pap | ssl.truststore.location = null 14:24:57 kafka | [2024-04-25 14:22:58,744] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-28 from NonExistentReplica to NewReplica (state.change.logger) 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:28.533693226Z level=info msg="Executing migration" id=create_alert_configuration_table 14:24:57 policy-db-migrator | 14:24:57 policy-pap | ssl.truststore.password = null 14:24:57 kafka | [2024-04-25 14:22:58,744] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-42 from NonExistentReplica to NewReplica (state.change.logger) 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:28.534879412Z level=info msg="Migration successfully executed" id=create_alert_configuration_table duration=1.182366ms 14:24:57 policy-db-migrator | 14:24:57 policy-pap | ssl.truststore.type = JKS 14:24:57 kafka | [2024-04-25 14:22:58,744] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-9 from NonExistentReplica to NewReplica (state.change.logger) 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:28.543061723Z level=info msg="Executing migration" id="Add column default in alert_configuration" 14:24:57 policy-db-migrator | > upgrade 0960-FK_ToscaNodeTemplate_capabilitiesName.sql 14:24:57 policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 14:24:57 kafka | [2024-04-25 14:22:58,744] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-37 from NonExistentReplica to NewReplica (state.change.logger) 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:28.553558686Z level=info msg="Migration successfully executed" id="Add column 
default in alert_configuration" duration=10.499063ms 14:24:57 policy-db-migrator | -------------- 14:24:57 policy-pap | 14:24:57 kafka | [2024-04-25 14:22:58,745] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-13 from NonExistentReplica to NewReplica (state.change.logger) 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:28.558106607Z level=info msg="Executing migration" id="alert alert_configuration alertmanager_configuration column from TEXT to MEDIUMTEXT if mysql" 14:24:57 policy-db-migrator | ALTER TABLE toscanodetemplate ADD CONSTRAINT FK_ToscaNodeTemplate_capabilitiesName FOREIGN KEY (capabilitiesName, capabilitiesVersion) REFERENCES toscacapabilityassignments (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT 14:24:57 policy-pap | [2024-04-25T14:22:58.109+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 14:24:57 kafka | [2024-04-25 14:22:58,745] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-30 from NonExistentReplica to NewReplica (state.change.logger) 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:28.558153768Z level=info msg="Migration successfully executed" id="alert alert_configuration alertmanager_configuration column from TEXT to MEDIUMTEXT if mysql" duration=47.511µs 14:24:57 policy-db-migrator | -------------- 14:24:57 policy-pap | [2024-04-25T14:22:58.109+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 14:24:57 kafka | [2024-04-25 14:22:58,745] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-35 from NonExistentReplica to NewReplica (state.change.logger) 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:28.563159046Z level=info msg="Executing migration" id="add column org_id in alert_configuration" 14:24:57 policy-db-migrator | 14:24:57 policy-pap | [2024-04-25T14:22:58.109+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1714054978109 14:24:57 kafka | [2024-04-25 14:22:58,745] 
TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-39 from NonExistentReplica to NewReplica (state.change.logger) 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:28.570646938Z level=info msg="Migration successfully executed" id="add column org_id in alert_configuration" duration=7.487032ms 14:24:57 policy-db-migrator | 14:24:57 policy-pap | [2024-04-25T14:22:58.110+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Subscribed to topic(s): policy-pdp-pap 14:24:57 kafka | [2024-04-25 14:22:58,745] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-12 from NonExistentReplica to NewReplica (state.change.logger) 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:28.578102759Z level=info msg="Executing migration" id="add index in alert_configuration table on org_id column" 14:24:57 policy-db-migrator | > upgrade 0970-FK_ToscaNodeTemplate_requirementsName.sql 14:24:57 policy-pap | [2024-04-25T14:22:58.110+00:00|INFO|ServiceManager|main] Policy PAP starting topics 14:24:57 kafka | [2024-04-25 14:22:58,745] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-27 from NonExistentReplica to NewReplica (state.change.logger) 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:28.579242165Z level=info msg="Migration successfully executed" id="add index in alert_configuration table on org_id column" duration=1.138067ms 14:24:57 policy-db-migrator | -------------- 14:24:57 kafka | [2024-04-25 14:22:58,745] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-45 from NonExistentReplica to NewReplica (state.change.logger) 14:24:57 policy-pap | [2024-04-25T14:22:58.110+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=policy-pap, 
consumerInstance=9514907f-d028-45fc-9240-ae8706efbfe3, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-heartbeat,5,main], topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-heartbeat, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:28.58551117Z level=info msg="Executing migration" id="add configuration_hash column to alert_configuration" 14:24:57 policy-db-migrator | ALTER TABLE toscanodetemplate ADD CONSTRAINT FK_ToscaNodeTemplate_requirementsName FOREIGN KEY (requirementsName, requirementsVersion) REFERENCES toscarequirements (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT 14:24:57 kafka | [2024-04-25 14:22:58,745] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-19 from NonExistentReplica to NewReplica (state.change.logger) 14:24:57 policy-pap | [2024-04-25T14:22:58.110+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=b957469a-2969-4bff-8555-1bfe3e4d4da0, consumerInstance=policy-pap, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-pdp-pap,5,main], topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:28.59290987Z level=info msg="Migration successfully executed" id="add configuration_hash column to 
alert_configuration" duration=7.39889ms 14:24:57 policy-db-migrator | -------------- 14:24:57 kafka | [2024-04-25 14:22:58,745] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-49 from NonExistentReplica to NewReplica (state.change.logger) 14:24:57 policy-pap | [2024-04-25T14:22:58.110+00:00|INFO|InlineBusTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=c9f2f9e4-219a-4a9f-8132-76e678fa712c, alive=false, publisher=null]]: starting 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:28.597836247Z level=info msg="Executing migration" id=create_ngalert_configuration_table 14:24:57 policy-db-migrator | 14:24:57 kafka | [2024-04-25 14:22:58,745] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-40 from NonExistentReplica to NewReplica (state.change.logger) 14:24:57 policy-pap | [2024-04-25T14:22:58.125+00:00|INFO|ProducerConfig|main] ProducerConfig values: 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:28.598698359Z level=info msg="Migration successfully executed" id=create_ngalert_configuration_table duration=861.652µs 14:24:57 policy-db-migrator | 14:24:57 kafka | [2024-04-25 14:22:58,745] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-41 from NonExistentReplica to NewReplica (state.change.logger) 14:24:57 policy-pap | acks = -1 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:28.627271718Z level=info msg="Executing migration" id="add index in ngalert_configuration on org_id column" 14:24:57 policy-db-migrator | > upgrade 0980-FK_ToscaNodeType_requirementsName.sql 14:24:57 kafka | [2024-04-25 14:22:58,745] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-38 from NonExistentReplica to NewReplica (state.change.logger) 14:24:57 policy-pap | auto.include.jmx.reporter = true 14:24:57 grafana | logger=migrator 
t=2024-04-25T14:22:28.628955391Z level=info msg="Migration successfully executed" id="add index in ngalert_configuration on org_id column" duration=1.683363ms 14:24:57 policy-db-migrator | -------------- 14:24:57 kafka | [2024-04-25 14:22:58,745] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-8 from NonExistentReplica to NewReplica (state.change.logger) 14:24:57 policy-pap | batch.size = 16384 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:28.660017953Z level=info msg="Executing migration" id="add column send_alerts_to in ngalert_configuration" 14:24:57 policy-db-migrator | ALTER TABLE toscanodetype ADD CONSTRAINT FK_ToscaNodeType_requirementsName FOREIGN KEY (requirementsName, requirementsVersion) REFERENCES toscarequirements (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT 14:24:57 kafka | [2024-04-25 14:22:58,745] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-7 from NonExistentReplica to NewReplica (state.change.logger) 14:24:57 policy-pap | bootstrap.servers = [kafka:9092] 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:28.670270782Z level=info msg="Migration successfully executed" id="add column send_alerts_to in ngalert_configuration" duration=10.253529ms 14:24:57 policy-db-migrator | -------------- 14:24:57 kafka | [2024-04-25 14:22:58,745] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-33 from NonExistentReplica to NewReplica (state.change.logger) 14:24:57 policy-pap | buffer.memory = 33554432 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:28.675638055Z level=info msg="Executing migration" id="create provenance_type table" 14:24:57 policy-db-migrator | 14:24:57 kafka | [2024-04-25 14:22:58,745] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-25 from NonExistentReplica to NewReplica (state.change.logger) 14:24:57 policy-pap | client.dns.lookup = 
use_all_dns_ips 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:28.676407686Z level=info msg="Migration successfully executed" id="create provenance_type table" duration=770.531µs 14:24:57 policy-db-migrator | 14:24:57 policy-db-migrator | > upgrade 0990-FK_ToscaServiceTemplate_capabilityTypesName.sql 14:24:57 kafka | [2024-04-25 14:22:58,746] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-31 from NonExistentReplica to NewReplica (state.change.logger) 14:24:57 policy-pap | client.id = producer-1 14:24:57 policy-db-migrator | -------------- 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:28.682113823Z level=info msg="Executing migration" id="add index to uniquify (record_key, record_type, org_id) columns" 14:24:57 kafka | [2024-04-25 14:22:58,748] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-23 from NonExistentReplica to NewReplica (state.change.logger) 14:24:57 policy-pap | compression.type = none 14:24:57 policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT FK_ToscaServiceTemplate_capabilityTypesName FOREIGN KEY (capabilityTypesName, capabilityTypesVersion) REFERENCES toscacapabilitytypes (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:28.683097436Z level=info msg="Migration successfully executed" id="add index to uniquify (record_key, record_type, org_id) columns" duration=981.273µs 14:24:57 kafka | [2024-04-25 14:22:58,748] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-10 from NonExistentReplica to NewReplica (state.change.logger) 14:24:57 policy-pap | connections.max.idle.ms = 540000 14:24:57 policy-db-migrator | -------------- 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:28.687455255Z level=info msg="Executing migration" id="create alert_image table" 14:24:57 kafka | [2024-04-25 14:22:58,748] TRACE [Controller id=1 epoch=1] 
Changed state of replica 1 for partition __consumer_offsets-2 from NonExistentReplica to NewReplica (state.change.logger) 14:24:57 policy-pap | delivery.timeout.ms = 120000 14:24:57 policy-db-migrator | 14:24:57 kafka | [2024-04-25 14:22:58,748] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-17 from NonExistentReplica to NewReplica (state.change.logger) 14:24:57 policy-pap | enable.idempotence = true 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:28.688397298Z level=info msg="Migration successfully executed" id="create alert_image table" duration=941.833µs 14:24:57 policy-pap | interceptor.classes = [] 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:28.696986075Z level=info msg="Executing migration" id="add unique index on token to alert_image table" 14:24:57 policy-db-migrator | 14:24:57 kafka | [2024-04-25 14:22:58,748] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-4 from NonExistentReplica to NewReplica (state.change.logger) 14:24:57 policy-pap | key.serializer = class org.apache.kafka.common.serialization.StringSerializer 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:28.699168975Z level=info msg="Migration successfully executed" id="add unique index on token to alert_image table" duration=2.18253ms 14:24:57 policy-db-migrator | > upgrade 1000-FK_ToscaServiceTemplate_dataTypesName.sql 14:24:57 kafka | [2024-04-25 14:22:58,749] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-15 from NonExistentReplica to NewReplica (state.change.logger) 14:24:57 policy-pap | linger.ms = 0 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:28.707389427Z level=info msg="Executing migration" id="support longer URLs in alert_image table" 14:24:57 policy-db-migrator | -------------- 14:24:57 policy-pap | max.block.ms = 60000 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:28.707457488Z level=info msg="Migration 
successfully executed" id="support longer URLs in alert_image table" duration=67.981µs 14:24:57 kafka | [2024-04-25 14:22:58,749] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-26 from NonExistentReplica to NewReplica (state.change.logger) 14:24:57 policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT FK_ToscaServiceTemplate_dataTypesName FOREIGN KEY (dataTypesName, dataTypesVersion) REFERENCES toscadatatypes (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT 14:24:57 policy-pap | max.in.flight.requests.per.connection = 5 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:28.759966691Z level=info msg="Executing migration" id=create_alert_configuration_history_table 14:24:57 kafka | [2024-04-25 14:22:58,750] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-3 from NonExistentReplica to NewReplica (state.change.logger) 14:24:57 policy-db-migrator | -------------- 14:24:57 policy-pap | max.request.size = 1048576 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:28.761577353Z level=info msg="Migration successfully executed" id=create_alert_configuration_history_table duration=1.625742ms 14:24:57 kafka | [2024-04-25 14:22:58,750] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger) 14:24:57 policy-db-migrator | 14:24:57 policy-pap | metadata.max.age.ms = 300000 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:28.768309964Z level=info msg="Executing migration" id="drop non-unique orgID index on alert_configuration" 14:24:57 kafka | [2024-04-25 14:22:58,799] INFO [LogLoader partition=policy-pdp-pap-0, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 14:24:57 policy-db-migrator | 14:24:57 policy-pap | metadata.max.idle.ms = 300000 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:28.77016014Z level=info msg="Migration 
successfully executed" id="drop non-unique orgID index on alert_configuration" duration=1.850486ms 14:24:57 kafka | [2024-04-25 14:22:58,809] INFO Created log for partition policy-pdp-pap-0 in /var/lib/kafka/data/policy-pdp-pap-0 with properties {} (kafka.log.LogManager) 14:24:57 policy-db-migrator | > upgrade 1010-FK_ToscaServiceTemplate_nodeTypesName.sql 14:24:57 policy-pap | metric.reporters = [] 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:28.774265935Z level=info msg="Executing migration" id="drop unique orgID index on alert_configuration if exists" 14:24:57 kafka | [2024-04-25 14:22:58,812] INFO [Partition policy-pdp-pap-0 broker=1] No checkpointed highwatermark is found for partition policy-pdp-pap-0 (kafka.cluster.Partition) 14:24:57 policy-db-migrator | -------------- 14:24:57 policy-pap | metrics.num.samples = 2 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:28.775276009Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop unique orgID index on alert_configuration if exists" 14:24:57 kafka | [2024-04-25 14:22:58,813] INFO [Partition policy-pdp-pap-0 broker=1] Log loaded for partition policy-pdp-pap-0 with initial high watermark 0 (kafka.cluster.Partition) 14:24:57 policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT FK_ToscaServiceTemplate_nodeTypesName FOREIGN KEY (nodeTypesName, nodeTypesVersion) REFERENCES toscanodetypes (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT 14:24:57 policy-pap | metrics.recording.level = INFO 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:28.781621395Z level=info msg="Executing migration" id="extract alertmanager configuration history to separate table" 14:24:57 kafka | [2024-04-25 14:22:58,814] INFO [Broker id=1] Leader policy-pdp-pap-0 with topic id Some(UDjaTEkFR6iaxHll2hUQXA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . 
Previous leader None and previous leader epoch was -1. (state.change.logger) 14:24:57 policy-db-migrator | -------------- 14:24:57 policy-pap | metrics.sample.window.ms = 30000 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:28.782230734Z level=info msg="Migration successfully executed" id="extract alertmanager configuration history to separate table" duration=608.958µs 14:24:57 kafka | [2024-04-25 14:22:58,834] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition policy-pdp-pap-0 (state.change.logger) 14:24:57 policy-db-migrator | 14:24:57 policy-pap | partitioner.adaptive.partitioning.enable = true 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:28.788244946Z level=info msg="Executing migration" id="add unique index on orgID to alert_configuration" 14:24:57 kafka | [2024-04-25 14:22:58,842] INFO [Broker id=1] Finished LeaderAndIsr request in 128ms correlationId 1 from controller 1 for 1 partitions (state.change.logger) 14:24:57 policy-db-migrator | 14:24:57 policy-pap | partitioner.availability.timeout.ms = 0 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:28.789919359Z level=info msg="Migration successfully executed" id="add unique index on orgID to alert_configuration" duration=1.674054ms 14:24:57 kafka | [2024-04-25 14:22:58,845] TRACE [Controller id=1 epoch=1] Received response LeaderAndIsrResponseData(errorCode=0, partitionErrors=[], topics=[LeaderAndIsrTopicError(topicId=UDjaTEkFR6iaxHll2hUQXA, partitionErrors=[LeaderAndIsrPartitionError(topicName='', partitionIndex=0, errorCode=0)])]) for request LEADER_AND_ISR with correlation id 1 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger) 14:24:57 policy-db-migrator | > upgrade 1020-FK_ToscaServiceTemplate_policyTypesName.sql 14:24:57 policy-pap | partitioner.class = null 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:28.79369658Z level=info msg="Executing migration" id="add 
last_applied column to alert_configuration_history" 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:28.80031499Z level=info msg="Migration successfully executed" id="add last_applied column to alert_configuration_history" duration=6.61775ms 14:24:57 kafka | [2024-04-25 14:22:58,854] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='policy-pdp-pap', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition policy-pdp-pap-0 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 14:24:57 policy-db-migrator | -------------- 14:24:57 policy-pap | partitioner.ignore.keys = false 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:28.805239167Z level=info msg="Executing migration" id="create library_element table v1" 14:24:57 kafka | [2024-04-25 14:22:58,855] INFO [Broker id=1] Add 1 partitions and deleted 0 partitions from metadata cache in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 14:24:57 policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT FK_ToscaServiceTemplate_policyTypesName FOREIGN KEY (policyTypesName, policyTypesVersion) REFERENCES toscapolicytypes (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT 14:24:57 policy-pap | receive.buffer.bytes = 32768 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:28.806299611Z level=info msg="Migration successfully executed" id="create library_element table v1" duration=1.059674ms 14:24:57 policy-db-migrator | -------------- 14:24:57 policy-pap | reconnect.backoff.max.ms = 1000 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:28.811715665Z level=info msg="Executing migration" id="add index library_element org_id-folder_id-name-kind" 14:24:57 policy-db-migrator | 14:24:57 policy-pap | reconnect.backoff.ms = 50 14:24:57 grafana | logger=migrator 
t=2024-04-25T14:22:28.81283807Z level=info msg="Migration successfully executed" id="add index library_element org_id-folder_id-name-kind" duration=1.122025ms 14:24:57 policy-db-migrator | 14:24:57 policy-pap | request.timeout.ms = 30000 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:28.818600898Z level=info msg="Executing migration" id="create library_element_connection table v1" 14:24:57 policy-db-migrator | > upgrade 1030-FK_ToscaServiceTemplate_relationshipTypesName.sql 14:24:57 policy-pap | retries = 2147483647 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:28.820759248Z level=info msg="Migration successfully executed" id="create library_element_connection table v1" duration=2.15723ms 14:24:57 policy-db-migrator | -------------- 14:24:57 policy-pap | retry.backoff.ms = 100 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:28.82757247Z level=info msg="Executing migration" id="add index library_element_connection element_id-kind-connection_id" 14:24:57 policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT FK_ToscaServiceTemplate_relationshipTypesName FOREIGN KEY (relationshipTypesName, relationshipTypesVersion) REFERENCES toscarelationshiptypes (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT 14:24:57 policy-pap | sasl.client.callback.handler.class = null 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:28.828624025Z level=info msg="Migration successfully executed" id="add index library_element_connection element_id-kind-connection_id" duration=1.051265ms 14:24:57 policy-db-migrator | -------------- 14:24:57 policy-pap | sasl.jaas.config = null 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:28.835204214Z level=info msg="Executing migration" id="add unique index library_element org_id_uid" 14:24:57 policy-db-migrator | 14:24:57 policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:28.837647277Z level=info msg="Migration successfully executed" id="add unique 
index library_element org_id_uid" duration=2.441793ms 14:24:57 policy-db-migrator | 14:24:57 policy-pap | sasl.kerberos.min.time.before.relogin = 60000 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:28.842287751Z level=info msg="Executing migration" id="increase max description length to 2048" 14:24:57 policy-db-migrator | > upgrade 1040-FK_ToscaTopologyTemplate_nodeTemplatesName.sql 14:24:57 policy-pap | sasl.kerberos.service.name = null 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:28.842327481Z level=info msg="Migration successfully executed" id="increase max description length to 2048" duration=40.9µs 14:24:57 policy-db-migrator | -------------- 14:24:57 policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:28.846159593Z level=info msg="Executing migration" id="alter library_element model to mediumtext" 14:24:57 policy-db-migrator | ALTER TABLE toscatopologytemplate ADD CONSTRAINT FK_ToscaTopologyTemplate_nodeTemplatesName FOREIGN KEY (nodeTemplatesName, nodeTemplatessVersion) REFERENCES toscanodetemplates (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT 14:24:57 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:28.846328065Z level=info msg="Migration successfully executed" id="alter library_element model to mediumtext" duration=167.902µs 14:24:57 policy-db-migrator | -------------- 14:24:57 policy-pap | sasl.login.callback.handler.class = null 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:28.851602727Z level=info msg="Executing migration" id="clone move dashboard alerts to unified alerting" 14:24:57 policy-db-migrator | 14:24:57 policy-pap | sasl.login.class = null 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:28.852217145Z level=info msg="Migration successfully executed" id="clone move dashboard alerts to unified alerting" duration=614.508µs 14:24:57 policy-db-migrator | 14:24:57 policy-pap | 
sasl.login.connect.timeout.ms = null 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:28.859042858Z level=info msg="Executing migration" id="create data_keys table" 14:24:57 policy-db-migrator | > upgrade 1050-FK_ToscaTopologyTemplate_policyName.sql 14:24:57 policy-pap | sasl.login.read.timeout.ms = null 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:28.860859262Z level=info msg="Migration successfully executed" id="create data_keys table" duration=1.817424ms 14:24:57 policy-db-migrator | -------------- 14:24:57 policy-pap | sasl.login.refresh.buffer.seconds = 300 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:28.866676371Z level=info msg="Executing migration" id="create secrets table" 14:24:57 policy-db-migrator | ALTER TABLE toscatopologytemplate ADD CONSTRAINT FK_ToscaTopologyTemplate_policyName FOREIGN KEY (policyName, policyVersion) REFERENCES toscapolicies (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT 14:24:57 policy-pap | sasl.login.refresh.min.period.seconds = 60 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:28.867644295Z level=info msg="Migration successfully executed" id="create secrets table" duration=967.964µs 14:24:57 policy-db-migrator | -------------- 14:24:57 policy-pap | sasl.login.refresh.window.factor = 0.8 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:28.873413874Z level=info msg="Executing migration" id="rename data_keys name column to id" 14:24:57 policy-db-migrator | 14:24:57 policy-pap | sasl.login.refresh.window.jitter = 0.05 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:28.908104955Z level=info msg="Migration successfully executed" id="rename data_keys name column to id" duration=34.692161ms 14:24:57 policy-db-migrator | 14:24:57 policy-pap | sasl.login.retry.backoff.max.ms = 10000 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:28.912820229Z level=info msg="Executing migration" id="add name column into data_keys" 14:24:57 policy-db-migrator | > upgrade 
1060-TscaServiceTemplatetopologyTemplateParentLocalName.sql
14:24:57 policy-pap | sasl.login.retry.backoff.ms = 100
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:28.917885698Z level=info msg="Migration successfully executed" id="add name column into data_keys" duration=5.064799ms
14:24:57 policy-db-migrator | --------------
14:24:57 policy-pap | sasl.mechanism = GSSAPI
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:28.92170627Z level=info msg="Executing migration" id="copy data_keys id column values into name"
14:24:57 policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT TscaServiceTemplatetopologyTemplateParentLocalName FOREIGN KEY (topologyTemplateParentLocalName, topologyTemplateLocalName, topologyTemplateParentKeyVersion, topologyTemplateParentKeyName) REFERENCES toscatopologytemplate (parentLocalName, localName, parentKeyVersion, parentKeyName) ON UPDATE RESTRICT ON DELETE RESTRICT
14:24:57 policy-pap | sasl.oauthbearer.clock.skew.seconds = 30
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:28.921934963Z level=info msg="Migration successfully executed" id="copy data_keys id column values into name" duration=228.043µs
14:24:57 policy-db-migrator | --------------
14:24:57 policy-pap | sasl.oauthbearer.expected.audience = null
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:28.926455035Z level=info msg="Executing migration" id="rename data_keys name column to label"
14:24:57 policy-db-migrator | 
14:24:57 policy-pap | sasl.oauthbearer.expected.issuer = null
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:28.960618089Z level=info msg="Migration successfully executed" id="rename data_keys name column to label" duration=34.162684ms
14:24:57 policy-db-migrator | 
14:24:57 kafka | [2024-04-25 14:22:58,857] TRACE [Controller id=1 epoch=1] Received response UpdateMetadataResponseData(errorCode=0) for request UPDATE_METADATA with correlation id 2 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger)
14:24:57 policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:28.966450149Z level=info msg="Executing migration" id="rename data_keys id column back to name"
14:24:57 policy-db-migrator | > upgrade 0100-pdp.sql
14:24:57 kafka | [2024-04-25 14:22:59,013] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-22 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
14:24:57 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:28.99820281Z level=info msg="Migration successfully executed" id="rename data_keys id column back to name" duration=31.752771ms
14:24:57 policy-db-migrator | --------------
14:24:57 kafka | [2024-04-25 14:22:59,014] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-30 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
14:24:57 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
14:24:57 policy-db-migrator | ALTER TABLE pdp ADD COLUMN LASTUPDATE datetime NOT NULL DEFAULT CURRENT_TIMESTAMP AFTER HEALTHY
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:29.023654895Z level=info msg="Executing migration" id="create kv_store table v1"
14:24:57 kafka | [2024-04-25 14:22:59,014] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-25 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
14:24:57 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null
14:24:57 policy-db-migrator | --------------
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:29.02545996Z level=info msg="Migration successfully executed" id="create kv_store table v1" duration=1.804735ms
14:24:57 kafka | [2024-04-25 14:22:59,014] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-35 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
14:24:57 policy-pap | sasl.oauthbearer.scope.claim.name = scope
14:24:57 policy-db-migrator | 
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:29.034544224Z level=info msg="Executing migration" id="add index kv_store.org_id-namespace-key"
14:24:57 kafka | [2024-04-25 14:22:59,014] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-37 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
14:24:57 policy-pap | sasl.oauthbearer.sub.claim.name = sub
14:24:57 policy-db-migrator | 
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:29.035670719Z level=info msg="Migration successfully executed" id="add index kv_store.org_id-namespace-key" duration=1.125005ms
14:24:57 kafka | [2024-04-25 14:22:59,014] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-38 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
14:24:57 policy-pap | sasl.oauthbearer.token.endpoint.url = null
14:24:57 policy-db-migrator | > upgrade 0110-idx_tsidx1.sql
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:29.041880894Z level=info msg="Executing migration" id="update dashboard_uid and panel_id from existing annotations"
14:24:57 kafka | [2024-04-25 14:22:59,015] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-13 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
14:24:57 policy-pap | security.protocol = PLAINTEXT
14:24:57 policy-db-migrator | --------------
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:29.042253459Z level=info msg="Migration successfully executed" id="update dashboard_uid and panel_id from existing annotations" duration=372.565µs
14:24:57 kafka | [2024-04-25 14:22:59,015] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-8 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
14:24:57 policy-pap | security.providers = null
14:24:57 policy-db-migrator | CREATE INDEX IDX_TSIDX1 ON pdpstatistics(timeStamp, name, version)
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:29.046406566Z level=info msg="Executing migration" id="create permission table"
14:24:57 kafka | [2024-04-25 14:22:59,015] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-21 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
14:24:57 policy-pap | send.buffer.bytes = 131072
14:24:57 policy-db-migrator | --------------
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:29.047295717Z level=info msg="Migration successfully executed" id="create permission table" duration=888.232µs
14:24:57 kafka | [2024-04-25 14:22:59,015] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-4 from
NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
14:24:57 policy-pap | socket.connection.setup.timeout.max.ms = 30000
14:24:57 policy-db-migrator | 
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:29.05415155Z level=info msg="Executing migration" id="add unique index permission.role_id"
14:24:57 kafka | [2024-04-25 14:22:59,015] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-27 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
14:24:57 policy-pap | socket.connection.setup.timeout.ms = 10000
14:24:57 policy-db-migrator | 
14:24:57 kafka | [2024-04-25 14:22:59,015] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-7 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
14:24:57 policy-pap | ssl.cipher.suites = null
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:29.055734442Z level=info msg="Migration successfully executed" id="add unique index permission.role_id" duration=1.586132ms
14:24:57 policy-db-migrator | > upgrade 0120-pk_pdpstatistics.sql
14:24:57 policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:29.060501317Z level=info msg="Executing migration" id="add unique index role_id_action_scope"
14:24:57 kafka | [2024-04-25 14:22:59,015] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-9 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
14:24:57 policy-pap | ssl.endpoint.identification.algorithm = https
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:29.063065642Z level=info msg="Migration successfully executed" id="add unique index role_id_action_scope" duration=2.565685ms
14:24:57 policy-db-migrator | --------------
14:24:57 policy-pap | ssl.engine.factory.class = null
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:29.074972724Z level=info msg="Executing migration" id="create role table"
14:24:57 kafka | [2024-04-25 14:22:59,015] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-46 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
14:24:57 policy-pap | ssl.key.password = null
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:29.076062408Z level=info msg="Migration successfully executed" id="create role table" duration=1.086934ms
14:24:57 policy-db-migrator | ALTER TABLE pdpstatistics DROP PRIMARY KEY
14:24:57 policy-pap | ssl.keymanager.algorithm = SunX509
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:29.082668079Z level=info msg="Executing migration" id="add column display_name"
14:24:57 kafka | [2024-04-25 14:22:59,015] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-41 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
14:24:57 policy-pap | ssl.keystore.certificate.chain = null
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:29.092218068Z level=info msg="Migration successfully executed" id="add column display_name" duration=9.561249ms
14:24:57 policy-db-migrator | --------------
14:24:57 policy-pap | ssl.keystore.key = null
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:29.097391119Z level=info msg="Executing migration" id="add column group_name"
14:24:57 kafka | [2024-04-25 14:22:59,015] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-33 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
14:24:57 policy-pap | ssl.keystore.location = null
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:29.104829219Z level=info msg="Migration successfully executed" id="add column group_name" duration=7.43722ms
14:24:57 policy-db-migrator | 
14:24:57 policy-pap | ssl.keystore.password = null
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:29.111646772Z level=info msg="Executing migration" id="add index role.org_id"
14:24:57 kafka | [2024-04-25 14:22:59,015] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-23 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
14:24:57 policy-db-migrator | 
14:24:57 policy-pap | ssl.keystore.type = JKS
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:29.112792727Z level=info msg="Migration successfully executed" id="add index role.org_id" duration=1.146895ms
14:24:57 kafka | [2024-04-25 14:22:59,016] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-49 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
14:24:57 policy-db-migrator | > upgrade 0130-pdpstatistics.sql
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:29.120569133Z level=info msg="Executing migration" id="add unique index role_org_id_name"
14:24:57 kafka | [2024-04-25 14:22:59,016] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-47 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
14:24:57 policy-pap | ssl.protocol = TLSv1.3
14:24:57 policy-db-migrator | --------------
14:24:57 kafka | [2024-04-25 14:22:59,016] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-16 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
14:24:57 policy-pap | ssl.provider = null
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:29.122313107Z level=info msg="Migration successfully executed" id="add unique index role_org_id_name" duration=1.743414ms
14:24:57 policy-db-migrator | ALTER TABLE pdpstatistics ADD COLUMN POLICYUNDEPLOYCOUNT BIGINT DEFAULT NULL AFTER POLICYEXECUTEDSUCCESSCOUNT, ADD COLUMN POLICYUNDEPLOYFAILCOUNT BIGINT DEFAULT NULL, ADD COLUMN POLICYUNDEPLOYSUCCESSCOUNT BIGINT DEFAULT NULL, ADD COLUMN ID BIGINT NOT NULL
14:24:57 policy-pap | ssl.secure.random.implementation = null
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:29.126661516Z level=info msg="Executing migration" id="add index role_org_id_uid"
14:24:57 kafka | [2024-04-25 14:22:59,016] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-28 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
14:24:57 policy-db-migrator | --------------
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:29.127822662Z level=info msg="Migration successfully executed" id="add
index role_org_id_uid" duration=1.160946ms
14:24:57 kafka | [2024-04-25 14:22:59,016] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-31 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
14:24:57 policy-pap | ssl.trustmanager.algorithm = PKIX
14:24:57 policy-db-migrator | 
14:24:57 kafka | [2024-04-25 14:22:59,016] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-36 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
14:24:57 policy-pap | ssl.truststore.certificates = null
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:29.273671505Z level=info msg="Executing migration" id="create team role table"
14:24:57 policy-db-migrator | 
14:24:57 policy-pap | ssl.truststore.location = null
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:29.276115658Z level=info msg="Migration successfully executed" id="create team role table" duration=2.445553ms
14:24:57 kafka | [2024-04-25 14:22:59,016] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-42 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
14:24:57 policy-db-migrator | > upgrade 0140-pk_pdpstatistics.sql
14:24:57 policy-pap | ssl.truststore.password = null
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:29.370345988Z level=info msg="Executing migration" id="add index team_role.org_id"
14:24:57 policy-db-migrator | --------------
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:29.372299765Z level=info msg="Migration successfully executed" id="add index team_role.org_id" duration=1.955917ms
14:24:57 kafka | [2024-04-25 14:22:59,016] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-3 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
14:24:57 policy-pap | ssl.truststore.type = JKS
14:24:57 policy-db-migrator | UPDATE pdpstatistics as p JOIN (SELECT name, version, timeStamp, ROW_NUMBER() OVER (ORDER BY timeStamp ASC) AS row_num FROM pdpstatistics GROUP BY name, version, timeStamp) AS t ON (p.name=t.name AND p.version=t.version AND p.timeStamp = t.timeStamp) SET p.id=t.row_num
14:24:57 kafka | [2024-04-25 14:22:59,017] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-18 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
14:24:57 policy-pap | transaction.timeout.ms = 60000
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:29.437070576Z level=info msg="Executing migration" id="add unique index team_role_org_id_team_id_role_id"
14:24:57 policy-db-migrator | --------------
14:24:57 kafka | [2024-04-25 14:22:59,017] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-15 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
14:24:57 policy-pap | transactional.id = null
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:29.439394007Z level=info msg="Migration successfully executed" id="add unique index team_role_org_id_team_id_role_id" duration=2.322681ms
14:24:57 policy-db-migrator | 
14:24:57 kafka | [2024-04-25 14:22:59,017] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-24 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
14:24:57 policy-pap | value.serializer = class org.apache.kafka.common.serialization.StringSerializer
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:29.577285032Z level=info msg="Executing migration" id="add index team_role.team_id"
14:24:57 policy-db-migrator | --------------
14:24:57 kafka | [2024-04-25 14:22:59,017] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-17 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
14:24:57 policy-pap | 
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:29.579179108Z level=info msg="Migration successfully executed" id="add index team_role.team_id" duration=1.901137ms
14:24:57 policy-db-migrator | ALTER TABLE pdpstatistics ADD CONSTRAINT PK_PDPSTATISTICS PRIMARY KEY (ID, name, version)
14:24:57 kafka | [2024-04-25 14:22:59,017] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-48 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
14:24:57 policy-pap | [2024-04-25T14:22:58.138+00:00|INFO|KafkaProducer|main] [Producer clientId=producer-1] Instantiated an idempotent producer.
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:29.754719363Z level=info msg="Executing migration" id="create user role table"
14:24:57 policy-db-migrator | --------------
14:24:57 kafka | [2024-04-25 14:22:59,017] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-19 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
14:24:57 policy-pap | [2024-04-25T14:22:58.155+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:29.756806351Z level=info msg="Migration successfully executed" id="create user role table" duration=2.089718ms
14:24:57 policy-db-migrator | 
14:24:57 kafka | [2024-04-25 14:22:59,017] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-11 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
14:24:57 policy-pap | [2024-04-25T14:22:58.155+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:29.865616511Z level=info msg="Executing migration" id="add index user_role.org_id"
14:24:57 policy-db-migrator | 
14:24:57 kafka | [2024-04-25 14:22:59,017] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-2 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
14:24:57 policy-pap | [2024-04-25T14:22:58.155+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1714054978155
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:29.86775684Z level=info msg="Migration successfully executed" id="add
index user_role.org_id" duration=2.13367ms
14:24:57 policy-db-migrator | > upgrade 0150-pdpstatistics.sql
14:24:57 policy-pap | [2024-04-25T14:22:58.156+00:00|INFO|InlineKafkaTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=c9f2f9e4-219a-4a9f-8132-76e678fa712c, alive=false, publisher=KafkaPublisherWrapper []]]: KAFKA SINK created
14:24:57 kafka | [2024-04-25 14:22:59,017] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-43 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:29.89348161Z level=info msg="Executing migration" id="add unique index user_role_org_id_user_id_role_id"
14:24:57 policy-db-migrator | --------------
14:24:57 policy-pap | [2024-04-25T14:22:58.156+00:00|INFO|InlineBusTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=9f8b2bf2-7d12-44fd-abe8-8a8ee96c9ee3, alive=false, publisher=null]]: starting
14:24:57 kafka | [2024-04-25 14:22:59,017] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-6 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:29.895526058Z level=info msg="Migration successfully executed" id="add unique index user_role_org_id_user_id_role_id" duration=2.042178ms
14:24:57 policy-db-migrator | ALTER TABLE pdpstatistics MODIFY COLUMN timeStamp datetime(6) NULL
14:24:57 policy-pap | [2024-04-25T14:22:58.157+00:00|INFO|ProducerConfig|main] ProducerConfig values:
14:24:57 kafka | [2024-04-25 14:22:59,017] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-14 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:29.99431916Z level=info msg="Executing migration" id="add index user_role.user_id"
14:24:57 policy-db-migrator | --------------
14:24:57 policy-pap | acks = -1
14:24:57 kafka | [2024-04-25 14:22:59,017] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-20 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:29.996555151Z level=info msg="Migration successfully executed" id="add index user_role.user_id" duration=2.239731ms
14:24:57 policy-db-migrator | 
14:24:57 policy-pap | auto.include.jmx.reporter = true
14:24:57 kafka | [2024-04-25 14:22:59,017] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-0 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:30.045444995Z level=info msg="Executing migration" id="create builtin role table"
14:24:57 policy-db-migrator | 
14:24:57 policy-pap | batch.size = 16384
14:24:57 kafka | [2024-04-25 14:22:59,017] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-44 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:30.047006067Z level=info msg="Migration successfully executed" id="create builtin role table" duration=1.559642ms
14:24:57 policy-db-migrator | > upgrade 0160-jpapdpstatistics_enginestats.sql
14:24:57 policy-pap | bootstrap.servers = [kafka:9092]
14:24:57 kafka | [2024-04-25 14:22:59,017] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-39 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:30.296081852Z level=info msg="Executing migration" id="add index builtin_role.role_id"
14:24:57 policy-db-migrator | --------------
14:24:57 policy-pap | buffer.memory = 33554432
14:24:57 kafka | [2024-04-25 14:22:59,017] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-12 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:30.297915317Z level=info msg="Migration successfully executed" id="add index builtin_role.role_id" duration=1.834024ms
14:24:57 policy-db-migrator | ALTER TABLE jpapdpstatistics_enginestats ADD COLUMN ID BIGINT DEFAULT NULL AFTER UPTIME
14:24:57 policy-pap | client.dns.lookup = use_all_dns_ips
14:24:57 kafka | [2024-04-25 14:22:59,017] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-45 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:30.590822777Z level=info msg="Executing migration" id="add index builtin_role.name"
14:24:57 policy-db-migrator | --------------
14:24:57 policy-pap | client.id = producer-2
14:24:57 kafka | [2024-04-25 14:22:59,017] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-1 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:30.59318445Z level=info msg="Migration successfully executed" id="add index builtin_role.name" duration=2.365422ms
14:24:57 policy-db-migrator | 
14:24:57 policy-pap | compression.type = none
14:24:57 kafka | [2024-04-25 14:22:59,017] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-5 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:30.660195801Z level=info msg="Executing migration" id="Add column org_id to builtin_role table"
14:24:57 policy-db-migrator | 
14:24:57 policy-pap | connections.max.idle.ms = 540000
14:24:57 kafka | [2024-04-25 14:22:59,018] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-26 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:30.671609336Z level=info msg="Migration successfully executed" id="Add column org_id to builtin_role table" duration=11.411695ms
14:24:57 policy-db-migrator | > upgrade 0170-jpapdpstatistics_enginestats.sql
14:24:57 policy-pap | delivery.timeout.ms = 120000
14:24:57 kafka | [2024-04-25 14:22:59,018] INFO [Controller id=1 epoch=1] Changed partition
__consumer_offsets-29 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:30.674687258Z level=info msg="Executing migration" id="add index builtin_role.org_id"
14:24:57 policy-db-migrator | --------------
14:24:57 policy-pap | enable.idempotence = true
14:24:57 kafka | [2024-04-25 14:22:59,018] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-34 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:30.675784832Z level=info msg="Migration successfully executed" id="add index builtin_role.org_id" duration=1.092984ms
14:24:57 policy-db-migrator | UPDATE jpapdpstatistics_enginestats a
14:24:57 policy-pap | interceptor.classes = []
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:30.68149837Z level=info msg="Executing migration" id="add unique index builtin_role_org_id_role_id_role"
14:24:57 policy-db-migrator | JOIN pdpstatistics b
14:24:57 policy-pap | key.serializer = class org.apache.kafka.common.serialization.StringSerializer
14:24:57 kafka | [2024-04-25 14:22:59,018] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-10 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:30.682668717Z level=info msg="Migration successfully executed" id="add unique index builtin_role_org_id_role_id_role" duration=1.169287ms
14:24:57 policy-db-migrator | ON a.name = b.name AND a.version = b.version AND a.timeStamp = b.timeStamp
14:24:57 policy-pap | linger.ms = 0
14:24:57 kafka | [2024-04-25 14:22:59,018] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-32 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:30.687131657Z level=info msg="Executing migration" id="Remove unique index role_org_id_uid"
14:24:57 policy-db-migrator | SET a.id = b.id
14:24:57 policy-pap | max.block.ms = 60000
14:24:57 kafka | [2024-04-25 14:22:59,018] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-40 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
14:24:57 policy-db-migrator | --------------
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:30.688259143Z level=info msg="Migration successfully executed" id="Remove unique index role_org_id_uid" duration=1.127246ms
14:24:57 policy-pap | max.in.flight.requests.per.connection = 5
14:24:57 kafka | [2024-04-25 14:22:59,018] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=13, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-13 (state.change.logger)
14:24:57 policy-db-migrator | 
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:30.694781401Z level=info msg="Executing migration" id="add unique index role.uid"
14:24:57 policy-pap | max.request.size = 1048576
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:30.696976611Z level=info msg="Migration successfully executed" id="add unique index role.uid" duration=2.20109ms
14:24:57 policy-pap | metadata.max.age.ms = 300000
14:24:57 kafka | [2024-04-25 14:22:59,018] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=46, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-46 (state.change.logger)
14:24:57 policy-db-migrator | 
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:30.705194093Z level=info msg="Executing migration" id="create seed assignment table"
14:24:57 policy-pap | metadata.max.idle.ms = 300000
14:24:57 kafka | [2024-04-25 14:22:59,018] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=9, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-9 (state.change.logger)
14:24:57 policy-db-migrator | > upgrade 0180-jpapdpstatistics_enginestats.sql
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:30.707069777Z level=info msg="Migration successfully executed" id="create seed assignment table" duration=1.878054ms
14:24:57 policy-pap | metric.reporters = []
14:24:57 kafka | [2024-04-25 14:22:59,018] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=42, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-42 (state.change.logger)
14:24:57 policy-db-migrator | --------------
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:30.711862653Z level=info msg="Executing migration" id="add unique index builtin_role_role_name"
14:24:57 policy-pap | metrics.num.samples = 2
14:24:57 kafka | [2024-04-25 14:22:59,018] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=21, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-21 (state.change.logger)
14:24:57 policy-db-migrator | ALTER TABLE jpapdpstatistics_enginestats DROP COLUMN timeStamp
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:30.712950208Z level=info msg="Migration successfully executed" id="add unique index builtin_role_role_name" duration=1.087185ms
14:24:57 policy-pap | metrics.recording.level = INFO
14:24:57 kafka | [2024-04-25 14:22:59,018] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=17, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-17 (state.change.logger)
14:24:57 policy-db-migrator | --------------
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:30.717452409Z level=info msg="Executing migration" id="add column hidden to role table"
14:24:57 policy-pap | metrics.sample.window.ms = 30000
14:24:57 policy-db-migrator | 
14:24:57 kafka | [2024-04-25 14:22:59,018] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=30, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-30 (state.change.logger)
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:30.725488298Z level=info msg="Migration successfully executed" id="add column hidden to role table" duration=8.035479ms
14:24:57 policy-pap | partitioner.adaptive.partitioning.enable = true
14:24:57 policy-db-migrator | 
14:24:57 kafka | [2024-04-25 14:22:59,018] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=26, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-26 (state.change.logger)
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:30.731747993Z level=info msg="Executing migration" id="permission kind migration"
14:24:57 policy-pap | partitioner.availability.timeout.ms = 0
14:24:57 policy-db-migrator | > upgrade 0190-jpapolicyaudit.sql
14:24:57 kafka | [2024-04-25 14:22:59,018] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=5, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-5 (state.change.logger)
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:30.743361931Z level=info msg="Migration successfully executed" id="permission kind migration" duration=11.617468ms
14:24:57 policy-pap | partitioner.class = null
14:24:57 policy-db-migrator | --------------
14:24:57 kafka | [2024-04-25 14:22:59,018] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=38, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1],
addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-38 (state.change.logger) 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:30.748868036Z level=info msg="Executing migration" id="permission attribute migration" 14:24:57 policy-pap | partitioner.ignore.keys = false 14:24:57 kafka | [2024-04-25 14:22:59,018] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=1, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-1 (state.change.logger) 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:30.754569673Z level=info msg="Migration successfully executed" id="permission attribute migration" duration=5.701407ms 14:24:57 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapolicyaudit (ACTION INT DEFAULT NULL, PDPGROUP VARCHAR(255) NULL, PDPTYPE VARCHAR(255) NULL, TIMESTAMP datetime DEFAULT NULL, USER VARCHAR(255) NULL, ID BIGINT NOT NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_JPAPOLICYAUDIT (ID, name, version)) 14:24:57 policy-pap | receive.buffer.bytes = 32768 14:24:57 kafka | [2024-04-25 14:22:59,018] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=34, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-34 (state.change.logger) 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:30.758019761Z level=info msg="Executing migration" id="permission identifier migration" 14:24:57 policy-db-migrator | -------------- 14:24:57 policy-pap | reconnect.backoff.max.ms = 1000 
14:24:57 kafka | [2024-04-25 14:22:59,019] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=16, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-16 (state.change.logger) 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:30.765874377Z level=info msg="Migration successfully executed" id="permission identifier migration" duration=7.854126ms 14:24:57 policy-db-migrator | 14:24:57 policy-pap | reconnect.backoff.ms = 50 14:24:57 kafka | [2024-04-25 14:22:59,019] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=45, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-45 (state.change.logger) 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:30.809769543Z level=info msg="Executing migration" id="add permission identifier index" 14:24:57 policy-db-migrator | 14:24:57 kafka | [2024-04-25 14:22:59,019] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=12, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-12 (state.change.logger) 14:24:57 policy-pap | request.timeout.ms = 30000 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:30.812105095Z level=info msg="Migration successfully executed" id="add permission identifier index" duration=2.334852ms 14:24:57 policy-db-migrator | > upgrade 
0200-JpaPolicyAuditIndex_timestamp.sql 14:24:57 kafka | [2024-04-25 14:22:59,019] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=41, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-41 (state.change.logger) 14:24:57 policy-pap | retries = 2147483647 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:30.818722645Z level=info msg="Executing migration" id="add permission action scope role_id index" 14:24:57 kafka | [2024-04-25 14:22:59,019] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=24, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-24 (state.change.logger) 14:24:57 policy-pap | retry.backoff.ms = 100 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:30.820208506Z level=info msg="Migration successfully executed" id="add permission action scope role_id index" duration=1.48467ms 14:24:57 policy-db-migrator | -------------- 14:24:57 kafka | [2024-04-25 14:22:59,019] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=20, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-20 (state.change.logger) 14:24:57 policy-pap | sasl.client.callback.handler.class = null 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:30.829196817Z level=info msg="Executing migration" id="remove permission role_id action scope 
index" 14:24:57 policy-db-migrator | CREATE INDEX JpaPolicyAuditIndex_timestamp ON jpapolicyaudit(TIMESTAMP) 14:24:57 kafka | [2024-04-25 14:22:59,019] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=49, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-49 (state.change.logger) 14:24:57 policy-pap | sasl.jaas.config = null 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:30.830304182Z level=info msg="Migration successfully executed" id="remove permission role_id action scope index" duration=1.110265ms 14:24:57 policy-db-migrator | -------------- 14:24:57 kafka | [2024-04-25 14:22:59,019] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-0 (state.change.logger) 14:24:57 policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:30.834372088Z level=info msg="Executing migration" id="create query_history table v1" 14:24:57 policy-db-migrator | 14:24:57 kafka | [2024-04-25 14:22:59,019] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=29, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-29 (state.change.logger) 14:24:57 policy-pap | sasl.kerberos.min.time.before.relogin = 60000 14:24:57 grafana | logger=migrator 
t=2024-04-25T14:22:30.835440022Z level=info msg="Migration successfully executed" id="create query_history table v1" duration=1.067334ms 14:24:57 policy-db-migrator | 14:24:57 kafka | [2024-04-25 14:22:59,019] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=25, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-25 (state.change.logger) 14:24:57 policy-pap | sasl.kerberos.service.name = null 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:30.839078362Z level=info msg="Executing migration" id="add index query_history.org_id-created_by-datasource_uid" 14:24:57 policy-db-migrator | > upgrade 0210-sequence.sql 14:24:57 kafka | [2024-04-25 14:22:59,019] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=8, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-8 (state.change.logger) 14:24:57 policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:30.840210567Z level=info msg="Migration successfully executed" id="add index query_history.org_id-created_by-datasource_uid" duration=1.131315ms 14:24:57 policy-db-migrator | -------------- 14:24:57 kafka | [2024-04-25 14:22:59,019] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=37, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition 
__consumer_offsets-37 (state.change.logger) 14:24:57 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:30.846932148Z level=info msg="Executing migration" id="alter table query_history alter column created_by type to bigint" 14:24:57 policy-db-migrator | CREATE TABLE IF NOT EXISTS sequence (SEQ_NAME VARCHAR(50) NOT NULL, SEQ_COUNT DECIMAL(38) DEFAULT NULL, PRIMARY KEY PK_SEQUENCE (SEQ_NAME)) 14:24:57 kafka | [2024-04-25 14:22:59,019] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=4, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-4 (state.change.logger) 14:24:57 policy-pap | sasl.login.callback.handler.class = null 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:30.84707247Z level=info msg="Migration successfully executed" id="alter table query_history alter column created_by type to bigint" duration=139.802µs 14:24:57 policy-db-migrator | -------------- 14:24:57 policy-pap | sasl.login.class = null 14:24:57 kafka | [2024-04-25 14:22:59,019] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=33, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-33 (state.change.logger) 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:30.85067954Z level=info msg="Executing migration" id="rbac disabled migrator" 14:24:57 policy-db-migrator | 14:24:57 policy-pap | sasl.login.connect.timeout.ms = null 14:24:57 kafka | [2024-04-25 14:22:59,019] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr 
request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=15, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-15 (state.change.logger) 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:30.85072895Z level=info msg="Migration successfully executed" id="rbac disabled migrator" duration=51µs 14:24:57 policy-db-migrator | 14:24:57 policy-pap | sasl.login.read.timeout.ms = null 14:24:57 kafka | [2024-04-25 14:22:59,019] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=48, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-48 (state.change.logger) 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:30.855080929Z level=info msg="Executing migration" id="teams permissions migration" 14:24:57 policy-db-migrator | > upgrade 0220-sequence.sql 14:24:57 policy-pap | sasl.login.refresh.buffer.seconds = 300 14:24:57 kafka | [2024-04-25 14:22:59,019] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=11, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-11 (state.change.logger) 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:30.85584516Z level=info msg="Migration successfully executed" id="teams permissions migration" duration=763.811µs 14:24:57 policy-db-migrator | -------------- 14:24:57 policy-pap | sasl.login.refresh.min.period.seconds = 60 14:24:57 kafka | [2024-04-25 14:22:59,019] TRACE 
[Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=44, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-44 (state.change.logger) 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:30.862699943Z level=info msg="Executing migration" id="dashboard permissions" 14:24:57 policy-db-migrator | INSERT INTO sequence(SEQ_NAME, SEQ_COUNT) VALUES('SEQ_GEN', (SELECT IFNULL(max(id),0) FROM pdpstatistics)) 14:24:57 policy-pap | sasl.login.refresh.window.factor = 0.8 14:24:57 kafka | [2024-04-25 14:22:59,019] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=23, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-23 (state.change.logger) 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:30.863863778Z level=info msg="Migration successfully executed" id="dashboard permissions" duration=1.165835ms 14:24:57 policy-db-migrator | -------------- 14:24:57 policy-pap | sasl.login.refresh.window.jitter = 0.05 14:24:57 kafka | [2024-04-25 14:22:59,019] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=19, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-19 (state.change.logger) 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:30.869932701Z level=info msg="Executing migration" id="dashboard permissions uid scopes" 14:24:57 
policy-db-migrator | 14:24:57 policy-pap | sasl.login.retry.backoff.max.ms = 10000 14:24:57 kafka | [2024-04-25 14:22:59,020] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=32, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-32 (state.change.logger) 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:30.871181248Z level=info msg="Migration successfully executed" id="dashboard permissions uid scopes" duration=1.255007ms 14:24:57 policy-db-migrator | 14:24:57 policy-pap | sasl.login.retry.backoff.ms = 100 14:24:57 kafka | [2024-04-25 14:22:59,020] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=28, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-28 (state.change.logger) 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:30.877456433Z level=info msg="Executing migration" id="drop managed folder create actions" 14:24:57 policy-db-migrator | > upgrade 0100-jpatoscapolicy_targets.sql 14:24:57 policy-pap | sasl.mechanism = GSSAPI 14:24:57 kafka | [2024-04-25 14:22:59,020] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=7, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-7 (state.change.logger) 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:30.87794804Z level=info msg="Migration successfully executed" 
id="drop managed folder create actions" duration=491.976µs 14:24:57 policy-db-migrator | -------------- 14:24:57 policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 14:24:57 kafka | [2024-04-25 14:22:59,020] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=40, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-40 (state.change.logger) 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:30.882910647Z level=info msg="Executing migration" id="alerting notification permissions" 14:24:57 policy-db-migrator | ALTER TABLE jpatoscapolicy_targets ADD COLUMN toscaPolicyName VARCHAR(120) NOT NULL, ADD COLUMN toscaPolicyVersion VARCHAR(20) NOT NULL, ADD CONSTRAINT PK_JPATOSCAPOLICY_TARGETS PRIMARY KEY (toscaPolicyName, toscaPolicyVersion) 14:24:57 policy-pap | sasl.oauthbearer.expected.audience = null 14:24:57 kafka | [2024-04-25 14:22:59,020] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=3, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-3 (state.change.logger) 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:30.883459674Z level=info msg="Migration successfully executed" id="alerting notification permissions" duration=548.877µs 14:24:57 policy-db-migrator | -------------- 14:24:57 policy-pap | sasl.oauthbearer.expected.issuer = null 14:24:57 kafka | [2024-04-25 14:22:59,020] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=36, controllerEpoch=1, leader=1, 
leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-36 (state.change.logger) 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:30.886899842Z level=info msg="Executing migration" id="create query_history_star table v1" 14:24:57 policy-db-migrator | 14:24:57 policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 14:24:57 kafka | [2024-04-25 14:22:59,020] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=47, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-47 (state.change.logger) 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:30.888018477Z level=info msg="Migration successfully executed" id="create query_history_star table v1" duration=1.118115ms 14:24:57 policy-db-migrator | 14:24:57 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 14:24:57 kafka | [2024-04-25 14:22:59,020] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=14, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-14 (state.change.logger) 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:30.892028721Z level=info msg="Executing migration" id="add index query_history.user_id-query_uid" 14:24:57 policy-db-migrator | > upgrade 0110-jpatoscapolicytype_targets.sql 14:24:57 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 14:24:57 kafka | [2024-04-25 14:22:59,020] TRACE [Controller id=1 epoch=1] Sending become-leader 
LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=43, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-43 (state.change.logger) 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:30.893903667Z level=info msg="Migration successfully executed" id="add index query_history.user_id-query_uid" duration=1.874556ms 14:24:57 policy-db-migrator | -------------- 14:24:57 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null 14:24:57 kafka | [2024-04-25 14:22:59,020] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=10, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-10 (state.change.logger) 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:30.900655029Z level=info msg="Executing migration" id="add column org_id in query_history_star" 14:24:57 policy-db-migrator | ALTER TABLE jpatoscapolicytype_targets ADD COLUMN toscaPolicyTypeName VARCHAR(120) NOT NULL, ADD COLUMN toscaPolicyTypeVersion VARCHAR(20) NOT NULL, ADD CONSTRAINT PK_JPATOSCAPOLICYTYPE_TARGETS PRIMARY KEY (toscaPolicyTypeName, toscaPolicyTypeVersion) 14:24:57 policy-pap | sasl.oauthbearer.scope.claim.name = scope 14:24:57 kafka | [2024-04-25 14:22:59,020] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=22, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-22 (state.change.logger) 14:24:57 grafana | 
logger=migrator t=2024-04-25T14:22:30.909516169Z level=info msg="Migration successfully executed" id="add column org_id in query_history_star" duration=8.8605ms 14:24:57 policy-db-migrator | -------------- 14:24:57 policy-pap | sasl.oauthbearer.sub.claim.name = sub 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:30.913674195Z level=info msg="Executing migration" id="alter table query_history_star_mig column user_id type to bigint" 14:24:57 policy-db-migrator | 14:24:57 policy-pap | sasl.oauthbearer.token.endpoint.url = null 14:24:57 kafka | [2024-04-25 14:22:59,020] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=18, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-18 (state.change.logger) 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:30.913742406Z level=info msg="Migration successfully executed" id="alter table query_history_star_mig column user_id type to bigint" duration=71.031µs 14:24:57 policy-db-migrator | 14:24:57 policy-pap | security.protocol = PLAINTEXT 14:24:57 kafka | [2024-04-25 14:22:59,020] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=31, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-31 (state.change.logger) 14:24:57 policy-db-migrator | > upgrade 0120-toscatrigger.sql 14:24:57 policy-pap | security.providers = null 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:30.918898996Z level=info msg="Executing migration" id="create correlation table v1" 14:24:57 policy-pap | send.buffer.bytes = 131072 14:24:57 grafana | 
logger=migrator t=2024-04-25T14:22:30.920986655Z level=info msg="Migration successfully executed" id="create correlation table v1" duration=2.070019ms
14:24:57 kafka | [2024-04-25 14:22:59,020] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=27, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-27 (state.change.logger)
14:24:57 policy-db-migrator | --------------
14:24:57 policy-pap | socket.connection.setup.timeout.max.ms = 30000
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:30.928568187Z level=info msg="Executing migration" id="add index correlations.uid"
14:24:57 kafka | [2024-04-25 14:22:59,020] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=39, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-39 (state.change.logger)
14:24:57 policy-db-migrator | DROP TABLE IF EXISTS toscatrigger
14:24:57 policy-pap | socket.connection.setup.timeout.ms = 10000
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:30.929679873Z level=info msg="Migration successfully executed" id="add index correlations.uid" duration=1.111756ms
14:24:57 kafka | [2024-04-25 14:22:59,020] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=6, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-6 (state.change.logger)
14:24:57 policy-db-migrator | --------------
14:24:57 policy-pap | ssl.cipher.suites = null
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:30.934058822Z level=info msg="Executing migration" id="add index correlations.source_uid"
14:24:57 kafka | [2024-04-25 14:22:59,020] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=35, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-35 (state.change.logger)
14:24:57 policy-db-migrator | 
14:24:57 policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
14:24:57 kafka | [2024-04-25 14:22:59,020] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=2, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-2 (state.change.logger)
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:30.937998226Z level=info msg="Migration successfully executed" id="add index correlations.source_uid" duration=3.934144ms
14:24:57 policy-db-migrator | 
14:24:57 policy-pap | ssl.endpoint.identification.algorithm = https
14:24:57 policy-db-migrator | > upgrade 0130-jpatoscapolicytype_triggers.sql
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:30.942092651Z level=info msg="Executing migration" id="add correlation config column"
14:24:57 kafka | [2024-04-25 14:22:59,020] INFO [Controller id=1 epoch=1] Sending LeaderAndIsr request to broker 1 with 50 become-leader and 0 become-follower partitions (state.change.logger)
14:24:57 policy-pap | ssl.engine.factory.class = null
14:24:57 policy-db-migrator | --------------
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:30.948678091Z level=info msg="Migration successfully executed" id="add correlation config column" duration=6.58486ms
14:24:57 kafka | [2024-04-25 14:22:59,021] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet(1) for 50 partitions (state.change.logger)
14:24:57 policy-pap | ssl.key.password = null
14:24:57 policy-db-migrator | ALTER TABLE jpatoscapolicytype_triggers MODIFY COLUMN triggers LONGBLOB
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:30.953741999Z level=info msg="Executing migration" id="drop index IDX_correlation_uid - v1"
14:24:57 kafka | [2024-04-25 14:22:59,023] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-32 from NewReplica to OnlineReplica (state.change.logger)
14:24:57 policy-pap | ssl.keymanager.algorithm = SunX509
14:24:57 policy-db-migrator | --------------
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:30.954813335Z level=info msg="Migration successfully executed" id="drop index IDX_correlation_uid - v1" duration=1.071436ms
14:24:57 kafka | [2024-04-25 14:22:59,023] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-5 from NewReplica to OnlineReplica (state.change.logger)
14:24:57 policy-pap | ssl.keystore.certificate.chain = null
14:24:57 policy-db-migrator | 
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:30.959183613Z level=info msg="Executing migration" id="drop index IDX_correlation_source_uid - v1"
14:24:57 kafka | [2024-04-25 14:22:59,023] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-44 from NewReplica to OnlineReplica (state.change.logger)
14:24:57 policy-pap | ssl.keystore.key = null
14:24:57 policy-db-migrator | 
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:30.960275049Z level=info msg="Migration successfully executed" id="drop index IDX_correlation_source_uid - v1" duration=1.094216ms
14:24:57 kafka | [2024-04-25 14:22:59,023] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-48 from NewReplica to OnlineReplica (state.change.logger)
14:24:57 policy-pap | ssl.keystore.location = null
14:24:57 policy-db-migrator | > upgrade 0140-toscaparameter.sql
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:30.965082574Z level=info msg="Executing migration" id="Rename table correlation to correlation_tmp_qwerty - v1"
14:24:57 kafka | [2024-04-25 14:22:59,023] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-46 from NewReplica to OnlineReplica (state.change.logger)
14:24:57 policy-pap | ssl.keystore.password = null
14:24:57 policy-db-migrator | --------------
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:30.9890671Z level=info msg="Migration successfully executed" id="Rename table correlation to correlation_tmp_qwerty - v1" duration=23.982506ms
14:24:57 kafka | [2024-04-25 14:22:59,023] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-20 from NewReplica to OnlineReplica (state.change.logger)
14:24:57 policy-pap | ssl.keystore.type = JKS
14:24:57 policy-db-migrator | DROP TABLE IF EXISTS toscaparameter
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:30.991942709Z level=info msg="Executing migration" id="create correlation v2"
14:24:57 kafka | [2024-04-25 14:22:59,023] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-43 from NewReplica to OnlineReplica (state.change.logger)
14:24:57 policy-pap | ssl.protocol = TLSv1.3
14:24:57 policy-db-migrator | --------------
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:30.992852131Z level=info msg="Migration successfully executed" id="create correlation v2" duration=908.542µs
14:24:57 kafka | [2024-04-25 14:22:59,023] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-24 from NewReplica to OnlineReplica (state.change.logger)
14:24:57 policy-pap | ssl.provider = null
14:24:57 policy-db-migrator | 
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:30.995616289Z level=info msg="Executing migration" id="create index IDX_correlation_uid - v2"
14:24:57 kafka | [2024-04-25 14:22:59,023] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-6 from NewReplica to OnlineReplica (state.change.logger)
14:24:57 policy-pap | ssl.secure.random.implementation = null
14:24:57 policy-db-migrator | 
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:30.996363249Z level=info msg="Migration successfully executed" id="create index IDX_correlation_uid - v2" duration=746.731µs
14:24:57 kafka | [2024-04-25 14:22:59,023] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-18 from NewReplica to OnlineReplica (state.change.logger)
14:24:57 policy-pap | ssl.trustmanager.algorithm = PKIX
14:24:57 policy-db-migrator | > upgrade 0150-toscaproperty.sql
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:31.001674982Z level=info msg="Executing migration" id="create index IDX_correlation_source_uid - v2"
14:24:57 kafka | [2024-04-25 14:22:59,023] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-21 from NewReplica to OnlineReplica (state.change.logger)
14:24:57 policy-pap | ssl.truststore.certificates = null
14:24:57 policy-db-migrator | --------------
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:31.002510603Z level=info msg="Migration successfully executed" id="create index IDX_correlation_source_uid - v2" duration=835.421µs
14:24:57 kafka | [2024-04-25 14:22:59,023] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-1 from NewReplica to OnlineReplica (state.change.logger)
14:24:57 policy-pap | ssl.truststore.location = null
14:24:57 policy-db-migrator | DROP TABLE IF EXISTS jpatoscaproperty_constraints
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:31.008418593Z level=info msg="Executing migration" id="create index IDX_correlation_org_id - v2"
14:24:57 kafka | [2024-04-25 14:22:59,023] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-14 from NewReplica to OnlineReplica (state.change.logger)
14:24:57 policy-pap | ssl.truststore.password = null
14:24:57 policy-db-migrator | --------------
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:31.009219384Z level=info msg="Migration successfully executed" id="create index IDX_correlation_org_id - v2" duration=800.51µs
14:24:57 kafka | [2024-04-25 14:22:59,023] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-34 from NewReplica to OnlineReplica (state.change.logger)
14:24:57 policy-pap | ssl.truststore.type = JKS
14:24:57 policy-db-migrator | 
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:31.014224942Z level=info msg="Executing migration" id="copy correlation v1 to v2"
14:24:57 kafka | [2024-04-25 14:22:59,023] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-16 from NewReplica to OnlineReplica (state.change.logger)
14:24:57 policy-pap | transaction.timeout.ms = 60000
14:24:57 policy-db-migrator | --------------
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:31.014469515Z level=info msg="Migration successfully executed" id="copy correlation v1 to v2" duration=245.043µs
14:24:57 kafka | [2024-04-25 14:22:59,023] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-29 from NewReplica to OnlineReplica (state.change.logger)
14:24:57 policy-pap | transactional.id = null
14:24:57 policy-db-migrator | DROP TABLE IF EXISTS jpatoscaproperty_metadata
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:31.020932582Z level=info msg="Executing migration" id="drop correlation_tmp_qwerty"
14:24:57 kafka | [2024-04-25 14:22:59,023] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-11 from NewReplica to OnlineReplica (state.change.logger)
14:24:57 policy-pap | value.serializer = class org.apache.kafka.common.serialization.StringSerializer
14:24:57 policy-db-migrator | --------------
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:31.022230641Z level=info msg="Migration successfully executed" id="drop correlation_tmp_qwerty" duration=1.296589ms
14:24:57 kafka | [2024-04-25 14:22:59,023] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-0 from NewReplica to OnlineReplica (state.change.logger)
14:24:57 policy-pap | 
14:24:57 policy-db-migrator | 
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:31.026022322Z level=info msg="Executing migration" id="add provisioning column"
14:24:57 kafka | [2024-04-25 14:22:59,023] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-22 from NewReplica to OnlineReplica (state.change.logger)
14:24:57 policy-pap | [2024-04-25T14:22:58.157+00:00|INFO|KafkaProducer|main] [Producer clientId=producer-2] Instantiated an idempotent producer.
14:24:57 policy-db-migrator | --------------
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:31.037040762Z level=info msg="Migration successfully executed" id="add provisioning column" duration=11.0186ms
14:24:57 kafka | [2024-04-25 14:22:59,023] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-47 from NewReplica to OnlineReplica (state.change.logger)
14:24:57 policy-pap | [2024-04-25T14:22:58.161+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1
14:24:57 policy-db-migrator | DROP TABLE IF EXISTS toscaproperty
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:31.041384371Z level=info msg="Executing migration" id="create entity_events table"
14:24:57 kafka | [2024-04-25 14:22:59,023] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-36 from NewReplica to OnlineReplica (state.change.logger)
14:24:57 policy-pap | [2024-04-25T14:22:58.161+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5
14:24:57 policy-db-migrator | --------------
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:31.042278553Z level=info msg="Migration successfully executed" id="create entity_events table" duration=891.382µs
14:24:57 kafka | [2024-04-25 14:22:59,023] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-28 from NewReplica to OnlineReplica (state.change.logger)
14:24:57 policy-pap | [2024-04-25T14:22:58.161+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1714054978161
14:24:57 policy-db-migrator | 
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:31.047127869Z level=info msg="Executing migration" id="create dashboard public config v1"
14:24:57 kafka | [2024-04-25 14:22:59,023] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-42 from NewReplica to OnlineReplica (state.change.logger)
14:24:57 policy-pap | [2024-04-25T14:22:58.161+00:00|INFO|InlineKafkaTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=9f8b2bf2-7d12-44fd-abe8-8a8ee96c9ee3, alive=false, publisher=KafkaPublisherWrapper []]]: KAFKA SINK created
14:24:57 policy-db-migrator | 
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:31.048185793Z level=info msg="Migration successfully executed" id="create dashboard public config v1" duration=1.058134ms
14:24:57 kafka | [2024-04-25 14:22:59,024] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-9 from NewReplica to OnlineReplica (state.change.logger)
14:24:57 policy-pap | [2024-04-25T14:22:58.161+00:00|INFO|ServiceManager|main] Policy PAP starting PAP Activator
14:24:57 policy-db-migrator | > upgrade 0160-jpapolicyaudit_pk.sql
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:31.070907702Z level=info msg="Executing migration" id="drop index UQE_dashboard_public_config_uid - v1"
14:24:57 kafka | [2024-04-25 14:22:59,024] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-37 from NewReplica to OnlineReplica (state.change.logger)
14:24:57 policy-pap | [2024-04-25T14:22:58.161+00:00|INFO|ServiceManager|main] Policy PAP starting PDP publisher
14:24:57 policy-db-migrator | --------------
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:31.071683213Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop index UQE_dashboard_public_config_uid - v1"
14:24:57 kafka | [2024-04-25 14:22:59,024] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-13 from NewReplica to OnlineReplica (state.change.logger)
14:24:57 policy-pap | [2024-04-25T14:22:58.163+00:00|INFO|ServiceManager|main] Policy PAP starting Policy Notification publisher
14:24:57 policy-db-migrator | ALTER TABLE jpapolicyaudit DROP PRIMARY KEY
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:31.078348523Z level=info msg="Executing migration" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v1"
14:24:57 kafka | [2024-04-25 14:22:59,024] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-30 from NewReplica to OnlineReplica (state.change.logger)
14:24:57 policy-pap | [2024-04-25T14:22:58.168+00:00|INFO|ServiceManager|main] Policy PAP starting PDP update timers
14:24:57 policy-db-migrator | --------------
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:31.078815709Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v1"
14:24:57 kafka | [2024-04-25 14:22:59,024] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-35 from NewReplica to OnlineReplica (state.change.logger)
14:24:57 policy-pap | [2024-04-25T14:22:58.178+00:00|INFO|ServiceManager|main] Policy PAP starting PDP state-change timers
14:24:57 policy-db-migrator | 
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:31.084564197Z level=info msg="Executing migration" id="Drop old dashboard public config table"
14:24:57 kafka | [2024-04-25 14:22:59,024] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-39 from NewReplica to OnlineReplica (state.change.logger)
14:24:57 policy-pap | [2024-04-25T14:22:58.178+00:00|INFO|ServiceManager|main] Policy PAP starting PDP modification lock
14:24:57 policy-db-migrator | --------------
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:31.085862266Z level=info msg="Migration successfully executed" id="Drop old dashboard public config table" duration=1.300019ms
14:24:57 policy-pap | [2024-04-25T14:22:58.178+00:00|INFO|ServiceManager|main] Policy PAP starting PDP modification requests
14:24:57 policy-db-migrator | ALTER TABLE jpapolicyaudit ADD CONSTRAINT PK_JPAPOLICYAUDIT PRIMARY KEY (ID)
14:24:57 kafka | [2024-04-25 14:22:59,024] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-12 from NewReplica to OnlineReplica (state.change.logger)
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:31.089808889Z level=info msg="Executing migration" id="recreate dashboard public config v1"
14:24:57 policy-pap | [2024-04-25T14:22:58.179+00:00|INFO|ServiceManager|main] Policy PAP starting PDP expiration timer
14:24:57 policy-db-migrator | --------------
14:24:57 kafka | [2024-04-25 14:22:59,024] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-27 from NewReplica to OnlineReplica (state.change.logger)
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:31.091539723Z level=info msg="Migration successfully executed" id="recreate dashboard public config v1" duration=1.727114ms
14:24:57 policy-pap | [2024-04-25T14:22:58.180+00:00|INFO|TimerManager|Thread-9] timer manager update started
14:24:57 kafka | [2024-04-25 14:22:59,024] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-45 from NewReplica to OnlineReplica (state.change.logger)
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:31.098912942Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_uid - v1"
14:24:57 policy-pap | [2024-04-25T14:22:58.183+00:00|INFO|ServiceManager|main] Policy PAP started
14:24:57 policy-db-migrator | 
14:24:57 kafka | [2024-04-25 14:22:59,024] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-19 from NewReplica to OnlineReplica (state.change.logger)
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:31.100040738Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_uid - v1" duration=1.128086ms
14:24:57 policy-pap | [2024-04-25T14:22:58.186+00:00|INFO|TimerManager|Thread-10] timer manager state-change started
14:24:57 policy-db-migrator | 
14:24:57 kafka | [2024-04-25 14:22:59,024] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-49 from NewReplica to OnlineReplica (state.change.logger)
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:31.104696801Z level=info msg="Executing migration" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v1"
14:24:57 policy-pap | [2024-04-25T14:22:58.191+00:00|INFO|PolicyPapApplication|main] Started PolicyPapApplication in 9.626 seconds (process running for 10.325)
14:24:57 policy-db-migrator | > upgrade 0170-pdpstatistics_pk.sql
14:24:57 kafka | [2024-04-25 14:22:59,024] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-40 from NewReplica to OnlineReplica (state.change.logger)
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:31.106614557Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v1" duration=1.917086ms
14:24:57 policy-pap | [2024-04-25T14:22:58.571+00:00|INFO|Metadata|kafka-producer-network-thread | producer-2] [Producer clientId=producer-2] Cluster ID: lFyKLv7sTJO7XXtTZrPgZw
14:24:57 policy-db-migrator | --------------
14:24:57 kafka | [2024-04-25 14:22:59,024] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-41 from NewReplica to OnlineReplica (state.change.logger)
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:31.11271873Z level=info msg="Executing migration" id="drop index UQE_dashboard_public_config_uid - v2"
14:24:57 policy-pap | [2024-04-25T14:22:58.571+00:00|INFO|Metadata|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Cluster ID: lFyKLv7sTJO7XXtTZrPgZw
14:24:57 policy-db-migrator | ALTER TABLE pdpstatistics DROP PRIMARY KEY
14:24:57 kafka | [2024-04-25 14:22:59,024] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-38 from NewReplica to OnlineReplica (state.change.logger)
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:31.114018618Z level=info msg="Migration successfully executed" id="drop index UQE_dashboard_public_config_uid - v2" duration=1.299688ms
14:24:57 policy-pap | [2024-04-25T14:22:58.572+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 2 : {policy-pdp-pap=UNKNOWN_TOPIC_OR_PARTITION}
14:24:57 policy-db-migrator | --------------
14:24:57 kafka | [2024-04-25 14:22:59,024] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-8 from NewReplica to OnlineReplica (state.change.logger)
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:31.120645208Z level=info msg="Executing migration" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v2"
14:24:57 policy-pap | [2024-04-25T14:22:58.573+00:00|INFO|Metadata|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Cluster ID: lFyKLv7sTJO7XXtTZrPgZw
14:24:57 policy-db-migrator | 
14:24:57 kafka | [2024-04-25 14:22:59,024] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-7 from NewReplica to OnlineReplica (state.change.logger)
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:31.122331981Z level=info msg="Migration successfully executed" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v2" duration=1.686583ms
14:24:57 policy-pap | [2024-04-25T14:22:58.663+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-b957469a-2969-4bff-8555-1bfe3e4d4da0-3, groupId=b957469a-2969-4bff-8555-1bfe3e4d4da0] Error while fetching metadata with correlation id 2 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
14:24:57 policy-db-migrator | --------------
14:24:57 kafka | [2024-04-25 14:22:59,024] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-33 from NewReplica to OnlineReplica (state.change.logger)
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:31.126963364Z level=info msg="Executing migration" id="Drop public config table"
14:24:57 policy-pap | [2024-04-25T14:22:58.664+00:00|INFO|Metadata|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-b957469a-2969-4bff-8555-1bfe3e4d4da0-3, groupId=b957469a-2969-4bff-8555-1bfe3e4d4da0] Cluster ID: lFyKLv7sTJO7XXtTZrPgZw
14:24:57 policy-db-migrator | ALTER TABLE pdpstatistics ADD CONSTRAINT PK_PDPSTATISTICS PRIMARY KEY (ID)
14:24:57 kafka | [2024-04-25 14:22:59,024] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-25 from NewReplica to OnlineReplica (state.change.logger)
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:31.128263512Z level=info msg="Migration successfully executed" id="Drop public config table" duration=1.299529ms
14:24:57 policy-pap | [2024-04-25T14:22:58.683+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 4 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
14:24:57 policy-db-migrator | --------------
14:24:57 kafka | [2024-04-25 14:22:59,024] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-31 from NewReplica to OnlineReplica (state.change.logger)
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:31.131540356Z level=info msg="Executing migration" id="Recreate dashboard public config v2"
14:24:57 policy-pap | [2024-04-25T14:22:58.702+00:00|INFO|TransactionManager|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] ProducerId set to 0 with epoch 0
14:24:57 policy-db-migrator | 
14:24:57 kafka | [2024-04-25 14:22:59,024] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-23 from NewReplica to OnlineReplica (state.change.logger)
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:31.132656671Z level=info msg="Migration successfully executed" id="Recreate dashboard public config v2" duration=1.116115ms
14:24:57 policy-pap | [2024-04-25T14:22:58.702+00:00|INFO|TransactionManager|kafka-producer-network-thread | producer-2] [Producer clientId=producer-2] ProducerId set to 1 with epoch 0
14:24:57 policy-db-migrator | 
14:24:57 kafka | [2024-04-25 14:22:59,024] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-10 from NewReplica to OnlineReplica (state.change.logger)
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:31.137325495Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_uid - v2"
14:24:57 policy-pap | [2024-04-25T14:22:58.785+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 6 : {policy-pdp-pap=UNKNOWN_TOPIC_OR_PARTITION}
14:24:57 policy-db-migrator | > upgrade 0180-jpatoscanodetemplate_metadata.sql
14:24:57 kafka | [2024-04-25 14:22:59,024] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-2 from NewReplica to OnlineReplica (state.change.logger)
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:31.138368288Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_uid - v2" duration=1.041793ms
14:24:57 policy-pap | [2024-04-25T14:22:58.846+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-b957469a-2969-4bff-8555-1bfe3e4d4da0-3, groupId=b957469a-2969-4bff-8555-1bfe3e4d4da0] Error while fetching metadata with correlation id 4 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
14:24:57 policy-db-migrator | --------------
14:24:57 kafka | [2024-04-25 14:22:59,024] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-17 from NewReplica to OnlineReplica (state.change.logger)
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:31.142023629Z level=info msg="Executing migration" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v2"
14:24:57 policy-pap | [2024-04-25T14:22:59.836+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Discovered group coordinator kafka:9092 (id: 2147483646 rack: null)
14:24:57 policy-db-migrator | ALTER TABLE jpatoscanodetemplate_metadata MODIFY COLUMN METADATA LONGTEXT
14:24:57 kafka | [2024-04-25 14:22:59,024] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-4 from NewReplica to OnlineReplica (state.change.logger)
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:31.143127543Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v2" duration=1.103684ms
14:24:57 policy-pap | [2024-04-25T14:22:59.842+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] (Re-)joining group
14:24:57 policy-db-migrator | --------------
14:24:57 kafka | [2024-04-25 14:22:59,024] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-15 from NewReplica to OnlineReplica (state.change.logger)
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:31.148226233Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_access_token - v2"
14:24:57 policy-db-migrator | 
14:24:57 policy-pap | [2024-04-25T14:22:59.871+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Request joining group due to: need to re-join with the given member-id: consumer-policy-pap-4-7d6a46fb-ce9a-48ca-aab9-0c5f15e0232a
14:24:57 kafka | [2024-04-25 14:22:59,024] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-26 from NewReplica to OnlineReplica (state.change.logger)
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:31.149323197Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_access_token - v2" duration=1.096644ms
14:24:57 policy-db-migrator | 
14:24:57 policy-pap | [2024-04-25T14:22:59.872+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Request joining group due to: rebalance failed due to 'The group member needs to have a valid member id before actually entering a consumer group.' (MemberIdRequiredException)
14:24:57 kafka | [2024-04-25 14:22:59,024] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-3 from NewReplica to OnlineReplica (state.change.logger)
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:31.154601829Z level=info msg="Executing migration" id="Rename table dashboard_public_config to dashboard_public - v2"
14:24:57 policy-db-migrator | > upgrade 0100-upgrade.sql
14:24:57 policy-pap | [2024-04-25T14:22:59.872+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] (Re-)joining group
14:24:57 kafka | [2024-04-25 14:22:59,024] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger)
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:31.178869849Z level=info msg="Migration successfully executed" id="Rename table dashboard_public_config to dashboard_public - v2" duration=24.26906ms
14:24:57 policy-db-migrator | --------------
14:24:57 policy-pap | [2024-04-25T14:22:59.886+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-b957469a-2969-4bff-8555-1bfe3e4d4da0-3, groupId=b957469a-2969-4bff-8555-1bfe3e4d4da0] Discovered group coordinator kafka:9092 (id: 2147483646 rack: null)
14:24:57 kafka | [2024-04-25 14:22:59,028] INFO [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 for 50 partitions (state.change.logger)
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:31.181473975Z level=info msg="Executing migration" id="add annotations_enabled column"
14:24:57 policy-db-migrator | select 'upgrade to 1100 completed' as msg
14:24:57 policy-pap | [2024-04-25T14:22:59.887+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-b957469a-2969-4bff-8555-1bfe3e4d4da0-3, groupId=b957469a-2969-4bff-8555-1bfe3e4d4da0] (Re-)joining group
14:24:57 kafka | [2024-04-25 14:22:59,028] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=13, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:31.189429293Z level=info msg="Migration successfully executed" id="add annotations_enabled column" duration=7.954568ms
14:24:57 policy-db-migrator | --------------
14:24:57 policy-pap | [2024-04-25T14:22:59.898+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-b957469a-2969-4bff-8555-1bfe3e4d4da0-3, groupId=b957469a-2969-4bff-8555-1bfe3e4d4da0] Request joining group due to: need to re-join with the given member-id: consumer-b957469a-2969-4bff-8555-1bfe3e4d4da0-3-56fea4ad-7919-46ce-ba06-2844c28fe167
14:24:57 kafka | [2024-04-25 14:22:59,028] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=46, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:31.196053513Z level=info msg="Executing migration" id="add time_selection_enabled column"
14:24:57 policy-db-migrator | 
14:24:57 policy-pap | [2024-04-25T14:22:59.898+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-b957469a-2969-4bff-8555-1bfe3e4d4da0-3, groupId=b957469a-2969-4bff-8555-1bfe3e4d4da0] Request joining group due to: rebalance failed due to 'The group member needs to have a valid member id before actually entering a consumer group.' (MemberIdRequiredException)
14:24:57 kafka | [2024-04-25 14:22:59,028] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=9, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:31.204996434Z level=info msg="Migration successfully executed" id="add time_selection_enabled column" duration=8.942281ms
14:24:57 policy-db-migrator | msg
14:24:57 policy-pap | [2024-04-25T14:22:59.898+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-b957469a-2969-4bff-8555-1bfe3e4d4da0-3, groupId=b957469a-2969-4bff-8555-1bfe3e4d4da0] (Re-)joining group
14:24:57 kafka | [2024-04-25 14:22:59,028] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=42, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:31.208765676Z level=info msg="Executing migration" id="delete orphaned public dashboards"
14:24:57 policy-db-migrator | upgrade to 1100 completed
14:24:57 policy-pap | [2024-04-25T14:23:02.894+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Successfully joined group with generation Generation{generationId=1, memberId='consumer-policy-pap-4-7d6a46fb-ce9a-48ca-aab9-0c5f15e0232a', protocol='range'}
14:24:57 kafka | [2024-04-25 14:22:59,028] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=21, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:31.209007479Z level=info msg="Migration successfully executed" id="delete orphaned public dashboards" duration=239.563µs
14:24:57 policy-db-migrator | 
14:24:57 policy-pap | [2024-04-25T14:23:02.904+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-b957469a-2969-4bff-8555-1bfe3e4d4da0-3, groupId=b957469a-2969-4bff-8555-1bfe3e4d4da0] Successfully joined group with generation Generation{generationId=1, memberId='consumer-b957469a-2969-4bff-8555-1bfe3e4d4da0-3-56fea4ad-7919-46ce-ba06-2844c28fe167', protocol='range'}
14:24:57 kafka | [2024-04-25 14:22:59,028] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=17, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:31.213303337Z level=info msg="Executing migration" id="add share column"
14:24:57 policy-db-migrator | > upgrade 0100-jpapolicyaudit_renameuser.sql
14:24:57 policy-pap | [2024-04-25T14:23:02.906+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Finished assignment for group at generation 1: {consumer-policy-pap-4-7d6a46fb-ce9a-48ca-aab9-0c5f15e0232a=Assignment(partitions=[policy-pdp-pap-0])}
14:24:57 kafka | [2024-04-25 14:22:59,028] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=30, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:31.221704421Z level=info msg="Migration successfully executed" id="add share column" duration=8.400564ms
14:24:57 policy-db-migrator | --------------
14:24:57 policy-pap | [2024-04-25T14:23:02.907+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-b957469a-2969-4bff-8555-1bfe3e4d4da0-3, groupId=b957469a-2969-4bff-8555-1bfe3e4d4da0] Finished assignment for group at generation 1: {consumer-b957469a-2969-4bff-8555-1bfe3e4d4da0-3-56fea4ad-7919-46ce-ba06-2844c28fe167=Assignment(partitions=[policy-pdp-pap-0])}
14:24:57 kafka | [2024-04-25 14:22:59,028] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=26, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:31.225080527Z level=info msg="Executing migration" id="backfill empty share column fields with default of public"
14:24:57 policy-db-migrator | ALTER TABLE jpapolicyaudit RENAME COLUMN USER TO USERNAME
14:24:57 policy-pap |
[2024-04-25T14:23:02.947+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-b957469a-2969-4bff-8555-1bfe3e4d4da0-3, groupId=b957469a-2969-4bff-8555-1bfe3e4d4da0] Successfully synced group in generation Generation{generationId=1, memberId='consumer-b957469a-2969-4bff-8555-1bfe3e4d4da0-3-56fea4ad-7919-46ce-ba06-2844c28fe167', protocol='range'} 14:24:57 kafka | [2024-04-25 14:22:59,028] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=5, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:31.22529311Z level=info msg="Migration successfully executed" id="backfill empty share column fields with default of public" duration=212.083µs 14:24:57 policy-db-migrator | -------------- 14:24:57 policy-pap | [2024-04-25T14:23:02.948+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-b957469a-2969-4bff-8555-1bfe3e4d4da0-3, groupId=b957469a-2969-4bff-8555-1bfe3e4d4da0] Notifying assignor about the new Assignment(partitions=[policy-pdp-pap-0]) 14:24:57 kafka | [2024-04-25 14:22:59,028] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=38, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:31.230386729Z level=info msg="Executing migration" id="create file table" 14:24:57 policy-db-migrator | 14:24:57 policy-pap | [2024-04-25T14:23:02.952+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer 
clientId=consumer-policy-pap-4, groupId=policy-pap] Successfully synced group in generation Generation{generationId=1, memberId='consumer-policy-pap-4-7d6a46fb-ce9a-48ca-aab9-0c5f15e0232a', protocol='range'} 14:24:57 kafka | [2024-04-25 14:22:59,028] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=1, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:31.231329652Z level=info msg="Migration successfully executed" id="create file table" duration=942.323µs 14:24:57 policy-db-migrator | 14:24:57 policy-pap | [2024-04-25T14:23:02.953+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Notifying assignor about the new Assignment(partitions=[policy-pdp-pap-0]) 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:31.235144124Z level=info msg="Executing migration" id="file table idx: path natural pk" 14:24:57 policy-db-migrator | > upgrade 0110-idx_tsidx1.sql 14:24:57 policy-pap | [2024-04-25T14:23:02.953+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-b957469a-2969-4bff-8555-1bfe3e4d4da0-3, groupId=b957469a-2969-4bff-8555-1bfe3e4d4da0] Adding newly assigned partitions: policy-pdp-pap-0 14:24:57 kafka | [2024-04-25 14:22:59,028] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=34, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:31.236698985Z level=info 
msg="Migration successfully executed" id="file table idx: path natural pk" duration=1.55297ms 14:24:57 policy-db-migrator | -------------- 14:24:57 policy-pap | [2024-04-25T14:23:02.953+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Adding newly assigned partitions: policy-pdp-pap-0 14:24:57 kafka | [2024-04-25 14:22:59,028] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=16, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:31.241946166Z level=info msg="Executing migration" id="file table idx: parent_folder_path_hash fast folder retrieval" 14:24:57 policy-db-migrator | DROP INDEX IDX_TSIDX1 ON pdpstatistics 14:24:57 policy-pap | [2024-04-25T14:23:02.996+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Found no committed offset for partition policy-pdp-pap-0 14:24:57 kafka | [2024-04-25 14:22:59,028] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=45, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:31.243918493Z level=info msg="Migration successfully executed" id="file table idx: parent_folder_path_hash fast folder retrieval" duration=1.971687ms 14:24:57 policy-db-migrator | -------------- 14:24:57 policy-pap | [2024-04-25T14:23:02.997+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer 
clientId=consumer-b957469a-2969-4bff-8555-1bfe3e4d4da0-3, groupId=b957469a-2969-4bff-8555-1bfe3e4d4da0] Found no committed offset for partition policy-pdp-pap-0 14:24:57 kafka | [2024-04-25 14:22:59,028] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=12, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:31.25030005Z level=info msg="Executing migration" id="create file_meta table" 14:24:57 policy-db-migrator | 14:24:57 policy-pap | [2024-04-25T14:23:03.032+00:00|INFO|SubscriptionState|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Resetting offset for partition policy-pdp-pap-0 to position FetchPosition{offset=1, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[kafka:9092 (id: 1 rack: null)], epoch=0}}. 
14:24:57 kafka | [2024-04-25 14:22:59,028] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=41, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:31.251080241Z level=info msg="Migration successfully executed" id="create file_meta table" duration=779.872µs 14:24:57 policy-db-migrator | -------------- 14:24:57 policy-pap | [2024-04-25T14:23:03.032+00:00|INFO|SubscriptionState|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-b957469a-2969-4bff-8555-1bfe3e4d4da0-3, groupId=b957469a-2969-4bff-8555-1bfe3e4d4da0] Resetting offset for partition policy-pdp-pap-0 to position FetchPosition{offset=1, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[kafka:9092 (id: 1 rack: null)], epoch=0}}. 
14:24:57 kafka | [2024-04-25 14:22:59,028] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=24, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:31.258600002Z level=info msg="Executing migration" id="file table idx: path key" 14:24:57 policy-db-migrator | CREATE INDEX IDXTSIDX1 ON pdpstatistics(timeStamp, name, version) 14:24:57 policy-pap | [2024-04-25T14:23:04.634+00:00|INFO|[/policy/pap/v1]|http-nio-6969-exec-3] Initializing Spring DispatcherServlet 'dispatcherServlet' 14:24:57 kafka | [2024-04-25 14:22:59,028] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=20, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:31.260339826Z level=info msg="Migration successfully executed" id="file table idx: path key" duration=1.739314ms 14:24:57 policy-db-migrator | -------------- 14:24:57 policy-pap | [2024-04-25T14:23:04.634+00:00|INFO|DispatcherServlet|http-nio-6969-exec-3] Initializing Servlet 'dispatcherServlet' 14:24:57 kafka | [2024-04-25 14:22:59,028] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=49, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:31.265813241Z 
level=info msg="Executing migration" id="set path collation in file table" 14:24:57 policy-db-migrator | 14:24:57 policy-pap | [2024-04-25T14:23:04.637+00:00|INFO|DispatcherServlet|http-nio-6969-exec-3] Completed initialization in 3 ms 14:24:57 kafka | [2024-04-25 14:22:59,028] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:31.265878732Z level=info msg="Migration successfully executed" id="set path collation in file table" duration=65.911µs 14:24:57 policy-db-migrator | 14:24:57 policy-pap | [2024-04-25T14:23:19.405+00:00|INFO|OrderedServiceImpl|KAFKA-source-policy-heartbeat] ***** OrderedServiceImpl implementers: 14:24:57 kafka | [2024-04-25 14:22:59,028] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=29, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:31.270175269Z level=info msg="Executing migration" id="migrate contents column to mediumblob for MySQL" 14:24:57 policy-db-migrator | > upgrade 0120-audit_sequence.sql 14:24:57 policy-pap | [] 14:24:57 kafka | [2024-04-25 14:22:59,028] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=25, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 
(state.change.logger) 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:31.27023964Z level=info msg="Migration successfully executed" id="migrate contents column to mediumblob for MySQL" duration=64.611µs 14:24:57 policy-db-migrator | -------------- 14:24:57 policy-pap | [2024-04-25T14:23:19.406+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 14:24:57 kafka | [2024-04-25 14:22:59,028] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=8, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:31.276429435Z level=info msg="Executing migration" id="managed permissions migration" 14:24:57 policy-db-migrator | CREATE TABLE IF NOT EXISTS audit_sequence (SEQ_NAME VARCHAR(50) NOT NULL, SEQ_COUNT DECIMAL(38) DEFAULT NULL, PRIMARY KEY PK_SEQUENCE (SEQ_NAME)) 14:24:57 policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"550bf7bf-5870-4f3b-a328-6d7a1c64d750","timestampMs":1714054999364,"name":"apex-80274cf2-35d0-404d-a495-e62b89ee6834","pdpGroup":"defaultGroup"} 14:24:57 kafka | [2024-04-25 14:22:59,029] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=37, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:31.277283027Z level=info msg="Migration successfully executed" id="managed permissions migration" duration=856.732µs 14:24:57 policy-db-migrator | -------------- 
14:24:57 policy-pap | [2024-04-25T14:23:19.406+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 14:24:57 kafka | [2024-04-25 14:22:59,029] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=4, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:31.282324745Z level=info msg="Executing migration" id="managed folder permissions alert actions migration" 14:24:57 policy-db-migrator | 14:24:57 policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"550bf7bf-5870-4f3b-a328-6d7a1c64d750","timestampMs":1714054999364,"name":"apex-80274cf2-35d0-404d-a495-e62b89ee6834","pdpGroup":"defaultGroup"} 14:24:57 kafka | [2024-04-25 14:22:59,029] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=33, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:31.28265198Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions migration" duration=327.635µs 14:24:57 policy-db-migrator | -------------- 14:24:57 policy-pap | [2024-04-25T14:23:19.417+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-pdp-pap] no listeners for autonomous message of type PdpStatus 14:24:57 kafka | [2024-04-25 14:22:59,029] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=15, controllerEpoch=1, leader=1, 
leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:31.286123787Z level=info msg="Executing migration" id="RBAC action name migrator" 14:24:57 policy-db-migrator | INSERT INTO audit_sequence(SEQ_NAME, SEQ_COUNT) VALUES('SEQ_GEN', (SELECT IFNULL(max(id),0) FROM jpapolicyaudit)) 14:24:57 policy-pap | [2024-04-25T14:23:19.501+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-80274cf2-35d0-404d-a495-e62b89ee6834 PdpUpdate starting 14:24:57 kafka | [2024-04-25 14:22:59,029] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=48, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:31.288180825Z level=info msg="Migration successfully executed" id="RBAC action name migrator" duration=2.056719ms 14:24:57 policy-db-migrator | -------------- 14:24:57 kafka | [2024-04-25 14:22:59,029] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=11, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 14:24:57 policy-pap | [2024-04-25T14:23:19.502+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-80274cf2-35d0-404d-a495-e62b89ee6834 PdpUpdate starting listener 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:31.291086084Z level=info msg="Executing migration" id="Add UID column to playlist" 14:24:57 policy-db-migrator | 14:24:57 
kafka | [2024-04-25 14:22:59,029] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=44, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 14:24:57 policy-pap | [2024-04-25T14:23:19.502+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-80274cf2-35d0-404d-a495-e62b89ee6834 PdpUpdate starting timer 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:31.300053406Z level=info msg="Migration successfully executed" id="Add UID column to playlist" duration=8.966502ms 14:24:57 policy-db-migrator | 14:24:57 kafka | [2024-04-25 14:22:59,029] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=23, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 14:24:57 policy-pap | [2024-04-25T14:23:19.503+00:00|INFO|TimerManager|KAFKA-source-policy-heartbeat] update timer registered Timer [name=f38f5279-b344-4d66-86a2-21ebfb9d4e55, expireMs=1714055029503] 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:31.305837325Z level=info msg="Executing migration" id="Update uid column values in playlist" 14:24:57 policy-pap | [2024-04-25T14:23:19.505+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-80274cf2-35d0-404d-a495-e62b89ee6834 PdpUpdate starting enqueue 14:24:57 policy-db-migrator | > upgrade 0130-statistics_sequence.sql 14:24:57 kafka | [2024-04-25 14:22:59,029] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=19, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], 
addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:31.305997197Z level=info msg="Migration successfully executed" id="Update uid column values in playlist" duration=160.153µs 14:24:57 policy-pap | [2024-04-25T14:23:19.505+00:00|INFO|TimerManager|Thread-9] update timer waiting 29998ms Timer [name=f38f5279-b344-4d66-86a2-21ebfb9d4e55, expireMs=1714055029503] 14:24:57 policy-db-migrator | -------------- 14:24:57 kafka | [2024-04-25 14:22:59,029] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=32, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:31.313062573Z level=info msg="Executing migration" id="Add index for uid in playlist" 14:24:57 policy-pap | [2024-04-25T14:23:19.505+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-80274cf2-35d0-404d-a495-e62b89ee6834 PdpUpdate started 14:24:57 policy-db-migrator | CREATE TABLE IF NOT EXISTS statistics_sequence (SEQ_NAME VARCHAR(50) NOT NULL, SEQ_COUNT DECIMAL(38) DEFAULT NULL, PRIMARY KEY PK_SEQUENCE (SEQ_NAME)) 14:24:57 kafka | [2024-04-25 14:22:59,029] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=28, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:31.314903748Z level=info msg="Migration successfully executed" id="Add index for uid in playlist" duration=1.844005ms 
14:24:57 policy-pap | [2024-04-25T14:23:19.507+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap] 14:24:57 policy-db-migrator | -------------- 14:24:57 kafka | [2024-04-25 14:22:59,029] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=7, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:31.319003694Z level=info msg="Executing migration" id="update group index for alert rules" 14:24:57 policy-pap | {"source":"pap-b43ecee3-a99c-4739-8071-6199e9c3e680","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"messageName":"PDP_UPDATE","requestId":"f38f5279-b344-4d66-86a2-21ebfb9d4e55","timestampMs":1714054999480,"name":"apex-80274cf2-35d0-404d-a495-e62b89ee6834","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 14:24:57 policy-db-migrator | 14:24:57 kafka | [2024-04-25 14:22:59,029] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=40, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:31.319693253Z level=info msg="Migration successfully executed" id="update group index for alert rules" duration=691.179µs 14:24:57 policy-pap | [2024-04-25T14:23:19.544+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 14:24:57 policy-db-migrator | -------------- 14:24:57 kafka | [2024-04-25 14:22:59,029] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=3, controllerEpoch=1, leader=1, 
leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:31.327708112Z level=info msg="Executing migration" id="managed folder permissions alert actions repeated migration" 14:24:57 policy-pap | {"source":"pap-b43ecee3-a99c-4739-8071-6199e9c3e680","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"messageName":"PDP_UPDATE","requestId":"f38f5279-b344-4d66-86a2-21ebfb9d4e55","timestampMs":1714054999480,"name":"apex-80274cf2-35d0-404d-a495-e62b89ee6834","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 14:24:57 policy-db-migrator | INSERT INTO statistics_sequence(SEQ_NAME, SEQ_COUNT) VALUES('SEQ_GEN', (SELECT IFNULL(max(id),0) FROM pdpstatistics)) 14:24:57 kafka | [2024-04-25 14:22:59,029] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=36, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:31.328038847Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions repeated migration" duration=331.495µs 14:24:57 policy-pap | [2024-04-25T14:23:19.545+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_UPDATE 14:24:57 policy-db-migrator | -------------- 14:24:57 kafka | [2024-04-25 14:22:59,029] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=47, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from 
controller 1 epoch 1 (state.change.logger) 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:31.332227424Z level=info msg="Executing migration" id="admin only folder/dashboard permission" 14:24:57 policy-pap | [2024-04-25T14:23:19.547+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 14:24:57 policy-db-migrator | 14:24:57 kafka | [2024-04-25 14:22:59,029] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=14, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:31.332993064Z level=info msg="Migration successfully executed" id="admin only folder/dashboard permission" duration=765.44µs 14:24:57 policy-pap | {"source":"pap-b43ecee3-a99c-4739-8071-6199e9c3e680","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"messageName":"PDP_UPDATE","requestId":"f38f5279-b344-4d66-86a2-21ebfb9d4e55","timestampMs":1714054999480,"name":"apex-80274cf2-35d0-404d-a495-e62b89ee6834","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 14:24:57 policy-db-migrator | -------------- 14:24:57 kafka | [2024-04-25 14:22:59,030] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=43, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:31.336337209Z level=info msg="Executing migration" id="add action column to seed_assignment" 14:24:57 policy-pap | [2024-04-25T14:23:19.547+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_UPDATE 
14:24:57 policy-db-migrator | TRUNCATE TABLE sequence 14:24:57 kafka | [2024-04-25 14:22:59,030] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=10, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:31.345763597Z level=info msg="Migration successfully executed" id="add action column to seed_assignment" duration=9.425678ms 14:24:57 policy-pap | [2024-04-25T14:23:19.567+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 14:24:57 policy-db-migrator | -------------- 14:24:57 kafka | [2024-04-25 14:22:59,030] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=22, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:31.351722119Z level=info msg="Executing migration" id="add scope column to seed_assignment" 14:24:57 policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"e4357cfc-69c4-4da1-be8c-522d64c2326f","timestampMs":1714054999558,"name":"apex-80274cf2-35d0-404d-a495-e62b89ee6834","pdpGroup":"defaultGroup"} 14:24:57 policy-db-migrator | 14:24:57 kafka | [2024-04-25 14:22:59,030] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=18, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) 
correlation id 3 from controller 1 epoch 1 (state.change.logger) 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:31.360708371Z level=info msg="Migration successfully executed" id="add scope column to seed_assignment" duration=8.986102ms 14:24:57 policy-pap | [2024-04-25T14:23:19.568+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-pdp-pap] no listeners for autonomous message of type PdpStatus 14:24:57 policy-db-migrator | 14:24:57 kafka | [2024-04-25 14:22:59,030] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=31, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:31.367523843Z level=info msg="Executing migration" id="remove unique index builtin_role_role_name before nullable update" 14:24:57 policy-pap | [2024-04-25T14:23:19.568+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 14:24:57 policy-db-migrator | > upgrade 0100-pdpstatistics.sql 14:24:57 kafka | [2024-04-25 14:22:59,030] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=27, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 14:24:57 policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"f38f5279-b344-4d66-86a2-21ebfb9d4e55","responseStatus":"SUCCESS","responseMessage":"Pdp update 
successful."},"messageName":"PDP_STATUS","requestId":"c12a1405-928e-4481-a978-984303d383c8","timestampMs":1714054999559,"name":"apex-80274cf2-35d0-404d-a495-e62b89ee6834","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:31.368281803Z level=info msg="Migration successfully executed" id="remove unique index builtin_role_role_name before nullable update" duration=758.02µs 14:24:57 policy-db-migrator | -------------- 14:24:57 kafka | [2024-04-25 14:22:59,030] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=39, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 14:24:57 policy-pap | [2024-04-25T14:23:19.569+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-80274cf2-35d0-404d-a495-e62b89ee6834 PdpUpdate stopping 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:31.37096214Z level=info msg="Executing migration" id="update seed_assignment role_name column to nullable" 14:24:57 policy-db-migrator | DROP INDEX IDXTSIDX1 ON pdpstatistics 14:24:57 kafka | [2024-04-25 14:22:59,030] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=6, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 14:24:57 policy-pap | [2024-04-25T14:23:19.570+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-80274cf2-35d0-404d-a495-e62b89ee6834 PdpUpdate stopping enqueue 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:31.443782489Z level=info msg="Migration successfully executed" id="update seed_assignment role_name column to nullable" 
duration=72.821129ms 14:24:57 policy-db-migrator | -------------- 14:24:57 kafka | [2024-04-25 14:22:59,030] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=35, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 14:24:57 policy-pap | [2024-04-25T14:23:19.570+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-80274cf2-35d0-404d-a495-e62b89ee6834 PdpUpdate stopping timer 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:31.448855778Z level=info msg="Executing migration" id="add unique index builtin_role_name back" 14:24:57 policy-db-migrator | 14:24:57 kafka | [2024-04-25 14:22:59,030] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=2, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 14:24:57 policy-pap | [2024-04-25T14:23:19.570+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer cancelled Timer [name=f38f5279-b344-4d66-86a2-21ebfb9d4e55, expireMs=1714055029503] 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:31.449652399Z level=info msg="Migration successfully executed" id="add unique index builtin_role_name back" duration=796.251µs 14:24:57 policy-db-migrator | -------------- 14:24:57 kafka | [2024-04-25 14:22:59,042] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-3 (state.change.logger) 14:24:57 policy-pap | [2024-04-25T14:23:19.570+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] 
apex-80274cf2-35d0-404d-a495-e62b89ee6834 PdpUpdate stopping listener 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:31.452697681Z level=info msg="Executing migration" id="add unique index builtin_role_action_scope" 14:24:57 policy-db-migrator | DROP TABLE pdpstatistics 14:24:57 kafka | [2024-04-25 14:22:59,042] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-18 (state.change.logger) 14:24:57 policy-pap | [2024-04-25T14:23:19.570+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-80274cf2-35d0-404d-a495-e62b89ee6834 PdpUpdate stopped 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:31.453517512Z level=info msg="Migration successfully executed" id="add unique index builtin_role_action_scope" duration=819.321µs 14:24:57 policy-db-migrator | -------------- 14:24:57 kafka | [2024-04-25 14:22:59,042] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-41 (state.change.logger) 14:24:57 policy-pap | [2024-04-25T14:23:19.573+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:31.459442842Z level=info msg="Executing migration" id="add primary key to seed_assigment" 14:24:57 policy-db-migrator | 14:24:57 policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"e4357cfc-69c4-4da1-be8c-522d64c2326f","timestampMs":1714054999558,"name":"apex-80274cf2-35d0-404d-a495-e62b89ee6834","pdpGroup":"defaultGroup"} 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:31.485638908Z level=info msg="Migration successfully executed" id="add primary key to seed_assigment" duration=26.197416ms 14:24:57 policy-db-migrator | 14:24:57 kafka | [2024-04-25 14:22:59,042] TRACE [Broker id=1] 
Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-10 (state.change.logger) 14:24:57 policy-pap | [2024-04-25T14:23:19.578+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] apex-80274cf2-35d0-404d-a495-e62b89ee6834 PdpUpdate successful 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:31.580385576Z level=info msg="Executing migration" id="add origin column to seed_assignment" 14:24:57 policy-db-migrator | > upgrade 0110-jpapdpstatistics_enginestats.sql 14:24:57 kafka | [2024-04-25 14:22:59,042] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-33 (state.change.logger) 14:24:57 policy-pap | [2024-04-25T14:23:19.578+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] apex-80274cf2-35d0-404d-a495-e62b89ee6834 start publishing next request 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:31.591805261Z level=info msg="Migration successfully executed" id="add origin column to seed_assignment" duration=11.421235ms 14:24:57 policy-db-migrator | -------------- 14:24:57 kafka | [2024-04-25 14:22:59,042] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-48 (state.change.logger) 14:24:57 policy-pap | [2024-04-25T14:23:19.578+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-80274cf2-35d0-404d-a495-e62b89ee6834 PdpStateChange starting 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:31.596838729Z level=info msg="Executing migration" id="add origin to plugin seed_assignment" 14:24:57 policy-db-migrator | DROP TABLE jpapdpstatistics_enginestats 14:24:57 kafka | [2024-04-25 14:22:59,042] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition 
__consumer_offsets-19 (state.change.logger) 14:24:57 policy-pap | [2024-04-25T14:23:19.578+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-80274cf2-35d0-404d-a495-e62b89ee6834 PdpStateChange starting listener 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:31.597159824Z level=info msg="Migration successfully executed" id="add origin to plugin seed_assignment" duration=321.355µs 14:24:57 policy-db-migrator | -------------- 14:24:57 kafka | [2024-04-25 14:22:59,042] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-34 (state.change.logger) 14:24:57 policy-pap | [2024-04-25T14:23:19.579+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-80274cf2-35d0-404d-a495-e62b89ee6834 PdpStateChange starting timer 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:31.603004413Z level=info msg="Executing migration" id="prevent seeding OnCall access" 14:24:57 policy-db-migrator | 14:24:57 kafka | [2024-04-25 14:22:59,042] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-4 (state.change.logger) 14:24:57 policy-pap | [2024-04-25T14:23:19.579+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] state-change timer registered Timer [name=ecd1b796-0fe4-44b0-a7d5-d9c405fda44a, expireMs=1714055029579] 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:31.603180335Z level=info msg="Migration successfully executed" id="prevent seeding OnCall access" duration=176.872µs 14:24:57 policy-db-migrator | 14:24:57 kafka | [2024-04-25 14:22:59,042] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-11 (state.change.logger) 14:24:57 policy-pap | [2024-04-25T14:23:19.579+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] 
apex-80274cf2-35d0-404d-a495-e62b89ee6834 PdpStateChange starting enqueue 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:31.609840176Z level=info msg="Executing migration" id="managed folder permissions alert actions repeated fixed migration" 14:24:57 policy-db-migrator | > upgrade 0120-statistics_sequence.sql 14:24:57 kafka | [2024-04-25 14:22:59,042] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-26 (state.change.logger) 14:24:57 policy-pap | [2024-04-25T14:23:19.579+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-80274cf2-35d0-404d-a495-e62b89ee6834 PdpStateChange started 14:24:57 policy-db-migrator | -------------- 14:24:57 kafka | [2024-04-25 14:22:59,042] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-49 (state.change.logger) 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:31.610167181Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions repeated fixed migration" duration=325.295µs 14:24:57 policy-pap | [2024-04-25T14:23:19.579+00:00|INFO|TimerManager|Thread-10] state-change timer waiting 30000ms Timer [name=ecd1b796-0fe4-44b0-a7d5-d9c405fda44a, expireMs=1714055029579] 14:24:57 policy-db-migrator | DROP TABLE statistics_sequence 14:24:57 kafka | [2024-04-25 14:22:59,042] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-39 (state.change.logger) 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:31.614319217Z level=info msg="Executing migration" id="managed folder permissions library panel actions migration" 14:24:57 policy-pap | [2024-04-25T14:23:19.580+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap] 14:24:57 policy-db-migrator | -------------- 
14:24:57 kafka | [2024-04-25 14:22:59,042] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-9 (state.change.logger) 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:31.614651501Z level=info msg="Migration successfully executed" id="managed folder permissions library panel actions migration" duration=332.824µs 14:24:57 policy-pap | {"source":"pap-b43ecee3-a99c-4739-8071-6199e9c3e680","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"ecd1b796-0fe4-44b0-a7d5-d9c405fda44a","timestampMs":1714054999481,"name":"apex-80274cf2-35d0-404d-a495-e62b89ee6834","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 14:24:57 policy-db-migrator | 14:24:57 kafka | [2024-04-25 14:22:59,042] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-24 (state.change.logger) 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:31.620091605Z level=info msg="Executing migration" id="migrate external alertmanagers to datsourcse" 14:24:57 policy-pap | [2024-04-25T14:23:19.681+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 14:24:57 policy-db-migrator | policyadmin: OK: upgrade (1300) 14:24:57 kafka | [2024-04-25 14:22:59,043] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-31 (state.change.logger) 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:31.62043289Z level=info msg="Migration successfully executed" id="migrate external alertmanagers to datsourcse" duration=341.755µs 14:24:57 policy-pap | 
{"source":"pap-b43ecee3-a99c-4739-8071-6199e9c3e680","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"ecd1b796-0fe4-44b0-a7d5-d9c405fda44a","timestampMs":1714054999481,"name":"apex-80274cf2-35d0-404d-a495-e62b89ee6834","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 14:24:57 policy-db-migrator | name version 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:31.624887681Z level=info msg="Executing migration" id="create folder table" 14:24:57 policy-pap | [2024-04-25T14:23:19.681+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATE_CHANGE 14:24:57 policy-db-migrator | policyadmin 1300 14:24:57 kafka | [2024-04-25 14:22:59,043] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-46 (state.change.logger) 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:31.626335121Z level=info msg="Migration successfully executed" id="create folder table" duration=1.447191ms 14:24:57 policy-pap | [2024-04-25T14:23:19.686+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 14:24:57 policy-db-migrator | ID script operation from_version to_version tag success atTime 14:24:57 kafka | [2024-04-25 14:22:59,043] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-1 (state.change.logger) 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:31.632850179Z level=info msg="Executing migration" id="Add index for parent_uid" 14:24:57 policy-pap | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpStateChange","policies":[],"response":{"responseTo":"ecd1b796-0fe4-44b0-a7d5-d9c405fda44a","responseStatus":"SUCCESS","responseMessage":"State changed to active. 
No policies found."},"messageName":"PDP_STATUS","requestId":"43df1ced-174a-4736-a982-b481c53f90de","timestampMs":1714054999595,"name":"apex-80274cf2-35d0-404d-a495-e62b89ee6834","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 14:24:57 policy-db-migrator | 1 0100-jpapdpgroup_properties.sql upgrade 0 0800 2504241422170800u 1 2024-04-25 14:22:17 14:24:57 kafka | [2024-04-25 14:22:59,043] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-16 (state.change.logger) 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:31.634803456Z level=info msg="Migration successfully executed" id="Add index for parent_uid" duration=1.952947ms 14:24:57 policy-db-migrator | 2 0110-jpapdpstatistics_enginestats.sql upgrade 0 0800 2504241422170800u 1 2024-04-25 14:22:17 14:24:57 policy-pap | [2024-04-25T14:23:19.688+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-80274cf2-35d0-404d-a495-e62b89ee6834 PdpStateChange stopping 14:24:57 kafka | [2024-04-25 14:22:59,043] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-2 (state.change.logger) 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:31.642704292Z level=info msg="Executing migration" id="Add unique index for folder.uid and folder.org_id" 14:24:57 policy-db-migrator | 3 0120-jpapdpsubgroup_policies.sql upgrade 0 0800 2504241422170800u 1 2024-04-25 14:22:17 14:24:57 policy-pap | [2024-04-25T14:23:19.688+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 14:24:57 kafka | [2024-04-25 14:22:59,043] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-25 (state.change.logger) 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:31.644663109Z level=info msg="Migration 
successfully executed" id="Add unique index for folder.uid and folder.org_id" duration=1.958347ms 14:24:57 policy-db-migrator | 4 0130-jpapdpsubgroup_properties.sql upgrade 0 0800 2504241422170800u 1 2024-04-25 14:22:17 14:24:57 policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"f38f5279-b344-4d66-86a2-21ebfb9d4e55","responseStatus":"SUCCESS","responseMessage":"Pdp update successful."},"messageName":"PDP_STATUS","requestId":"c12a1405-928e-4481-a978-984303d383c8","timestampMs":1714054999559,"name":"apex-80274cf2-35d0-404d-a495-e62b89ee6834","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 14:24:57 kafka | [2024-04-25 14:22:59,043] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-40 (state.change.logger) 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:31.64914371Z level=info msg="Executing migration" id="Update folder title length" 14:24:57 policy-db-migrator | 5 0140-jpapdpsubgroup_supportedpolicytypes.sql upgrade 0 0800 2504241422170800u 1 2024-04-25 14:22:17 14:24:57 policy-pap | [2024-04-25T14:23:19.688+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-80274cf2-35d0-404d-a495-e62b89ee6834 PdpStateChange stopping enqueue 14:24:57 kafka | [2024-04-25 14:22:59,043] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-47 (state.change.logger) 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:31.649171741Z level=info msg="Migration successfully executed" id="Update folder title length" duration=28.871µs 14:24:57 policy-db-migrator | 6 0150-jpatoscacapabilityassignment_attributes.sql upgrade 0 0800 2504241422170800u 1 2024-04-25 14:22:17 14:24:57 policy-pap | 
[2024-04-25T14:23:19.688+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-80274cf2-35d0-404d-a495-e62b89ee6834 PdpStateChange stopping timer 14:24:57 kafka | [2024-04-25 14:22:59,043] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-17 (state.change.logger) 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:31.655622678Z level=info msg="Executing migration" id="Add unique index for folder.title and folder.parent_uid" 14:24:57 policy-pap | [2024-04-25T14:23:19.689+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id f38f5279-b344-4d66-86a2-21ebfb9d4e55 14:24:57 kafka | [2024-04-25 14:22:59,043] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-32 (state.change.logger) 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:31.656870225Z level=info msg="Migration successfully executed" id="Add unique index for folder.title and folder.parent_uid" duration=1.246237ms 14:24:57 policy-db-migrator | 7 0160-jpatoscacapabilityassignment_metadata.sql upgrade 0 0800 2504241422170800u 1 2024-04-25 14:22:17 14:24:57 policy-pap | [2024-04-25T14:23:19.688+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] state-change timer cancelled Timer [name=ecd1b796-0fe4-44b0-a7d5-d9c405fda44a, expireMs=1714055029579] 14:24:57 kafka | [2024-04-25 14:22:59,043] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-37 (state.change.logger) 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:31.660835519Z level=info msg="Executing migration" id="Remove unique index for folder.title and folder.parent_uid" 14:24:57 policy-db-migrator | 8 0170-jpatoscacapabilityassignment_occurrences.sql upgrade 0 0800 
2504241422170800u 1 2024-04-25 14:22:17 14:24:57 policy-pap | [2024-04-25T14:23:19.689+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-80274cf2-35d0-404d-a495-e62b89ee6834 PdpStateChange stopping listener 14:24:57 kafka | [2024-04-25 14:22:59,043] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-7 (state.change.logger) 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:31.662524892Z level=info msg="Migration successfully executed" id="Remove unique index for folder.title and folder.parent_uid" duration=1.689363ms 14:24:57 policy-db-migrator | 9 0180-jpatoscacapabilityassignment_properties.sql upgrade 0 0800 2504241422170800u 1 2024-04-25 14:22:17 14:24:57 policy-pap | [2024-04-25T14:23:19.689+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-80274cf2-35d0-404d-a495-e62b89ee6834 PdpStateChange stopped 14:24:57 kafka | [2024-04-25 14:22:59,043] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-22 (state.change.logger) 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:31.666562637Z level=info msg="Executing migration" id="Add unique index for title, parent_uid, and org_id" 14:24:57 policy-db-migrator | 10 0190-jpatoscacapabilitytype_metadata.sql upgrade 0 0800 2504241422170800u 1 2024-04-25 14:22:17 14:24:57 policy-pap | [2024-04-25T14:23:19.689+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] apex-80274cf2-35d0-404d-a495-e62b89ee6834 PdpStateChange successful 14:24:57 kafka | [2024-04-25 14:22:59,043] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-29 (state.change.logger) 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:31.667778013Z level=info msg="Migration successfully executed" id="Add 
unique index for title, parent_uid, and org_id" duration=1.215676ms 14:24:57 policy-db-migrator | 11 0200-jpatoscacapabilitytype_properties.sql upgrade 0 0800 2504241422170800u 1 2024-04-25 14:22:17 14:24:57 policy-pap | [2024-04-25T14:23:19.690+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] apex-80274cf2-35d0-404d-a495-e62b89ee6834 start publishing next request 14:24:57 kafka | [2024-04-25 14:22:59,043] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-44 (state.change.logger) 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:31.674062389Z level=info msg="Executing migration" id="Sync dashboard and folder table" 14:24:57 policy-db-migrator | 12 0210-jpatoscadatatype_constraints.sql upgrade 0 0800 2504241422170800u 1 2024-04-25 14:22:18 14:24:57 policy-pap | [2024-04-25T14:23:19.690+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-80274cf2-35d0-404d-a495-e62b89ee6834 PdpUpdate starting 14:24:57 kafka | [2024-04-25 14:22:59,043] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-14 (state.change.logger) 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:31.674577556Z level=info msg="Migration successfully executed" id="Sync dashboard and folder table" duration=514.577µs 14:24:57 policy-db-migrator | 13 0220-jpatoscadatatype_metadata.sql upgrade 0 0800 2504241422170800u 1 2024-04-25 14:22:18 14:24:57 policy-pap | [2024-04-25T14:23:19.690+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-80274cf2-35d0-404d-a495-e62b89ee6834 PdpUpdate starting listener 14:24:57 kafka | [2024-04-25 14:22:59,043] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-23 (state.change.logger) 14:24:57 grafana | logger=migrator 
t=2024-04-25T14:22:31.678738832Z level=info msg="Executing migration" id="Remove ghost folders from the folder table" 14:24:57 policy-db-migrator | 14 0230-jpatoscadatatype_properties.sql upgrade 0 0800 2504241422170800u 1 2024-04-25 14:22:18 14:24:57 policy-pap | [2024-04-25T14:23:19.690+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-80274cf2-35d0-404d-a495-e62b89ee6834 PdpUpdate starting timer 14:24:57 kafka | [2024-04-25 14:22:59,044] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-38 (state.change.logger) 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:31.679162858Z level=info msg="Migration successfully executed" id="Remove ghost folders from the folder table" duration=423.746µs 14:24:57 policy-db-migrator | 15 0240-jpatoscanodetemplate_metadata.sql upgrade 0 0800 2504241422170800u 1 2024-04-25 14:22:18 14:24:57 policy-pap | [2024-04-25T14:23:19.691+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer registered Timer [name=92ed1daf-00dc-46f3-a934-a5b206758853, expireMs=1714055029691] 14:24:57 kafka | [2024-04-25 14:22:59,044] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-8 (state.change.logger) 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:31.684087915Z level=info msg="Executing migration" id="Remove unique index UQE_folder_uid_org_id" 14:24:57 policy-db-migrator | 16 0250-jpatoscanodetemplate_properties.sql upgrade 0 0800 2504241422170800u 1 2024-04-25 14:22:18 14:24:57 policy-pap | [2024-04-25T14:23:19.691+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-80274cf2-35d0-404d-a495-e62b89ee6834 PdpUpdate starting enqueue 14:24:57 kafka | [2024-04-25 14:22:59,044] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader 
transition for partition __consumer_offsets-45 (state.change.logger) 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:31.685809559Z level=info msg="Migration successfully executed" id="Remove unique index UQE_folder_uid_org_id" duration=1.721414ms 14:24:57 policy-db-migrator | 17 0260-jpatoscanodetype_metadata.sql upgrade 0 0800 2504241422170800u 1 2024-04-25 14:22:19 14:24:57 policy-pap | [2024-04-25T14:23:19.691+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-80274cf2-35d0-404d-a495-e62b89ee6834 PdpUpdate started 14:24:57 kafka | [2024-04-25 14:22:59,044] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-15 (state.change.logger) 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:31.690980579Z level=info msg="Executing migration" id="Add unique index UQE_folder_org_id_uid" 14:24:57 policy-db-migrator | 18 0270-jpatoscanodetype_properties.sql upgrade 0 0800 2504241422170800u 1 2024-04-25 14:22:19 14:24:57 policy-pap | [2024-04-25T14:23:19.692+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap] 14:24:57 kafka | [2024-04-25 14:22:59,044] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-30 (state.change.logger) 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:31.692384378Z level=info msg="Migration successfully executed" id="Add unique index UQE_folder_org_id_uid" duration=1.402879ms 14:24:57 policy-db-migrator | 19 0280-jpatoscapolicy_metadata.sql upgrade 0 0800 2504241422170800u 1 2024-04-25 14:22:19 14:24:57 policy-pap | 
{"source":"pap-b43ecee3-a99c-4739-8071-6199e9c3e680","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"92ed1daf-00dc-46f3-a934-a5b206758853","timestampMs":1714054999658,"name":"apex-80274cf2-35d0-404d-a495-e62b89ee6834","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 14:24:57 kafka | [2024-04-25 14:22:59,044] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-0 (state.change.logger) 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:31.697907743Z level=info msg="Executing migration" id="Remove unique index UQE_folder_title_parent_uid_org_id" 14:24:57 policy-db-migrator | 20 0290-jpatoscapolicy_properties.sql upgrade 0 0800 2504241422170800u 1 2024-04-25 14:22:20 14:24:57 policy-pap | [2024-04-25T14:23:19.694+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 14:24:57 kafka | [2024-04-25 14:22:59,044] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-35 (state.change.logger) 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:31.698969747Z level=info msg="Migration successfully executed" id="Remove unique index UQE_folder_title_parent_uid_org_id" duration=1.061924ms 14:24:57 policy-db-migrator | 21 0300-jpatoscapolicy_targets.sql upgrade 0 0800 2504241422170800u 1 2024-04-25 14:22:20 14:24:57 policy-pap | {"source":"pap-b43ecee3-a99c-4739-8071-6199e9c3e680","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"ecd1b796-0fe4-44b0-a7d5-d9c405fda44a","timestampMs":1714054999481,"name":"apex-80274cf2-35d0-404d-a495-e62b89ee6834","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 14:24:57 grafana | logger=migrator t=2024-04-25T14:22:31.70210637Z level=info msg="Executing migration" id="Add unique index UQE_folder_org_id_parent_uid_title" 
14:24:57 kafka | [2024-04-25 14:22:59,044] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-5 (state.change.logger)
14:24:57 policy-db-migrator | 22 0310-jpatoscapolicytype_metadata.sql upgrade 0 0800 2504241422170800u 1 2024-04-25 14:22:20
14:24:57 policy-pap | [2024-04-25T14:23:19.694+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_STATE_CHANGE
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:31.703338897Z level=info msg="Migration successfully executed" id="Add unique index UQE_folder_org_id_parent_uid_title" duration=1.231867ms
14:24:57 kafka | [2024-04-25 14:22:59,044] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-20 (state.change.logger)
14:24:57 policy-db-migrator | 23 0320-jpatoscapolicytype_properties.sql upgrade 0 0800 2504241422170800u 1 2024-04-25 14:22:21
14:24:57 policy-pap | [2024-04-25T14:23:19.700+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat]
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:31.708184423Z level=info msg="Executing migration" id="Remove index IDX_folder_parent_uid_org_id"
14:24:57 kafka | [2024-04-25 14:22:59,044] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-27 (state.change.logger)
14:24:57 policy-db-migrator | 24 0330-jpatoscapolicytype_targets.sql upgrade 0 0800 2504241422170800u 1 2024-04-25 14:22:21
14:24:57 policy-pap | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpStateChange","policies":[],"response":{"responseTo":"ecd1b796-0fe4-44b0-a7d5-d9c405fda44a","responseStatus":"SUCCESS","responseMessage":"State changed to active. No policies found."},"messageName":"PDP_STATUS","requestId":"43df1ced-174a-4736-a982-b481c53f90de","timestampMs":1714054999595,"name":"apex-80274cf2-35d0-404d-a495-e62b89ee6834","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:31.709995527Z level=info msg="Migration successfully executed" id="Remove index IDX_folder_parent_uid_org_id" duration=1.811844ms
14:24:57 kafka | [2024-04-25 14:22:59,044] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-42 (state.change.logger)
14:24:57 policy-db-migrator | 25 0340-jpatoscapolicytype_triggers.sql upgrade 0 0800 2504241422170800u 1 2024-04-25 14:22:21
14:24:57 policy-pap | [2024-04-25T14:23:19.700+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id ecd1b796-0fe4-44b0-a7d5-d9c405fda44a
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:31.713912801Z level=info msg="Executing migration" id="create anon_device table"
14:24:57 kafka | [2024-04-25 14:22:59,044] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-12 (state.change.logger)
14:24:57 policy-db-migrator | 26 0350-jpatoscaproperty_constraints.sql upgrade 0 0800 2504241422170800u 1 2024-04-25 14:22:22
14:24:57 policy-pap | [2024-04-25T14:23:19.703+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
14:24:57 kafka | [2024-04-25 14:22:59,044] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-21 (state.change.logger)
14:24:57 policy-db-migrator | 27 0360-jpatoscaproperty_metadata.sql upgrade 0 0800 2504241422170800u 1 2024-04-25 14:22:22
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:31.715498212Z level=info msg="Migration successfully executed" id="create anon_device table" duration=1.584691ms
14:24:57 policy-pap | {"source":"pap-b43ecee3-a99c-4739-8071-6199e9c3e680","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"92ed1daf-00dc-46f3-a934-a5b206758853","timestampMs":1714054999658,"name":"apex-80274cf2-35d0-404d-a495-e62b89ee6834","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
14:24:57 kafka | [2024-04-25 14:22:59,044] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-36 (state.change.logger)
14:24:57 policy-db-migrator | 28 0370-jpatoscarelationshiptype_metadata.sql upgrade 0 0800 2504241422170800u 1 2024-04-25 14:22:22
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:31.721516274Z level=info msg="Executing migration" id="add unique index anon_device.device_id"
14:24:57 policy-pap | [2024-04-25T14:23:19.703+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat]
14:24:57 kafka | [2024-04-25 14:22:59,044] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-6 (state.change.logger)
14:24:57 policy-db-migrator | 29 0380-jpatoscarelationshiptype_properties.sql upgrade 0 0800 2504241422170800u 1 2024-04-25 14:22:23
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:31.72271337Z level=info msg="Migration successfully executed" id="add unique index anon_device.device_id" duration=1.194666ms
14:24:57 policy-pap | {"source":"pap-b43ecee3-a99c-4739-8071-6199e9c3e680","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"92ed1daf-00dc-46f3-a934-a5b206758853","timestampMs":1714054999658,"name":"apex-80274cf2-35d0-404d-a495-e62b89ee6834","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:31.728005192Z level=info msg="Executing migration" id="add index anon_device.updated_at"
14:24:57 policy-pap | [2024-04-25T14:23:19.703+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_UPDATE
14:24:57 kafka | [2024-04-25 14:22:59,044] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-43 (state.change.logger)
14:24:57 policy-db-migrator | 30 0390-jpatoscarequirement_metadata.sql upgrade 0 0800 2504241422170800u 1 2024-04-25 14:22:23
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:31.729246288Z level=info msg="Migration successfully executed" id="add index anon_device.updated_at" duration=1.241186ms
14:24:57 policy-pap | [2024-04-25T14:23:19.704+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_UPDATE
14:24:57 kafka | [2024-04-25 14:22:59,044] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-13 (state.change.logger)
14:24:57 policy-db-migrator | 31 0400-jpatoscarequirement_occurrences.sql upgrade 0 0800 2504241422170800u 1 2024-04-25 14:22:23
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:31.73747347Z level=info msg="Executing migration" id="create signing_key table"
14:24:57 policy-pap | [2024-04-25T14:23:19.718+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat]
14:24:57 kafka | [2024-04-25 14:22:59,044] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-28 (state.change.logger)
14:24:57 policy-db-migrator | 32 0410-jpatoscarequirement_properties.sql upgrade 0 0800 2504241422170800u 1 2024-04-25 14:22:24
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:31.738548436Z level=info msg="Migration successfully executed" id="create signing_key table" duration=1.016055ms
14:24:57 policy-pap | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"92ed1daf-00dc-46f3-a934-a5b206758853","responseStatus":"SUCCESS","responseMessage":"Pdp already updated"},"messageName":"PDP_STATUS","requestId":"9306b898-b6a4-4c63-98db-745073d13a5b","timestampMs":1714054999708,"name":"apex-80274cf2-35d0-404d-a495-e62b89ee6834","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
14:24:57 kafka | [2024-04-25 14:22:59,049] INFO [ReplicaFetcherManager on broker 1] Removed fetcher for partitions HashSet(__consumer_offsets-22, __consumer_offsets-30, __consumer_offsets-25, __consumer_offsets-35, __consumer_offsets-37, __consumer_offsets-38, __consumer_offsets-13, __consumer_offsets-8, __consumer_offsets-21, __consumer_offsets-4, __consumer_offsets-27, __consumer_offsets-7, __consumer_offsets-9, __consumer_offsets-46, __consumer_offsets-41, __consumer_offsets-33, __consumer_offsets-23, __consumer_offsets-49, __consumer_offsets-47, __consumer_offsets-16, __consumer_offsets-28, __consumer_offsets-31, __consumer_offsets-36, __consumer_offsets-42, __consumer_offsets-3, __consumer_offsets-18, __consumer_offsets-15, __consumer_offsets-24, __consumer_offsets-17, __consumer_offsets-48, __consumer_offsets-19, __consumer_offsets-11, __consumer_offsets-2, __consumer_offsets-43, __consumer_offsets-6, __consumer_offsets-14, __consumer_offsets-20, __consumer_offsets-0, __consumer_offsets-44, __consumer_offsets-39, __consumer_offsets-12, __consumer_offsets-45, __consumer_offsets-1, __consumer_offsets-5, __consumer_offsets-26, __consumer_offsets-29, __consumer_offsets-34, __consumer_offsets-10, __consumer_offsets-32, __consumer_offsets-40) (kafka.server.ReplicaFetcherManager)
14:24:57 policy-db-migrator | 33 0420-jpatoscaservicetemplate_metadata.sql upgrade 0 0800 2504241422170800u 1 2024-04-25 14:22:24
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:31.745157045Z level=info msg="Executing migration" id="add unique index signing_key.key_id"
14:24:57 policy-pap | [2024-04-25T14:23:19.719+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
14:24:57 kafka | [2024-04-25 14:22:59,057] INFO [Broker id=1] Stopped fetchers as part of LeaderAndIsr request correlationId 3 from controller 1 epoch 1 as part of the become-leader transition for 50 partitions (state.change.logger)
14:24:57 policy-db-migrator | 34 0430-jpatoscatopologytemplate_inputs.sql upgrade 0 0800 2504241422170800u 1 2024-04-25 14:22:24
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:31.746879849Z level=info msg="Migration successfully executed" id="add unique index signing_key.key_id" duration=1.717044ms
14:24:57 policy-pap | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"92ed1daf-00dc-46f3-a934-a5b206758853","responseStatus":"SUCCESS","responseMessage":"Pdp already updated"},"messageName":"PDP_STATUS","requestId":"9306b898-b6a4-4c63-98db-745073d13a5b","timestampMs":1714054999708,"name":"apex-80274cf2-35d0-404d-a495-e62b89ee6834","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
14:24:57 kafka | [2024-04-25 14:22:59,068] INFO [LogLoader partition=__consumer_offsets-3, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
14:24:57 policy-db-migrator | 35 0440-pdpgroup_pdpsubgroup.sql upgrade 0 0800 2504241422170800u 1 2024-04-25 14:22:25
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:31.753821973Z level=info msg="Executing migration" id="set legacy alert migration status in kvstore"
14:24:57 policy-pap | [2024-04-25T14:23:19.719+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-80274cf2-35d0-404d-a495-e62b89ee6834 PdpUpdate stopping
14:24:57 kafka | [2024-04-25 14:22:59,071] INFO Created log for partition __consumer_offsets-3 in /var/lib/kafka/data/__consumer_offsets-3 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
14:24:57 policy-db-migrator | 36 0450-pdpgroup.sql upgrade 0 0800 2504241422170800u 1 2024-04-25 14:22:25
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:31.754926977Z level=info msg="Migration successfully executed" id="set legacy alert migration status in kvstore" duration=1.104714ms
14:24:57 policy-pap | [2024-04-25T14:23:19.719+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id 92ed1daf-00dc-46f3-a934-a5b206758853
14:24:57 kafka | [2024-04-25 14:22:59,071] INFO [Partition __consumer_offsets-3 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-3 (kafka.cluster.Partition)
14:24:57 policy-db-migrator | 37 0460-pdppolicystatus.sql upgrade 0 0800 2504241422170800u 1 2024-04-25 14:22:25
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:31.760223009Z level=info msg="Executing migration" id="migrate record of created folders during legacy migration to kvstore"
14:24:57 policy-pap | [2024-04-25T14:23:19.719+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-80274cf2-35d0-404d-a495-e62b89ee6834 PdpUpdate stopping enqueue
14:24:57 kafka | [2024-04-25 14:22:59,072] INFO [Partition __consumer_offsets-3 broker=1] Log loaded for partition __consumer_offsets-3 with initial high watermark 0 (kafka.cluster.Partition)
14:24:57 policy-db-migrator | 38 0470-pdp.sql upgrade 0 0800 2504241422170800u 1 2024-04-25 14:22:26
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:31.760666796Z level=info msg="Migration successfully executed" id="migrate record of created folders during legacy migration to kvstore" duration=445.167µs
14:24:57 policy-pap | [2024-04-25T14:23:19.719+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-80274cf2-35d0-404d-a495-e62b89ee6834 PdpUpdate stopping timer
14:24:57 kafka | [2024-04-25 14:22:59,072] INFO [Broker id=1] Leader __consumer_offsets-3 with topic id Some(Z-ljZKLXR-y1QhXAaAKdbg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
14:24:57 policy-db-migrator | 39 0480-pdpstatistics.sql upgrade 0 0800 2504241422170800u 1 2024-04-25 14:22:26
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:31.764504908Z level=info msg="Executing migration" id="Add folder_uid for dashboard"
14:24:57 policy-pap | [2024-04-25T14:23:19.719+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer cancelled Timer [name=92ed1daf-00dc-46f3-a934-a5b206758853, expireMs=1714055029691]
14:24:57 kafka | [2024-04-25 14:22:59,084] INFO [LogLoader partition=__consumer_offsets-18, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
14:24:57 policy-db-migrator | 40 0490-pdpsubgroup_pdp.sql upgrade 0 0800 2504241422170800u 1 2024-04-25 14:22:26
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:31.775185723Z level=info msg="Migration successfully executed" id="Add folder_uid for dashboard" duration=10.681435ms
14:24:57 kafka | [2024-04-25 14:22:59,085] INFO Created log for partition __consumer_offsets-18 in /var/lib/kafka/data/__consumer_offsets-18 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
14:24:57 policy-pap | [2024-04-25T14:23:19.719+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-80274cf2-35d0-404d-a495-e62b89ee6834 PdpUpdate stopping listener
14:24:57 policy-db-migrator | 41 0500-pdpsubgroup.sql upgrade 0 0800 2504241422170800u 1 2024-04-25 14:22:27
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:31.778797242Z level=info msg="Executing migration" id="Populate dashboard folder_uid column"
14:24:57 kafka | [2024-04-25 14:22:59,085] INFO [Partition __consumer_offsets-18 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-18 (kafka.cluster.Partition)
14:24:57 policy-pap | [2024-04-25T14:23:19.719+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-80274cf2-35d0-404d-a495-e62b89ee6834 PdpUpdate stopped
14:24:57 policy-db-migrator | 42 0510-toscacapabilityassignment.sql upgrade 0 0800 2504241422170800u 1 2024-04-25 14:22:27
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:31.77935832Z level=info msg="Migration successfully executed" id="Populate dashboard folder_uid column" duration=561.777µs
14:24:57 kafka | [2024-04-25 14:22:59,085] INFO [Partition __consumer_offsets-18 broker=1] Log loaded for partition __consumer_offsets-18 with initial high watermark 0 (kafka.cluster.Partition)
14:24:57 policy-pap | [2024-04-25T14:23:19.723+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] apex-80274cf2-35d0-404d-a495-e62b89ee6834 PdpUpdate successful
14:24:57 policy-db-migrator | 43 0520-toscacapabilityassignments.sql upgrade 0 0800 2504241422170800u 1 2024-04-25 14:22:27
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:31.78531001Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_folder_uid_title"
14:24:57 kafka | [2024-04-25 14:22:59,085] INFO [Broker id=1] Leader __consumer_offsets-18 with topic id Some(Z-ljZKLXR-y1QhXAaAKdbg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
14:24:57 policy-pap | [2024-04-25T14:23:19.723+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] apex-80274cf2-35d0-404d-a495-e62b89ee6834 has no more requests
14:24:57 policy-db-migrator | 44 0530-toscacapabilityassignments_toscacapabilityassignment.sql upgrade 0 0800 2504241422170800u 1 2024-04-25 14:22:27
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:31.786441866Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_folder_uid_title" duration=1.131206ms
14:24:57 kafka | [2024-04-25 14:22:59,094] INFO [LogLoader partition=__consumer_offsets-41, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
14:24:57 policy-pap | [2024-04-25T14:23:25.181+00:00|WARN|NonInjectionManager|pool-2-thread-1] Falling back to injection-less client.
14:24:57 policy-db-migrator | 45 0540-toscacapabilitytype.sql upgrade 0 0800 2504241422170800u 1 2024-04-25 14:22:27
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:31.79185952Z level=info msg="Executing migration" id="Delete unique index for dashboard_org_id_folder_id_title"
14:24:57 kafka | [2024-04-25 14:22:59,094] INFO Created log for partition __consumer_offsets-41 in /var/lib/kafka/data/__consumer_offsets-41 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
14:24:57 policy-pap | [2024-04-25T14:23:25.229+00:00|INFO|GsonMessageBodyHandler|pool-2-thread-1] Using GSON for REST calls
14:24:57 policy-db-migrator | 46 0550-toscacapabilitytypes.sql upgrade 0 0800 2504241422170800u 1 2024-04-25 14:22:27
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:31.794078659Z level=info msg="Migration successfully executed" id="Delete unique index for dashboard_org_id_folder_id_title" duration=2.219169ms
14:24:57 kafka | [2024-04-25 14:22:59,094] INFO [Partition __consumer_offsets-41 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-41 (kafka.cluster.Partition)
14:24:57 policy-pap | [2024-04-25T14:23:25.239+00:00|INFO|GsonMessageBodyHandler|pool-2-thread-1] Using GSON for REST calls
14:24:57 policy-db-migrator | 47 0560-toscacapabilitytypes_toscacapabilitytype.sql upgrade 0 0800 2504241422170800u 1 2024-04-25 14:22:28
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:31.797878592Z level=info msg="Executing migration" id="Delete unique index for dashboard_org_id_folder_uid_title"
14:24:57 kafka | [2024-04-25 14:22:59,095] INFO [Partition __consumer_offsets-41 broker=1] Log loaded for partition __consumer_offsets-41 with initial high watermark 0 (kafka.cluster.Partition)
14:24:57 policy-pap | [2024-04-25T14:23:25.250+00:00|INFO|GsonMessageBodyHandler|pool-2-thread-1] Using GSON for REST calls
14:24:57 policy-db-migrator | 48 0570-toscadatatype.sql upgrade 0 0800 2504241422170800u 1 2024-04-25 14:22:28
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:31.799173509Z level=info msg="Migration successfully executed" id="Delete unique index for dashboard_org_id_folder_uid_title" duration=1.295577ms
14:24:57 kafka | [2024-04-25 14:22:59,095] INFO [Broker id=1] Leader __consumer_offsets-41 with topic id Some(Z-ljZKLXR-y1QhXAaAKdbg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
14:24:57 policy-pap | [2024-04-25T14:23:25.657+00:00|INFO|SessionData|http-nio-6969-exec-5] unknown group testGroup
14:24:57 policy-db-migrator | 49 0580-toscadatatypes.sql upgrade 0 0800 2504241422170800u 1 2024-04-25 14:22:28
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:31.80583783Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_folder_uid_title_is_folder"
14:24:57 kafka | [2024-04-25 14:22:59,105] INFO [LogLoader partition=__consumer_offsets-10, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
14:24:57 policy-pap | [2024-04-25T14:23:26.155+00:00|INFO|SessionData|http-nio-6969-exec-5] create cached group testGroup
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:31.807220338Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_folder_uid_title_is_folder" duration=1.384118ms
14:24:57 kafka | [2024-04-25 14:22:59,106] INFO Created log for partition __consumer_offsets-10 in /var/lib/kafka/data/__consumer_offsets-10 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
14:24:57 policy-pap | [2024-04-25T14:23:26.156+00:00|INFO|SessionData|http-nio-6969-exec-5] creating DB group testGroup
14:24:57 policy-db-migrator | 50 0590-toscadatatypes_toscadatatype.sql upgrade 0 0800 2504241422170800u 1 2024-04-25 14:22:28
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:31.811653199Z level=info msg="Executing migration" id="Restore index for dashboard_org_id_folder_id_title"
14:24:57 kafka | [2024-04-25 14:22:59,106] INFO [Partition __consumer_offsets-10 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-10 (kafka.cluster.Partition)
14:24:57 policy-pap | [2024-04-25T14:23:26.773+00:00|INFO|SessionData|http-nio-6969-exec-10] cache group testGroup
14:24:57 policy-db-migrator | 51 0600-toscanodetemplate.sql upgrade 0 0800 2504241422170800u 1 2024-04-25 14:22:28
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:31.812825924Z level=info msg="Migration successfully executed" id="Restore index for dashboard_org_id_folder_id_title" duration=1.172345ms
14:24:57 policy-pap | [2024-04-25T14:23:26.990+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-10] Registering a deploy for policy onap.restart.tca 1.0.0
14:24:57 kafka | [2024-04-25 14:22:59,106] INFO [Partition __consumer_offsets-10 broker=1] Log loaded for partition __consumer_offsets-10 with initial high watermark 0 (kafka.cluster.Partition)
14:24:57 policy-db-migrator | 52 0610-toscanodetemplates.sql upgrade 0 0800 2504241422170800u 1 2024-04-25 14:22:28
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:31.820975835Z level=info msg="Executing migration" id="create sso_setting table"
14:24:57 policy-pap | [2024-04-25T14:23:27.081+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-10] Registering a deploy for policy operational.apex.decisionMaker 1.0.0
14:24:57 kafka | [2024-04-25 14:22:59,106] INFO [Broker id=1] Leader __consumer_offsets-10 with topic id Some(Z-ljZKLXR-y1QhXAaAKdbg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
14:24:57 policy-db-migrator | 53 0620-toscanodetemplates_toscanodetemplate.sql upgrade 0 0800 2504241422170800u 1 2024-04-25 14:22:28
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:31.822633368Z level=info msg="Migration successfully executed" id="create sso_setting table" duration=1.654233ms
14:24:57 policy-pap | [2024-04-25T14:23:27.081+00:00|INFO|SessionData|http-nio-6969-exec-10] update cached group testGroup
14:24:57 kafka | [2024-04-25 14:22:59,112] INFO [LogLoader partition=__consumer_offsets-33, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
14:24:57 policy-db-migrator | 54 0630-toscanodetype.sql upgrade 0 0800 2504241422170800u 1 2024-04-25 14:22:28
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:31.832911677Z level=info msg="Executing migration" id="copy kvstore migration status to each org"
14:24:57 policy-pap | [2024-04-25T14:23:27.082+00:00|INFO|SessionData|http-nio-6969-exec-10] updating DB group testGroup
14:24:57 kafka | [2024-04-25 14:22:59,112] INFO Created log for partition __consumer_offsets-33 in /var/lib/kafka/data/__consumer_offsets-33 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
14:24:57 kafka | [2024-04-25 14:22:59,113] INFO [Partition __consumer_offsets-33 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-33 (kafka.cluster.Partition)
14:24:57 policy-db-migrator | 55 0640-toscanodetypes.sql upgrade 0 0800 2504241422170800u 1 2024-04-25 14:22:28
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:31.834102644Z level=info msg="Migration successfully executed" id="copy kvstore migration status to each org" duration=1.191997ms
14:24:57 policy-pap | [2024-04-25T14:23:27.096+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-10] sending audit records to database: [PolicyAudit(auditId=null, pdpGroup=testGroup, pdpType=pdpTypeA, policy=onap.restart.tca 1.0.0, action=DEPLOYMENT, timestamp=2024-04-25T14:23:26Z, user=policyadmin), PolicyAudit(auditId=null, pdpGroup=testGroup, pdpType=pdpTypeC, policy=operational.apex.decisionMaker 1.0.0, action=DEPLOYMENT, timestamp=2024-04-25T14:23:27Z, user=policyadmin)]
14:24:57 kafka | [2024-04-25 14:22:59,113] INFO [Partition __consumer_offsets-33 broker=1] Log loaded for partition __consumer_offsets-33 with initial high watermark 0 (kafka.cluster.Partition)
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:31.837827474Z level=info msg="Executing migration" id="add back entry for orgid=0 migrated status"
14:24:57 policy-pap | [2024-04-25T14:23:27.813+00:00|INFO|SessionData|http-nio-6969-exec-4] cache group testGroup
14:24:57 kafka | [2024-04-25 14:22:59,113] INFO [Broker id=1] Leader __consumer_offsets-33 with topic id Some(Z-ljZKLXR-y1QhXAaAKdbg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
14:24:57 policy-pap | [2024-04-25T14:23:27.814+00:00|INFO|PdpGroupDeleteProvider|http-nio-6969-exec-4] remove policy onap.restart.tca 1.0.0 from subgroup testGroup pdpTypeA count=0
14:24:57 policy-db-migrator | 56 0650-toscanodetypes_toscanodetype.sql upgrade 0 0800 2504241422170800u 1 2024-04-25 14:22:28
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:31.83823581Z level=info msg="Migration successfully executed" id="add back entry for orgid=0 migrated status" duration=409.306µs
14:24:57 policy-pap | [2024-04-25T14:23:27.814+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-4] Registering an undeploy for policy onap.restart.tca 1.0.0
14:24:57 policy-db-migrator | 57 0660-toscaparameter.sql upgrade 0 0800 2504241422170800u 1 2024-04-25 14:22:28
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:31.842907063Z level=info msg="Executing migration" id="alter kv_store.value to longtext"
14:24:57 kafka | [2024-04-25 14:22:59,121] INFO [LogLoader partition=__consumer_offsets-48, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
14:24:57 policy-pap | [2024-04-25T14:23:27.814+00:00|INFO|SessionData|http-nio-6969-exec-4] update cached group testGroup
14:24:57 policy-db-migrator | 58 0670-toscapolicies.sql upgrade 0 0800 2504241422170800u 1 2024-04-25 14:22:28
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:31.843012865Z level=info msg="Migration successfully executed" id="alter kv_store.value to longtext" duration=106.922µs
14:24:57 kafka | [2024-04-25 14:22:59,121] INFO Created log for partition __consumer_offsets-48 in /var/lib/kafka/data/__consumer_offsets-48 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
14:24:57 policy-db-migrator | 59 0680-toscapolicies_toscapolicy.sql upgrade 0 0800 2504241422170800u 1 2024-04-25 14:22:28
14:24:57 kafka | [2024-04-25 14:22:59,121] INFO [Partition __consumer_offsets-48 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-48 (kafka.cluster.Partition)
14:24:57 policy-pap | [2024-04-25T14:23:27.815+00:00|INFO|SessionData|http-nio-6969-exec-4] updating DB group testGroup
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:31.850195922Z level=info msg="Executing migration" id="add notification_settings column to alert_rule table"
14:24:57 kafka | [2024-04-25 14:22:59,121] INFO [Partition __consumer_offsets-48 broker=1] Log loaded for partition __consumer_offsets-48 with initial high watermark 0 (kafka.cluster.Partition)
14:24:57 policy-pap | [2024-04-25T14:23:27.825+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-4] sending audit records to database: [PolicyAudit(auditId=null, pdpGroup=testGroup, pdpType=pdpTypeA, policy=onap.restart.tca 1.0.0, action=UNDEPLOYMENT, timestamp=2024-04-25T14:23:27Z, user=policyadmin)]
14:24:57 policy-db-migrator | 60 0690-toscapolicy.sql upgrade 0 0800 2504241422170800u 1 2024-04-25 14:22:28
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:31.864491607Z level=info msg="Migration successfully executed" id="add notification_settings column to alert_rule table" duration=14.294865ms
14:24:57 kafka | [2024-04-25 14:22:59,121] INFO [Broker id=1] Leader __consumer_offsets-48 with topic id Some(Z-ljZKLXR-y1QhXAaAKdbg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
14:24:57 policy-pap | [2024-04-25T14:23:28.139+00:00|INFO|SessionData|http-nio-6969-exec-6] cache group defaultGroup
14:24:57 policy-db-migrator | 61 0700-toscapolicytype.sql upgrade 0 0800 2504241422170800u 1 2024-04-25 14:22:28
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:31.869696867Z level=info msg="Executing migration" id="add notification_settings column to alert_rule_version table"
14:24:57 kafka | [2024-04-25 14:22:59,132] INFO [LogLoader partition=__consumer_offsets-19, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
14:24:57 policy-pap | [2024-04-25T14:23:28.139+00:00|INFO|SessionData|http-nio-6969-exec-6] cache group testGroup
14:24:57 policy-db-migrator | 62 0710-toscapolicytypes.sql upgrade 0 0800 2504241422170800u 1 2024-04-25 14:22:28
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:31.88161293Z level=info msg="Migration successfully executed" id="add notification_settings column to alert_rule_version table" duration=11.916833ms
14:24:57 kafka | [2024-04-25 14:22:59,132] INFO Created log for partition __consumer_offsets-19 in /var/lib/kafka/data/__consumer_offsets-19 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
14:24:57 policy-pap | [2024-04-25T14:23:28.139+00:00|INFO|PdpGroupDeleteProvider|http-nio-6969-exec-6] remove policy operational.apex.decisionMaker 1.0.0 from subgroup testGroup pdpTypeC count=0
14:24:57 policy-db-migrator | 63 0720-toscapolicytypes_toscapolicytype.sql upgrade 0 0800 2504241422170800u 1 2024-04-25 14:22:28
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:31.88613223Z level=info msg="Executing migration" id="removing scope from alert.instances:read action migration"
14:24:57 kafka | [2024-04-25 14:22:59,132] INFO [Partition __consumer_offsets-19 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-19 (kafka.cluster.Partition)
14:24:57 policy-pap | [2024-04-25T14:23:28.139+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-6] Registering an undeploy for policy operational.apex.decisionMaker 1.0.0
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:31.886468375Z level=info msg="Migration successfully executed" id="removing scope from alert.instances:read action migration" duration=333.475µs
14:24:57 kafka | [2024-04-25 14:22:59,132] INFO [Partition __consumer_offsets-19 broker=1] Log loaded for partition __consumer_offsets-19 with initial high watermark 0 (kafka.cluster.Partition)
14:24:57 grafana | logger=migrator t=2024-04-25T14:22:31.891437793Z level=info msg="migrations completed" performed=548 skipped=0 duration=15.991739147s
14:24:57 policy-db-migrator | 64 0730-toscaproperty.sql upgrade 0 0800 2504241422170800u 1 2024-04-25 14:22:28
14:24:57 policy-pap | [2024-04-25T14:23:28.139+00:00|INFO|SessionData|http-nio-6969-exec-6] update cached group testGroup
14:24:57 kafka | [2024-04-25 14:22:59,132] INFO [Broker id=1] Leader __consumer_offsets-19 with topic id Some(Z-ljZKLXR-y1QhXAaAKdbg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
14:24:57 policy-db-migrator | 65 0740-toscarelationshiptype.sql upgrade 0 0800 2504241422170800u 1 2024-04-25 14:22:28
14:24:57 policy-pap | [2024-04-25T14:23:28.139+00:00|INFO|SessionData|http-nio-6969-exec-6] updating DB group testGroup
14:24:57 grafana | logger=sqlstore t=2024-04-25T14:22:31.904799854Z level=info msg="Created default admin" user=admin
14:24:57 kafka | [2024-04-25 14:22:59,138] INFO [LogLoader partition=__consumer_offsets-34, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
14:24:57 policy-db-migrator | 66 0750-toscarelationshiptypes.sql upgrade 0 0800 2504241422170800u 1 2024-04-25 14:22:28
14:24:57 policy-pap | [2024-04-25T14:23:28.179+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-6] sending audit records to database: [PolicyAudit(auditId=null, pdpGroup=testGroup, pdpType=pdpTypeC, policy=operational.apex.decisionMaker 1.0.0, action=UNDEPLOYMENT, timestamp=2024-04-25T14:23:28Z, user=policyadmin)]
14:24:57 grafana | logger=sqlstore t=2024-04-25T14:22:31.904998867Z level=info msg="Created default organization"
14:24:57 kafka | [2024-04-25 14:22:59,139] INFO Created log for partition __consumer_offsets-34 in /var/lib/kafka/data/__consumer_offsets-34 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
14:24:57 policy-db-migrator | 67 0760-toscarelationshiptypes_toscarelationshiptype.sql upgrade 0 0800 2504241422170800u 1 2024-04-25 14:22:28
14:24:57 policy-pap | [2024-04-25T14:23:48.798+00:00|INFO|SessionData|http-nio-6969-exec-1] cache group testGroup
14:24:57 grafana | logger=secrets t=2024-04-25T14:22:31.909885874Z level=info msg="Envelope encryption state" enabled=true currentprovider=secretKey.v1
14:24:57 kafka | [2024-04-25 14:22:59,139] INFO [Partition __consumer_offsets-34 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-34 (kafka.cluster.Partition)
14:24:57 policy-db-migrator | 68 0770-toscarequirement.sql upgrade 0 0800 2504241422170800u 1 2024-04-25 14:22:29 14:24:57 policy-pap | [2024-04-25T14:23:48.800+00:00|INFO|SessionData|http-nio-6969-exec-1] deleting DB group testGroup 14:24:57 grafana | logger=plugin.store t=2024-04-25T14:22:31.929995737Z level=info msg="Loading plugins..." 14:24:57 kafka | [2024-04-25 14:22:59,139] INFO [Partition __consumer_offsets-34 broker=1] Log loaded for partition __consumer_offsets-34 with initial high watermark 0 (kafka.cluster.Partition) 14:24:57 policy-db-migrator | 69 0780-toscarequirements.sql upgrade 0 0800 2504241422170800u 1 2024-04-25 14:22:29 14:24:57 policy-pap | [2024-04-25T14:23:49.504+00:00|INFO|TimerManager|Thread-9] update timer discarded (expired) Timer [name=f38f5279-b344-4d66-86a2-21ebfb9d4e55, expireMs=1714055029503] 14:24:57 grafana | logger=local.finder t=2024-04-25T14:22:31.968964036Z level=warn msg="Skipping finding plugins as directory does not exist" path=/usr/share/grafana/plugins-bundled 14:24:57 kafka | [2024-04-25 14:22:59,139] INFO [Broker id=1] Leader __consumer_offsets-34 with topic id Some(Z-ljZKLXR-y1QhXAaAKdbg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 14:24:57 policy-db-migrator | 70 0790-toscarequirements_toscarequirement.sql upgrade 0 0800 2504241422170800u 1 2024-04-25 14:22:29 14:24:57 policy-pap | [2024-04-25T14:23:49.579+00:00|INFO|TimerManager|Thread-10] state-change timer discarded (expired) Timer [name=ecd1b796-0fe4-44b0-a7d5-d9c405fda44a, expireMs=1714055029579] 14:24:57 grafana | logger=plugin.store t=2024-04-25T14:22:31.968992887Z level=info msg="Plugins loaded" count=55 duration=38.99622ms 14:24:57 kafka | [2024-04-25 14:22:59,145] INFO [LogLoader partition=__consumer_offsets-4, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 14:24:57 policy-db-migrator | 71 0800-toscaservicetemplate.sql upgrade 0 0800 2504241422170800u 1 2024-04-25 14:22:29 14:24:57 grafana | logger=query_data t=2024-04-25T14:22:31.979781384Z level=info msg="Query Service initialization" 14:24:57 kafka | [2024-04-25 14:22:59,145] INFO Created log for partition __consumer_offsets-4 in /var/lib/kafka/data/__consumer_offsets-4 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 14:24:57 policy-db-migrator | 72 0810-toscatopologytemplate.sql upgrade 0 0800 2504241422170800u 1 2024-04-25 14:22:29 14:24:57 grafana | logger=live.push_http t=2024-04-25T14:22:31.98616887Z level=info msg="Live Push Gateway initialization" 14:24:57 kafka | [2024-04-25 14:22:59,145] INFO [Partition __consumer_offsets-4 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-4 (kafka.cluster.Partition) 14:24:57 policy-db-migrator | 73 0820-toscatrigger.sql upgrade 0 0800 2504241422170800u 1 2024-04-25 14:22:29 14:24:57 grafana | logger=ngalert.migration t=2024-04-25T14:22:32.022857438Z level=info msg=Starting 14:24:57 kafka | [2024-04-25 14:22:59,145] INFO [Partition __consumer_offsets-4 broker=1] Log loaded for partition __consumer_offsets-4 with initial high watermark 0 
(kafka.cluster.Partition) 14:24:57 policy-db-migrator | 74 0830-FK_ToscaNodeTemplate_capabilitiesName.sql upgrade 0 0800 2504241422170800u 1 2024-04-25 14:22:30 14:24:57 grafana | logger=ngalert.migration t=2024-04-25T14:22:32.023620158Z level=info msg="Applying transition" currentType=Legacy desiredType=UnifiedAlerting cleanOnDowngrade=false cleanOnUpgrade=false 14:24:57 kafka | [2024-04-25 14:22:59,145] INFO [Broker id=1] Leader __consumer_offsets-4 with topic id Some(Z-ljZKLXR-y1QhXAaAKdbg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 14:24:57 policy-db-migrator | 75 0840-FK_ToscaNodeTemplate_requirementsName.sql upgrade 0 0800 2504241422170800u 1 2024-04-25 14:22:30 14:24:57 grafana | logger=ngalert.migration orgID=1 t=2024-04-25T14:22:32.024346429Z level=info msg="Migrating alerts for organisation" 14:24:57 kafka | [2024-04-25 14:22:59,155] INFO [LogLoader partition=__consumer_offsets-11, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 14:24:57 policy-db-migrator | 76 0850-FK_ToscaNodeType_requirementsName.sql upgrade 0 0800 2504241422170800u 1 2024-04-25 14:22:30 14:24:57 grafana | logger=ngalert.migration orgID=1 t=2024-04-25T14:22:32.026051251Z level=info msg="Alerts found to migrate" alerts=0 14:24:57 kafka | [2024-04-25 14:22:59,156] INFO Created log for partition __consumer_offsets-11 in /var/lib/kafka/data/__consumer_offsets-11 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 14:24:57 policy-db-migrator | 77 0860-FK_ToscaServiceTemplate_capabilityTypesName.sql upgrade 0 0800 2504241422170800u 1 2024-04-25 14:22:30 14:24:57 grafana | logger=ngalert.migration t=2024-04-25T14:22:32.028478494Z level=info msg="Completed alerting migration" 14:24:57 kafka | 
[2024-04-25 14:22:59,156] INFO [Partition __consumer_offsets-11 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-11 (kafka.cluster.Partition) 14:24:57 policy-db-migrator | 78 0870-FK_ToscaServiceTemplate_dataTypesName.sql upgrade 0 0800 2504241422170800u 1 2024-04-25 14:22:30 14:24:57 grafana | logger=ngalert.state.manager t=2024-04-25T14:22:32.061878078Z level=info msg="Running in alternative execution of Error/NoData mode" 14:24:57 kafka | [2024-04-25 14:22:59,156] INFO [Partition __consumer_offsets-11 broker=1] Log loaded for partition __consumer_offsets-11 with initial high watermark 0 (kafka.cluster.Partition) 14:24:57 policy-db-migrator | 79 0880-FK_ToscaServiceTemplate_nodeTypesName.sql upgrade 0 0800 2504241422170800u 1 2024-04-25 14:22:30 14:24:57 grafana | logger=infra.usagestats.collector t=2024-04-25T14:22:32.063636933Z level=info msg="registering usage stat providers" usageStatsProvidersLen=2 14:24:57 kafka | [2024-04-25 14:22:59,156] INFO [Broker id=1] Leader __consumer_offsets-11 with topic id Some(Z-ljZKLXR-y1QhXAaAKdbg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 14:24:57 policy-db-migrator | 80 0890-FK_ToscaServiceTemplate_policyTypesName.sql upgrade 0 0800 2504241422170800u 1 2024-04-25 14:22:30 14:24:57 grafana | logger=provisioning.datasources t=2024-04-25T14:22:32.066430991Z level=info msg="inserting datasource from configuration" name=PolicyPrometheus uid=dkSf71fnz 14:24:57 kafka | [2024-04-25 14:22:59,172] INFO [LogLoader partition=__consumer_offsets-26, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 14:24:57 policy-db-migrator | 81 0900-FK_ToscaServiceTemplate_relationshipTypesName.sql upgrade 0 0800 2504241422170800u 1 2024-04-25 14:22:31 14:24:57 grafana | logger=provisioning.alerting t=2024-04-25T14:22:32.083408981Z level=info msg="starting to provision alerting" 14:24:57 kafka | [2024-04-25 14:22:59,173] INFO Created log for partition __consumer_offsets-26 in /var/lib/kafka/data/__consumer_offsets-26 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 14:24:57 policy-db-migrator | 82 0910-FK_ToscaTopologyTemplate_nodeTemplatesName.sql upgrade 0 0800 2504241422170800u 1 2024-04-25 14:22:31 14:24:57 grafana | logger=provisioning.alerting t=2024-04-25T14:22:32.083427021Z level=info msg="finished to provision alerting" 14:24:57 policy-db-migrator | 83 0920-FK_ToscaTopologyTemplate_policyName.sql upgrade 0 0800 2504241422170800u 1 2024-04-25 14:22:31 14:24:57 kafka | [2024-04-25 14:22:59,174] INFO [Partition __consumer_offsets-26 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-26 (kafka.cluster.Partition) 14:24:57 grafana | logger=grafanaStorageLogger t=2024-04-25T14:22:32.084454745Z level=info msg="Storage starting" 14:24:57 policy-db-migrator | 84 0940-PdpPolicyStatus_PdpGroup.sql upgrade 0 0800 2504241422170800u 1 2024-04-25 14:22:31 14:24:57 kafka | [2024-04-25 14:22:59,174] INFO [Partition __consumer_offsets-26 broker=1] Log 
loaded for partition __consumer_offsets-26 with initial high watermark 0 (kafka.cluster.Partition) 14:24:57 grafana | logger=ngalert.state.manager t=2024-04-25T14:22:32.084431745Z level=info msg="Warming state cache for startup" 14:24:57 policy-db-migrator | 85 0950-TscaServiceTemplatetopologyTemplateParentLocalName.sql upgrade 0 0800 2504241422170800u 1 2024-04-25 14:22:31 14:24:57 kafka | [2024-04-25 14:22:59,174] INFO [Broker id=1] Leader __consumer_offsets-26 with topic id Some(Z-ljZKLXR-y1QhXAaAKdbg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 14:24:57 grafana | logger=ngalert.multiorg.alertmanager t=2024-04-25T14:22:32.086062747Z level=info msg="Starting MultiOrg Alertmanager" 14:24:57 policy-db-migrator | 86 0960-FK_ToscaNodeTemplate_capabilitiesName.sql upgrade 0 0800 2504241422170800u 1 2024-04-25 14:22:31 14:24:57 kafka | [2024-04-25 14:22:59,186] INFO [LogLoader partition=__consumer_offsets-49, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 14:24:57 grafana | logger=http.server t=2024-04-25T14:22:32.087643968Z level=info msg="HTTP Server Listen" address=[::]:3000 protocol=http subUrl= socket= 14:24:57 policy-db-migrator | 87 0970-FK_ToscaNodeTemplate_requirementsName.sql upgrade 0 0800 2504241422170800u 1 2024-04-25 14:22:31 14:24:57 kafka | [2024-04-25 14:22:59,187] INFO Created log for partition __consumer_offsets-49 in /var/lib/kafka/data/__consumer_offsets-49 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 14:24:57 grafana | logger=provisioning.dashboard t=2024-04-25T14:22:32.152673122Z level=info msg="starting to provision dashboards" 14:24:57 policy-db-migrator | 88 0980-FK_ToscaNodeType_requirementsName.sql upgrade 0 0800 2504241422170800u 1 
2024-04-25 14:22:31 14:24:57 kafka | [2024-04-25 14:22:59,187] INFO [Partition __consumer_offsets-49 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-49 (kafka.cluster.Partition) 14:24:57 grafana | logger=ngalert.state.manager t=2024-04-25T14:22:32.158436481Z level=info msg="State cache has been initialized" states=0 duration=74.003286ms 14:24:57 policy-db-migrator | 89 0990-FK_ToscaServiceTemplate_capabilityTypesName.sql upgrade 0 0800 2504241422170800u 1 2024-04-25 14:22:31 14:24:57 kafka | [2024-04-25 14:22:59,187] INFO [Partition __consumer_offsets-49 broker=1] Log loaded for partition __consumer_offsets-49 with initial high watermark 0 (kafka.cluster.Partition) 14:24:57 grafana | logger=ngalert.scheduler t=2024-04-25T14:22:32.158499002Z level=info msg="Starting scheduler" tickInterval=10s maxAttempts=1 14:24:57 policy-db-migrator | 90 1000-FK_ToscaServiceTemplate_dataTypesName.sql upgrade 0 0800 2504241422170800u 1 2024-04-25 14:22:31 14:24:57 kafka | [2024-04-25 14:22:59,187] INFO [Broker id=1] Leader __consumer_offsets-49 with topic id Some(Z-ljZKLXR-y1QhXAaAKdbg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 14:24:57 grafana | logger=ticker t=2024-04-25T14:22:32.158581623Z level=info msg=starting first_tick=2024-04-25T14:22:40Z 14:24:57 policy-db-migrator | 91 1010-FK_ToscaServiceTemplate_nodeTypesName.sql upgrade 0 0800 2504241422170800u 1 2024-04-25 14:22:31 14:24:57 kafka | [2024-04-25 14:22:59,203] INFO [LogLoader partition=__consumer_offsets-39, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 14:24:57 grafana | logger=plugins.update.checker t=2024-04-25T14:22:32.202277286Z level=info msg="Update check succeeded" duration=118.430689ms 14:24:57 policy-db-migrator | 92 1020-FK_ToscaServiceTemplate_policyTypesName.sql upgrade 0 0800 2504241422170800u 1 2024-04-25 14:22:31 14:24:57 kafka | [2024-04-25 14:22:59,205] INFO Created log for partition __consumer_offsets-39 in /var/lib/kafka/data/__consumer_offsets-39 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 14:24:57 grafana | logger=sqlstore.transactions t=2024-04-25T14:22:32.236665473Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=0 code="database is locked" 14:24:57 policy-db-migrator | 93 1030-FK_ToscaServiceTemplate_relationshipTypesName.sql upgrade 0 0800 2504241422170800u 1 2024-04-25 14:22:31 14:24:57 kafka | [2024-04-25 14:22:59,205] INFO [Partition __consumer_offsets-39 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-39 (kafka.cluster.Partition) 14:24:57 grafana | logger=grafana.update.checker t=2024-04-25T14:22:32.237954551Z level=info msg="Update check succeeded" duration=154.199075ms 14:24:57 policy-db-migrator | 94 1040-FK_ToscaTopologyTemplate_nodeTemplatesName.sql upgrade 0 0800 2504241422170800u 1 2024-04-25 14:22:32 14:24:57 kafka | [2024-04-25 14:22:59,205] INFO [Partition __consumer_offsets-39 broker=1] Log loaded for partition __consumer_offsets-39 with initial 
high watermark 0 (kafka.cluster.Partition) 14:24:57 grafana | logger=sqlstore.transactions t=2024-04-25T14:22:32.247269797Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=1 code="database is locked" 14:24:57 policy-db-migrator | 95 1050-FK_ToscaTopologyTemplate_policyName.sql upgrade 0 0800 2504241422170800u 1 2024-04-25 14:22:32 14:24:57 kafka | [2024-04-25 14:22:59,209] INFO [Broker id=1] Leader __consumer_offsets-39 with topic id Some(Z-ljZKLXR-y1QhXAaAKdbg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 14:24:57 grafana | logger=sqlstore.transactions t=2024-04-25T14:22:32.257998013Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=2 code="database is locked" 14:24:57 policy-db-migrator | 96 1060-TscaServiceTemplatetopologyTemplateParentLocalName.sql upgrade 0 0800 2504241422170800u 1 2024-04-25 14:22:32 14:24:57 kafka | [2024-04-25 14:22:59,217] INFO [LogLoader partition=__consumer_offsets-9, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 14:24:57 grafana | logger=sqlstore.transactions t=2024-04-25T14:22:32.268916611Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=3 code="database is locked" 14:24:57 policy-db-migrator | 97 0100-pdp.sql upgrade 0800 0900 2504241422170900u 1 2024-04-25 14:22:32 14:24:57 kafka | [2024-04-25 14:22:59,218] INFO Created log for partition __consumer_offsets-9 in /var/lib/kafka/data/__consumer_offsets-9 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 14:24:57 grafana | logger=sqlstore.transactions t=2024-04-25T14:22:32.280835963Z level=info msg="Database locked, sleeping then retrying" error="database is 
locked" retry=4 code="database is locked" 14:24:57 policy-db-migrator | 98 0110-idx_tsidx1.sql upgrade 0800 0900 2504241422170900u 1 2024-04-25 14:22:32 14:24:57 kafka | [2024-04-25 14:22:59,218] INFO [Partition __consumer_offsets-9 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-9 (kafka.cluster.Partition) 14:24:57 grafana | logger=plugin.signature.key_retriever t=2024-04-25T14:22:32.300190946Z level=error msg="Error downloading plugin manifest keys" error="kv set: database is locked" 14:24:57 policy-db-migrator | 99 0120-pk_pdpstatistics.sql upgrade 0800 0900 2504241422170900u 1 2024-04-25 14:22:32 14:24:57 kafka | [2024-04-25 14:22:59,218] INFO [Partition __consumer_offsets-9 broker=1] Log loaded for partition __consumer_offsets-9 with initial high watermark 0 (kafka.cluster.Partition) 14:24:57 grafana | logger=grafana-apiserver t=2024-04-25T14:22:32.337604154Z level=info msg="Adding GroupVersion playlist.grafana.app v0alpha1 to ResourceManager" 14:24:57 policy-db-migrator | 100 0130-pdpstatistics.sql upgrade 0800 0900 2504241422170900u 1 2024-04-25 14:22:32 14:24:57 kafka | [2024-04-25 14:22:59,218] INFO [Broker id=1] Leader __consumer_offsets-9 with topic id Some(Z-ljZKLXR-y1QhXAaAKdbg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 14:24:57 grafana | logger=grafana-apiserver t=2024-04-25T14:22:32.338121862Z level=info msg="Adding GroupVersion featuretoggle.grafana.app v0alpha1 to ResourceManager" 14:24:57 policy-db-migrator | 101 0140-pk_pdpstatistics.sql upgrade 0800 0900 2504241422170900u 1 2024-04-25 14:22:32 14:24:57 kafka | [2024-04-25 14:22:59,230] INFO [LogLoader partition=__consumer_offsets-24, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 14:24:57 grafana | logger=provisioning.dashboard t=2024-04-25T14:22:32.440436222Z level=info msg="finished to provision dashboards" 14:24:57 policy-db-migrator | 102 0150-pdpstatistics.sql upgrade 0800 0900 2504241422170900u 1 2024-04-25 14:22:32 14:24:57 kafka | [2024-04-25 14:22:59,234] INFO Created log for partition __consumer_offsets-24 in /var/lib/kafka/data/__consumer_offsets-24 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 14:24:57 grafana | logger=infra.usagestats t=2024-04-25T14:23:33.097239688Z level=info msg="Usage stats are ready to report" 14:24:57 policy-db-migrator | 103 0160-jpapdpstatistics_enginestats.sql upgrade 0800 0900 2504241422170900u 1 2024-04-25 14:22:32 14:24:57 kafka | [2024-04-25 14:22:59,234] INFO [Partition __consumer_offsets-24 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-24 (kafka.cluster.Partition) 14:24:57 policy-db-migrator | 104 0170-jpapdpstatistics_enginestats.sql upgrade 0800 0900 2504241422170900u 1 2024-04-25 14:22:32 14:24:57 kafka | [2024-04-25 14:22:59,234] INFO [Partition __consumer_offsets-24 broker=1] Log loaded for partition __consumer_offsets-24 with initial high watermark 0 (kafka.cluster.Partition) 14:24:57 policy-db-migrator | 105 0180-jpapdpstatistics_enginestats.sql upgrade 0800 0900 2504241422170900u 1 2024-04-25 14:22:32 14:24:57 kafka | [2024-04-25 14:22:59,234] INFO [Broker id=1] Leader 
__consumer_offsets-24 with topic id Some(Z-ljZKLXR-y1QhXAaAKdbg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 14:24:57 policy-db-migrator | 106 0190-jpapolicyaudit.sql upgrade 0800 0900 2504241422170900u 1 2024-04-25 14:22:32 14:24:57 kafka | [2024-04-25 14:22:59,246] INFO [LogLoader partition=__consumer_offsets-31, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 14:24:57 policy-db-migrator | 107 0200-JpaPolicyAuditIndex_timestamp.sql upgrade 0800 0900 2504241422170900u 1 2024-04-25 14:22:32 14:24:57 kafka | [2024-04-25 14:22:59,247] INFO Created log for partition __consumer_offsets-31 in /var/lib/kafka/data/__consumer_offsets-31 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 14:24:57 policy-db-migrator | 108 0210-sequence.sql upgrade 0800 0900 2504241422170900u 1 2024-04-25 14:22:32 14:24:57 kafka | [2024-04-25 14:22:59,247] INFO [Partition __consumer_offsets-31 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-31 (kafka.cluster.Partition) 14:24:57 policy-db-migrator | 109 0220-sequence.sql upgrade 0800 0900 2504241422170900u 1 2024-04-25 14:22:32 14:24:57 kafka | [2024-04-25 14:22:59,248] INFO [Partition __consumer_offsets-31 broker=1] Log loaded for partition __consumer_offsets-31 with initial high watermark 0 (kafka.cluster.Partition) 14:24:57 policy-db-migrator | 110 0100-jpatoscapolicy_targets.sql upgrade 0900 1000 2504241422171000u 1 2024-04-25 14:22:32 14:24:57 kafka | [2024-04-25 14:22:59,248] INFO [Broker id=1] Leader __consumer_offsets-31 with topic id Some(Z-ljZKLXR-y1QhXAaAKdbg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . 
Previous leader None and previous leader epoch was -1. (state.change.logger) 14:24:57 policy-db-migrator | 111 0110-jpatoscapolicytype_targets.sql upgrade 0900 1000 2504241422171000u 1 2024-04-25 14:22:33 14:24:57 kafka | [2024-04-25 14:22:59,255] INFO [LogLoader partition=__consumer_offsets-46, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 14:24:57 policy-db-migrator | 112 0120-toscatrigger.sql upgrade 0900 1000 2504241422171000u 1 2024-04-25 14:22:33 14:24:57 kafka | [2024-04-25 14:22:59,256] INFO Created log for partition __consumer_offsets-46 in /var/lib/kafka/data/__consumer_offsets-46 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 14:24:57 policy-db-migrator | 113 0130-jpatoscapolicytype_triggers.sql upgrade 0900 1000 2504241422171000u 1 2024-04-25 14:22:33 14:24:57 kafka | [2024-04-25 14:22:59,256] INFO [Partition __consumer_offsets-46 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-46 (kafka.cluster.Partition) 14:24:57 policy-db-migrator | 114 0140-toscaparameter.sql upgrade 0900 1000 2504241422171000u 1 2024-04-25 14:22:33 14:24:57 kafka | [2024-04-25 14:22:59,256] INFO [Partition __consumer_offsets-46 broker=1] Log loaded for partition __consumer_offsets-46 with initial high watermark 0 (kafka.cluster.Partition) 14:24:57 policy-db-migrator | 115 0150-toscaproperty.sql upgrade 0900 1000 2504241422171000u 1 2024-04-25 14:22:33 14:24:57 kafka | [2024-04-25 14:22:59,256] INFO [Broker id=1] Leader __consumer_offsets-46 with topic id Some(Z-ljZKLXR-y1QhXAaAKdbg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 14:24:57 policy-db-migrator | 116 0160-jpapolicyaudit_pk.sql upgrade 0900 1000 2504241422171000u 1 2024-04-25 14:22:33 14:24:57 kafka | [2024-04-25 14:22:59,266] INFO [LogLoader partition=__consumer_offsets-1, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 14:24:57 policy-db-migrator | 117 0170-pdpstatistics_pk.sql upgrade 0900 1000 2504241422171000u 1 2024-04-25 14:22:33 14:24:57 kafka | [2024-04-25 14:22:59,267] INFO Created log for partition __consumer_offsets-1 in /var/lib/kafka/data/__consumer_offsets-1 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 14:24:57 policy-db-migrator | 118 0180-jpatoscanodetemplate_metadata.sql upgrade 0900 1000 2504241422171000u 1 2024-04-25 14:22:33 14:24:57 policy-db-migrator | 119 0100-upgrade.sql upgrade 1000 1100 2504241422171100u 1 2024-04-25 14:22:33 14:24:57 kafka | [2024-04-25 14:22:59,267] INFO [Partition __consumer_offsets-1 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-1 (kafka.cluster.Partition) 14:24:57 policy-db-migrator | 120 0100-jpapolicyaudit_renameuser.sql upgrade 1100 1200 2504241422171200u 1 2024-04-25 14:22:33 14:24:57 kafka | [2024-04-25 14:22:59,267] INFO [Partition __consumer_offsets-1 broker=1] Log loaded for partition __consumer_offsets-1 with initial high watermark 0 (kafka.cluster.Partition) 14:24:57 policy-db-migrator | 121 0110-idx_tsidx1.sql upgrade 1100 1200 2504241422171200u 1 2024-04-25 14:22:33 14:24:57 kafka | [2024-04-25 14:22:59,267] INFO [Broker id=1] Leader __consumer_offsets-1 with topic id Some(Z-ljZKLXR-y1QhXAaAKdbg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 14:24:57 policy-db-migrator | 122 0120-audit_sequence.sql upgrade 1100 1200 2504241422171200u 1 2024-04-25 14:22:33 14:24:57 kafka | [2024-04-25 14:22:59,275] INFO [LogLoader partition=__consumer_offsets-16, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 14:24:57 policy-db-migrator | 123 0130-statistics_sequence.sql upgrade 1100 1200 2504241422171200u 1 2024-04-25 14:22:33 14:24:57 kafka | [2024-04-25 14:22:59,276] INFO Created log for partition __consumer_offsets-16 in /var/lib/kafka/data/__consumer_offsets-16 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 14:24:57 policy-db-migrator | 124 0100-pdpstatistics.sql upgrade 1200 1300 2504241422171300u 1 2024-04-25 14:22:34 14:24:57 kafka | [2024-04-25 14:22:59,276] INFO [Partition __consumer_offsets-16 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-16 (kafka.cluster.Partition) 14:24:57 policy-db-migrator | 125 0110-jpapdpstatistics_enginestats.sql upgrade 1200 1300 2504241422171300u 1 2024-04-25 14:22:34 14:24:57 kafka | [2024-04-25 14:22:59,276] INFO [Partition __consumer_offsets-16 broker=1] Log loaded for partition __consumer_offsets-16 with initial high watermark 0 (kafka.cluster.Partition) 14:24:57 policy-db-migrator | 126 0120-statistics_sequence.sql upgrade 1200 1300 2504241422171300u 1 2024-04-25 14:22:34 14:24:57 kafka | [2024-04-25 14:22:59,276] INFO [Broker id=1] Leader __consumer_offsets-16 with topic id Some(Z-ljZKLXR-y1QhXAaAKdbg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger)
14:24:57 policy-db-migrator | policyadmin: OK @ 1300
14:24:57 kafka | [2024-04-25 14:22:59,283] INFO [LogLoader partition=__consumer_offsets-2, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
14:24:57 kafka | [2024-04-25 14:22:59,284] INFO Created log for partition __consumer_offsets-2 in /var/lib/kafka/data/__consumer_offsets-2 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
14:24:57 kafka | [2024-04-25 14:22:59,284] INFO [Partition __consumer_offsets-2 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-2 (kafka.cluster.Partition)
14:24:57 kafka | [2024-04-25 14:22:59,284] INFO [Partition __consumer_offsets-2 broker=1] Log loaded for partition __consumer_offsets-2 with initial high watermark 0 (kafka.cluster.Partition)
14:24:57 kafka | [2024-04-25 14:22:59,284] INFO [Broker id=1] Leader __consumer_offsets-2 with topic id Some(Z-ljZKLXR-y1QhXAaAKdbg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
14:24:57 kafka | [2024-04-25 14:22:59,294] INFO [LogLoader partition=__consumer_offsets-25, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
14:24:57 kafka | [2024-04-25 14:22:59,295] INFO Created log for partition __consumer_offsets-25 in /var/lib/kafka/data/__consumer_offsets-25 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
14:24:57 kafka | [2024-04-25 14:22:59,295] INFO [Partition __consumer_offsets-25 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-25 (kafka.cluster.Partition)
14:24:57 kafka | [2024-04-25 14:22:59,295] INFO [Partition __consumer_offsets-25 broker=1] Log loaded for partition __consumer_offsets-25 with initial high watermark 0 (kafka.cluster.Partition)
14:24:57 kafka | [2024-04-25 14:22:59,295] INFO [Broker id=1] Leader __consumer_offsets-25 with topic id Some(Z-ljZKLXR-y1QhXAaAKdbg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
14:24:57 kafka | [2024-04-25 14:22:59,301] INFO [LogLoader partition=__consumer_offsets-40, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
14:24:57 kafka | [2024-04-25 14:22:59,302] INFO Created log for partition __consumer_offsets-40 in /var/lib/kafka/data/__consumer_offsets-40 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
14:24:57 kafka | [2024-04-25 14:22:59,302] INFO [Partition __consumer_offsets-40 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-40 (kafka.cluster.Partition)
14:24:57 kafka | [2024-04-25 14:22:59,302] INFO [Partition __consumer_offsets-40 broker=1] Log loaded for partition __consumer_offsets-40 with initial high watermark 0 (kafka.cluster.Partition)
14:24:57 kafka | [2024-04-25 14:22:59,302] INFO [Broker id=1] Leader __consumer_offsets-40 with topic id Some(Z-ljZKLXR-y1QhXAaAKdbg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
14:24:57 kafka | [2024-04-25 14:22:59,359] INFO [LogLoader partition=__consumer_offsets-47, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
14:24:57 kafka | [2024-04-25 14:22:59,360] INFO Created log for partition __consumer_offsets-47 in /var/lib/kafka/data/__consumer_offsets-47 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
14:24:57 kafka | [2024-04-25 14:22:59,360] INFO [Partition __consumer_offsets-47 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-47 (kafka.cluster.Partition)
14:24:57 kafka | [2024-04-25 14:22:59,360] INFO [Partition __consumer_offsets-47 broker=1] Log loaded for partition __consumer_offsets-47 with initial high watermark 0 (kafka.cluster.Partition)
14:24:57 kafka | [2024-04-25 14:22:59,360] INFO [Broker id=1] Leader __consumer_offsets-47 with topic id Some(Z-ljZKLXR-y1QhXAaAKdbg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
14:24:57 kafka | [2024-04-25 14:22:59,367] INFO [LogLoader partition=__consumer_offsets-17, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
14:24:57 kafka | [2024-04-25 14:22:59,368] INFO Created log for partition __consumer_offsets-17 in /var/lib/kafka/data/__consumer_offsets-17 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
14:24:57 kafka | [2024-04-25 14:22:59,368] INFO [Partition __consumer_offsets-17 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-17 (kafka.cluster.Partition)
14:24:57 kafka | [2024-04-25 14:22:59,368] INFO [Partition __consumer_offsets-17 broker=1] Log loaded for partition __consumer_offsets-17 with initial high watermark 0 (kafka.cluster.Partition)
14:24:57 kafka | [2024-04-25 14:22:59,368] INFO [Broker id=1] Leader __consumer_offsets-17 with topic id Some(Z-ljZKLXR-y1QhXAaAKdbg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
14:24:57 kafka | [2024-04-25 14:22:59,380] INFO [LogLoader partition=__consumer_offsets-32, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
14:24:57 kafka | [2024-04-25 14:22:59,381] INFO Created log for partition __consumer_offsets-32 in /var/lib/kafka/data/__consumer_offsets-32 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
14:24:57 kafka | [2024-04-25 14:22:59,381] INFO [Partition __consumer_offsets-32 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-32 (kafka.cluster.Partition)
14:24:57 kafka | [2024-04-25 14:22:59,381] INFO [Partition __consumer_offsets-32 broker=1] Log loaded for partition __consumer_offsets-32 with initial high watermark 0 (kafka.cluster.Partition)
14:24:57 kafka | [2024-04-25 14:22:59,381] INFO [Broker id=1] Leader __consumer_offsets-32 with topic id Some(Z-ljZKLXR-y1QhXAaAKdbg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
14:24:57 kafka | [2024-04-25 14:22:59,450] INFO [LogLoader partition=__consumer_offsets-37, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
14:24:57 kafka | [2024-04-25 14:22:59,451] INFO Created log for partition __consumer_offsets-37 in /var/lib/kafka/data/__consumer_offsets-37 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
14:24:57 kafka | [2024-04-25 14:22:59,451] INFO [Partition __consumer_offsets-37 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-37 (kafka.cluster.Partition)
14:24:57 kafka | [2024-04-25 14:22:59,451] INFO [Partition __consumer_offsets-37 broker=1] Log loaded for partition __consumer_offsets-37 with initial high watermark 0 (kafka.cluster.Partition)
14:24:57 kafka | [2024-04-25 14:22:59,451] INFO [Broker id=1] Leader __consumer_offsets-37 with topic id Some(Z-ljZKLXR-y1QhXAaAKdbg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
14:24:57 kafka | [2024-04-25 14:22:59,458] INFO [LogLoader partition=__consumer_offsets-7, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
14:24:57 kafka | [2024-04-25 14:22:59,458] INFO Created log for partition __consumer_offsets-7 in /var/lib/kafka/data/__consumer_offsets-7 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
14:24:57 kafka | [2024-04-25 14:22:59,458] INFO [Partition __consumer_offsets-7 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-7 (kafka.cluster.Partition)
14:24:57 kafka | [2024-04-25 14:22:59,458] INFO [Partition __consumer_offsets-7 broker=1] Log loaded for partition __consumer_offsets-7 with initial high watermark 0 (kafka.cluster.Partition)
14:24:57 kafka | [2024-04-25 14:22:59,458] INFO [Broker id=1] Leader __consumer_offsets-7 with topic id Some(Z-ljZKLXR-y1QhXAaAKdbg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
14:24:57 kafka | [2024-04-25 14:22:59,467] INFO [LogLoader partition=__consumer_offsets-22, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
14:24:57 kafka | [2024-04-25 14:22:59,467] INFO Created log for partition __consumer_offsets-22 in /var/lib/kafka/data/__consumer_offsets-22 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
14:24:57 kafka | [2024-04-25 14:22:59,467] INFO [Partition __consumer_offsets-22 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-22 (kafka.cluster.Partition)
14:24:57 kafka | [2024-04-25 14:22:59,467] INFO [Partition __consumer_offsets-22 broker=1] Log loaded for partition __consumer_offsets-22 with initial high watermark 0 (kafka.cluster.Partition)
14:24:57 kafka | [2024-04-25 14:22:59,467] INFO [Broker id=1] Leader __consumer_offsets-22 with topic id Some(Z-ljZKLXR-y1QhXAaAKdbg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
14:24:57 kafka | [2024-04-25 14:22:59,477] INFO [LogLoader partition=__consumer_offsets-29, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
14:24:57 kafka | [2024-04-25 14:22:59,477] INFO Created log for partition __consumer_offsets-29 in /var/lib/kafka/data/__consumer_offsets-29 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
14:24:57 kafka | [2024-04-25 14:22:59,478] INFO [Partition __consumer_offsets-29 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-29 (kafka.cluster.Partition)
14:24:57 kafka | [2024-04-25 14:22:59,478] INFO [Partition __consumer_offsets-29 broker=1] Log loaded for partition __consumer_offsets-29 with initial high watermark 0 (kafka.cluster.Partition)
14:24:57 kafka | [2024-04-25 14:22:59,478] INFO [Broker id=1] Leader __consumer_offsets-29 with topic id Some(Z-ljZKLXR-y1QhXAaAKdbg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
14:24:57 kafka | [2024-04-25 14:22:59,488] INFO [LogLoader partition=__consumer_offsets-44, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
14:24:57 kafka | [2024-04-25 14:22:59,489] INFO Created log for partition __consumer_offsets-44 in /var/lib/kafka/data/__consumer_offsets-44 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
14:24:57 kafka | [2024-04-25 14:22:59,489] INFO [Partition __consumer_offsets-44 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-44 (kafka.cluster.Partition)
14:24:57 kafka | [2024-04-25 14:22:59,489] INFO [Partition __consumer_offsets-44 broker=1] Log loaded for partition __consumer_offsets-44 with initial high watermark 0 (kafka.cluster.Partition)
14:24:57 kafka | [2024-04-25 14:22:59,489] INFO [Broker id=1] Leader __consumer_offsets-44 with topic id Some(Z-ljZKLXR-y1QhXAaAKdbg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
14:24:57 kafka | [2024-04-25 14:22:59,502] INFO [LogLoader partition=__consumer_offsets-14, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
14:24:57 kafka | [2024-04-25 14:22:59,503] INFO Created log for partition __consumer_offsets-14 in /var/lib/kafka/data/__consumer_offsets-14 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
14:24:57 kafka | [2024-04-25 14:22:59,503] INFO [Partition __consumer_offsets-14 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-14 (kafka.cluster.Partition)
14:24:57 kafka | [2024-04-25 14:22:59,503] INFO [Partition __consumer_offsets-14 broker=1] Log loaded for partition __consumer_offsets-14 with initial high watermark 0 (kafka.cluster.Partition)
14:24:57 kafka | [2024-04-25 14:22:59,503] INFO [Broker id=1] Leader __consumer_offsets-14 with topic id Some(Z-ljZKLXR-y1QhXAaAKdbg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
14:24:57 kafka | [2024-04-25 14:22:59,510] INFO [LogLoader partition=__consumer_offsets-23, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
14:24:57 kafka | [2024-04-25 14:22:59,511] INFO Created log for partition __consumer_offsets-23 in /var/lib/kafka/data/__consumer_offsets-23 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
14:24:57 kafka | [2024-04-25 14:22:59,511] INFO [Partition __consumer_offsets-23 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-23 (kafka.cluster.Partition)
14:24:57 kafka | [2024-04-25 14:22:59,511] INFO [Partition __consumer_offsets-23 broker=1] Log loaded for partition __consumer_offsets-23 with initial high watermark 0 (kafka.cluster.Partition)
14:24:57 kafka | [2024-04-25 14:22:59,511] INFO [Broker id=1] Leader __consumer_offsets-23 with topic id Some(Z-ljZKLXR-y1QhXAaAKdbg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
14:24:57 kafka | [2024-04-25 14:22:59,522] INFO [LogLoader partition=__consumer_offsets-38, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
14:24:57 kafka | [2024-04-25 14:22:59,523] INFO Created log for partition __consumer_offsets-38 in /var/lib/kafka/data/__consumer_offsets-38 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
14:24:57 kafka | [2024-04-25 14:22:59,523] INFO [Partition __consumer_offsets-38 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-38 (kafka.cluster.Partition)
14:24:57 kafka | [2024-04-25 14:22:59,523] INFO [Partition __consumer_offsets-38 broker=1] Log loaded for partition __consumer_offsets-38 with initial high watermark 0 (kafka.cluster.Partition)
14:24:57 kafka | [2024-04-25 14:22:59,524] INFO [Broker id=1] Leader __consumer_offsets-38 with topic id Some(Z-ljZKLXR-y1QhXAaAKdbg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
14:24:57 kafka | [2024-04-25 14:22:59,533] INFO [LogLoader partition=__consumer_offsets-8, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
14:24:57 kafka | [2024-04-25 14:22:59,536] INFO Created log for partition __consumer_offsets-8 in /var/lib/kafka/data/__consumer_offsets-8 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
14:24:57 kafka | [2024-04-25 14:22:59,536] INFO [Partition __consumer_offsets-8 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-8 (kafka.cluster.Partition)
14:24:57 kafka | [2024-04-25 14:22:59,536] INFO [Partition __consumer_offsets-8 broker=1] Log loaded for partition __consumer_offsets-8 with initial high watermark 0 (kafka.cluster.Partition)
14:24:57 kafka | [2024-04-25 14:22:59,536] INFO [Broker id=1] Leader __consumer_offsets-8 with topic id Some(Z-ljZKLXR-y1QhXAaAKdbg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
14:24:57 kafka | [2024-04-25 14:22:59,545] INFO [LogLoader partition=__consumer_offsets-45, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
14:24:57 kafka | [2024-04-25 14:22:59,545] INFO Created log for partition __consumer_offsets-45 in /var/lib/kafka/data/__consumer_offsets-45 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
14:24:57 kafka | [2024-04-25 14:22:59,545] INFO [Partition __consumer_offsets-45 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-45 (kafka.cluster.Partition)
14:24:57 kafka | [2024-04-25 14:22:59,546] INFO [Partition __consumer_offsets-45 broker=1] Log loaded for partition __consumer_offsets-45 with initial high watermark 0 (kafka.cluster.Partition)
14:24:57 kafka | [2024-04-25 14:22:59,546] INFO [Broker id=1] Leader __consumer_offsets-45 with topic id Some(Z-ljZKLXR-y1QhXAaAKdbg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
14:24:57 kafka | [2024-04-25 14:22:59,558] INFO [LogLoader partition=__consumer_offsets-15, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
14:24:57 kafka | [2024-04-25 14:22:59,559] INFO Created log for partition __consumer_offsets-15 in /var/lib/kafka/data/__consumer_offsets-15 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
14:24:57 kafka | [2024-04-25 14:22:59,559] INFO [Partition __consumer_offsets-15 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-15 (kafka.cluster.Partition)
14:24:57 kafka | [2024-04-25 14:22:59,559] INFO [Partition __consumer_offsets-15 broker=1] Log loaded for partition __consumer_offsets-15 with initial high watermark 0 (kafka.cluster.Partition)
14:24:57 kafka | [2024-04-25 14:22:59,559] INFO [Broker id=1] Leader __consumer_offsets-15 with topic id Some(Z-ljZKLXR-y1QhXAaAKdbg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
14:24:57 kafka | [2024-04-25 14:22:59,570] INFO [LogLoader partition=__consumer_offsets-30, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
14:24:57 kafka | [2024-04-25 14:22:59,570] INFO Created log for partition __consumer_offsets-30 in /var/lib/kafka/data/__consumer_offsets-30 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
14:24:57 kafka | [2024-04-25 14:22:59,570] INFO [Partition __consumer_offsets-30 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-30 (kafka.cluster.Partition)
14:24:57 kafka | [2024-04-25 14:22:59,571] INFO [Partition __consumer_offsets-30 broker=1] Log loaded for partition __consumer_offsets-30 with initial high watermark 0 (kafka.cluster.Partition)
14:24:57 kafka | [2024-04-25 14:22:59,571] INFO [Broker id=1] Leader __consumer_offsets-30 with topic id Some(Z-ljZKLXR-y1QhXAaAKdbg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
14:24:57 kafka | [2024-04-25 14:22:59,577] INFO [LogLoader partition=__consumer_offsets-0, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
14:24:57 kafka | [2024-04-25 14:22:59,578] INFO Created log for partition __consumer_offsets-0 in /var/lib/kafka/data/__consumer_offsets-0 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
14:24:57 kafka | [2024-04-25 14:22:59,578] INFO [Partition __consumer_offsets-0 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-0 (kafka.cluster.Partition)
14:24:57 kafka | [2024-04-25 14:22:59,578] INFO [Partition __consumer_offsets-0 broker=1] Log loaded for partition __consumer_offsets-0 with initial high watermark 0 (kafka.cluster.Partition)
14:24:57 kafka | [2024-04-25 14:22:59,583] INFO [Broker id=1] Leader __consumer_offsets-0 with topic id Some(Z-ljZKLXR-y1QhXAaAKdbg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
14:24:57 kafka | [2024-04-25 14:22:59,594] INFO [LogLoader partition=__consumer_offsets-35, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
14:24:57 kafka | [2024-04-25 14:22:59,595] INFO Created log for partition __consumer_offsets-35 in /var/lib/kafka/data/__consumer_offsets-35 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
14:24:57 kafka | [2024-04-25 14:22:59,595] INFO [Partition __consumer_offsets-35 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-35 (kafka.cluster.Partition)
14:24:57 kafka | [2024-04-25 14:22:59,595] INFO [Partition __consumer_offsets-35 broker=1] Log loaded for partition __consumer_offsets-35 with initial high watermark 0 (kafka.cluster.Partition)
14:24:57 kafka | [2024-04-25 14:22:59,595] INFO [Broker id=1] Leader __consumer_offsets-35 with topic id Some(Z-ljZKLXR-y1QhXAaAKdbg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
14:24:57 kafka | [2024-04-25 14:22:59,610] INFO [LogLoader partition=__consumer_offsets-5, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
14:24:57 kafka | [2024-04-25 14:22:59,611] INFO Created log for partition __consumer_offsets-5 in /var/lib/kafka/data/__consumer_offsets-5 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
14:24:57 kafka | [2024-04-25 14:22:59,611] INFO [Partition __consumer_offsets-5 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-5 (kafka.cluster.Partition)
14:24:57 kafka | [2024-04-25 14:22:59,611] INFO [Partition __consumer_offsets-5 broker=1] Log loaded for partition __consumer_offsets-5 with initial high watermark 0 (kafka.cluster.Partition)
14:24:57 kafka | [2024-04-25 14:22:59,612] INFO [Broker id=1] Leader __consumer_offsets-5 with topic id Some(Z-ljZKLXR-y1QhXAaAKdbg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
14:24:57 kafka | [2024-04-25 14:22:59,621] INFO [LogLoader partition=__consumer_offsets-20, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
14:24:57 kafka | [2024-04-25 14:22:59,622] INFO Created log for partition __consumer_offsets-20 in /var/lib/kafka/data/__consumer_offsets-20 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
14:24:57 kafka | [2024-04-25 14:22:59,622] INFO [Partition __consumer_offsets-20 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-20 (kafka.cluster.Partition)
14:24:57 kafka | [2024-04-25 14:22:59,622] INFO [Partition __consumer_offsets-20 broker=1] Log loaded for partition __consumer_offsets-20 with initial high watermark 0 (kafka.cluster.Partition)
14:24:57 kafka | [2024-04-25 14:22:59,622] INFO [Broker id=1] Leader __consumer_offsets-20 with topic id Some(Z-ljZKLXR-y1QhXAaAKdbg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
14:24:57 kafka | [2024-04-25 14:22:59,633] INFO [LogLoader partition=__consumer_offsets-27, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
14:24:57 kafka | [2024-04-25 14:22:59,633] INFO Created log for partition __consumer_offsets-27 in /var/lib/kafka/data/__consumer_offsets-27 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
14:24:57 kafka | [2024-04-25 14:22:59,633] INFO [Partition __consumer_offsets-27 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-27 (kafka.cluster.Partition)
14:24:57 kafka | [2024-04-25 14:22:59,633] INFO [Partition __consumer_offsets-27 broker=1] Log loaded for partition __consumer_offsets-27 with initial high watermark 0 (kafka.cluster.Partition)
14:24:57 kafka | [2024-04-25 14:22:59,634] INFO [Broker id=1] Leader __consumer_offsets-27 with topic id Some(Z-ljZKLXR-y1QhXAaAKdbg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
14:24:57 kafka | [2024-04-25 14:22:59,641] INFO [LogLoader partition=__consumer_offsets-42, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
14:24:57 kafka | [2024-04-25 14:22:59,641] INFO Created log for partition __consumer_offsets-42 in /var/lib/kafka/data/__consumer_offsets-42 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
14:24:57 kafka | [2024-04-25 14:22:59,641] INFO [Partition __consumer_offsets-42 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-42 (kafka.cluster.Partition)
14:24:57 kafka | [2024-04-25 14:22:59,641] INFO [Partition __consumer_offsets-42 broker=1] Log loaded for partition __consumer_offsets-42 with initial high watermark 0 (kafka.cluster.Partition)
14:24:57 kafka | [2024-04-25 14:22:59,642] INFO [Broker id=1] Leader __consumer_offsets-42 with topic id Some(Z-ljZKLXR-y1QhXAaAKdbg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
14:24:57 kafka | [2024-04-25 14:22:59,684] INFO [LogLoader partition=__consumer_offsets-12, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
14:24:57 kafka | [2024-04-25 14:22:59,685] INFO Created log for partition __consumer_offsets-12 in /var/lib/kafka/data/__consumer_offsets-12 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
14:24:57 kafka | [2024-04-25 14:22:59,685] INFO [Partition __consumer_offsets-12 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-12 (kafka.cluster.Partition)
14:24:57 kafka | [2024-04-25 14:22:59,685] INFO [Partition __consumer_offsets-12 broker=1] Log loaded for partition __consumer_offsets-12 with initial high watermark 0 (kafka.cluster.Partition)
14:24:57 kafka | [2024-04-25 14:22:59,685] INFO [Broker id=1] Leader __consumer_offsets-12 with topic id Some(Z-ljZKLXR-y1QhXAaAKdbg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
14:24:57 kafka | [2024-04-25 14:22:59,693] INFO [LogLoader partition=__consumer_offsets-21, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
14:24:57 kafka | [2024-04-25 14:22:59,694] INFO Created log for partition __consumer_offsets-21 in /var/lib/kafka/data/__consumer_offsets-21 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
14:24:57 kafka | [2024-04-25 14:22:59,694] INFO [Partition __consumer_offsets-21 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-21 (kafka.cluster.Partition)
14:24:57 kafka | [2024-04-25 14:22:59,694] INFO [Partition __consumer_offsets-21 broker=1] Log loaded for partition __consumer_offsets-21 with initial high watermark 0 (kafka.cluster.Partition)
14:24:57 kafka | [2024-04-25 14:22:59,694] INFO [Broker id=1] Leader __consumer_offsets-21 with topic id Some(Z-ljZKLXR-y1QhXAaAKdbg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
14:24:57 kafka | [2024-04-25 14:22:59,703] INFO [LogLoader partition=__consumer_offsets-36, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
14:24:57 kafka | [2024-04-25 14:22:59,703] INFO Created log for partition __consumer_offsets-36 in /var/lib/kafka/data/__consumer_offsets-36 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
14:24:57 kafka | [2024-04-25 14:22:59,703] INFO [Partition __consumer_offsets-36 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-36 (kafka.cluster.Partition)
14:24:57 kafka | [2024-04-25 14:22:59,703] INFO [Partition __consumer_offsets-36 broker=1] Log loaded for partition __consumer_offsets-36 with initial high watermark 0 (kafka.cluster.Partition)
14:24:57 kafka | [2024-04-25 14:22:59,704] INFO [Broker id=1] Leader __consumer_offsets-36 with topic id Some(Z-ljZKLXR-y1QhXAaAKdbg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
14:24:57 kafka | [2024-04-25 14:22:59,711] INFO [LogLoader partition=__consumer_offsets-6, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
14:24:57 kafka | [2024-04-25 14:22:59,711] INFO Created log for partition __consumer_offsets-6 in /var/lib/kafka/data/__consumer_offsets-6 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
14:24:57 kafka | [2024-04-25 14:22:59,712] INFO [Partition __consumer_offsets-6 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-6 (kafka.cluster.Partition)
14:24:57 kafka | [2024-04-25 14:22:59,712] INFO [Partition __consumer_offsets-6 broker=1] Log loaded for partition __consumer_offsets-6 with initial high watermark 0 (kafka.cluster.Partition)
14:24:57 kafka | [2024-04-25 14:22:59,712] INFO [Broker id=1] Leader __consumer_offsets-6 with topic id Some(Z-ljZKLXR-y1QhXAaAKdbg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
14:24:57 kafka | [2024-04-25 14:22:59,722] INFO [LogLoader partition=__consumer_offsets-43, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
14:24:57 kafka | [2024-04-25 14:22:59,723] INFO Created log for partition __consumer_offsets-43 in /var/lib/kafka/data/__consumer_offsets-43 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
14:24:57 kafka | [2024-04-25 14:22:59,723] INFO [Partition __consumer_offsets-43 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-43 (kafka.cluster.Partition)
14:24:57 kafka | [2024-04-25 14:22:59,723] INFO [Partition __consumer_offsets-43 broker=1] Log loaded for partition __consumer_offsets-43 with initial high watermark 0 (kafka.cluster.Partition)
14:24:57 kafka | [2024-04-25 14:22:59,723] INFO [Broker id=1] Leader __consumer_offsets-43 with topic id Some(Z-ljZKLXR-y1QhXAaAKdbg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
14:24:57 kafka | [2024-04-25 14:22:59,733] INFO [LogLoader partition=__consumer_offsets-13, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
14:24:57 kafka | [2024-04-25 14:22:59,735] INFO Created log for partition __consumer_offsets-13 in /var/lib/kafka/data/__consumer_offsets-13 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
14:24:57 kafka | [2024-04-25 14:22:59,735] INFO [Partition __consumer_offsets-13 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-13 (kafka.cluster.Partition)
14:24:57 kafka | [2024-04-25 14:22:59,735] INFO [Partition __consumer_offsets-13 broker=1] Log loaded for partition __consumer_offsets-13 with initial high watermark 0 (kafka.cluster.Partition)
14:24:57 kafka | [2024-04-25 14:22:59,736] INFO [Broker id=1] Leader __consumer_offsets-13 with topic id Some(Z-ljZKLXR-y1QhXAaAKdbg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
14:24:57 kafka | [2024-04-25 14:22:59,747] INFO [LogLoader partition=__consumer_offsets-28, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
14:24:57 kafka | [2024-04-25 14:22:59,748] INFO Created log for partition __consumer_offsets-28 in /var/lib/kafka/data/__consumer_offsets-28 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
14:24:57 kafka | [2024-04-25 14:22:59,748] INFO [Partition __consumer_offsets-28 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-28 (kafka.cluster.Partition)
14:24:57 kafka | [2024-04-25 14:22:59,748] INFO [Partition __consumer_offsets-28 broker=1] Log loaded for partition __consumer_offsets-28 with initial high watermark 0 (kafka.cluster.Partition)
14:24:57 kafka | [2024-04-25 14:22:59,748] INFO [Broker id=1] Leader __consumer_offsets-28 with topic id Some(Z-ljZKLXR-y1QhXAaAKdbg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1.
(state.change.logger) 14:24:57 kafka | [2024-04-25 14:22:59,796] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-3 (state.change.logger) 14:24:57 kafka | [2024-04-25 14:22:59,796] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-18 (state.change.logger) 14:24:57 kafka | [2024-04-25 14:22:59,796] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-41 (state.change.logger) 14:24:57 kafka | [2024-04-25 14:22:59,796] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-10 (state.change.logger) 14:24:57 kafka | [2024-04-25 14:22:59,796] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-33 (state.change.logger) 14:24:57 kafka | [2024-04-25 14:22:59,796] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-48 (state.change.logger) 14:24:57 kafka | [2024-04-25 14:22:59,796] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-19 (state.change.logger) 14:24:57 kafka | [2024-04-25 14:22:59,796] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-34 (state.change.logger) 14:24:57 kafka | [2024-04-25 14:22:59,796] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader 
transition for partition __consumer_offsets-4 (state.change.logger) 14:24:57 kafka | [2024-04-25 14:22:59,796] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-11 (state.change.logger) 14:24:57 kafka | [2024-04-25 14:22:59,796] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-26 (state.change.logger) 14:24:57 kafka | [2024-04-25 14:22:59,796] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-49 (state.change.logger) 14:24:57 kafka | [2024-04-25 14:22:59,797] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-39 (state.change.logger) 14:24:57 kafka | [2024-04-25 14:22:59,797] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-9 (state.change.logger) 14:24:57 kafka | [2024-04-25 14:22:59,797] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-24 (state.change.logger) 14:24:57 kafka | [2024-04-25 14:22:59,797] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-31 (state.change.logger) 14:24:57 kafka | [2024-04-25 14:22:59,797] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-46 (state.change.logger) 14:24:57 kafka | [2024-04-25 14:22:59,797] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from 
controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-1 (state.change.logger) 14:24:57 kafka | [2024-04-25 14:22:59,797] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-16 (state.change.logger) 14:24:57 kafka | [2024-04-25 14:22:59,797] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-2 (state.change.logger) 14:24:57 kafka | [2024-04-25 14:22:59,797] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-25 (state.change.logger) 14:24:57 kafka | [2024-04-25 14:22:59,797] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-40 (state.change.logger) 14:24:57 kafka | [2024-04-25 14:22:59,797] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-47 (state.change.logger) 14:24:57 kafka | [2024-04-25 14:22:59,797] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-17 (state.change.logger) 14:24:57 kafka | [2024-04-25 14:22:59,797] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-32 (state.change.logger) 14:24:57 kafka | [2024-04-25 14:22:59,797] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-37 (state.change.logger) 14:24:57 kafka | [2024-04-25 14:22:59,797] TRACE [Broker id=1] Completed 
LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-7 (state.change.logger) 14:24:57 kafka | [2024-04-25 14:22:59,797] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-22 (state.change.logger) 14:24:57 kafka | [2024-04-25 14:22:59,797] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-29 (state.change.logger) 14:24:57 kafka | [2024-04-25 14:22:59,797] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-44 (state.change.logger) 14:24:57 kafka | [2024-04-25 14:22:59,797] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-14 (state.change.logger) 14:24:57 kafka | [2024-04-25 14:22:59,797] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-23 (state.change.logger) 14:24:57 kafka | [2024-04-25 14:22:59,797] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-38 (state.change.logger) 14:24:57 kafka | [2024-04-25 14:22:59,797] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-8 (state.change.logger) 14:24:57 kafka | [2024-04-25 14:22:59,797] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-45 (state.change.logger) 14:24:57 kafka | [2024-04-25 
14:22:59,797] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-15 (state.change.logger) 14:24:57 kafka | [2024-04-25 14:22:59,797] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-30 (state.change.logger) 14:24:57 kafka | [2024-04-25 14:22:59,797] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-0 (state.change.logger) 14:24:57 kafka | [2024-04-25 14:22:59,797] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-35 (state.change.logger) 14:24:57 kafka | [2024-04-25 14:22:59,797] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-5 (state.change.logger) 14:24:57 kafka | [2024-04-25 14:22:59,797] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-20 (state.change.logger) 14:24:57 kafka | [2024-04-25 14:22:59,797] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-27 (state.change.logger) 14:24:57 kafka | [2024-04-25 14:22:59,797] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-42 (state.change.logger) 14:24:57 kafka | [2024-04-25 14:22:59,797] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-12 
(state.change.logger) 14:24:57 kafka | [2024-04-25 14:22:59,797] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-21 (state.change.logger) 14:24:57 kafka | [2024-04-25 14:22:59,797] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-36 (state.change.logger) 14:24:57 kafka | [2024-04-25 14:22:59,797] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-6 (state.change.logger) 14:24:57 kafka | [2024-04-25 14:22:59,797] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-43 (state.change.logger) 14:24:57 kafka | [2024-04-25 14:22:59,797] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-13 (state.change.logger) 14:24:57 kafka | [2024-04-25 14:22:59,797] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-28 (state.change.logger) 14:24:57 kafka | [2024-04-25 14:22:59,799] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 3 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 14:24:57 kafka | [2024-04-25 14:22:59,800] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-3 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 14:24:57 kafka | [2024-04-25 14:22:59,802] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 18 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 14:24:57 kafka | [2024-04-25 14:22:59,802] INFO 
[GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-18 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 14:24:57 kafka | [2024-04-25 14:22:59,802] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 41 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 14:24:57 kafka | [2024-04-25 14:22:59,802] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-41 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 14:24:57 kafka | [2024-04-25 14:22:59,802] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 10 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 14:24:57 kafka | [2024-04-25 14:22:59,802] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-10 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 14:24:57 kafka | [2024-04-25 14:22:59,802] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 33 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 14:24:57 kafka | [2024-04-25 14:22:59,802] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-33 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 14:24:57 kafka | [2024-04-25 14:22:59,802] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 48 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 14:24:57 kafka | [2024-04-25 14:22:59,802] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-48 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 14:24:57 kafka | [2024-04-25 14:22:59,802] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 19 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 14:24:57 kafka | [2024-04-25 14:22:59,802] INFO [GroupMetadataManager brokerId=1] Scheduling 
loading of offsets and group metadata from __consumer_offsets-19 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 14:24:57 kafka | [2024-04-25 14:22:59,802] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 34 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 14:24:57 kafka | [2024-04-25 14:22:59,802] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-34 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 14:24:57 kafka | [2024-04-25 14:22:59,802] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 4 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 14:24:57 kafka | [2024-04-25 14:22:59,802] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-4 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 14:24:57 kafka | [2024-04-25 14:22:59,802] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 11 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 14:24:57 kafka | [2024-04-25 14:22:59,802] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-11 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 14:24:57 kafka | [2024-04-25 14:22:59,802] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 26 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 14:24:57 kafka | [2024-04-25 14:22:59,802] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-26 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 14:24:57 kafka | [2024-04-25 14:22:59,802] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 49 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 14:24:57 kafka | [2024-04-25 14:22:59,802] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from 
__consumer_offsets-49 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 14:24:57 kafka | [2024-04-25 14:22:59,802] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 39 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 14:24:57 kafka | [2024-04-25 14:22:59,802] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-39 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 14:24:57 kafka | [2024-04-25 14:22:59,802] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 9 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 14:24:57 kafka | [2024-04-25 14:22:59,802] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-9 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 14:24:57 kafka | [2024-04-25 14:22:59,802] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 24 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 14:24:57 kafka | [2024-04-25 14:22:59,802] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-24 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 14:24:57 kafka | [2024-04-25 14:22:59,802] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 31 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 14:24:57 kafka | [2024-04-25 14:22:59,802] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-31 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 14:24:57 kafka | [2024-04-25 14:22:59,802] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 46 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 14:24:57 kafka | [2024-04-25 14:22:59,802] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-46 for epoch 0 
(kafka.coordinator.group.GroupMetadataManager) 14:24:57 kafka | [2024-04-25 14:22:59,802] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 1 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 14:24:57 kafka | [2024-04-25 14:22:59,802] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-1 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 14:24:57 kafka | [2024-04-25 14:22:59,802] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 16 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 14:24:57 kafka | [2024-04-25 14:22:59,802] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-16 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 14:24:57 kafka | [2024-04-25 14:22:59,802] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 2 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 14:24:57 kafka | [2024-04-25 14:22:59,802] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-2 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 14:24:57 kafka | [2024-04-25 14:22:59,802] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 25 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 14:24:57 kafka | [2024-04-25 14:22:59,802] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-25 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 14:24:57 kafka | [2024-04-25 14:22:59,802] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 40 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 14:24:57 kafka | [2024-04-25 14:22:59,802] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-40 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 
14:24:57 kafka | [2024-04-25 14:22:59,802] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 47 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 14:24:57 kafka | [2024-04-25 14:22:59,802] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-47 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 14:24:57 kafka | [2024-04-25 14:22:59,802] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 17 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 14:24:57 kafka | [2024-04-25 14:22:59,802] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-17 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 14:24:57 kafka | [2024-04-25 14:22:59,802] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 32 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 14:24:57 kafka | [2024-04-25 14:22:59,802] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-32 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 14:24:57 kafka | [2024-04-25 14:22:59,802] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 37 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 14:24:57 kafka | [2024-04-25 14:22:59,802] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-37 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 14:24:57 kafka | [2024-04-25 14:22:59,802] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 7 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 14:24:57 kafka | [2024-04-25 14:22:59,802] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-7 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 14:24:57 kafka | [2024-04-25 14:22:59,802] 
INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 22 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 14:24:57 kafka | [2024-04-25 14:22:59,802] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-22 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 14:24:57 kafka | [2024-04-25 14:22:59,803] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 29 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 14:24:57 kafka | [2024-04-25 14:22:59,803] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-29 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 14:24:57 kafka | [2024-04-25 14:22:59,803] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 44 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 14:24:57 kafka | [2024-04-25 14:22:59,803] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-44 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 14:24:57 kafka | [2024-04-25 14:22:59,803] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 14 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 14:24:57 kafka | [2024-04-25 14:22:59,803] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-14 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 14:24:57 kafka | [2024-04-25 14:22:59,803] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 23 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 14:24:57 kafka | [2024-04-25 14:22:59,803] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-23 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 14:24:57 kafka | [2024-04-25 14:22:59,803] INFO [GroupCoordinator 1]: Elected as the 
group coordinator for partition 38 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 14:24:57 kafka | [2024-04-25 14:22:59,803] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-38 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 14:24:57 kafka | [2024-04-25 14:22:59,803] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 8 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 14:24:57 kafka | [2024-04-25 14:22:59,803] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-8 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 14:24:57 kafka | [2024-04-25 14:22:59,803] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 45 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 14:24:57 kafka | [2024-04-25 14:22:59,803] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-45 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 14:24:57 kafka | [2024-04-25 14:22:59,803] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 15 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 14:24:57 kafka | [2024-04-25 14:22:59,803] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-15 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 14:24:57 kafka | [2024-04-25 14:22:59,803] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 30 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 14:24:57 kafka | [2024-04-25 14:22:59,803] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-30 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 14:24:57 kafka | [2024-04-25 14:22:59,803] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 0 in epoch 0 
(kafka.coordinator.group.GroupCoordinator) 14:24:57 kafka | [2024-04-25 14:22:59,803] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-0 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 14:24:57 kafka | [2024-04-25 14:22:59,803] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 35 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 14:24:57 kafka | [2024-04-25 14:22:59,803] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-35 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 14:24:57 kafka | [2024-04-25 14:22:59,803] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 5 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 14:24:57 kafka | [2024-04-25 14:22:59,803] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-5 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 14:24:57 kafka | [2024-04-25 14:22:59,803] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 20 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 14:24:57 kafka | [2024-04-25 14:22:59,803] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-20 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 14:24:57 kafka | [2024-04-25 14:22:59,803] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 27 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 14:24:57 kafka | [2024-04-25 14:22:59,803] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-27 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 14:24:57 kafka | [2024-04-25 14:22:59,803] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 42 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 
14:24:57 kafka | [2024-04-25 14:22:59,803] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-42 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 14:24:57 kafka | [2024-04-25 14:22:59,803] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 12 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 14:24:57 kafka | [2024-04-25 14:22:59,803] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-12 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 14:24:57 kafka | [2024-04-25 14:22:59,803] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 21 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 14:24:57 kafka | [2024-04-25 14:22:59,803] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-21 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 14:24:57 kafka | [2024-04-25 14:22:59,803] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 36 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 14:24:57 kafka | [2024-04-25 14:22:59,803] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-36 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 14:24:57 kafka | [2024-04-25 14:22:59,803] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 6 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 14:24:57 kafka | [2024-04-25 14:22:59,803] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-6 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 14:24:57 kafka | [2024-04-25 14:22:59,803] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 43 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 14:24:57 kafka | [2024-04-25 14:22:59,803] 
INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-43 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 14:24:57 kafka | [2024-04-25 14:22:59,803] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 13 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 14:24:57 kafka | [2024-04-25 14:22:59,803] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-13 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 14:24:57 kafka | [2024-04-25 14:22:59,803] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 28 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 14:24:57 kafka | [2024-04-25 14:22:59,803] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-28 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 14:24:57 kafka | [2024-04-25 14:22:59,803] INFO [Broker id=1] Finished LeaderAndIsr request in 775ms correlationId 3 from controller 1 for 50 partitions (state.change.logger) 14:24:57 kafka | [2024-04-25 14:22:59,805] TRACE [Controller id=1 epoch=1] Received response LeaderAndIsrResponseData(errorCode=0, partitionErrors=[], topics=[LeaderAndIsrTopicError(topicId=Z-ljZKLXR-y1QhXAaAKdbg, partitionErrors=[LeaderAndIsrPartitionError(topicName='', partitionIndex=13, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=46, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=9, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=42, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=21, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=17, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=30, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=26, errorCode=0), LeaderAndIsrPartitionError(topicName='', 
partitionIndex=5, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=38, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=1, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=34, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=16, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=45, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=12, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=41, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=24, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=20, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=49, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=0, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=29, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=25, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=8, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=37, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=4, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=33, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=15, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=48, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=11, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=44, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=23, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=19, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=32, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=28, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=7, errorCode=0), LeaderAndIsrPartitionError(topicName='', 
partitionIndex=40, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=3, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=36, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=47, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=14, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=43, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=10, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=22, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=18, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=31, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=27, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=39, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=6, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=35, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=2, errorCode=0)])]) for request LEADER_AND_ISR with correlation id 3 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger) 14:24:57 kafka | [2024-04-25 14:22:59,807] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-3 in 6 milliseconds for epoch 0, of which 3 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) 14:24:57 kafka | [2024-04-25 14:22:59,809] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=13, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-13 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 14:24:57 kafka | [2024-04-25 14:22:59,812] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=46, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-46 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 14:24:57 kafka | [2024-04-25 14:22:59,812] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=9, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-9 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 14:24:57 kafka | [2024-04-25 14:22:59,812] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=42, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-42 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 14:24:57 kafka | [2024-04-25 14:22:59,812] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=21, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-21 in response to 
UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 14:24:57 kafka | [2024-04-25 14:22:59,812] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=17, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-17 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 14:24:57 kafka | [2024-04-25 14:22:59,812] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=30, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-30 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 14:24:57 kafka | [2024-04-25 14:22:59,812] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=26, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-26 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 14:24:57 kafka | [2024-04-25 14:22:59,812] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=5, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-5 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 14:24:57 kafka | [2024-04-25 14:22:59,809] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-18 in 7 milliseconds for epoch 0, of which 7 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) 14:24:57 kafka | [2024-04-25 14:22:59,812] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=38, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-38 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 14:24:57 kafka | [2024-04-25 14:22:59,812] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=1, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-1 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 14:24:57 kafka | [2024-04-25 14:22:59,812] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=34, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-34 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 14:24:57 kafka | [2024-04-25 14:22:59,812] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=16, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-16 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 14:24:57 kafka | [2024-04-25 14:22:59,812] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=45, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-45 in response to 
UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 14:24:57 kafka | [2024-04-25 14:22:59,812] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=12, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-12 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 14:24:57 kafka | [2024-04-25 14:22:59,812] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=41, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-41 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 14:24:57 kafka | [2024-04-25 14:22:59,812] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=24, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-24 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 14:24:57 kafka | [2024-04-25 14:22:59,812] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=20, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-20 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 14:24:57 kafka | [2024-04-25 14:22:59,812] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=49, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for 
partition __consumer_offsets-49 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 14:24:57 kafka | [2024-04-25 14:22:59,812] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-0 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 14:24:57 kafka | [2024-04-25 14:22:59,812] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=29, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-29 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 14:24:57 kafka | [2024-04-25 14:22:59,812] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=25, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-25 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 14:24:57 kafka | [2024-04-25 14:22:59,812] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=8, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-8 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 14:24:57 kafka | [2024-04-25 14:22:59,812] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=37, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], 
zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-37 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 14:24:57 kafka | [2024-04-25 14:22:59,812] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=4, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-4 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 14:24:57 kafka | [2024-04-25 14:22:59,812] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=33, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-33 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 14:24:57 kafka | [2024-04-25 14:22:59,812] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=15, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-15 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 14:24:57 kafka | [2024-04-25 14:22:59,812] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=48, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-48 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 14:24:57 kafka | [2024-04-25 14:22:59,812] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=11, 
controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-11 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 14:24:57 kafka | [2024-04-25 14:22:59,812] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=44, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-44 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 14:24:57 kafka | [2024-04-25 14:22:59,812] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=23, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-23 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 14:24:57 kafka | [2024-04-25 14:22:59,812] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=19, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-19 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 14:24:57 kafka | [2024-04-25 14:22:59,812] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-41 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) 14:24:57 kafka | [2024-04-25 14:22:59,812] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=32, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-32 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 14:24:57 kafka | [2024-04-25 14:22:59,813] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=28, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-28 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 14:24:57 kafka | [2024-04-25 14:22:59,813] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=7, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-7 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 14:24:57 kafka | [2024-04-25 14:22:59,813] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=40, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-40 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 14:24:57 kafka | [2024-04-25 14:22:59,813] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=3, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-3 in response to 
UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 14:24:57 kafka | [2024-04-25 14:22:59,813] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=36, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-36 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 14:24:57 kafka | [2024-04-25 14:22:59,813] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=47, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-47 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 14:24:57 kafka | [2024-04-25 14:22:59,813] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=14, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-14 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 14:24:57 kafka | [2024-04-25 14:22:59,813] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=43, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-43 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 14:24:57 kafka | [2024-04-25 14:22:59,813] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=10, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for 
partition __consumer_offsets-10 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 14:24:57 kafka | [2024-04-25 14:22:59,813] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=22, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-22 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 14:24:57 kafka | [2024-04-25 14:22:59,813] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-10 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 14:24:57 kafka | [2024-04-25 14:22:59,813] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=18, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-18 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 14:24:57 kafka | [2024-04-25 14:22:59,813] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=31, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-31 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 14:24:57 kafka | [2024-04-25 14:22:59,813] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=27, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-27 in response to UpdateMetadata request sent by 
controller 1 epoch 1 with correlation id 4 (state.change.logger) 14:24:57 kafka | [2024-04-25 14:22:59,813] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=39, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-39 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 14:24:57 kafka | [2024-04-25 14:22:59,813] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=6, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-6 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 14:24:57 kafka | [2024-04-25 14:22:59,813] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=35, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-35 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 14:24:57 kafka | [2024-04-25 14:22:59,813] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=2, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-2 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 14:24:57 kafka | [2024-04-25 14:22:59,813] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-33 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) 14:24:57 kafka | [2024-04-25 14:22:59,813] INFO [Broker id=1] Add 50 partitions and deleted 0 partitions from metadata cache in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 14:24:57 kafka | [2024-04-25 14:22:59,813] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-48 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 14:24:57 kafka | [2024-04-25 14:22:59,813] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-19 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 14:24:57 kafka | [2024-04-25 14:22:59,813] TRACE [Controller id=1 epoch=1] Received response UpdateMetadataResponseData(errorCode=0) for request UPDATE_METADATA with correlation id 4 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger) 14:24:57 kafka | [2024-04-25 14:22:59,813] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-34 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 14:24:57 kafka | [2024-04-25 14:22:59,813] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-4 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 14:24:57 kafka | [2024-04-25 14:22:59,814] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-11 in 12 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) 14:24:57 kafka | [2024-04-25 14:22:59,814] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-26 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 14:24:57 kafka | [2024-04-25 14:22:59,814] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-49 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 14:24:57 kafka | [2024-04-25 14:22:59,814] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-39 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 14:24:57 kafka | [2024-04-25 14:22:59,814] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-9 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 14:24:57 kafka | [2024-04-25 14:22:59,814] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-24 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 14:24:57 kafka | [2024-04-25 14:22:59,814] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-31 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 14:24:57 kafka | [2024-04-25 14:22:59,814] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-46 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) 14:24:57 kafka | [2024-04-25 14:22:59,815] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-1 in 13 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 14:24:57 kafka | [2024-04-25 14:22:59,815] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-16 in 13 milliseconds for epoch 0, of which 13 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 14:24:57 kafka | [2024-04-25 14:22:59,815] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-2 in 13 milliseconds for epoch 0, of which 13 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 14:24:57 kafka | [2024-04-25 14:22:59,815] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-25 in 13 milliseconds for epoch 0, of which 13 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 14:24:57 kafka | [2024-04-25 14:22:59,815] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-40 in 13 milliseconds for epoch 0, of which 13 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 14:24:57 kafka | [2024-04-25 14:22:59,816] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-47 in 14 milliseconds for epoch 0, of which 13 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 14:24:57 kafka | [2024-04-25 14:22:59,816] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-17 in 14 milliseconds for epoch 0, of which 14 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) 14:24:57 kafka | [2024-04-25 14:22:59,816] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-32 in 14 milliseconds for epoch 0, of which 14 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 14:24:57 kafka | [2024-04-25 14:22:59,816] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-37 in 14 milliseconds for epoch 0, of which 14 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 14:24:57 kafka | [2024-04-25 14:22:59,816] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-7 in 14 milliseconds for epoch 0, of which 14 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 14:24:57 kafka | [2024-04-25 14:22:59,816] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-22 in 13 milliseconds for epoch 0, of which 13 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 14:24:57 kafka | [2024-04-25 14:22:59,816] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-29 in 13 milliseconds for epoch 0, of which 13 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 14:24:57 kafka | [2024-04-25 14:22:59,816] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-44 in 13 milliseconds for epoch 0, of which 13 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 14:24:57 kafka | [2024-04-25 14:22:59,817] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-14 in 14 milliseconds for epoch 0, of which 13 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) 14:24:57 kafka | [2024-04-25 14:22:59,817] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-23 in 14 milliseconds for epoch 0, of which 14 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 14:24:57 kafka | [2024-04-25 14:22:59,817] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-38 in 14 milliseconds for epoch 0, of which 14 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 14:24:57 kafka | [2024-04-25 14:22:59,817] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-8 in 14 milliseconds for epoch 0, of which 14 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 14:24:57 kafka | [2024-04-25 14:22:59,817] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-45 in 14 milliseconds for epoch 0, of which 14 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 14:24:57 kafka | [2024-04-25 14:22:59,817] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-15 in 14 milliseconds for epoch 0, of which 14 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 14:24:57 kafka | [2024-04-25 14:22:59,817] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-30 in 14 milliseconds for epoch 0, of which 14 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 14:24:57 kafka | [2024-04-25 14:22:59,817] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-0 in 14 milliseconds for epoch 0, of which 14 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) 14:24:57 kafka | [2024-04-25 14:22:59,818] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-35 in 15 milliseconds for epoch 0, of which 14 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 14:24:57 kafka | [2024-04-25 14:22:59,818] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-5 in 15 milliseconds for epoch 0, of which 15 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 14:24:57 kafka | [2024-04-25 14:22:59,818] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-20 in 15 milliseconds for epoch 0, of which 15 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 14:24:57 kafka | [2024-04-25 14:22:59,818] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-27 in 15 milliseconds for epoch 0, of which 15 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 14:24:57 kafka | [2024-04-25 14:22:59,818] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-42 in 15 milliseconds for epoch 0, of which 15 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 14:24:57 kafka | [2024-04-25 14:22:59,818] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-12 in 15 milliseconds for epoch 0, of which 15 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 14:24:57 kafka | [2024-04-25 14:22:59,818] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-21 in 15 milliseconds for epoch 0, of which 15 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) 14:24:57 kafka | [2024-04-25 14:22:59,818] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-36 in 15 milliseconds for epoch 0, of which 15 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 14:24:57 kafka | [2024-04-25 14:22:59,818] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-6 in 15 milliseconds for epoch 0, of which 15 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 14:24:57 kafka | [2024-04-25 14:22:59,819] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-43 in 16 milliseconds for epoch 0, of which 16 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 14:24:57 kafka | [2024-04-25 14:22:59,819] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-13 in 16 milliseconds for epoch 0, of which 16 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 14:24:57 kafka | [2024-04-25 14:22:59,819] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-28 in 16 milliseconds for epoch 0, of which 16 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 14:24:57 kafka | [2024-04-25 14:22:59,863] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group policy-pap in Empty state. Created a new member id consumer-policy-pap-4-7d6a46fb-ce9a-48ca-aab9-0c5f15e0232a and request the member to rejoin with this id. (kafka.coordinator.group.GroupCoordinator) 14:24:57 kafka | [2024-04-25 14:22:59,878] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group 5f0ab5d6-63b3-4b5a-a200-3d330f0096ce in Empty state. 
Created a new member id consumer-5f0ab5d6-63b3-4b5a-a200-3d330f0096ce-2-35a6a127-a715-4075-8b68-3fa09af1055b and request the member to rejoin with this id. (kafka.coordinator.group.GroupCoordinator) 14:24:57 kafka | [2024-04-25 14:22:59,881] INFO [GroupCoordinator 1]: Preparing to rebalance group policy-pap in state PreparingRebalance with old generation 0 (__consumer_offsets-24) (reason: Adding new member consumer-policy-pap-4-7d6a46fb-ce9a-48ca-aab9-0c5f15e0232a with group instance id None; client reason: rebalance failed due to MemberIdRequiredException) (kafka.coordinator.group.GroupCoordinator) 14:24:57 kafka | [2024-04-25 14:22:59,882] INFO [GroupCoordinator 1]: Preparing to rebalance group 5f0ab5d6-63b3-4b5a-a200-3d330f0096ce in state PreparingRebalance with old generation 0 (__consumer_offsets-22) (reason: Adding new member consumer-5f0ab5d6-63b3-4b5a-a200-3d330f0096ce-2-35a6a127-a715-4075-8b68-3fa09af1055b with group instance id None; client reason: rebalance failed due to MemberIdRequiredException) (kafka.coordinator.group.GroupCoordinator) 14:24:57 kafka | [2024-04-25 14:22:59,896] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group b957469a-2969-4bff-8555-1bfe3e4d4da0 in Empty state. Created a new member id consumer-b957469a-2969-4bff-8555-1bfe3e4d4da0-3-56fea4ad-7919-46ce-ba06-2844c28fe167 and request the member to rejoin with this id. 
(kafka.coordinator.group.GroupCoordinator) 14:24:57 kafka | [2024-04-25 14:22:59,902] INFO [GroupCoordinator 1]: Preparing to rebalance group b957469a-2969-4bff-8555-1bfe3e4d4da0 in state PreparingRebalance with old generation 0 (__consumer_offsets-5) (reason: Adding new member consumer-b957469a-2969-4bff-8555-1bfe3e4d4da0-3-56fea4ad-7919-46ce-ba06-2844c28fe167 with group instance id None; client reason: rebalance failed due to MemberIdRequiredException) (kafka.coordinator.group.GroupCoordinator) 14:24:57 kafka | [2024-04-25 14:23:02,892] INFO [GroupCoordinator 1]: Stabilized group policy-pap generation 1 (__consumer_offsets-24) with 1 members (kafka.coordinator.group.GroupCoordinator) 14:24:57 kafka | [2024-04-25 14:23:02,896] INFO [GroupCoordinator 1]: Stabilized group 5f0ab5d6-63b3-4b5a-a200-3d330f0096ce generation 1 (__consumer_offsets-22) with 1 members (kafka.coordinator.group.GroupCoordinator) 14:24:57 kafka | [2024-04-25 14:23:02,903] INFO [GroupCoordinator 1]: Stabilized group b957469a-2969-4bff-8555-1bfe3e4d4da0 generation 1 (__consumer_offsets-5) with 1 members (kafka.coordinator.group.GroupCoordinator) 14:24:57 kafka | [2024-04-25 14:23:02,926] INFO [GroupCoordinator 1]: Assignment received from leader consumer-policy-pap-4-7d6a46fb-ce9a-48ca-aab9-0c5f15e0232a for group policy-pap for generation 1. The group has 1 members, 0 of which are static. (kafka.coordinator.group.GroupCoordinator) 14:24:57 kafka | [2024-04-25 14:23:02,928] INFO [GroupCoordinator 1]: Assignment received from leader consumer-b957469a-2969-4bff-8555-1bfe3e4d4da0-3-56fea4ad-7919-46ce-ba06-2844c28fe167 for group b957469a-2969-4bff-8555-1bfe3e4d4da0 for generation 1. The group has 1 members, 0 of which are static. 
(kafka.coordinator.group.GroupCoordinator) 14:24:57 kafka | [2024-04-25 14:23:02,928] INFO [GroupCoordinator 1]: Assignment received from leader consumer-5f0ab5d6-63b3-4b5a-a200-3d330f0096ce-2-35a6a127-a715-4075-8b68-3fa09af1055b for group 5f0ab5d6-63b3-4b5a-a200-3d330f0096ce for generation 1. The group has 1 members, 0 of which are static. (kafka.coordinator.group.GroupCoordinator) 14:24:57 ++ echo 'Tearing down containers...' 14:24:57 Tearing down containers... 14:24:57 ++ docker-compose down -v --remove-orphans 14:24:58 Stopping policy-apex-pdp ... 14:24:58 Stopping policy-pap ... 14:24:58 Stopping grafana ... 14:24:58 Stopping kafka ... 14:24:58 Stopping policy-api ... 14:24:58 Stopping mariadb ... 14:24:58 Stopping simulator ... 14:24:58 Stopping zookeeper ... 14:24:58 Stopping prometheus ... 14:24:59 Stopping grafana ... done 14:24:59 Stopping prometheus ... done 14:25:08 Stopping policy-apex-pdp ... done 14:25:19 Stopping simulator ... done 14:25:19 Stopping policy-pap ... done 14:25:20 Stopping mariadb ... done 14:25:20 Stopping kafka ... done 14:25:21 Stopping zookeeper ... done 14:25:29 Stopping policy-api ... done 14:25:29 Removing policy-apex-pdp ... 14:25:29 Removing policy-pap ... 14:25:29 Removing grafana ... 14:25:29 Removing kafka ... 14:25:29 Removing policy-api ... 14:25:29 Removing policy-db-migrator ... 14:25:29 Removing mariadb ... 14:25:29 Removing simulator ... 14:25:29 Removing zookeeper ... 14:25:29 Removing prometheus ... 14:25:29 Removing policy-pap ... done 14:25:29 Removing policy-api ... done 14:25:29 Removing policy-apex-pdp ... done 14:25:29 Removing policy-db-migrator ... done 14:25:29 Removing mariadb ... done 14:25:29 Removing prometheus ... done 14:25:29 Removing simulator ... done 14:25:29 Removing zookeeper ... done 14:25:29 Removing grafana ... done 14:25:29 Removing kafka ... 
done 14:25:29 Removing network compose_default 14:25:29 ++ cd /w/workspace/policy-pap-master-project-csit-pap 14:25:29 + load_set 14:25:29 + _setopts=hxB 14:25:29 ++ echo braceexpand:hashall:interactive-comments:xtrace 14:25:29 ++ tr : ' ' 14:25:29 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') 14:25:29 + set +o braceexpand 14:25:29 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') 14:25:29 + set +o hashall 14:25:29 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') 14:25:29 + set +o interactive-comments 14:25:29 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') 14:25:29 + set +o xtrace 14:25:29 ++ echo hxB 14:25:29 ++ sed 's/./& /g' 14:25:29 + for i in $(echo "$_setopts" | sed 's/./& /g') 14:25:29 + set +h 14:25:29 + for i in $(echo "$_setopts" | sed 's/./& /g') 14:25:29 + set +x 14:25:29 + rsync /w/workspace/policy-pap-master-project-csit-pap/compose/docker_compose.log /w/workspace/policy-pap-master-project-csit-pap/csit/archives/pap 14:25:29 + [[ -n /tmp/tmp.9uiB25C2Gx ]] 14:25:29 + rsync -av /tmp/tmp.9uiB25C2Gx/ /w/workspace/policy-pap-master-project-csit-pap/csit/archives/pap 14:25:29 sending incremental file list 14:25:29 ./ 14:25:29 log.html 14:25:29 output.xml 14:25:29 report.html 14:25:29 testplan.txt 14:25:29 14:25:29 sent 918,987 bytes received 95 bytes 1,838,164.00 bytes/sec 14:25:29 total size is 918,445 speedup is 1.00 14:25:29 + rm -rf /w/workspace/policy-pap-master-project-csit-pap/models 14:25:30 + exit 1 14:25:30 Build step 'Execute shell' marked build as failure 14:25:30 $ ssh-agent -k 14:25:30 unset SSH_AUTH_SOCK; 14:25:30 unset SSH_AGENT_PID; 14:25:30 echo Agent pid 2190 killed; 14:25:30 [ssh-agent] Stopped. 14:25:30 Robot results publisher started... 14:25:30 INFO: Checking test criticality is deprecated and will be dropped in a future release! 14:25:30 -Parsing output xml: 14:25:30 Done! 14:25:30 WARNING! Could not find file: **/log.html 14:25:30 WARNING! Could not find file: **/report.html 14:25:30 -Copying log files to build dir: 14:25:30 Done! 
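The `load_set` trace above restores the shell options that were saved at the start of the script: it walks the colon-separated `SHELLOPTS` list with `set +o name`, then walks the saved short-flag string (`hxB`) with `set +flag`. A minimal sketch of that pattern (the initial snapshot value is an assumption — only the restore side appears in the log):

```shell
# Restore-side of the load_set pattern traced above. SHELLOPTS holds the
# long option names (colon-separated); _setopts holds short flags captured
# earlier in the job (assumed here, e.g. taken from "$-").
_setopts="hxB"
for i in $(echo "${SHELLOPTS}" | tr ':' ' '); do
  set +o "$i" 2>/dev/null || true   # switch each long option off; skip ones that cannot toggle
done
for i in $(echo "$_setopts" | sed 's/./& /g'); do
  set "+$i" 2>/dev/null || true     # e.g. set +h, set +x, set +B
done
```

Turning off `xtrace` last is why the `+ set +x` line is the final command echoed by the trace.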
14:25:30 -Assigning results to build: 14:25:30 Done! 14:25:30 -Checking thresholds: 14:25:30 Done! 14:25:30 Done publishing Robot results. 14:25:30 [PostBuildScript] - [INFO] Executing post build scripts. 14:25:30 [policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins11193931219247660130.sh 14:25:30 ---> sysstat.sh 14:25:31 [policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins5124674265485507533.sh 14:25:31 ---> package-listing.sh 14:25:31 ++ tr '[:upper:]' '[:lower:]' 14:25:31 ++ facter osfamily 14:25:31 + OS_FAMILY=debian 14:25:31 + workspace=/w/workspace/policy-pap-master-project-csit-pap 14:25:31 + START_PACKAGES=/tmp/packages_start.txt 14:25:31 + END_PACKAGES=/tmp/packages_end.txt 14:25:31 + DIFF_PACKAGES=/tmp/packages_diff.txt 14:25:31 + PACKAGES=/tmp/packages_start.txt 14:25:31 + '[' /w/workspace/policy-pap-master-project-csit-pap ']' 14:25:31 + PACKAGES=/tmp/packages_end.txt 14:25:31 + case "${OS_FAMILY}" in 14:25:31 + dpkg -l 14:25:31 + grep '^ii' 14:25:31 + '[' -f /tmp/packages_start.txt ']' 14:25:31 + '[' -f /tmp/packages_end.txt ']' 14:25:31 + diff /tmp/packages_start.txt /tmp/packages_end.txt 14:25:31 + '[' /w/workspace/policy-pap-master-project-csit-pap ']' 14:25:31 + mkdir -p /w/workspace/policy-pap-master-project-csit-pap/archives/ 14:25:31 + cp -f /tmp/packages_diff.txt /tmp/packages_end.txt /tmp/packages_start.txt /w/workspace/policy-pap-master-project-csit-pap/archives/ 14:25:31 [policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins2245818625615984182.sh 14:25:31 ---> capture-instance-metadata.sh 14:25:31 Setup pyenv: 14:25:31 system 14:25:31 3.8.13 14:25:31 3.9.13 14:25:31 * 3.10.6 (set by /w/workspace/policy-pap-master-project-csit-pap/.python-version) 14:25:31 lf-activate-venv(): INFO: Reuse venv:/tmp/venv-cpZN from file:/tmp/.os_lf_venv 14:25:33 lf-activate-venv(): INFO: Installing: lftools 14:25:43 lf-activate-venv(): INFO: Adding /tmp/venv-cpZN/bin to PATH 14:25:43 INFO: Running in OpenStack, capturing instance 
metadata 14:25:43 [policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins3900029720496613243.sh 14:25:43 provisioning config files... 14:25:43 copy managed file [jenkins-log-archives-settings] to file:/w/workspace/policy-pap-master-project-csit-pap@tmp/config7657330470754527008tmp 14:25:43 Regular expression run condition: Expression=[^.*logs-s3.*], Label=[] 14:25:43 Run condition [Regular expression match] preventing perform for step [Provide Configuration files] 14:25:43 [EnvInject] - Injecting environment variables from a build step. 14:25:43 [EnvInject] - Injecting as environment variables the properties content 14:25:43 SERVER_ID=logs 14:25:43 14:25:43 [EnvInject] - Variables injected successfully. 14:25:43 [policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins7986511254064810390.sh 14:25:43 ---> create-netrc.sh 14:25:43 [policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins16931165663722923543.sh 14:25:43 ---> python-tools-install.sh 14:25:43 Setup pyenv: 14:25:43 system 14:25:43 3.8.13 14:25:43 3.9.13 14:25:43 * 3.10.6 (set by /w/workspace/policy-pap-master-project-csit-pap/.python-version) 14:25:44 lf-activate-venv(): INFO: Reuse venv:/tmp/venv-cpZN from file:/tmp/.os_lf_venv 14:25:45 lf-activate-venv(): INFO: Installing: lftools 14:25:54 lf-activate-venv(): INFO: Adding /tmp/venv-cpZN/bin to PATH 14:25:54 [policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins9846893542837962052.sh 14:25:54 ---> sudo-logs.sh 14:25:54 Archiving 'sudo' log.. 
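The `package-listing.sh` trace further up snapshots the installed Debian packages (`dpkg -l | grep '^ii'`) and diffs the pre-build and post-build lists into `/tmp/packages_diff.txt`. A sketch of that flow, using fake snapshot files in place of a real `dpkg -l` so it runs anywhere:

```shell
# Package-diff step from package-listing.sh (paths as shown in the log).
START_PACKAGES=/tmp/packages_start.txt
END_PACKAGES=/tmp/packages_end.txt
DIFF_PACKAGES=/tmp/packages_diff.txt

# Stand-ins for the real snapshots; on the builder this would be:
#   dpkg -l | grep '^ii' > "$END_PACKAGES"
printf 'ii  pkg-a  1.0\nii  pkg-b  1.0\n' > "$START_PACKAGES"
printf 'ii  pkg-a  1.0\nii  pkg-b  2.0\n' > "$END_PACKAGES"

if [ -f "$START_PACKAGES" ] && [ -f "$END_PACKAGES" ]; then
  # diff exits 1 when the files differ, so guard it for set -e jobs
  diff "$START_PACKAGES" "$END_PACKAGES" > "$DIFF_PACKAGES" || true
fi
```

All three files are then copied into `archives/` so a reviewer can see what the build installed or upgraded.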
14:25:54 [policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins2816951711083229067.sh 14:25:54 ---> job-cost.sh 14:25:54 Setup pyenv: 14:25:54 system 14:25:54 3.8.13 14:25:54 3.9.13 14:25:54 * 3.10.6 (set by /w/workspace/policy-pap-master-project-csit-pap/.python-version) 14:25:54 lf-activate-venv(): INFO: Reuse venv:/tmp/venv-cpZN from file:/tmp/.os_lf_venv 14:25:56 lf-activate-venv(): INFO: Installing: zipp==1.1.0 python-openstackclient urllib3~=1.26.15 14:26:00 lf-activate-venv(): INFO: Adding /tmp/venv-cpZN/bin to PATH 14:26:00 INFO: No Stack... 14:26:01 INFO: Retrieving Pricing Info for: v3-standard-8 14:26:01 INFO: Archiving Costs 14:26:01 [policy-pap-master-project-csit-pap] $ /bin/bash -l /tmp/jenkins7023004629369347686.sh 14:26:01 ---> logs-deploy.sh 14:26:01 Setup pyenv: 14:26:01 system 14:26:01 3.8.13 14:26:01 3.9.13 14:26:01 * 3.10.6 (set by /w/workspace/policy-pap-master-project-csit-pap/.python-version) 14:26:01 lf-activate-venv(): INFO: Reuse venv:/tmp/venv-cpZN from file:/tmp/.os_lf_venv 14:26:03 lf-activate-venv(): INFO: Installing: lftools 14:26:12 lf-activate-venv(): INFO: Adding /tmp/venv-cpZN/bin to PATH 14:26:12 INFO: Nexus URL https://nexus.onap.org path production/vex-yul-ecomp-jenkins-1/policy-pap-master-project-csit-pap/1663 14:26:12 INFO: archiving workspace using pattern(s): -p **/target/surefire-reports/*-output.txt 14:26:13 Archives upload complete. 
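The repeated `lf-activate-venv(): INFO: Reuse venv:/tmp/venv-cpZN` lines above come from a caching pattern: the venv path is recorded in `/tmp/.os_lf_venv` so every later build step reuses one environment instead of rebuilding it. A sketch of that pattern (the creation branch is an assumption, and `mkdir` stands in for `python3 -m venv` to keep the sketch light):

```shell
# Venv reuse pattern behind the lf-activate-venv() log lines above.
venv_file=/tmp/.os_lf_venv
if [ -f "$venv_file" ] && [ -d "$(cat "$venv_file" 2>/dev/null)" ]; then
  venv_dir=$(cat "$venv_file")              # "Reuse venv:..." branch
else
  venv_dir=$(mktemp -d /tmp/venv-XXXXXX)    # fresh venv directory
  mkdir -p "$venv_dir/bin"                  # stand-in for: python3 -m venv "$venv_dir"
  echo "$venv_dir" > "$venv_file"           # record it for later steps
fi
PATH="$venv_dir/bin:$PATH"                  # "Adding /tmp/venv-.../bin to PATH"
```

Each step then only pays the `pip install` cost for tools it actually adds (lftools, python-openstackclient, etc.), which is why the install lines take seconds rather than minutes.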
14:26:13 INFO: archiving logs to Nexus 14:26:14 ---> uname -a: 14:26:14 Linux prd-ubuntu1804-docker-8c-8g-27901 4.15.0-192-generic #203-Ubuntu SMP Wed Aug 10 17:40:03 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux 14:26:14 14:26:14 14:26:14 ---> lscpu: 14:26:14 Architecture: x86_64 14:26:14 CPU op-mode(s): 32-bit, 64-bit 14:26:14 Byte Order: Little Endian 14:26:14 CPU(s): 8 14:26:14 On-line CPU(s) list: 0-7 14:26:14 Thread(s) per core: 1 14:26:14 Core(s) per socket: 1 14:26:14 Socket(s): 8 14:26:14 NUMA node(s): 1 14:26:14 Vendor ID: AuthenticAMD 14:26:14 CPU family: 23 14:26:14 Model: 49 14:26:14 Model name: AMD EPYC-Rome Processor 14:26:14 Stepping: 0 14:26:14 CPU MHz: 2799.998 14:26:14 BogoMIPS: 5599.99 14:26:14 Virtualization: AMD-V 14:26:14 Hypervisor vendor: KVM 14:26:14 Virtualization type: full 14:26:14 L1d cache: 32K 14:26:14 L1i cache: 32K 14:26:14 L2 cache: 512K 14:26:14 L3 cache: 16384K 14:26:14 NUMA node0 CPU(s): 0-7 14:26:14 Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm rep_good nopl xtopology cpuid extd_apicid tsc_known_freq pni pclmulqdq ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm cmp_legacy svm cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw topoext perfctr_core ssbd ibrs ibpb stibp vmmcall fsgsbase tsc_adjust bmi1 avx2 smep bmi2 rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves clzero xsaveerptr arat npt nrip_save umip rdpid arch_capabilities 14:26:14 14:26:14 14:26:14 ---> nproc: 14:26:14 8 14:26:14 14:26:14 14:26:14 ---> df -h: 14:26:14 Filesystem Size Used Avail Use% Mounted on 14:26:14 udev 16G 0 16G 0% /dev 14:26:14 tmpfs 3.2G 708K 3.2G 1% /run 14:26:14 /dev/vda1 155G 14G 142G 9% / 14:26:14 tmpfs 16G 0 16G 0% /dev/shm 14:26:14 tmpfs 5.0M 0 5.0M 0% /run/lock 14:26:14 tmpfs 16G 0 16G 0% /sys/fs/cgroup 14:26:14 /dev/vda15 105M 4.4M 100M 5% /boot/efi 14:26:14 
tmpfs 3.2G 0 3.2G 0% /run/user/1001 14:26:14 14:26:14 14:26:14 ---> free -m: 14:26:14 total used free shared buff/cache available 14:26:14 Mem: 32167 846 25375 0 5944 30864 14:26:14 Swap: 1023 0 1023 14:26:14 14:26:14 14:26:14 ---> ip addr: 14:26:14 1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000 14:26:14 link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 14:26:14 inet 127.0.0.1/8 scope host lo 14:26:14 valid_lft forever preferred_lft forever 14:26:14 inet6 ::1/128 scope host 14:26:14 valid_lft forever preferred_lft forever 14:26:14 2: ens3: mtu 1458 qdisc mq state UP group default qlen 1000 14:26:14 link/ether fa:16:3e:ff:97:78 brd ff:ff:ff:ff:ff:ff 14:26:14 inet 10.30.106.248/23 brd 10.30.107.255 scope global dynamic ens3 14:26:14 valid_lft 85801sec preferred_lft 85801sec 14:26:14 inet6 fe80::f816:3eff:feff:9778/64 scope link 14:26:14 valid_lft forever preferred_lft forever 14:26:14 3: docker0: mtu 1500 qdisc noqueue state DOWN group default 14:26:14 link/ether 02:42:41:0d:b7:df brd ff:ff:ff:ff:ff:ff 14:26:14 inet 10.250.0.254/24 brd 10.250.0.255 scope global docker0 14:26:14 valid_lft forever preferred_lft forever 14:26:14 14:26:14 14:26:14 ---> sar -b -r -n DEV: 14:26:14 Linux 4.15.0-192-generic (prd-ubuntu1804-docker-8c-8g-27901) 04/25/24 _x86_64_ (8 CPU) 14:26:14 14:26:14 14:16:18 LINUX RESTART (8 CPU) 14:26:14 14:26:14 14:17:02 tps rtps wtps bread/s bwrtn/s 14:26:14 14:18:01 133.89 70.65 63.25 4604.34 39736.59 14:26:14 14:19:01 79.32 1.77 77.55 84.65 25312.85 14:26:14 14:20:01 83.10 23.18 59.92 2813.80 22807.80 14:26:14 14:21:01 115.58 0.43 115.15 55.19 63386.90 14:26:14 14:22:01 113.26 0.08 113.18 5.73 72517.11 14:26:14 14:23:01 337.23 11.95 325.28 764.54 40584.70 14:26:14 14:24:01 18.28 0.07 18.21 3.07 18722.86 14:26:14 14:25:01 22.55 0.05 22.50 10.53 19371.70 14:26:14 14:26:01 73.07 1.87 71.20 111.45 18701.10 14:26:14 Average: 108.43 12.12 96.31 932.66 35675.11 14:26:14 14:26:14 14:17:02 kbmemfree kbavail kbmemused %memused 
kbbuffers kbcached kbcommit %commit kbactive kbinact kbdirty 14:26:14 14:18:01 30188040 31652308 2751180 8.35 56528 1726700 1502836 4.42 919264 1557960 76556 14:26:14 14:19:01 30088552 31741596 2850668 8.65 75672 1882224 1376552 4.05 828984 1718592 79596 14:26:14 14:20:01 29740796 31715364 3198424 9.71 89688 2178036 1398044 4.11 905680 1962764 140644 14:26:14 14:21:01 27239376 31668108 5699844 17.30 128748 4485816 1428960 4.20 1009364 4223584 1007492 14:26:14 14:22:01 26056288 31668596 6882932 20.90 139396 5610104 1499360 4.41 1018812 5346424 374800 14:26:14 14:23:01 23856752 29632144 9082468 27.57 155700 5736540 8899232 26.18 3230800 5253548 1412 14:26:14 14:24:01 23874620 29651092 9064600 27.52 155912 5737104 8802172 25.90 3215064 5250800 224 14:26:14 14:25:01 23903936 29706548 9035284 27.43 156320 5765236 8051032 23.69 3174964 5265396 232 14:26:14 14:26:01 26049040 31668364 6890180 20.92 158124 5597468 1503704 4.42 1247004 5109548 1880 14:26:14 Average: 26777489 31011569 6161731 18.71 124010 4302136 3829099 11.27 1727771 3965402 186982 14:26:14 14:26:14 14:17:02 IFACE rxpck/s txpck/s rxkB/s txkB/s rxcmp/s txcmp/s rxmcst/s %ifutil 14:26:14 14:18:01 ens3 387.08 271.73 1464.46 62.31 0.00 0.00 0.00 0.00 14:26:14 14:18:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 14:26:14 14:18:01 lo 1.49 1.49 0.16 0.16 0.00 0.00 0.00 0.00 14:26:14 14:19:01 ens3 18.90 15.25 245.15 3.56 0.00 0.00 0.00 0.00 14:26:14 14:19:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 14:26:14 14:19:01 lo 0.93 0.93 0.10 0.10 0.00 0.00 0.00 0.00 14:26:14 14:20:01 ens3 57.52 46.68 684.05 7.42 0.00 0.00 0.00 0.00 14:26:14 14:20:01 br-3695e8c45fd8 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 14:26:14 14:20:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 14:26:14 14:20:01 lo 4.07 4.07 0.40 0.40 0.00 0.00 0.00 0.00 14:26:14 14:21:01 ens3 767.91 356.84 16981.76 25.16 0.00 0.00 0.00 0.00 14:26:14 14:21:01 br-3695e8c45fd8 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 14:26:14 14:21:01 docker0 0.00 0.00 0.00 
0.00 0.00 0.00 0.00 0.00 14:26:14 14:21:01 lo 5.20 5.20 0.52 0.52 0.00 0.00 0.00 0.00 14:26:14 14:22:01 ens3 393.45 188.84 12395.49 13.72 0.00 0.00 0.00 0.00 14:26:14 14:22:01 br-3695e8c45fd8 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 14:26:14 14:22:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 14:26:14 14:22:01 lo 4.07 4.07 0.39 0.39 0.00 0.00 0.00 0.00 14:26:14 14:23:01 ens3 4.87 3.40 1.27 1.17 0.00 0.00 0.00 0.00 14:26:14 14:23:01 br-3695e8c45fd8 0.87 0.75 0.07 0.31 0.00 0.00 0.00 0.00 14:26:14 14:23:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 14:26:14 14:23:01 vethe349ec6 24.43 22.61 10.54 16.10 0.00 0.00 0.00 0.00 14:26:14 14:24:01 ens3 4.15 3.15 0.84 0.78 0.00 0.00 0.00 0.00 14:26:14 14:24:01 br-3695e8c45fd8 1.98 2.35 1.81 1.76 0.00 0.00 0.00 0.00 14:26:14 14:24:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 14:26:14 14:24:01 vethe349ec6 21.91 17.70 6.81 23.82 0.00 0.00 0.00 0.00 14:26:14 14:25:01 ens3 13.90 14.01 5.68 16.57 0.00 0.00 0.00 0.00 14:26:14 14:25:01 br-3695e8c45fd8 1.47 1.68 0.11 0.15 0.00 0.00 0.00 0.00 14:26:14 14:25:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 14:26:14 14:25:01 vethe349ec6 0.43 0.50 0.59 0.03 0.00 0.00 0.00 0.00 14:26:14 14:26:01 ens3 64.51 40.04 70.78 17.16 0.00 0.00 0.00 0.00 14:26:14 14:26:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 14:26:14 14:26:01 lo 35.13 35.13 6.23 6.23 0.00 0.00 0.00 0.00 14:26:14 Average: ens3 189.90 104.14 3542.56 16.35 0.00 0.00 0.00 0.00 14:26:14 Average: docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 14:26:14 Average: lo 3.53 3.53 0.66 0.66 0.00 0.00 0.00 0.00 14:26:14 14:26:14 14:26:14 ---> sar -P ALL: 14:26:14 Linux 4.15.0-192-generic (prd-ubuntu1804-docker-8c-8g-27901) 04/25/24 _x86_64_ (8 CPU) 14:26:14 14:26:14 14:16:18 LINUX RESTART (8 CPU) 14:26:14 14:26:14 14:17:02 CPU %user %nice %system %iowait %steal %idle 14:26:14 14:18:01 all 8.53 0.00 0.99 6.50 0.04 83.94 14:26:14 14:18:01 0 5.10 0.00 0.73 0.49 0.02 93.66 14:26:14 14:18:01 1 4.39 0.00 0.56 0.58 0.05 94.42 
14:26:14 14:18:01 2 4.97 0.00 0.73 31.20 0.03 63.06 14:26:14 14:18:01 3 7.82 0.00 1.00 1.80 0.03 89.34 14:26:14 14:18:01 4 3.04 0.00 0.85 11.82 0.05 84.24 14:26:14 14:18:01 5 2.51 0.00 0.96 0.48 0.03 96.03 14:26:14 14:18:01 6 18.74 0.00 1.53 4.26 0.05 75.42 14:26:14 14:18:01 7 21.71 0.00 1.57 1.40 0.05 75.27 14:26:14 14:19:01 all 6.66 0.00 0.43 6.70 0.03 86.18 14:26:14 14:19:01 0 5.98 0.00 0.33 0.62 0.00 93.07 14:26:14 14:19:01 1 7.28 0.00 0.40 3.77 0.02 88.54 14:26:14 14:19:01 2 8.79 0.00 0.79 25.97 0.08 64.37 14:26:14 14:19:01 3 10.29 0.00 0.37 2.54 0.02 86.79 14:26:14 14:19:01 4 14.32 0.00 1.02 1.82 0.02 82.82 14:26:14 14:19:01 5 0.17 0.00 0.02 0.00 0.05 99.77 14:26:14 14:19:01 6 1.00 0.00 0.23 15.77 0.02 82.97 14:26:14 14:19:01 7 5.44 0.00 0.32 3.29 0.02 90.93 14:26:14 14:20:01 all 6.38 0.00 0.65 7.06 0.03 85.89 14:26:14 14:20:01 0 9.72 0.00 1.04 0.77 0.03 88.43 14:26:14 14:20:01 1 0.87 0.00 0.25 10.23 0.05 88.60 14:26:14 14:20:01 2 23.77 0.00 1.29 12.64 0.05 62.25 14:26:14 14:20:01 3 7.66 0.00 0.83 1.27 0.02 90.22 14:26:14 14:20:01 4 2.82 0.00 0.45 0.37 0.02 96.35 14:26:14 14:20:01 5 2.08 0.00 0.47 25.03 0.02 72.41 14:26:14 14:20:01 6 1.99 0.00 0.50 4.98 0.03 92.50 14:26:14 14:20:01 7 2.43 0.00 0.37 1.24 0.02 95.95 14:26:14 14:21:01 all 8.17 0.00 3.61 11.90 0.05 76.27 14:26:14 14:21:01 0 7.79 0.00 4.04 0.03 0.05 88.08 14:26:14 14:21:01 1 8.34 0.00 4.10 12.82 0.05 74.69 14:26:14 14:21:01 2 8.16 0.00 3.27 32.82 0.05 55.69 14:26:14 14:21:01 3 9.48 0.00 3.93 2.52 0.03 84.04 14:26:14 14:21:01 4 7.30 0.00 2.63 1.05 0.03 88.99 14:26:14 14:21:01 5 7.89 0.00 3.08 1.48 0.07 87.48 14:26:14 14:21:01 6 8.91 0.00 3.09 0.19 0.05 87.76 14:26:14 14:21:01 7 7.49 0.00 4.72 44.26 0.08 43.44 14:26:14 14:22:01 all 4.51 0.00 1.99 11.36 0.05 82.10 14:26:14 14:22:01 0 4.16 0.00 2.49 1.29 0.12 91.95 14:26:14 14:22:01 1 4.91 0.00 1.74 7.66 0.03 85.66 14:26:14 14:22:01 2 4.44 0.00 2.13 17.29 0.03 76.11 14:26:14 14:22:01 3 4.77 0.00 2.18 1.04 0.05 91.96 14:26:14 14:22:01 4 4.75 0.00 1.68 
0.08 0.05 93.44 14:26:14 14:22:01 5 2.74 0.00 1.81 0.44 0.02 94.99 14:26:14 14:22:01 6 5.16 0.00 1.74 0.55 0.03 92.51 14:26:14 14:22:01 7 5.16 0.00 2.14 62.77 0.05 29.88 14:26:14 14:23:01 all 24.95 0.00 3.56 8.67 0.09 62.74 14:26:14 14:23:01 0 28.64 0.00 3.79 8.13 0.10 59.34 14:26:14 14:23:01 1 25.78 0.00 3.54 23.95 0.08 46.64 14:26:14 14:23:01 2 30.17 0.00 4.14 7.21 0.08 58.40 14:26:14 14:23:01 3 24.05 0.00 3.63 6.05 0.07 66.20 14:26:14 14:23:01 4 27.73 0.00 3.61 4.96 0.08 63.61 14:26:14 14:23:01 5 16.98 0.00 2.42 11.90 0.10 68.61 14:26:14 14:23:01 6 22.77 0.00 3.65 2.18 0.08 71.32 14:26:14 14:23:01 7 23.58 0.00 3.67 4.95 0.10 67.69 14:26:14 14:24:01 all 6.64 0.00 0.61 1.20 0.06 91.49 14:26:14 14:24:01 0 5.73 0.00 0.60 0.00 0.03 93.64 14:26:14 14:24:01 1 6.79 0.00 0.52 0.00 0.03 92.66 14:26:14 14:24:01 2 6.50 0.00 0.65 0.05 0.05 92.75 14:26:14 14:24:01 3 7.87 0.00 0.63 0.00 0.07 91.43 14:26:14 14:24:01 4 6.37 0.00 0.60 9.42 0.07 83.54 14:26:14 14:24:01 5 5.89 0.00 0.43 0.08 0.07 93.54 14:26:14 14:24:01 6 6.93 0.00 0.68 0.02 0.05 92.32 14:26:14 14:24:01 7 7.05 0.00 0.79 0.02 0.08 92.07 14:26:14 14:25:01 all 1.51 0.00 0.30 1.52 0.06 96.61 14:26:14 14:25:01 0 1.85 0.00 0.30 0.00 0.05 97.80 14:26:14 14:25:01 1 0.82 0.00 0.32 0.60 0.05 98.22 14:26:14 14:25:01 2 0.52 0.00 0.22 0.02 0.03 99.22 14:26:14 14:25:01 3 1.44 0.00 0.32 0.12 0.07 98.06 14:26:14 14:25:01 4 1.10 0.00 0.30 11.20 0.05 87.34 14:26:14 14:25:01 5 3.12 0.00 0.21 0.10 0.05 96.51 14:26:14 14:25:01 6 1.09 0.00 0.35 0.00 0.05 98.51 14:26:14 14:25:01 7 2.14 0.00 0.40 0.13 0.10 97.22 14:26:14 14:26:01 all 5.98 0.00 0.55 1.95 0.03 91.49 14:26:14 14:26:01 0 3.42 0.00 0.45 0.17 0.02 95.94 14:26:14 14:26:01 1 2.17 0.00 0.47 0.47 0.02 96.88 14:26:14 14:26:01 2 3.69 0.00 0.62 0.27 0.03 95.39 14:26:14 14:26:01 3 7.94 0.00 0.55 0.23 0.03 91.24 14:26:14 14:26:01 4 0.65 0.00 0.45 12.58 0.07 86.25 14:26:14 14:26:01 5 15.72 0.00 0.77 0.60 0.05 82.86 14:26:14 14:26:01 6 1.08 0.00 0.45 0.17 0.02 98.28 14:26:14 14:26:01 7 
13.21 0.00 0.65 1.12 0.03 84.98 14:26:14 Average: all 8.14 0.00 1.41 6.30 0.05 84.10 14:26:14 Average: 0 8.04 0.00 1.53 1.28 0.05 89.11 14:26:14 Average: 1 6.80 0.00 1.32 6.66 0.04 85.19 14:26:14 Average: 2 10.09 0.00 1.54 14.10 0.05 74.23 14:26:14 Average: 3 9.02 0.00 1.49 1.72 0.04 87.73 14:26:14 Average: 4 7.56 0.00 1.29 5.92 0.05 85.19 14:26:14 Average: 5 6.36 0.00 1.13 4.46 0.05 88.00 14:26:14 Average: 6 7.48 0.00 1.35 3.13 0.04 87.99 14:26:14 Average: 7 9.77 0.00 1.62 13.22 0.06 75.33 14:26:14 14:26:14 14:26:14
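The sar tables above close with per-CPU `Average:` rows computed over the sampling intervals. The same kind of column average can be re-derived from saved sar text with awk; the two sample rows here are copied from the `%user` output above:

```shell
# Mean of the %user column (field 3) over two sar sample rows from the log.
avg=$(printf '14:18:01 all 8.53 0.00 0.99 6.50 0.04 83.94\n14:19:01 all 6.66 0.00 0.43 6.70 0.03 86.18\n' \
  | awk '{ user += $3; n++ } END { print user / n }')
echo "$avg"
```

This is handy when only the archived text output is available and a different aggregation (e.g. over a subset of intervals) is wanted.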