Started by upstream project "policy-pap-master-merge-java" build number 351
originally caused by:
 Triggered by Gerrit: https://gerrit.onap.org/r/c/policy/pap/+/137761
Running as SYSTEM
[EnvInject] - Loading node environment variables.
Building remotely on prd-ubuntu1804-docker-8c-8g-27901 (ubuntu1804-docker-8c-8g) in workspace /w/workspace/policy-pap-master-project-csit-pap
[ssh-agent] Looking for ssh-agent implementation...
[ssh-agent] Exec ssh-agent (binary ssh-agent on a remote machine)
$ ssh-agent
SSH_AUTH_SOCK=/tmp/ssh-sjmXBe4BcMzm/agent.2189
SSH_AGENT_PID=2190
[ssh-agent] Started.
Running ssh-add (command line suppressed)
Identity added: /w/workspace/policy-pap-master-project-csit-pap@tmp/private_key_9080305929047286160.key (/w/workspace/policy-pap-master-project-csit-pap@tmp/private_key_9080305929047286160.key)
[ssh-agent] Using credentials onap-jobbuiler (Gerrit user)
The recommended git tool is: NONE
using credential onap-jenkins-ssh
Wiping out workspace first.
Cloning the remote Git repository
Cloning repository git://cloud.onap.org/mirror/policy/docker.git
 > git init /w/workspace/policy-pap-master-project-csit-pap # timeout=10
Fetching upstream changes from git://cloud.onap.org/mirror/policy/docker.git
 > git --version # timeout=10
 > git --version # 'git version 2.17.1'
using GIT_SSH to set credentials Gerrit user
Verifying host key using manually-configured host key entries
 > git fetch --tags --progress -- git://cloud.onap.org/mirror/policy/docker.git +refs/heads/*:refs/remotes/origin/* # timeout=30
 > git config remote.origin.url git://cloud.onap.org/mirror/policy/docker.git # timeout=10
 > git config --add remote.origin.fetch +refs/heads/*:refs/remotes/origin/* # timeout=10
Avoid second fetch
 > git rev-parse refs/remotes/origin/master^{commit} # timeout=10
Checking out Revision 0d7c8284756c9a15d526c2d282cfc1dfd1595ffb (refs/remotes/origin/master)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f 0d7c8284756c9a15d526c2d282cfc1dfd1595ffb # timeout=30
Commit message: "Update snapshot and/or references of policy/docker to latest snapshots"
 > git rev-list --no-walk 0d7c8284756c9a15d526c2d282cfc1dfd1595ffb # timeout=10
provisioning config files...
copy managed file [npmrc] to file:/home/jenkins/.npmrc
copy managed file [pipconf] to file:/home/jenkins/.config/pip/pip.conf
[policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins1160451521055426948.sh
---> python-tools-install.sh
Setup pyenv:
* system (set by /opt/pyenv/version)
* 3.8.13 (set by /opt/pyenv/version)
* 3.9.13 (set by /opt/pyenv/version)
* 3.10.6 (set by /opt/pyenv/version)
lf-activate-venv(): INFO: Creating python3 venv at /tmp/venv-cpZN
lf-activate-venv(): INFO: Save venv in file: /tmp/.os_lf_venv
lf-activate-venv(): INFO: Installing: lftools
lf-activate-venv(): INFO: Adding /tmp/venv-cpZN/bin to PATH
Generating Requirements File
Python 3.10.6
pip 24.0 from /tmp/venv-cpZN/lib/python3.10/site-packages/pip (python 3.10)
appdirs==1.4.4
argcomplete==3.3.0
aspy.yaml==1.3.0
attrs==23.2.0
autopage==0.5.2
beautifulsoup4==4.12.3
boto3==1.34.91
botocore==1.34.91
bs4==0.0.2
cachetools==5.3.3
certifi==2024.2.2
cffi==1.16.0
cfgv==3.4.0
chardet==5.2.0
charset-normalizer==3.3.2
click==8.1.7
cliff==4.6.0
cmd2==2.4.3
cryptography==3.3.2
debtcollector==3.0.0
decorator==5.1.1
defusedxml==0.7.1
Deprecated==1.2.14
distlib==0.3.8
dnspython==2.6.1
docker==4.2.2
dogpile.cache==1.3.2
email_validator==2.1.1
filelock==3.13.4
future==1.0.0
gitdb==4.0.11
GitPython==3.1.43
google-auth==2.29.0
httplib2==0.22.0
identify==2.5.36
idna==3.7
importlib-resources==1.5.0
iso8601==2.1.0
Jinja2==3.1.3
jmespath==1.0.1
jsonpatch==1.33
jsonpointer==2.4
jsonschema==4.21.1
jsonschema-specifications==2023.12.1
keystoneauth1==5.6.0
kubernetes==29.0.0
lftools==0.37.10
lxml==5.2.1
MarkupSafe==2.1.5
msgpack==1.0.8
multi_key_dict==2.0.3
munch==4.0.0
netaddr==1.2.1
netifaces==0.11.0
niet==1.4.2
nodeenv==1.8.0
oauth2client==4.1.3
oauthlib==3.2.2
openstacksdk==3.1.0
os-client-config==2.1.0
os-service-types==1.7.0
osc-lib==3.0.1
oslo.config==9.4.0
oslo.context==5.5.0
oslo.i18n==6.3.0
oslo.log==5.5.1
oslo.serialization==5.4.0
oslo.utils==7.1.0
packaging==24.0
pbr==6.0.0
platformdirs==4.2.1
prettytable==3.10.0
pyasn1==0.6.0
pyasn1_modules==0.4.0
pycparser==2.22
pygerrit2==2.0.15
PyGithub==2.3.0
pyinotify==0.9.6
PyJWT==2.8.0
PyNaCl==1.5.0
pyparsing==2.4.7
pyperclip==1.8.2
pyrsistent==0.20.0
python-cinderclient==9.5.0
python-dateutil==2.9.0.post0
python-heatclient==3.5.0
python-jenkins==1.8.2
python-keystoneclient==5.4.0
python-magnumclient==4.4.0
python-novaclient==18.6.0
python-openstackclient==6.6.0
python-swiftclient==4.5.0
PyYAML==6.0.1
referencing==0.35.0
requests==2.31.0
requests-oauthlib==2.0.0
requestsexceptions==1.4.0
rfc3986==2.0.0
rpds-py==0.18.0
rsa==4.9
ruamel.yaml==0.18.6
ruamel.yaml.clib==0.2.8
s3transfer==0.10.1
simplejson==3.19.2
six==1.16.0
smmap==5.0.1
soupsieve==2.5
stevedore==5.2.0
tabulate==0.9.0
toml==0.10.2
tomlkit==0.12.4
tqdm==4.66.2
typing_extensions==4.11.0
tzdata==2024.1
urllib3==1.26.18
virtualenv==20.26.0
wcwidth==0.2.13
websocket-client==1.8.0
wrapt==1.16.0
xdg==6.0.0
xmltodict==0.13.0
yq==3.4.1
[EnvInject] - Injecting environment variables from a build step.
[EnvInject] - Injecting as environment variables the properties content
SET_JDK_VERSION=openjdk17
GIT_URL="git://cloud.onap.org/mirror"
[EnvInject] - Variables injected successfully.
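The lf-activate-venv() steps above create a python3 venv and prepend its bin directory to PATH before installing lftools. A minimal sketch of that create-and-activate pattern, under assumptions: the venv location here is a throwaway mktemp path rather than the job's /tmp/venv-cpZN, and the lftools installation step is omitted.

```shell
# Create a disposable venv and activate it; activation prepends
# "$VENV_DIR/bin" to PATH so its interpreter shadows the system one.
# --without-pip keeps the sketch fast; the real helper needs pip for lftools.
VENV_DIR="$(mktemp -d)/venv"
python3 -m venv --without-pip "$VENV_DIR"
. "$VENV_DIR/bin/activate"
echo "PATH now starts with: ${PATH%%:*}"
```

Deactivating (or, as in this job, re-sourcing activate later) only manipulates PATH and PS1; nothing is installed globally.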
[policy-pap-master-project-csit-pap] $ /bin/sh /tmp/jenkins6333459876021950769.sh
---> update-java-alternatives.sh
---> Updating Java version
---> Ubuntu/Debian system detected
update-alternatives: using /usr/lib/jvm/java-17-openjdk-amd64/bin/java to provide /usr/bin/java (java) in manual mode
update-alternatives: using /usr/lib/jvm/java-17-openjdk-amd64/bin/javac to provide /usr/bin/javac (javac) in manual mode
update-alternatives: using /usr/lib/jvm/java-17-openjdk-amd64 to provide /usr/lib/jvm/java-openjdk (java_sdk_openjdk) in manual mode
openjdk version "17.0.4" 2022-07-19
OpenJDK Runtime Environment (build 17.0.4+8-Ubuntu-118.04)
OpenJDK 64-Bit Server VM (build 17.0.4+8-Ubuntu-118.04, mixed mode, sharing)
JAVA_HOME=/usr/lib/jvm/java-17-openjdk-amd64
[EnvInject] - Injecting environment variables from a build step.
[EnvInject] - Injecting as environment variables the properties file path '/tmp/java.env'
[EnvInject] - Variables injected successfully.
[policy-pap-master-project-csit-pap] $ /bin/sh -xe /tmp/jenkins17700834290198915127.sh
+ /w/workspace/policy-pap-master-project-csit-pap/csit/run-project-csit.sh pap
+ set +u
+ save_set
+ RUN_CSIT_SAVE_SET=ehxB
+ RUN_CSIT_SHELLOPTS=braceexpand:errexit:hashall:interactive-comments:pipefail:xtrace
+ '[' 1 -eq 0 ']'
+ '[' -z /w/workspace/policy-pap-master-project-csit-pap ']'
+ export PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin
+ PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin
+ export SCRIPTS=/w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts
+ SCRIPTS=/w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts
+ export ROBOT_VARIABLES=
+ ROBOT_VARIABLES=
+ export PROJECT=pap
+ PROJECT=pap
+ cd /w/workspace/policy-pap-master-project-csit-pap
+ rm -rf /w/workspace/policy-pap-master-project-csit-pap/csit/archives/pap
+ mkdir -p /w/workspace/policy-pap-master-project-csit-pap/csit/archives/pap
+ source_safely /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/prepare-robot-env.sh
+ '[' -z /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/prepare-robot-env.sh ']'
+ relax_set
+ set +e
+ set +o pipefail
+ . /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/prepare-robot-env.sh
++ '[' -z /w/workspace/policy-pap-master-project-csit-pap ']'
+++ mktemp -d
++ ROBOT_VENV=/tmp/tmp.OTxYuIrIme
++ echo ROBOT_VENV=/tmp/tmp.OTxYuIrIme
+++ python3 --version
++ echo 'Python version is: Python 3.6.9'
Python version is: Python 3.6.9
++ python3 -m venv --clear /tmp/tmp.OTxYuIrIme
++ source /tmp/tmp.OTxYuIrIme/bin/activate
+++ deactivate nondestructive
+++ '[' -n '' ']'
+++ '[' -n '' ']'
+++ '[' -n /bin/bash -o -n '' ']'
+++ hash -r
+++ '[' -n '' ']'
+++ unset VIRTUAL_ENV
+++ '[' '!' nondestructive = nondestructive ']'
+++ VIRTUAL_ENV=/tmp/tmp.OTxYuIrIme
+++ export VIRTUAL_ENV
+++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin
+++ PATH=/tmp/tmp.OTxYuIrIme/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin
+++ export PATH
+++ '[' -n '' ']'
+++ '[' -z '' ']'
+++ _OLD_VIRTUAL_PS1=
+++ '[' 'x(tmp.OTxYuIrIme) ' '!=' x ']'
+++ PS1='(tmp.OTxYuIrIme) '
+++ export PS1
+++ '[' -n /bin/bash -o -n '' ']'
+++ hash -r
++ set -exu
++ python3 -m pip install -qq --upgrade 'pip<=23.0' 'setuptools<=66.1.1'
++ echo 'Installing Python Requirements'
Installing Python Requirements
++ python3 -m pip install -qq -r /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/pylibs.txt
++ python3 -m pip -qq freeze
bcrypt==4.0.1
beautifulsoup4==4.12.3
bitarray==2.9.2
certifi==2024.2.2
cffi==1.15.1
charset-normalizer==2.0.12
cryptography==40.0.2
decorator==5.1.1
elasticsearch==7.17.9
elasticsearch-dsl==7.4.1
enum34==1.1.10
idna==3.7
importlib-resources==5.4.0
ipaddr==2.2.0
isodate==0.6.1
jmespath==0.10.0
jsonpatch==1.32
jsonpath-rw==1.4.0
jsonpointer==2.3
lxml==5.2.1
netaddr==0.8.0
netifaces==0.11.0
odltools==0.1.28
paramiko==3.4.0
pkg_resources==0.0.0
ply==3.11
pyang==2.6.0
pyangbind==0.8.1
pycparser==2.21
pyhocon==0.3.60
PyNaCl==1.5.0
pyparsing==3.1.2
python-dateutil==2.9.0.post0
regex==2023.8.8
requests==2.27.1
robotframework==6.1.1
robotframework-httplibrary==0.4.2
robotframework-pythonlibcore==3.0.0
robotframework-requests==0.9.4
robotframework-selenium2library==3.0.0
robotframework-seleniumlibrary==5.1.3
robotframework-sshlibrary==3.8.0
scapy==2.5.0
scp==0.14.5
selenium==3.141.0
six==1.16.0
soupsieve==2.3.2.post1
urllib3==1.26.18
waitress==2.0.0
WebOb==1.8.7
WebTest==3.0.0
zipp==3.6.0
++ mkdir -p /tmp/tmp.OTxYuIrIme/src/onap
++ rm -rf /tmp/tmp.OTxYuIrIme/src/onap/testsuite
++ python3 -m pip install -qq --upgrade --extra-index-url=https://nexus3.onap.org/repository/PyPi.staging/simple 'robotframework-onap==0.6.0.*' --pre
++ echo 'Installing python confluent-kafka library'
Installing python confluent-kafka library
++ python3 -m pip install -qq confluent-kafka
++ echo 'Uninstall docker-py and reinstall docker.'
Uninstall docker-py and reinstall docker.
++ python3 -m pip uninstall -y -qq docker
++ python3 -m pip install -U -qq docker
++ python3 -m pip -qq freeze
bcrypt==4.0.1
beautifulsoup4==4.12.3
bitarray==2.9.2
certifi==2024.2.2
cffi==1.15.1
charset-normalizer==2.0.12
confluent-kafka==2.3.0
cryptography==40.0.2
decorator==5.1.1
deepdiff==5.7.0
dnspython==2.2.1
docker==5.0.3
elasticsearch==7.17.9
elasticsearch-dsl==7.4.1
enum34==1.1.10
future==1.0.0
idna==3.7
importlib-resources==5.4.0
ipaddr==2.2.0
isodate==0.6.1
Jinja2==3.0.3
jmespath==0.10.0
jsonpatch==1.32
jsonpath-rw==1.4.0
jsonpointer==2.3
kafka-python==2.0.2
lxml==5.2.1
MarkupSafe==2.0.1
more-itertools==5.0.0
netaddr==0.8.0
netifaces==0.11.0
odltools==0.1.28
ordered-set==4.0.2
paramiko==3.4.0
pbr==6.0.0
pkg_resources==0.0.0
ply==3.11
protobuf==3.19.6
pyang==2.6.0
pyangbind==0.8.1
pycparser==2.21
pyhocon==0.3.60
PyNaCl==1.5.0
pyparsing==3.1.2
python-dateutil==2.9.0.post0
PyYAML==6.0.1
regex==2023.8.8
requests==2.27.1
robotframework==6.1.1
robotframework-httplibrary==0.4.2
robotframework-onap==0.6.0.dev105
robotframework-pythonlibcore==3.0.0
robotframework-requests==0.9.4
robotframework-selenium2library==3.0.0
robotframework-seleniumlibrary==5.1.3
robotframework-sshlibrary==3.8.0
robotlibcore-temp==1.0.2
scapy==2.5.0
scp==0.14.5
selenium==3.141.0
six==1.16.0
soupsieve==2.3.2.post1
urllib3==1.26.18
waitress==2.0.0
WebOb==1.8.7
websocket-client==1.3.1
WebTest==3.0.0
zipp==3.6.0
++ uname
++ grep -q Linux
++ sudo apt-get -y -qq install libxml2-utils
+ load_set
+ _setopts=ehuxB
++ echo braceexpand:hashall:interactive-comments:nounset:xtrace
++ tr : ' '
+ for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
+ set +o braceexpand
+ for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
+ set +o hashall
+ for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
+ set +o interactive-comments
+ for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
+ set +o nounset
+ for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
+ set +o xtrace
++ echo ehuxB
++ sed 's/./& /g'
+ for i in $(echo "$_setopts" | sed 's/./& /g')
+ set +e
+ for i in $(echo "$_setopts" | sed 's/./& /g')
+ set +h
+ for i in $(echo "$_setopts" | sed 's/./& /g')
+ set +u
+ for i in $(echo "$_setopts" | sed 's/./& /g')
+ set +x
+ source_safely /tmp/tmp.OTxYuIrIme/bin/activate
+ '[' -z /tmp/tmp.OTxYuIrIme/bin/activate ']'
+ relax_set
+ set +e
+ set +o pipefail
+ . /tmp/tmp.OTxYuIrIme/bin/activate
++ deactivate nondestructive
++ '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin ']'
++ PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin
++ export PATH
++ unset _OLD_VIRTUAL_PATH
++ '[' -n '' ']'
++ '[' -n /bin/bash -o -n '' ']'
++ hash -r
++ '[' -n '' ']'
++ unset VIRTUAL_ENV
++ '[' '!' nondestructive = nondestructive ']'
++ VIRTUAL_ENV=/tmp/tmp.OTxYuIrIme
++ export VIRTUAL_ENV
++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin
++ PATH=/tmp/tmp.OTxYuIrIme/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin
++ export PATH
++ '[' -n '' ']'
++ '[' -z '' ']'
++ _OLD_VIRTUAL_PS1='(tmp.OTxYuIrIme) '
++ '[' 'x(tmp.OTxYuIrIme) ' '!=' x ']'
++ PS1='(tmp.OTxYuIrIme) (tmp.OTxYuIrIme) '
++ export PS1
++ '[' -n /bin/bash -o -n '' ']'
++ hash -r
+ load_set
+ _setopts=hxB
++ echo braceexpand:hashall:interactive-comments:xtrace
++ tr : ' '
+ for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
+ set +o braceexpand
+ for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
+ set +o hashall
+ for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
+ set +o interactive-comments
+ for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
+ set +o xtrace
++ echo hxB
++ sed 's/./& /g'
+ for i in $(echo "$_setopts" | sed 's/./& /g')
+ set +h
+ for i in $(echo "$_setopts" | sed 's/./& /g')
+ set +x
+ export TEST_PLAN_DIR=/w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests
+ TEST_PLAN_DIR=/w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests
+ export TEST_OPTIONS=
+ TEST_OPTIONS=
++ mktemp -d
+ WORKDIR=/tmp/tmp.9uiB25C2Gx
+ cd /tmp/tmp.9uiB25C2Gx
+ docker login -u docker -p docker nexus3.onap.org:10001
WARNING! Using --password via the CLI is insecure. Use --password-stdin.
WARNING! Your password will be stored unencrypted in /home/jenkins/.docker/config.json.
Configure a credential helper to remove this warning.
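The save_set / relax_set / load_set trio visible throughout the xtrace saves the current shell options, loosens errexit and pipefail around a sourced script, and restores the saved options afterwards, so a failing helper script cannot kill the whole CSIT run. A simplified sketch of that pattern; the function bodies are reconstructed approximations (the job's real load_set also walks SHELLOPTS with `set +o`, as the trace shows), not copies of run-project-csit.sh:

```shell
save_set() {
    RUN_CSIT_SAVE_SET="$-"          # one-letter option flags, e.g. ehxB
}

relax_set() {
    set +e                          # tolerate failing commands
    set +o pipefail 2>/dev/null || true   # guard: some shells lack pipefail
}

load_set() {
    # re-enable every flag that was on when save_set ran
    for i in $(echo "$RUN_CSIT_SAVE_SET" | sed 's/./& /g'); do
        set "-$i"
    done
}

source_safely() {
    if [ -z "$1" ]; then
        return 1
    fi
    relax_set
    . "$1"                          # a failing script must not abort the caller
    load_set
}
```

The key property is that errexit is off only for the duration of the sourced script and comes back on afterwards.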
See https://docs.docker.com/engine/reference/commandline/login/#credentials-store
Login Succeeded
+ SETUP=/w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/setup-pap.sh
+ '[' -f /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/setup-pap.sh ']'
+ echo 'Running setup script /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/setup-pap.sh'
Running setup script /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/setup-pap.sh
+ source_safely /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/setup-pap.sh
+ '[' -z /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/setup-pap.sh ']'
+ relax_set
+ set +e
+ set +o pipefail
+ . /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/setup-pap.sh
++ source /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/node-templates.sh
+++ '[' -z /w/workspace/policy-pap-master-project-csit-pap ']'
++++ awk -F= '$1 == "defaultbranch" { print $2 }' /w/workspace/policy-pap-master-project-csit-pap/.gitreview
+++ GERRIT_BRANCH=master
+++ echo GERRIT_BRANCH=master
GERRIT_BRANCH=master
+++ rm -rf /w/workspace/policy-pap-master-project-csit-pap/models
+++ mkdir /w/workspace/policy-pap-master-project-csit-pap/models
+++ git clone -b master --single-branch https://github.com/onap/policy-models.git /w/workspace/policy-pap-master-project-csit-pap/models
Cloning into '/w/workspace/policy-pap-master-project-csit-pap/models'...
+++ export DATA=/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies
+++ DATA=/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies
+++ export NODETEMPLATES=/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/nodetemplates
+++ NODETEMPLATES=/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/nodetemplates
+++ sed -e 's!Measurement_vGMUX!ADifferentValue!' /w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies/vCPE.policy.monitoring.input.tosca.json
+++ sed -e 's!"version": "1.0.0"!"version": "2.0.0"!' -e 's!"policy-version": 1!"policy-version": 2!' /w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies/vCPE.policy.monitoring.input.tosca.json
++ source /w/workspace/policy-pap-master-project-csit-pap/compose/start-compose.sh apex-pdp --grafana
+++ '[' -z /w/workspace/policy-pap-master-project-csit-pap ']'
+++ COMPOSE_FOLDER=/w/workspace/policy-pap-master-project-csit-pap/compose
+++ grafana=false
+++ gui=false
+++ [[ 2 -gt 0 ]]
+++ key=apex-pdp
+++ case $key in
+++ echo apex-pdp
apex-pdp
+++ component=apex-pdp
+++ shift
+++ [[ 1 -gt 0 ]]
+++ key=--grafana
+++ case $key in
+++ grafana=true
+++ shift
+++ [[ 0 -gt 0 ]]
+++ cd /w/workspace/policy-pap-master-project-csit-pap/compose
+++ echo 'Configuring docker compose...'
Configuring docker compose...
+++ source export-ports.sh
+++ source get-versions.sh
+++ '[' -z pap ']'
+++ '[' -n apex-pdp ']'
+++ '[' apex-pdp == logs ']'
+++ '[' true = true ']'
+++ echo 'Starting apex-pdp application with Grafana'
Starting apex-pdp application with Grafana
+++ docker-compose up -d apex-pdp grafana
Creating network "compose_default" with the default driver
Pulling prometheus (nexus3.onap.org:10001/prom/prometheus:latest)...
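setup-pap.sh generates a second copy of the vCPE monitoring policy at version 2.0.0 using the two sed expressions shown in the trace. The same transformation in isolation, wrapped in a hypothetical helper function for clarity (the function name and sample file are illustrative, not from the real script):

```shell
# Bump a TOSCA policy payload from version 1.0.0 to 2.0.0, including its
# "policy-version" metadata field. Uses '!' as the sed delimiter so the
# embedded quotes and spaces need no escaping.
bump_policy_version() {
    sed -e 's!"version": "1.0.0"!"version": "2.0.0"!' \
        -e 's!"policy-version": 1!"policy-version": 2!' "$1"
}

# Example: rewrite a minimal policy fragment.
policy_file="$(mktemp)"
printf '%s\n' '{"version": "1.0.0", "metadata": {"policy-version": 1}}' > "$policy_file"
bump_policy_version "$policy_file"
```

Note that sed prints the transformed copy to stdout; the original file is left untouched, which is why the job keeps both versions of the policy.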
latest: Pulling from prom/prometheus
Digest: sha256:4f6c47e39a9064028766e8c95890ed15690c30f00c4ba14e7ce6ae1ded0295b1
Status: Downloaded newer image for nexus3.onap.org:10001/prom/prometheus:latest
Pulling grafana (nexus3.onap.org:10001/grafana/grafana:latest)...
latest: Pulling from grafana/grafana
Digest: sha256:7d5faae481a4c6f436c99e98af11534f7fd5e8d3e35213552dd1dd02bc393d2e
Status: Downloaded newer image for nexus3.onap.org:10001/grafana/grafana:latest
Pulling mariadb (nexus3.onap.org:10001/mariadb:10.10.2)...
10.10.2: Pulling from mariadb
Digest: sha256:bfc25a68e113de43d0d112f5a7126df8e278579c3224e3923359e1c1d8d5ce6e
Status: Downloaded newer image for nexus3.onap.org:10001/mariadb:10.10.2
Pulling simulator (nexus3.onap.org:10001/onap/policy-models-simulator:3.1.2-SNAPSHOT)...
3.1.2-SNAPSHOT: Pulling from onap/policy-models-simulator
Digest: sha256:8c393534de923b51cd2c2937210a65f4f06f457c0dff40569dd547e5429385c8
Status: Downloaded newer image for nexus3.onap.org:10001/onap/policy-models-simulator:3.1.2-SNAPSHOT
Pulling zookeeper (confluentinc/cp-zookeeper:latest)...
latest: Pulling from confluentinc/cp-zookeeper
Digest: sha256:4dc780642bfc5ec3a2d4901e2ff1f9ddef7f7c5c0b793e1e2911cbfb4e3a3214
Status: Downloaded newer image for confluentinc/cp-zookeeper:latest
Pulling kafka (confluentinc/cp-kafka:latest)...
latest: Pulling from confluentinc/cp-kafka
Digest: sha256:620734d9fc0bb1f9886932e5baf33806074469f40e3fe246a3fdbb59309535fa
Status: Downloaded newer image for confluentinc/cp-kafka:latest
Pulling policy-db-migrator (nexus3.onap.org:10001/onap/policy-db-migrator:3.1.2-SNAPSHOT)...
3.1.2-SNAPSHOT: Pulling from onap/policy-db-migrator
Digest: sha256:6c43c624b12507ad4db9e9629273366fa843a4406dbb129d263c111145911791
Status: Downloaded newer image for nexus3.onap.org:10001/onap/policy-db-migrator:3.1.2-SNAPSHOT
Pulling api (nexus3.onap.org:10001/onap/policy-api:3.1.2-SNAPSHOT)...
3.1.2-SNAPSHOT: Pulling from onap/policy-api
Digest: sha256:73236d56a7796996901511a1cb6c2fe3204e974356a78c9761a399b0c362efb6
Status: Downloaded newer image for nexus3.onap.org:10001/onap/policy-api:3.1.2-SNAPSHOT
Pulling pap (nexus3.onap.org:10001/onap/policy-pap:3.1.2-SNAPSHOT)...
3.1.2-SNAPSHOT: Pulling from onap/policy-pap
Digest: sha256:a6a581513619dfb88af12cb5f913059ca149fe42561b778b38baf001f8cfe10c
Status: Downloaded newer image for nexus3.onap.org:10001/onap/policy-pap:3.1.2-SNAPSHOT
Pulling apex-pdp (nexus3.onap.org:10001/onap/policy-apex-pdp:3.1.2-SNAPSHOT)...
3.1.2-SNAPSHOT: Pulling from onap/policy-apex-pdp
Digest: sha256:15db3ed25bc2c5fcac7635cebf8ee909afbd4fd846efff231410c6f1346614e7
Status: Downloaded newer image for nexus3.onap.org:10001/onap/policy-apex-pdp:3.1.2-SNAPSHOT
Creating prometheus ...
Creating zookeeper ...
Creating simulator ...
Creating mariadb ...
Creating mariadb ... done
Creating policy-db-migrator ...
Creating policy-db-migrator ... done
Creating policy-api ...
Creating policy-api ... done
Creating simulator ... done
Creating zookeeper ... done
Creating kafka ...
Creating prometheus ... done
Creating grafana ...
Creating grafana ... done
Creating kafka ... done
Creating policy-pap ...
Creating policy-pap ... done
Creating policy-apex-pdp ...
Creating policy-apex-pdp ... done
+++ echo 'Prometheus server: http://localhost:30259'
Prometheus server: http://localhost:30259
+++ echo 'Grafana server: http://localhost:30269'
Grafana server: http://localhost:30269
+++ cd /w/workspace/policy-pap-master-project-csit-pap
++ sleep 10
++ unset http_proxy https_proxy
++ bash /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/wait_for_rest.sh localhost 30003
Waiting for REST to come up on localhost port 30003...
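wait_for_rest.sh polls until PAP's REST port accepts TCP connections, which is why the log alternates between the waiting message and docker ps snapshots. The real script is not shown in this log; the sketch below is one plausible implementation using bash's /dev/tcp pseudo-device (the actual script may use netcat or curl instead), so the function name, retry count, and probing mechanism are all assumptions.

```shell
# Poll host:port until a TCP connection succeeds, or give up after $tries
# one-second attempts. Returns 0 on success, 1 on timeout.
wait_for_rest() {
    host=$1 port=$2 tries=${3:-60}
    echo "Waiting for REST to come up on $host port $port..."
    while [ "$tries" -gt 0 ]; do
        # attempt to open a connection; the subshell closes fd 3 on exit
        if (exec 3<>"/dev/tcp/$host/$port") 2>/dev/null; then
            return 0
        fi
        tries=$((tries - 1))
        sleep 1
    done
    echo "Timed out waiting for $host:$port" >&2
    return 1
}
```

A probe-then-sleep loop like this is preferable to a fixed `sleep`, since container startup time varies from run to run.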
NAMES               STATUS
policy-apex-pdp     Up 10 seconds
policy-pap          Up 11 seconds
grafana             Up 13 seconds
kafka               Up 12 seconds
policy-api          Up 18 seconds
policy-db-migrator  Up 19 seconds
mariadb             Up 20 seconds
simulator           Up 17 seconds
zookeeper           Up 15 seconds
prometheus          Up 14 seconds
NAMES               STATUS
policy-apex-pdp     Up 15 seconds
policy-pap          Up 16 seconds
grafana             Up 18 seconds
kafka               Up 17 seconds
policy-api          Up 23 seconds
policy-db-migrator  Up 24 seconds
mariadb             Up 25 seconds
simulator           Up 22 seconds
zookeeper           Up 21 seconds
prometheus          Up 19 seconds
NAMES               STATUS
policy-apex-pdp     Up 20 seconds
policy-pap          Up 21 seconds
grafana             Up 23 seconds
kafka               Up 22 seconds
policy-api          Up 28 seconds
mariadb             Up 30 seconds
simulator           Up 27 seconds
zookeeper           Up 26 seconds
prometheus          Up 25 seconds
NAMES               STATUS
policy-apex-pdp     Up 25 seconds
policy-pap          Up 26 seconds
grafana             Up 28 seconds
kafka               Up 27 seconds
policy-api          Up 33 seconds
mariadb             Up 35 seconds
simulator           Up 32 seconds
zookeeper           Up 31 seconds
prometheus          Up 30 seconds
NAMES               STATUS
policy-apex-pdp     Up 30 seconds
policy-pap          Up 31 seconds
grafana             Up 33 seconds
kafka               Up 32 seconds
policy-api          Up 38 seconds
mariadb             Up 40 seconds
simulator           Up 37 seconds
zookeeper           Up 36 seconds
prometheus          Up 35 seconds
NAMES               STATUS
policy-apex-pdp     Up 35 seconds
policy-pap          Up 36 seconds
grafana             Up 38 seconds
kafka               Up 37 seconds
policy-api          Up 43 seconds
mariadb             Up 45 seconds
simulator           Up 42 seconds
zookeeper           Up 41 seconds
prometheus          Up 40 seconds
NAMES               STATUS
policy-apex-pdp     Up 40 seconds
policy-pap          Up 41 seconds
grafana             Up 43 seconds
kafka               Up 42 seconds
policy-api          Up 48 seconds
mariadb             Up 50 seconds
simulator           Up 47 seconds
zookeeper           Up 46 seconds
prometheus          Up 45 seconds
++ export 'SUITES=pap-test.robot pap-slas.robot'
++ SUITES='pap-test.robot pap-slas.robot'
++ ROBOT_VARIABLES='-v POLICY_PAP_IP:localhost:30003 -v POLICY_API_IP:localhost:30002 -v PROMETHEUS_IP:localhost:30259 -v DATA:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies -v NODETEMPLATES:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/nodetemplates'
+ load_set
+ _setopts=hxB
++ echo braceexpand:hashall:interactive-comments:xtrace
++ tr : ' '
+ for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
+ set +o braceexpand
+ for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
+ set +o hashall
+ for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
+ set +o interactive-comments
+ for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
+ set +o xtrace
++ echo hxB
++ sed 's/./& /g'
+ for i in $(echo "$_setopts" | sed 's/./& /g')
+ set +h
+ for i in $(echo "$_setopts" | sed 's/./& /g')
+ set +x
+ docker_stats
+ tee /w/workspace/policy-pap-master-project-csit-pap/csit/archives/pap/_sysinfo-1-after-setup.txt
++ uname -s
+ '[' Linux == Darwin ']'
+ sh -c 'top -bn1 | head -3'
top - 14:22:59 up 6 min, 0 users, load average: 3.58, 2.07, 0.92
Tasks: 202 total, 1 running, 131 sleeping, 0 stopped, 0 zombie
%Cpu(s): 8.8 us, 1.9 sy, 0.0 ni, 80.5 id, 8.8 wa, 0.0 hi, 0.0 si, 0.0 st
+ echo
+ sh -c 'free -h'
              total        used        free      shared  buff/cache   available
Mem:            31G        2.6G         22G        1.3M        6.0G         28G
Swap:          1.0G          0B        1.0G
+ echo
+ docker ps --format 'table {{ .Names }}\t{{ .Status }}'
NAMES             STATUS
policy-apex-pdp   Up 40 seconds
policy-pap        Up 42 seconds
grafana           Up 44 seconds
kafka             Up 43 seconds
policy-api        Up 48 seconds
mariadb           Up 50 seconds
simulator         Up 47 seconds
zookeeper         Up 46 seconds
prometheus        Up 45 seconds
+ echo
+ docker stats --no-stream
CONTAINER ID   NAME              CPU %   MEM USAGE / LIMIT     MEM %   NET I/O           BLOCK I/O       PIDS
1df5e839969b   policy-apex-pdp   1.56%   172.3MiB / 31.41GiB   0.54%   7.8kB / 7.61kB    0B / 0B         48
9cc97d748771   policy-pap        3.01%   559.9MiB / 31.41GiB   1.74%   34.2kB / 35.6kB   0B / 149MB      62
0d922879188a   grafana           0.03%   54.15MiB / 31.41GiB   0.17%   18.5kB / 3.18kB   0B / 24.8MB     19
09cfce7a987a   kafka             0.49%   390.6MiB / 31.41GiB   1.21%   70.2kB / 73.8kB   0B / 508kB      84
7f3faa87ecf9   policy-api        0.11%   465MiB / 31.41GiB     1.45%   989kB / 648kB     0B / 0B         52
bd0e7e07829a   mariadb           0.03%   102.4MiB / 31.41GiB   0.32%   935kB / 1.18MB    11MB / 67.9MB   36
259d80ebd636   simulator         0.07%   122.7MiB / 31.41GiB   0.38%   1.27kB / 0B       98.3kB / 0B     76
db21b226f583   zookeeper         0.10%   99.86MiB / 31.41GiB   0.31%   54.5kB / 47.7kB   0B / 389kB      60
742612bf9a64   prometheus        0.00%   19.14MiB / 31.41GiB   0.06%   1.52kB / 432B     0B / 0B         13
+ echo
+ cd /tmp/tmp.9uiB25C2Gx
+ echo 'Reading the testplan:'
Reading the testplan:
+ echo 'pap-test.robot pap-slas.robot'
+ egrep -v '(^[[:space:]]*#|^[[:space:]]*$)'
+ sed 's|^|/w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/|'
+ cat testplan.txt
/w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-test.robot
/w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-slas.robot
++ xargs
+ SUITES='/w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-test.robot /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-slas.robot'
+ echo 'ROBOT_VARIABLES=-v POLICY_PAP_IP:localhost:30003 -v POLICY_API_IP:localhost:30002 -v PROMETHEUS_IP:localhost:30259 -v DATA:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies -v NODETEMPLATES:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/nodetemplates'
ROBOT_VARIABLES=-v POLICY_PAP_IP:localhost:30003 -v POLICY_API_IP:localhost:30002 -v PROMETHEUS_IP:localhost:30259 -v DATA:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies -v NODETEMPLATES:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/nodetemplates
+ echo 'Starting Robot test suites /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-test.robot /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-slas.robot ...'
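The testplan handling above strips comments and blank lines from testplan.txt, prefixes each remaining entry with the tests directory, and flattens the result into a space-separated SUITES string via xargs. The same pipeline as a standalone sketch; grep -E stands in for the deprecated egrep spelling, and the testplan content here is a reconstruction for illustration:

```shell
TESTS_DIR=/w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests

# A testplan may contain comments and blank lines; only the suite
# names should survive the expansion.
plan="$(mktemp)"
printf '%s\n' '# suites for the pap CSIT' '' 'pap-test.robot' 'pap-slas.robot' > "$plan"

SUITES=$(grep -E -v '(^[[:space:]]*#|^[[:space:]]*$)' "$plan" \
    | sed "s|^|$TESTS_DIR/|" \
    | xargs)
echo "$SUITES"
```

xargs with no command echoes its input joined by single spaces, which is exactly the shape robot.run expects when the suites are passed as positional arguments.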
Starting Robot test suites /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-test.robot /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-slas.robot ... + relax_set + set +e + set +o pipefail + python3 -m robot.run -N pap -v WORKSPACE:/tmp -v POLICY_PAP_IP:localhost:30003 -v POLICY_API_IP:localhost:30002 -v PROMETHEUS_IP:localhost:30259 -v DATA:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies -v NODETEMPLATES:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/nodetemplates /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-test.robot /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-slas.robot ============================================================================== pap ============================================================================== pap.Pap-Test ============================================================================== LoadPolicy :: Create a policy named 'onap.restart.tca' and version... | PASS | ------------------------------------------------------------------------------ LoadPolicyWithMetadataSet :: Create a policy named 'operational.ap... | PASS | ------------------------------------------------------------------------------ LoadNodeTemplates :: Create node templates in database using speci... 
| PASS | ------------------------------------------------------------------------------ Healthcheck :: Verify policy pap health check | PASS | ------------------------------------------------------------------------------ Consolidated Healthcheck :: Verify policy consolidated health check | PASS | ------------------------------------------------------------------------------ Metrics :: Verify policy pap is exporting prometheus metrics | PASS | ------------------------------------------------------------------------------ AddPdpGroup :: Add a new PdpGroup named 'testGroup' in the policy ... | PASS | ------------------------------------------------------------------------------ QueryPdpGroupsBeforeActivation :: Verify PdpGroups before activation | PASS | ------------------------------------------------------------------------------ ActivatePdpGroup :: Change the state of PdpGroup named 'testGroup'... | PASS | ------------------------------------------------------------------------------ QueryPdpGroupsAfterActivation :: Verify PdpGroups after activation | PASS | ------------------------------------------------------------------------------ DeployPdpGroups :: Deploy policies in PdpGroups | PASS | ------------------------------------------------------------------------------ QueryPdpGroupsAfterDeploy :: Verify PdpGroups after undeploy | PASS | ------------------------------------------------------------------------------ QueryPolicyAuditAfterDeploy :: Verify policy audit record after de... | PASS | ------------------------------------------------------------------------------ QueryPolicyAuditWithMetadataSetAfterDeploy :: Verify policy audit ... | PASS | ------------------------------------------------------------------------------ UndeployPolicy :: Undeploy a policy named 'onap.restart.tca' from ... | PASS | ------------------------------------------------------------------------------ UndeployPolicyWithMetadataSet :: Undeploy a policy named 'operatio... 
| PASS |
------------------------------------------------------------------------------
QueryPdpGroupsAfterUndeploy :: Verify PdpGroups after undeploy        | PASS |
------------------------------------------------------------------------------
QueryPolicyAuditAfterUnDeploy :: Verify policy audit record after ... | FAIL |
pdpTypeC != pdpTypeA
------------------------------------------------------------------------------
QueryPolicyAuditWithMetadataSetAfterUnDeploy :: Verify policy audi... | PASS |
------------------------------------------------------------------------------
DeactivatePdpGroup :: Change the state of PdpGroup named 'testGrou... | PASS |
------------------------------------------------------------------------------
DeletePdpGroups :: Delete the PdpGroup named 'testGroup' from poli... | PASS |
------------------------------------------------------------------------------
QueryPdpGroupsAfterDelete :: Verify PdpGroups after delete            | PASS |
------------------------------------------------------------------------------
pap.Pap-Test                                                          | FAIL |
22 tests, 21 passed, 1 failed
==============================================================================
pap.Pap-Slas
==============================================================================
WaitForPrometheusServer :: Wait for Prometheus server to gather al... | PASS |
------------------------------------------------------------------------------
ValidateResponseTimeForHealthcheck :: Validate component healthche... | PASS |
------------------------------------------------------------------------------
ValidateResponseTimeForSystemHealthcheck :: Validate if system hea... | PASS |
------------------------------------------------------------------------------
ValidateResponseTimeQueryPolicyAudit :: Validate query audits resp...
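The single failure in the run above is QueryPolicyAuditAfterUnDeploy (`pdpTypeC != pdpTypeA`). Robot Framework's standard `--test` (`-t`) selector lets such a case be re-run in isolation against the same suite. A hedged sketch that only assembles and prints the command (the suite path and the two `-v` endpoint overrides are copied from the invocation above; the remaining `-v` flags are omitted here for brevity, and `robot` itself is not assumed to be installed):

```shell
#!/usr/bin/env bash
# Sketch: rebuild the robot.run invocation for just the failing test case.
# -t/--test is the standard Robot Framework CLI selector; everything else
# mirrors the command in the log above (subset of -v flags).
SUITE=/w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-test.robot

build_rerun_cmd() {
  echo "python3 -m robot.run -N pap -t QueryPolicyAuditAfterUnDeploy" \
       "-v POLICY_PAP_IP:localhost:30003 -v POLICY_API_IP:localhost:30002" \
       "${SUITE}"
}

build_rerun_cmd   # print the command instead of executing it
```

Printing rather than executing keeps the sketch runnable anywhere; on the CI host the emitted line could be executed directly once the compose stack is up.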
| PASS |
------------------------------------------------------------------------------
ValidateResponseTimeUpdateGroup :: Validate pdps/group response time  | PASS |
------------------------------------------------------------------------------
ValidatePolicyDeploymentTime :: Check if deployment of policy is u... | PASS |
------------------------------------------------------------------------------
ValidateResponseTimeDeletePolicy :: Check if undeployment of polic... | PASS |
------------------------------------------------------------------------------
ValidateResponseTimeDeleteGroup :: Validate delete group response ... | PASS |
------------------------------------------------------------------------------
pap.Pap-Slas                                                          | PASS |
8 tests, 8 passed, 0 failed
==============================================================================
pap                                                                   | FAIL |
30 tests, 29 passed, 1 failed
==============================================================================
Output: /tmp/tmp.9uiB25C2Gx/output.xml
Log:    /tmp/tmp.9uiB25C2Gx/log.html
Report: /tmp/tmp.9uiB25C2Gx/report.html
+ RESULT=1
+ load_set
+ _setopts=hxB
++ echo braceexpand:hashall:interactive-comments:xtrace
++ tr : ' '
+ for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
+ set +o braceexpand
+ for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
+ set +o hashall
+ for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
+ set +o interactive-comments
+ for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
+ set +o xtrace
++ echo hxB
++ sed 's/./& /g'
+ for i in $(echo "$_setopts" | sed 's/./& /g')
+ set +h
+ for i in $(echo "$_setopts" | sed 's/./& /g')
+ set +x
+ echo 'RESULT: 1'
RESULT: 1
+ exit 1
+ on_exit
+ rc=1
+ [[ -n /w/workspace/policy-pap-master-project-csit-pap ]]
+ docker ps --format 'table {{ .Names }}\t{{ .Status }}'
NAMES             STATUS
policy-apex-pdp   Up 2 minutes
policy-pap        Up 2 minutes
grafana           Up 2 minutes
kafka             Up 2 minutes
policy-api        Up 2 minutes
mariadb           Up 2 minutes
simulator         Up 2 minutes
zookeeper         Up 2 minutes
prometheus        Up 2 minutes
+
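The `relax_set` / `load_set` trace above is a common CSIT wrapper pattern: strict mode (`errexit`/`pipefail`) is relaxed around the test run so a failing suite does not abort the wrapper before log collection, the exit code is captured into `RESULT`, and strict mode is restored. A minimal sketch under stated assumptions (the real `load_set` in the log walks `SHELLOPTS` and `$-` to restore every saved flag; this sketch restores only the two options it relaxed):

```shell
#!/usr/bin/env bash
# Sketch of the relax_set/load_set pattern seen in the trace above.
set -e -o pipefail

relax_set() { set +e; set +o pipefail; }   # let the test run fail softly
load_set()  { set -e; set -o pipefail; }   # simplified restore of strict mode

relax_set
false                  # stand-in for the failing robot.run invocation
RESULT=$?              # capture the suite exit code (1 here)
load_set

echo "RESULT: $RESULT"   # prints "RESULT: 1"
```

Without the relax step, `set -e` would terminate the wrapper at the first failing test command and the `docker ps` / log-collection steps that follow in the log would never run.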
docker_stats
++ uname -s
+ '[' Linux == Darwin ']'
+ sh -c 'top -bn1 | head -3'
top - 14:24:49 up 8 min, 0 users, load average: 0.94, 1.65, 0.90
Tasks: 200 total, 1 running, 129 sleeping, 0 stopped, 0 zombie
%Cpu(s): 7.8 us, 1.6 sy, 0.0 ni, 83.3 id, 7.2 wa, 0.0 hi, 0.0 si, 0.0 st
+ echo
+ sh -c 'free -h'
              total        used        free      shared  buff/cache   available
Mem:            31G        2.7G         22G        1.3M        6.0G         28G
Swap:          1.0G          0B        1.0G
+ echo
+ docker ps --format 'table {{ .Names }}\t{{ .Status }}'
NAMES             STATUS
policy-apex-pdp   Up 2 minutes
policy-pap        Up 2 minutes
grafana           Up 2 minutes
kafka             Up 2 minutes
policy-api        Up 2 minutes
mariadb           Up 2 minutes
simulator         Up 2 minutes
zookeeper         Up 2 minutes
prometheus        Up 2 minutes
+ echo
+ docker stats --no-stream
CONTAINER ID   NAME              CPU %   MEM USAGE / LIMIT     MEM %   NET I/O           BLOCK I/O       PIDS
1df5e839969b   policy-apex-pdp   1.12%   177.9MiB / 31.41GiB   0.55%   55.9kB / 79.8kB   0B / 0B         52
9cc97d748771   policy-pap        0.54%   472.9MiB / 31.41GiB   1.47%   2.47MB / 1.05MB   0B / 149MB      66
0d922879188a   grafana           0.03%   55.11MiB / 31.41GiB   0.17%   21.7kB / 4.6kB    0B / 24.9MB     19
09cfce7a987a   kafka             1.01%   392.1MiB / 31.41GiB   1.22%   237kB / 214kB     0B / 606kB      85
7f3faa87ecf9   policy-api        0.11%   516.5MiB / 31.41GiB   1.61%   2.45MB / 1.1MB    0B / 0B         55
bd0e7e07829a   mariadb           0.02%   103.6MiB / 31.41GiB   0.32%   2.02MB / 4.88MB   11MB / 68.1MB   27
259d80ebd636   simulator         0.07%   122.9MiB / 31.41GiB   0.38%   1.5kB / 0B        98.3kB / 0B     78
db21b226f583   zookeeper         0.08%   98.07MiB / 31.41GiB   0.30%   57.3kB / 49.2kB   0B / 389kB      60
742612bf9a64   prometheus        0.00%   25.72MiB / 31.41GiB   0.08%   180kB / 10.3kB    0B / 0B         14
+ echo
+ source_safely /w/workspace/policy-pap-master-project-csit-pap/compose/stop-compose.sh
+ '[' -z /w/workspace/policy-pap-master-project-csit-pap/compose/stop-compose.sh ']'
+ relax_set
+ set +e
+ set +o pipefail
+ . /w/workspace/policy-pap-master-project-csit-pap/compose/stop-compose.sh
++ echo 'Shut down started!'
Shut down started!
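The `docker_stats` helper traced above snapshots host load, memory, and per-container resource usage right after the suite finishes, which is what makes post-mortems of flaky CSIT runs possible. A sketch under stated assumptions (the function name mirrors the trace; the `command -v` guards are additions so the sketch degrades gracefully on a host without `docker`, `top`, or `free`):

```shell
#!/usr/bin/env bash
# Sketch of the docker_stats helper from the trace above: host load and
# memory, then container status and a one-shot stats sample when the
# docker CLI is available.
docker_stats() {
  echo "---- resource snapshot ----"
  command -v top  >/dev/null 2>&1 && top -bn1 | head -3   # load average, CPU split
  command -v free >/dev/null 2>&1 && free -h              # host memory
  if command -v docker >/dev/null 2>&1; then
    docker ps --format 'table {{ .Names }}\t{{ .Status }}'
    docker stats --no-stream                              # single sample, no live refresh
  else
    echo "docker not available; skipping container stats"
  fi
  return 0
}

docker_stats
```

`--no-stream` is what keeps `docker stats` usable in a non-interactive CI log: it prints one sample and exits instead of redrawing a live table.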
++ '[' -z /w/workspace/policy-pap-master-project-csit-pap ']' ++ COMPOSE_FOLDER=/w/workspace/policy-pap-master-project-csit-pap/compose ++ cd /w/workspace/policy-pap-master-project-csit-pap/compose ++ source export-ports.sh ++ source get-versions.sh ++ echo 'Collecting logs from docker compose containers...' Collecting logs from docker compose containers... ++ docker-compose logs ++ cat docker_compose.log Attaching to policy-apex-pdp, policy-pap, grafana, kafka, policy-api, policy-db-migrator, mariadb, simulator, zookeeper, prometheus grafana | logger=settings t=2024-04-25T14:22:15.897653053Z level=info msg="Starting Grafana" version=10.4.2 commit=701c851be7a930e04fbc6ebb1cd4254da80edd4c branch=v10.4.x compiled=2024-04-25T14:22:15Z grafana | logger=settings t=2024-04-25T14:22:15.897865477Z level=info msg="Config loaded from" file=/usr/share/grafana/conf/defaults.ini grafana | logger=settings t=2024-04-25T14:22:15.897876017Z level=info msg="Config loaded from" file=/etc/grafana/grafana.ini grafana | logger=settings t=2024-04-25T14:22:15.897879567Z level=info msg="Config overridden from command line" arg="default.paths.data=/var/lib/grafana" grafana | logger=settings t=2024-04-25T14:22:15.897884117Z level=info msg="Config overridden from command line" arg="default.paths.logs=/var/log/grafana" grafana | logger=settings t=2024-04-25T14:22:15.897887547Z level=info msg="Config overridden from command line" arg="default.paths.plugins=/var/lib/grafana/plugins" grafana | logger=settings t=2024-04-25T14:22:15.897890817Z level=info msg="Config overridden from command line" arg="default.paths.provisioning=/etc/grafana/provisioning" grafana | logger=settings t=2024-04-25T14:22:15.897894087Z level=info msg="Config overridden from command line" arg="default.log.mode=console" grafana | logger=settings t=2024-04-25T14:22:15.897897567Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_DATA=/var/lib/grafana" grafana | logger=settings 
t=2024-04-25T14:22:15.897901187Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_LOGS=/var/log/grafana" grafana | logger=settings t=2024-04-25T14:22:15.897904427Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_PLUGINS=/var/lib/grafana/plugins" grafana | logger=settings t=2024-04-25T14:22:15.897908217Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_PROVISIONING=/etc/grafana/provisioning" grafana | logger=settings t=2024-04-25T14:22:15.897912627Z level=info msg=Target target=[all] grafana | logger=settings t=2024-04-25T14:22:15.897925028Z level=info msg="Path Home" path=/usr/share/grafana grafana | logger=settings t=2024-04-25T14:22:15.897927958Z level=info msg="Path Data" path=/var/lib/grafana grafana | logger=settings t=2024-04-25T14:22:15.897930838Z level=info msg="Path Logs" path=/var/log/grafana grafana | logger=settings t=2024-04-25T14:22:15.897933788Z level=info msg="Path Plugins" path=/var/lib/grafana/plugins grafana | logger=settings t=2024-04-25T14:22:15.897936948Z level=info msg="Path Provisioning" path=/etc/grafana/provisioning grafana | logger=settings t=2024-04-25T14:22:15.897940518Z level=info msg="App mode production" grafana | logger=sqlstore t=2024-04-25T14:22:15.898212702Z level=info msg="Connecting to DB" dbtype=sqlite3 grafana | logger=sqlstore t=2024-04-25T14:22:15.898232752Z level=info msg="Creating SQLite database file" path=/var/lib/grafana/grafana.db grafana | logger=migrator t=2024-04-25T14:22:15.898822641Z level=info msg="Starting DB migrations" grafana | logger=migrator t=2024-04-25T14:22:15.899727076Z level=info msg="Executing migration" id="create migration_log table" grafana | logger=migrator t=2024-04-25T14:22:15.900544379Z level=info msg="Migration successfully executed" id="create migration_log table" duration=816.943µs grafana | logger=migrator t=2024-04-25T14:22:15.904687174Z level=info msg="Executing migration" id="create user table" grafana | 
logger=migrator t=2024-04-25T14:22:15.905256583Z level=info msg="Migration successfully executed" id="create user table" duration=566.169µs grafana | logger=migrator t=2024-04-25T14:22:15.908645836Z level=info msg="Executing migration" id="add unique index user.login" grafana | logger=migrator t=2024-04-25T14:22:15.909398708Z level=info msg="Migration successfully executed" id="add unique index user.login" duration=749.501µs grafana | logger=migrator t=2024-04-25T14:22:15.917371893Z level=info msg="Executing migration" id="add unique index user.email" grafana | logger=migrator t=2024-04-25T14:22:15.91848018Z level=info msg="Migration successfully executed" id="add unique index user.email" duration=1.107907ms grafana | logger=migrator t=2024-04-25T14:22:15.923206914Z level=info msg="Executing migration" id="drop index UQE_user_login - v1" grafana | logger=migrator t=2024-04-25T14:22:15.92422418Z level=info msg="Migration successfully executed" id="drop index UQE_user_login - v1" duration=1.017166ms grafana | logger=migrator t=2024-04-25T14:22:15.929770957Z level=info msg="Executing migration" id="drop index UQE_user_email - v1" mariadb | 2024-04-25 14:22:09+00:00 [Note] [Entrypoint]: Entrypoint script for MariaDB Server 1:10.10.2+maria~ubu2204 started. mariadb | 2024-04-25 14:22:09+00:00 [Note] [Entrypoint]: Switching to dedicated user 'mysql' mariadb | 2024-04-25 14:22:09+00:00 [Note] [Entrypoint]: Entrypoint script for MariaDB Server 1:10.10.2+maria~ubu2204 started. mariadb | 2024-04-25 14:22:09+00:00 [Note] [Entrypoint]: Initializing database files mariadb | 2024-04-25 14:22:09 0 [Warning] mariadbd: io_uring_queue_init() failed with ENOSYS: check seccomp filters, and the kernel version (newer than 5.1 required) mariadb | 2024-04-25 14:22:09 0 [Warning] InnoDB: liburing disabled: falling back to innodb_use_native_aio=OFF mariadb | 2024-04-25 14:22:09 0 [Warning] 'default-authentication-plugin' is MySQL 5.6 / 5.7 compatible option. 
To be implemented in later versions. mariadb | mariadb | mariadb | PLEASE REMEMBER TO SET A PASSWORD FOR THE MariaDB root USER ! mariadb | To do so, start the server, then issue the following command: mariadb | mariadb | '/usr/bin/mysql_secure_installation' mariadb | mariadb | which will also give you the option of removing the test mariadb | databases and anonymous user created by default. This is mariadb | strongly recommended for production servers. mariadb | mariadb | See the MariaDB Knowledgebase at https://mariadb.com/kb mariadb | mariadb | Please report any problems at https://mariadb.org/jira mariadb | mariadb | The latest information about MariaDB is available at https://mariadb.org/. mariadb | mariadb | Consider joining MariaDB's strong and vibrant community: mariadb | https://mariadb.org/get-involved/ mariadb | mariadb | 2024-04-25 14:22:11+00:00 [Note] [Entrypoint]: Database files initialized mariadb | 2024-04-25 14:22:11+00:00 [Note] [Entrypoint]: Starting temporary server mariadb | 2024-04-25 14:22:11+00:00 [Note] [Entrypoint]: Waiting for server startup mariadb | 2024-04-25 14:22:11 0 [Note] mariadbd (server 10.10.2-MariaDB-1:10.10.2+maria~ubu2204) starting as process 98 ... 
mariadb | 2024-04-25 14:22:11 0 [Note] InnoDB: Compressed tables use zlib 1.2.11 mariadb | 2024-04-25 14:22:11 0 [Note] InnoDB: Number of transaction pools: 1 grafana | logger=migrator t=2024-04-25T14:22:15.930383138Z level=info msg="Migration successfully executed" id="drop index UQE_user_email - v1" duration=608.201µs grafana | logger=migrator t=2024-04-25T14:22:15.939269887Z level=info msg="Executing migration" id="Rename table user to user_v1 - v1" grafana | logger=migrator t=2024-04-25T14:22:15.943112687Z level=info msg="Migration successfully executed" id="Rename table user to user_v1 - v1" duration=3.84171ms grafana | logger=migrator t=2024-04-25T14:22:15.94839054Z level=info msg="Executing migration" id="create user table v2" grafana | logger=migrator t=2024-04-25T14:22:15.949187022Z level=info msg="Migration successfully executed" id="create user table v2" duration=796.122µs grafana | logger=migrator t=2024-04-25T14:22:15.952346612Z level=info msg="Executing migration" id="create index UQE_user_login - v2" grafana | logger=migrator t=2024-04-25T14:22:15.95346196Z level=info msg="Migration successfully executed" id="create index UQE_user_login - v2" duration=1.115159ms grafana | logger=migrator t=2024-04-25T14:22:15.959707257Z level=info msg="Executing migration" id="create index UQE_user_email - v2" grafana | logger=migrator t=2024-04-25T14:22:15.960810064Z level=info msg="Migration successfully executed" id="create index UQE_user_email - v2" duration=1.102427ms grafana | logger=migrator t=2024-04-25T14:22:15.965488108Z level=info msg="Executing migration" id="copy data_source v1 to v2" grafana | logger=migrator t=2024-04-25T14:22:15.965881364Z level=info msg="Migration successfully executed" id="copy data_source v1 to v2" duration=393.326µs grafana | logger=migrator t=2024-04-25T14:22:15.97005605Z level=info msg="Executing migration" id="Drop old table user_v1" grafana | logger=migrator t=2024-04-25T14:22:15.970781251Z level=info msg="Migration 
successfully executed" id="Drop old table user_v1" duration=720.971µs grafana | logger=migrator t=2024-04-25T14:22:15.977684149Z level=info msg="Executing migration" id="Add column help_flags1 to user table" grafana | logger=migrator t=2024-04-25T14:22:15.979522348Z level=info msg="Migration successfully executed" id="Add column help_flags1 to user table" duration=1.842139ms grafana | logger=migrator t=2024-04-25T14:22:15.985893858Z level=info msg="Executing migration" id="Update user table charset" grafana | logger=migrator t=2024-04-25T14:22:15.985919928Z level=info msg="Migration successfully executed" id="Update user table charset" duration=26.81µs grafana | logger=migrator t=2024-04-25T14:22:15.990776215Z level=info msg="Executing migration" id="Add last_seen_at column to user" grafana | logger=migrator t=2024-04-25T14:22:15.992456182Z level=info msg="Migration successfully executed" id="Add last_seen_at column to user" duration=1.680297ms grafana | logger=migrator t=2024-04-25T14:22:15.998134041Z level=info msg="Executing migration" id="Add missing user data" grafana | logger=migrator t=2024-04-25T14:22:15.998369705Z level=info msg="Migration successfully executed" id="Add missing user data" duration=236.064µs grafana | logger=migrator t=2024-04-25T14:22:16.064084864Z level=info msg="Executing migration" id="Add is_disabled column to user" mariadb | 2024-04-25 14:22:11 0 [Note] InnoDB: Using crc32 + pclmulqdq instructions mariadb | 2024-04-25 14:22:11 0 [Note] mariadbd: O_TMPFILE is not supported on /tmp (disabling future attempts) mariadb | 2024-04-25 14:22:11 0 [Warning] mariadbd: io_uring_queue_init() failed with ENOSYS: check seccomp filters, and the kernel version (newer than 5.1 required) mariadb | 2024-04-25 14:22:11 0 [Warning] InnoDB: liburing disabled: falling back to innodb_use_native_aio=OFF mariadb | 2024-04-25 14:22:11 0 [Note] InnoDB: Initializing buffer pool, total size = 128.000MiB, chunk size = 2.000MiB mariadb | 2024-04-25 14:22:11 0 [Note] 
InnoDB: Completed initialization of buffer pool mariadb | 2024-04-25 14:22:11 0 [Note] InnoDB: File system buffers for log disabled (block size=512 bytes) mariadb | 2024-04-25 14:22:11 0 [Note] InnoDB: 128 rollback segments are active. mariadb | 2024-04-25 14:22:11 0 [Note] InnoDB: Setting file './ibtmp1' size to 12.000MiB. Physically writing the file full; Please wait ... mariadb | 2024-04-25 14:22:11 0 [Note] InnoDB: File './ibtmp1' size is now 12.000MiB. mariadb | 2024-04-25 14:22:11 0 [Note] InnoDB: log sequence number 46590; transaction id 14 mariadb | 2024-04-25 14:22:11 0 [Note] Plugin 'FEEDBACK' is disabled. mariadb | 2024-04-25 14:22:11 0 [Warning] 'default-authentication-plugin' is MySQL 5.6 / 5.7 compatible option. To be implemented in later versions. mariadb | 2024-04-25 14:22:11 0 [Warning] 'user' entry 'root@mariadb' ignored in --skip-name-resolve mode. mariadb | 2024-04-25 14:22:11 0 [Warning] 'proxies_priv' entry '@% root@mariadb' ignored in --skip-name-resolve mode. mariadb | 2024-04-25 14:22:11 0 [Note] mariadbd: ready for connections. mariadb | Version: '10.10.2-MariaDB-1:10.10.2+maria~ubu2204' socket: '/run/mysqld/mysqld.sock' port: 0 mariadb.org binary distribution mariadb | 2024-04-25 14:22:12+00:00 [Note] [Entrypoint]: Temporary server started. mariadb | 2024-04-25 14:22:14+00:00 [Note] [Entrypoint]: Creating user policy_user mariadb | 2024-04-25 14:22:14+00:00 [Note] [Entrypoint]: Securing system users (equivalent to running mysql_secure_installation) mariadb | mariadb | mariadb | 2024-04-25 14:22:14+00:00 [Note] [Entrypoint]: /usr/local/bin/docker-entrypoint.sh: running /docker-entrypoint-initdb.d/db.sh mariadb | 2024-04-25 14:22:14+00:00 [Warn] [Entrypoint]: /usr/local/bin/docker-entrypoint.sh: ignoring /docker-entrypoint-initdb.d/db.conf mariadb | #!/bin/bash -xv mariadb | # Copyright 2019,2021 AT&T Intellectual Property. All rights reserved mariadb | # Modifications Copyright (c) 2022 Nordix Foundation. 
mariadb | #
mariadb | # Licensed under the Apache License, Version 2.0 (the "License");
mariadb | # you may not use this file except in compliance with the License.
mariadb | # You may obtain a copy of the License at
mariadb | #
mariadb | # http://www.apache.org/licenses/LICENSE-2.0
mariadb | #
mariadb | # Unless required by applicable law or agreed to in writing, software
mariadb | # distributed under the License is distributed on an "AS IS" BASIS,
mariadb | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
mariadb | # See the License for the specific language governing permissions and
mariadb | # limitations under the License.
mariadb |
mariadb | for db in migration pooling policyadmin operationshistory clampacm policyclamp
mariadb | do
mariadb | mysql -uroot -p"${MYSQL_ROOT_PASSWORD}" --execute "CREATE DATABASE IF NOT EXISTS ${db};"
mariadb | mysql -uroot -p"${MYSQL_ROOT_PASSWORD}" --execute "GRANT ALL PRIVILEGES ON \`${db}\`.* TO '${MYSQL_USER}'@'%' ;"
mariadb | done
mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp
mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS migration;'
mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `migration`.* TO '\''policy_user'\''@'\''%'\'' ;'
mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp
mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS pooling;'
mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `pooling`.* TO '\''policy_user'\''@'\''%'\'' ;'
mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp
mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS policyadmin;'
mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `policyadmin`.* TO '\''policy_user'\''@'\''%'\'' ;'
mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp
mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS operationshistory;'
mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `operationshistory`.* TO '\''policy_user'\''@'\''%'\'' ;'
mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp
mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS clampacm;'
mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `clampacm`.* TO '\''policy_user'\''@'\''%'\'' ;'
mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp
mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS policyclamp;'
mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `policyclamp`.* TO '\''policy_user'\''@'\''%'\'' ;'
mariadb |
mariadb | mysql -uroot -p"${MYSQL_ROOT_PASSWORD}" --execute "FLUSH PRIVILEGES;"
mariadb | + mysql -uroot -psecret --execute 'FLUSH PRIVILEGES;'
mariadb | mysql -u"${MYSQL_USER}" -p"${MYSQL_PASSWORD}" -f policyclamp < /tmp/policy-clamp-create-tables.sql
mariadb | + mysql -upolicy_user -ppolicy_user -f policyclamp
mariadb |
mariadb | 2024-04-25 14:22:15+00:00 [Note] [Entrypoint]: Stopping temporary server
mariadb | 2024-04-25 14:22:15 0 [Note] mariadbd (initiated by: unknown): Normal shutdown
mariadb | 2024-04-25 14:22:15 0 [Note] InnoDB: FTS optimize thread exiting.
mariadb | 2024-04-25 14:22:15 0 [Note] InnoDB: Starting shutdown...
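The `db.sh` init script echoed by the mariadb entrypoint above is a simple provisioning loop: for each Policy Framework database it issues a `CREATE DATABASE IF NOT EXISTS` followed by a `GRANT` to the policy user. A stand-alone sketch of that loop, with the `mysql` client stubbed out by an `echo` function (an assumption made here so the loop can be exercised without a running MariaDB; the real script authenticates with `MYSQL_ROOT_PASSWORD` against the temporary server):

```shell
#!/usr/bin/env bash
# Sketch of the db.sh provisioning loop shown in the mariadb log above.
MYSQL_USER=policy_user

# Stub standing in for the mariadb client; the real script connects as root.
mysql() { echo "mysql $*"; }

provision_dbs() {
  local db
  for db in migration pooling policyadmin operationshistory clampacm policyclamp
  do
    mysql --execute "CREATE DATABASE IF NOT EXISTS ${db};"
    mysql --execute "GRANT ALL PRIVILEGES ON \`${db}\`.* TO '${MYSQL_USER}'@'%' ;"
  done
}

provision_dbs
```

`IF NOT EXISTS` makes the loop idempotent, which matters because the entrypoint re-runs init scripts only on a fresh data directory but operators may invoke the script by hand.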
mariadb | 2024-04-25 14:22:15 0 [Note] InnoDB: Dumping buffer pool(s) to /var/lib/mysql/ib_buffer_pool mariadb | 2024-04-25 14:22:15 0 [Note] InnoDB: Buffer pool(s) dump completed at 240425 14:22:15 mariadb | 2024-04-25 14:22:15 0 [Note] InnoDB: Removed temporary tablespace data file: "./ibtmp1" mariadb | 2024-04-25 14:22:15 0 [Note] InnoDB: Shutdown completed; log sequence number 328053; transaction id 298 mariadb | 2024-04-25 14:22:15 0 [Note] mariadbd: Shutdown complete mariadb | mariadb | 2024-04-25 14:22:15+00:00 [Note] [Entrypoint]: Temporary server stopped mariadb | mariadb | 2024-04-25 14:22:15+00:00 [Note] [Entrypoint]: MariaDB init process done. Ready for start up. mariadb | mariadb | 2024-04-25 14:22:15 0 [Note] mariadbd (server 10.10.2-MariaDB-1:10.10.2+maria~ubu2204) starting as process 1 ... mariadb | 2024-04-25 14:22:15 0 [Note] InnoDB: Compressed tables use zlib 1.2.11 mariadb | 2024-04-25 14:22:15 0 [Note] InnoDB: Number of transaction pools: 1 mariadb | 2024-04-25 14:22:15 0 [Note] InnoDB: Using crc32 + pclmulqdq instructions mariadb | 2024-04-25 14:22:15 0 [Note] mariadbd: O_TMPFILE is not supported on /tmp (disabling future attempts) mariadb | 2024-04-25 14:22:15 0 [Warning] mariadbd: io_uring_queue_init() failed with ENOSYS: check seccomp filters, and the kernel version (newer than 5.1 required) mariadb | 2024-04-25 14:22:15 0 [Warning] InnoDB: liburing disabled: falling back to innodb_use_native_aio=OFF mariadb | 2024-04-25 14:22:15 0 [Note] InnoDB: Initializing buffer pool, total size = 128.000MiB, chunk size = 2.000MiB mariadb | 2024-04-25 14:22:15 0 [Note] InnoDB: Completed initialization of buffer pool mariadb | 2024-04-25 14:22:15 0 [Note] InnoDB: File system buffers for log disabled (block size=512 bytes) mariadb | 2024-04-25 14:22:16 0 [Note] InnoDB: 128 rollback segments are active. mariadb | 2024-04-25 14:22:16 0 [Note] InnoDB: Setting file './ibtmp1' size to 12.000MiB. Physically writing the file full; Please wait ... 
mariadb | 2024-04-25 14:22:16 0 [Note] InnoDB: File './ibtmp1' size is now 12.000MiB. mariadb | 2024-04-25 14:22:16 0 [Note] InnoDB: log sequence number 328053; transaction id 299 mariadb | 2024-04-25 14:22:16 0 [Note] Plugin 'FEEDBACK' is disabled. mariadb | 2024-04-25 14:22:16 0 [Note] InnoDB: Loading buffer pool(s) from /var/lib/mysql/ib_buffer_pool mariadb | 2024-04-25 14:22:16 0 [Warning] 'default-authentication-plugin' is MySQL 5.6 / 5.7 compatible option. To be implemented in later versions. mariadb | 2024-04-25 14:22:16 0 [Warning] You need to use --log-bin to make --expire-logs-days or --binlog-expire-logs-seconds work. mariadb | 2024-04-25 14:22:16 0 [Note] Server socket created on IP: '0.0.0.0'. mariadb | 2024-04-25 14:22:16 0 [Note] Server socket created on IP: '::'. mariadb | 2024-04-25 14:22:16 0 [Note] mariadbd: ready for connections. mariadb | Version: '10.10.2-MariaDB-1:10.10.2+maria~ubu2204' socket: '/run/mysqld/mysqld.sock' port: 3306 mariadb.org binary distribution mariadb | 2024-04-25 14:22:16 0 [Note] InnoDB: Buffer pool(s) load completed at 240425 14:22:16 mariadb | 2024-04-25 14:22:17 3 [Warning] Aborted connection 3 to db: 'unconnected' user: 'unauthenticated' host: '172.17.0.7' (This connection closed normally without authentication) mariadb | 2024-04-25 14:22:17 4 [Warning] Aborted connection 4 to db: 'unconnected' user: 'unauthenticated' host: '172.17.0.6' (This connection closed normally without authentication) mariadb | 2024-04-25 14:22:17 41 [Warning] Aborted connection 41 to db: 'unconnected' user: 'unauthenticated' host: '172.17.0.10' (This connection closed normally without authentication) mariadb | 2024-04-25 14:22:19 58 [Warning] Aborted connection 58 to db: 'unconnected' user: 'unauthenticated' host: '172.17.0.11' (This connection closed normally without authentication) kafka | ===> User kafka | uid=1000(appuser) gid=1000(appuser) groups=1000(appuser) kafka | ===> Configuring ... kafka | Running in Zookeeper mode... 
kafka | ===> Running preflight checks ... kafka | ===> Check if /var/lib/kafka/data is writable ... kafka | ===> Check if Zookeeper is healthy ... kafka | [2024-04-25 14:22:20,970] INFO Client environment:zookeeper.version=3.8.3-6ad6d364c7c0bcf0de452d54ebefa3058098ab56, built on 2023-10-05 10:34 UTC (org.apache.zookeeper.ZooKeeper) kafka | [2024-04-25 14:22:20,970] INFO Client environment:host.name=09cfce7a987a (org.apache.zookeeper.ZooKeeper) kafka | [2024-04-25 14:22:20,970] INFO Client environment:java.version=11.0.22 (org.apache.zookeeper.ZooKeeper) kafka | [2024-04-25 14:22:20,971] INFO Client environment:java.vendor=Azul Systems, Inc. (org.apache.zookeeper.ZooKeeper) kafka | [2024-04-25 14:22:20,971] INFO Client environment:java.home=/usr/lib/jvm/java-11-zulu-openjdk-ca (org.apache.zookeeper.ZooKeeper) kafka | [2024-04-25 14:22:20,971] INFO Client environment:java.class.path=/usr/share/java/cp-base-new/scala-logging_2.13-3.9.4.jar:/usr/share/java/cp-base-new/jackson-datatype-jdk8-2.14.2.jar:/usr/share/java/cp-base-new/kafka-storage-api-7.6.1-ccs.jar:/usr/share/java/cp-base-new/commons-validator-1.7.jar:/usr/share/java/cp-base-new/logredactor-1.0.12.jar:/usr/share/java/cp-base-new/jolokia-core-1.7.1.jar:/usr/share/java/cp-base-new/re2j-1.6.jar:/usr/share/java/cp-base-new/scala-library-2.13.10.jar:/usr/share/java/cp-base-new/commons-io-2.11.0.jar:/usr/share/java/cp-base-new/kafka-clients-7.6.1-ccs.jar:/usr/share/java/cp-base-new/utility-belt-7.6.1.jar:/usr/share/java/cp-base-new/commons-cli-1.4.jar:/usr/share/java/cp-base-new/slf4j-reload4j-1.7.36.jar:/usr/share/java/cp-base-new/kafka-server-common-7.6.1-ccs.jar:/usr/share/java/cp-base-new/kafka-group-coordinator-7.6.1-ccs.jar:/usr/share/java/cp-base-new/common-utils-7.6.1.jar:/usr/share/java/cp-base-new/jackson-annotations-2.14.2.jar:/usr/share/java/cp-base-new/json-simple-1.1.1.jar:/usr/share/java/cp-base-new/jackson-module-scala_2.13-2.14.2.jar:/usr/share/java/cp-base-new/scala-java8-compat_2.13-1.0.2.jar:/us
r/share/java/cp-base-new/minimal-json-0.9.5.jar:/usr/share/java/cp-base-new/kafka-storage-7.6.1-ccs.jar:/usr/share/java/cp-base-new/zstd-jni-1.5.5-1.jar:/usr/share/java/cp-base-new/jackson-dataformat-yaml-2.14.2.jar:/usr/share/java/cp-base-new/slf4j-api-1.7.36.jar:/usr/share/java/cp-base-new/paranamer-2.8.jar:/usr/share/java/cp-base-new/commons-beanutils-1.9.4.jar:/usr/share/java/cp-base-new/jmx_prometheus_javaagent-0.18.0.jar:/usr/share/java/cp-base-new/reload4j-1.2.25.jar:/usr/share/java/cp-base-new/jackson-core-2.14.2.jar:/usr/share/java/cp-base-new/disk-usage-agent-7.6.1.jar:/usr/share/java/cp-base-new/commons-digester-2.1.jar:/usr/share/java/cp-base-new/argparse4j-0.7.0.jar:/usr/share/java/cp-base-new/audience-annotations-0.12.0.jar:/usr/share/java/cp-base-new/kafka-raft-7.6.1-ccs.jar:/usr/share/java/cp-base-new/gson-2.9.0.jar:/usr/share/java/cp-base-new/snakeyaml-2.0.jar:/usr/share/java/cp-base-new/jopt-simple-5.0.4.jar:/usr/share/java/cp-base-new/error_prone_annotations-2.10.0.jar:/usr/share/java/cp-base-new/lz4-java-1.8.0.jar:/usr/share/java/cp-base-new/logredactor-metrics-1.0.12.jar:/usr/share/java/cp-base-new/checker-qual-3.19.0.jar:/usr/share/java/cp-base-new/kafka-metadata-7.6.1-ccs.jar:/usr/share/java/cp-base-new/pcollections-4.0.1.jar:/usr/share/java/cp-base-new/commons-logging-1.2.jar:/usr/share/java/cp-base-new/scala-reflect-2.13.10.jar:/usr/share/java/cp-base-new/scala-collection-compat_2.13-2.10.0.jar:/usr/share/java/cp-base-new/metrics-core-2.2.0.jar:/usr/share/java/cp-base-new/zookeeper-jute-3.8.3.jar:/usr/share/java/cp-base-new/jackson-dataformat-csv-2.14.2.jar:/usr/share/java/cp-base-new/commons-collections-3.2.2.jar:/usr/share/java/cp-base-new/caffeine-2.9.3.jar:/usr/share/java/cp-base-new/jolokia-jvm-1.7.1.jar:/usr/share/java/cp-base-new/kafka-tools-api-7.6.1-ccs.jar:/usr/share/java/cp-base-new/zookeeper-3.8.3.jar:/usr/share/java/cp-base-new/metrics-core-4.1.12.1.jar:/usr/share/java/cp-base-new/kafka_2.13-7.6.1-ccs.jar:/usr/share/java/cp-base
-new/jackson-databind-2.14.2.jar:/usr/share/java/cp-base-new/snappy-java-1.1.10.5.jar:/usr/share/java/cp-base-new/jose4j-0.9.5.jar (org.apache.zookeeper.ZooKeeper) kafka | [2024-04-25 14:22:20,971] INFO Client environment:java.library.path=/usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.ZooKeeper) kafka | [2024-04-25 14:22:20,971] INFO Client environment:java.io.tmpdir=/tmp (org.apache.zookeeper.ZooKeeper) kafka | [2024-04-25 14:22:20,971] INFO Client environment:java.compiler= (org.apache.zookeeper.ZooKeeper) kafka | [2024-04-25 14:22:20,971] INFO Client environment:os.name=Linux (org.apache.zookeeper.ZooKeeper) kafka | [2024-04-25 14:22:20,971] INFO Client environment:os.arch=amd64 (org.apache.zookeeper.ZooKeeper) kafka | [2024-04-25 14:22:20,971] INFO Client environment:os.version=4.15.0-192-generic (org.apache.zookeeper.ZooKeeper) kafka | [2024-04-25 14:22:20,971] INFO Client environment:user.name=appuser (org.apache.zookeeper.ZooKeeper) kafka | [2024-04-25 14:22:20,971] INFO Client environment:user.home=/home/appuser (org.apache.zookeeper.ZooKeeper) kafka | [2024-04-25 14:22:20,971] INFO Client environment:user.dir=/home/appuser (org.apache.zookeeper.ZooKeeper) kafka | [2024-04-25 14:22:20,971] INFO Client environment:os.memory.free=493MB (org.apache.zookeeper.ZooKeeper) kafka | [2024-04-25 14:22:20,971] INFO Client environment:os.memory.max=8042MB (org.apache.zookeeper.ZooKeeper) kafka | [2024-04-25 14:22:20,971] INFO Client environment:os.memory.total=504MB (org.apache.zookeeper.ZooKeeper) kafka | [2024-04-25 14:22:20,974] INFO Initiating client connection, connectString=zookeeper:2181 sessionTimeout=40000 watcher=io.confluent.admin.utils.ZookeeperConnectionWatcher@b7f23d9 (org.apache.zookeeper.ZooKeeper) kafka | [2024-04-25 14:22:20,977] INFO Setting -D jdk.tls.rejectClientInitiatedRenegotiation=true to disable client-initiated TLS renegotiation (org.apache.zookeeper.common.X509Util) kafka | [2024-04-25 14:22:20,980] INFO 
jute.maxbuffer value is 1048575 Bytes (org.apache.zookeeper.ClientCnxnSocket) kafka | [2024-04-25 14:22:20,986] INFO zookeeper.request.timeout value is 0. feature enabled=false (org.apache.zookeeper.ClientCnxn) kafka | [2024-04-25 14:22:21,006] INFO Opening socket connection to server zookeeper/172.17.0.4:2181. (org.apache.zookeeper.ClientCnxn) kafka | [2024-04-25 14:22:21,007] INFO SASL config status: Will not attempt to authenticate using SASL (unknown error) (org.apache.zookeeper.ClientCnxn) kafka | [2024-04-25 14:22:21,015] INFO Socket connection established, initiating session, client: /172.17.0.8:53854, server: zookeeper/172.17.0.4:2181 (org.apache.zookeeper.ClientCnxn) kafka | [2024-04-25 14:22:21,114] INFO Session establishment complete on server zookeeper/172.17.0.4:2181, session id = 0x1000005a2800000, negotiated timeout = 40000 (org.apache.zookeeper.ClientCnxn) kafka | [2024-04-25 14:22:21,252] INFO Session: 0x1000005a2800000 closed (org.apache.zookeeper.ZooKeeper) kafka | [2024-04-25 14:22:21,252] INFO EventThread shut down for session: 0x1000005a2800000 (org.apache.zookeeper.ClientCnxn) kafka | Using log4j config /etc/kafka/log4j.properties kafka | ===> Launching ... kafka | ===> Launching kafka ... 
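The readiness probe logged above (a `ZookeeperConnectionWatcher` that opens a session against `zookeeper:2181`, sees it established, and immediately closes it) is essentially a connect-until-ready check. A minimal sketch of the same idea, assuming only a host and port — this is a hypothetical `wait_for_port` helper for illustration, not the actual cp-base-new code, and it checks plain TCP reachability rather than a real ZooKeeper session:

```python
import socket
import time

def wait_for_port(host: str, port: int, timeout_s: float = 30.0) -> bool:
    """Retry a plain TCP connect until the service accepts, or give up."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        try:
            # create_connection resolves the name and attempts the connect
            with socket.create_connection((host, port), timeout=2.0):
                return True  # port is accepting connections
        except OSError:
            time.sleep(0.5)  # refused or timed out; back off and retry
    return False
```

Once the probe succeeds the entrypoint proceeds to launch the broker, which is why the session above is closed right after establishment.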
kafka | [2024-04-25 14:22:22,028] INFO Registered kafka:type=kafka.Log4jController MBean (kafka.utils.Log4jControllerRegistration$) kafka | [2024-04-25 14:22:22,349] INFO Setting -D jdk.tls.rejectClientInitiatedRenegotiation=true to disable client-initiated TLS renegotiation (org.apache.zookeeper.common.X509Util) grafana | logger=migrator t=2024-04-25T14:22:16.065881588Z level=info msg="Migration successfully executed" id="Add is_disabled column to user" duration=1.796934ms grafana | logger=migrator t=2024-04-25T14:22:16.073115232Z level=info msg="Executing migration" id="Add index user.login/user.email" grafana | logger=migrator t=2024-04-25T14:22:16.073816991Z level=info msg="Migration successfully executed" id="Add index user.login/user.email" duration=701.559µs grafana | logger=migrator t=2024-04-25T14:22:16.078256938Z level=info msg="Executing migration" id="Add is_service_account column to user" grafana | logger=migrator t=2024-04-25T14:22:16.080014211Z level=info msg="Migration successfully executed" id="Add is_service_account column to user" duration=1.765173ms grafana | logger=migrator t=2024-04-25T14:22:16.08450312Z level=info msg="Executing migration" id="Update is_service_account column to nullable" grafana | logger=migrator t=2024-04-25T14:22:16.098214547Z level=info msg="Migration successfully executed" id="Update is_service_account column to nullable" duration=13.704117ms grafana | logger=migrator t=2024-04-25T14:22:16.102665305Z level=info msg="Executing migration" id="Add uid column to user" grafana | logger=migrator t=2024-04-25T14:22:16.103719398Z level=info msg="Migration successfully executed" id="Add uid column to user" duration=1.053663ms grafana | logger=migrator t=2024-04-25T14:22:16.231538767Z level=info msg="Executing migration" id="Update uid column values for users" grafana | logger=migrator t=2024-04-25T14:22:16.231997293Z level=info msg="Migration successfully executed" id="Update uid column values for users" duration=372.325µs 
grafana | logger=migrator t=2024-04-25T14:22:16.24711217Z level=info msg="Executing migration" id="Add unique index user_uid" grafana | logger=migrator t=2024-04-25T14:22:16.248234844Z level=info msg="Migration successfully executed" id="Add unique index user_uid" duration=1.149465ms grafana | logger=migrator t=2024-04-25T14:22:16.259712693Z level=info msg="Executing migration" id="update login field with orgid to allow for multiple service accounts with same name across orgs" grafana | logger=migrator t=2024-04-25T14:22:16.260357211Z level=info msg="Migration successfully executed" id="update login field with orgid to allow for multiple service accounts with same name across orgs" duration=643.718µs grafana | logger=migrator t=2024-04-25T14:22:16.268897511Z level=info msg="Executing migration" id="create temp user table v1-7" grafana | logger=migrator t=2024-04-25T14:22:16.270145488Z level=info msg="Migration successfully executed" id="create temp user table v1-7" duration=1.247937ms grafana | logger=migrator t=2024-04-25T14:22:16.273858616Z level=info msg="Executing migration" id="create index IDX_temp_user_email - v1-7" grafana | logger=migrator t=2024-04-25T14:22:16.274950241Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_email - v1-7" duration=1.091465ms grafana | logger=migrator t=2024-04-25T14:22:16.279906115Z level=info msg="Executing migration" id="create index IDX_temp_user_org_id - v1-7" grafana | logger=migrator t=2024-04-25T14:22:16.280741296Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_org_id - v1-7" duration=836.421µs grafana | logger=migrator t=2024-04-25T14:22:16.286694003Z level=info msg="Executing migration" id="create index IDX_temp_user_code - v1-7" grafana | logger=migrator t=2024-04-25T14:22:16.287367722Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_code - v1-7" duration=673.569µs grafana | logger=migrator t=2024-04-25T14:22:16.290540833Z 
level=info msg="Executing migration" id="create index IDX_temp_user_status - v1-7" grafana | logger=migrator t=2024-04-25T14:22:16.291719038Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_status - v1-7" duration=1.177986ms grafana | logger=migrator t=2024-04-25T14:22:16.295054912Z level=info msg="Executing migration" id="Update temp_user table charset" grafana | logger=migrator t=2024-04-25T14:22:16.295087232Z level=info msg="Migration successfully executed" id="Update temp_user table charset" duration=33.65µs grafana | logger=migrator t=2024-04-25T14:22:16.301594706Z level=info msg="Executing migration" id="drop index IDX_temp_user_email - v1" grafana | logger=migrator t=2024-04-25T14:22:16.302221534Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_email - v1" duration=626.998µs grafana | logger=migrator t=2024-04-25T14:22:16.308871571Z level=info msg="Executing migration" id="drop index IDX_temp_user_org_id - v1" grafana | logger=migrator t=2024-04-25T14:22:16.309647791Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_org_id - v1" duration=774.71µs grafana | logger=migrator t=2024-04-25T14:22:16.314720407Z level=info msg="Executing migration" id="drop index IDX_temp_user_code - v1" grafana | logger=migrator t=2024-04-25T14:22:16.316028093Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_code - v1" duration=1.308506ms grafana | logger=migrator t=2024-04-25T14:22:16.32113311Z level=info msg="Executing migration" id="drop index IDX_temp_user_status - v1" grafana | logger=migrator t=2024-04-25T14:22:16.322602599Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_status - v1" duration=1.468969ms grafana | logger=migrator t=2024-04-25T14:22:16.326716922Z level=info msg="Executing migration" id="Rename table temp_user to temp_user_tmp_qwerty - v1" grafana | logger=migrator t=2024-04-25T14:22:16.330520551Z level=info 
msg="Migration successfully executed" id="Rename table temp_user to temp_user_tmp_qwerty - v1" duration=3.803929ms grafana | logger=migrator t=2024-04-25T14:22:16.33422371Z level=info msg="Executing migration" id="create temp_user v2" grafana | logger=migrator t=2024-04-25T14:22:16.335116181Z level=info msg="Migration successfully executed" id="create temp_user v2" duration=891.801µs grafana | logger=migrator t=2024-04-25T14:22:16.339808583Z level=info msg="Executing migration" id="create index IDX_temp_user_email - v2" grafana | logger=migrator t=2024-04-25T14:22:16.340641053Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_email - v2" duration=832µs grafana | logger=migrator t=2024-04-25T14:22:16.34428319Z level=info msg="Executing migration" id="create index IDX_temp_user_org_id - v2" grafana | logger=migrator t=2024-04-25T14:22:16.345097921Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_org_id - v2" duration=814.421µs grafana | logger=migrator t=2024-04-25T14:22:16.348663587Z level=info msg="Executing migration" id="create index IDX_temp_user_code - v2" grafana | logger=migrator t=2024-04-25T14:22:16.350383399Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_code - v2" duration=1.719482ms grafana | logger=migrator t=2024-04-25T14:22:16.354690535Z level=info msg="Executing migration" id="create index IDX_temp_user_status - v2" grafana | logger=migrator t=2024-04-25T14:22:16.355516376Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_status - v2" duration=825.781µs grafana | logger=migrator t=2024-04-25T14:22:16.360372949Z level=info msg="Executing migration" id="copy temp_user v1 to v2" grafana | logger=migrator t=2024-04-25T14:22:16.360782684Z level=info msg="Migration successfully executed" id="copy temp_user v1 to v2" duration=409.695µs grafana | logger=migrator t=2024-04-25T14:22:16.364179118Z level=info msg="Executing migration" 
id="drop temp_user_tmp_qwerty" grafana | logger=migrator t=2024-04-25T14:22:16.3650242Z level=info msg="Migration successfully executed" id="drop temp_user_tmp_qwerty" duration=844.582µs grafana | logger=migrator t=2024-04-25T14:22:16.368403093Z level=info msg="Executing migration" id="Set created for temp users that will otherwise prematurely expire" grafana | logger=migrator t=2024-04-25T14:22:16.368981261Z level=info msg="Migration successfully executed" id="Set created for temp users that will otherwise prematurely expire" duration=578.348µs grafana | logger=migrator t=2024-04-25T14:22:16.373914075Z level=info msg="Executing migration" id="create star table" grafana | logger=migrator t=2024-04-25T14:22:16.374664494Z level=info msg="Migration successfully executed" id="create star table" duration=747.539µs grafana | logger=migrator t=2024-04-25T14:22:16.378606046Z level=info msg="Executing migration" id="add unique index star.user_id_dashboard_id" grafana | logger=migrator t=2024-04-25T14:22:16.379467337Z level=info msg="Migration successfully executed" id="add unique index star.user_id_dashboard_id" duration=860.741µs grafana | logger=migrator t=2024-04-25T14:22:16.38279508Z level=info msg="Executing migration" id="create org table v1" grafana | logger=migrator t=2024-04-25T14:22:16.38361111Z level=info msg="Migration successfully executed" id="create org table v1" duration=814.5µs grafana | logger=migrator t=2024-04-25T14:22:16.386924144Z level=info msg="Executing migration" id="create index UQE_org_name - v1" grafana | logger=migrator t=2024-04-25T14:22:16.387682614Z level=info msg="Migration successfully executed" id="create index UQE_org_name - v1" duration=757.89µs grafana | logger=migrator t=2024-04-25T14:22:16.392306223Z level=info msg="Executing migration" id="create org_user table v1" grafana | logger=migrator t=2024-04-25T14:22:16.393056253Z level=info msg="Migration successfully executed" id="create org_user table v1" duration=750.02µs grafana | 
logger=migrator t=2024-04-25T14:22:16.396472047Z level=info msg="Executing migration" id="create index IDX_org_user_org_id - v1" grafana | logger=migrator t=2024-04-25T14:22:16.397310128Z level=info msg="Migration successfully executed" id="create index IDX_org_user_org_id - v1" duration=836.831µs grafana | logger=migrator t=2024-04-25T14:22:16.4006064Z level=info msg="Executing migration" id="create index UQE_org_user_org_id_user_id - v1" grafana | logger=migrator t=2024-04-25T14:22:16.402121181Z level=info msg="Migration successfully executed" id="create index UQE_org_user_org_id_user_id - v1" duration=1.51384ms grafana | logger=migrator t=2024-04-25T14:22:16.405798608Z level=info msg="Executing migration" id="create index IDX_org_user_user_id - v1" grafana | logger=migrator t=2024-04-25T14:22:16.406571469Z level=info msg="Migration successfully executed" id="create index IDX_org_user_user_id - v1" duration=772.641µs grafana | logger=migrator t=2024-04-25T14:22:16.411091867Z level=info msg="Executing migration" id="Update org table charset" grafana | logger=migrator t=2024-04-25T14:22:16.411136607Z level=info msg="Migration successfully executed" id="Update org table charset" duration=43.7µs grafana | logger=migrator t=2024-04-25T14:22:16.415476684Z level=info msg="Executing migration" id="Update org_user table charset" grafana | logger=migrator t=2024-04-25T14:22:16.415518685Z level=info msg="Migration successfully executed" id="Update org_user table charset" duration=43.111µs grafana | logger=migrator t=2024-04-25T14:22:16.418806127Z level=info msg="Executing migration" id="Migrate all Read Only Viewers to Viewers" grafana | logger=migrator t=2024-04-25T14:22:16.419064111Z level=info msg="Migration successfully executed" id="Migrate all Read Only Viewers to Viewers" duration=258.274µs grafana | logger=migrator t=2024-04-25T14:22:16.422617996Z level=info msg="Executing migration" id="create dashboard table" grafana | logger=migrator 
t=2024-04-25T14:22:16.423801862Z level=info msg="Migration successfully executed" id="create dashboard table" duration=1.183376ms grafana | logger=migrator t=2024-04-25T14:22:16.428261739Z level=info msg="Executing migration" id="add index dashboard.account_id" grafana | logger=migrator t=2024-04-25T14:22:16.429341074Z level=info msg="Migration successfully executed" id="add index dashboard.account_id" duration=1.079185ms grafana | logger=migrator t=2024-04-25T14:22:16.433016011Z level=info msg="Executing migration" id="add unique index dashboard_account_id_slug" grafana | logger=migrator t=2024-04-25T14:22:16.434313808Z level=info msg="Migration successfully executed" id="add unique index dashboard_account_id_slug" duration=1.297587ms grafana | logger=migrator t=2024-04-25T14:22:16.438794896Z level=info msg="Executing migration" id="create dashboard_tag table" grafana | logger=migrator t=2024-04-25T14:22:16.43986738Z level=info msg="Migration successfully executed" id="create dashboard_tag table" duration=1.072024ms grafana | logger=migrator t=2024-04-25T14:22:16.444614042Z level=info msg="Executing migration" id="add unique index dashboard_tag.dasboard_id_term" grafana | logger=migrator t=2024-04-25T14:22:16.445368272Z level=info msg="Migration successfully executed" id="add unique index dashboard_tag.dasboard_id_term" duration=753.669µs grafana | logger=migrator t=2024-04-25T14:22:16.448829077Z level=info msg="Executing migration" id="drop index UQE_dashboard_tag_dashboard_id_term - v1" grafana | logger=migrator t=2024-04-25T14:22:16.44984662Z level=info msg="Migration successfully executed" id="drop index UQE_dashboard_tag_dashboard_id_term - v1" duration=1.017273ms grafana | logger=migrator t=2024-04-25T14:22:16.507886343Z level=info msg="Executing migration" id="Rename table dashboard to dashboard_v1 - v1" grafana | logger=migrator t=2024-04-25T14:22:16.517474537Z level=info msg="Migration successfully executed" id="Rename table dashboard to dashboard_v1 - 
v1" duration=9.586404ms grafana | logger=migrator t=2024-04-25T14:22:16.522977209Z level=info msg="Executing migration" id="create dashboard v2" grafana | logger=migrator t=2024-04-25T14:22:16.524412068Z level=info msg="Migration successfully executed" id="create dashboard v2" duration=1.433659ms grafana | logger=migrator t=2024-04-25T14:22:16.52848311Z level=info msg="Executing migration" id="create index IDX_dashboard_org_id - v2" grafana | logger=migrator t=2024-04-25T14:22:16.529332241Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_org_id - v2" duration=848.651µs grafana | logger=migrator t=2024-04-25T14:22:16.533409414Z level=info msg="Executing migration" id="create index UQE_dashboard_org_id_slug - v2" grafana | logger=migrator t=2024-04-25T14:22:16.534543698Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_org_id_slug - v2" duration=1.134854ms grafana | logger=migrator t=2024-04-25T14:22:16.539765276Z level=info msg="Executing migration" id="copy dashboard v1 to v2" grafana | logger=migrator t=2024-04-25T14:22:16.540124591Z level=info msg="Migration successfully executed" id="copy dashboard v1 to v2" duration=363.285µs grafana | logger=migrator t=2024-04-25T14:22:16.543746819Z level=info msg="Executing migration" id="drop table dashboard_v1" grafana | logger=migrator t=2024-04-25T14:22:16.544876533Z level=info msg="Migration successfully executed" id="drop table dashboard_v1" duration=1.128164ms grafana | logger=migrator t=2024-04-25T14:22:16.549706686Z level=info msg="Executing migration" id="alter dashboard.data to mediumtext v1" grafana | logger=migrator t=2024-04-25T14:22:16.549842138Z level=info msg="Migration successfully executed" id="alter dashboard.data to mediumtext v1" duration=135.952µs grafana | logger=migrator t=2024-04-25T14:22:16.553554416Z level=info msg="Executing migration" id="Add column updated_by in dashboard - v2" grafana | logger=migrator 
t=2024-04-25T14:22:16.555519611Z level=info msg="Migration successfully executed" id="Add column updated_by in dashboard - v2" duration=1.964335ms grafana | logger=migrator t=2024-04-25T14:22:16.559529153Z level=info msg="Executing migration" id="Add column created_by in dashboard - v2" grafana | logger=migrator t=2024-04-25T14:22:16.562631863Z level=info msg="Migration successfully executed" id="Add column created_by in dashboard - v2" duration=3.10183ms grafana | logger=migrator t=2024-04-25T14:22:16.567497696Z level=info msg="Executing migration" id="Add column gnetId in dashboard" grafana | logger=migrator t=2024-04-25T14:22:16.570408514Z level=info msg="Migration successfully executed" id="Add column gnetId in dashboard" duration=2.910018ms grafana | logger=migrator t=2024-04-25T14:22:16.575954516Z level=info msg="Executing migration" id="Add index for gnetId in dashboard" grafana | logger=migrator t=2024-04-25T14:22:16.576894788Z level=info msg="Migration successfully executed" id="Add index for gnetId in dashboard" duration=940.762µs grafana | logger=migrator t=2024-04-25T14:22:16.581139934Z level=info msg="Executing migration" id="Add column plugin_id in dashboard" grafana | logger=migrator t=2024-04-25T14:22:16.583020069Z level=info msg="Migration successfully executed" id="Add column plugin_id in dashboard" duration=1.879714ms grafana | logger=migrator t=2024-04-25T14:22:16.587397305Z level=info msg="Executing migration" id="Add index for plugin_id in dashboard" grafana | logger=migrator t=2024-04-25T14:22:16.588357247Z level=info msg="Migration successfully executed" id="Add index for plugin_id in dashboard" duration=959.552µs grafana | logger=migrator t=2024-04-25T14:22:16.592621542Z level=info msg="Executing migration" id="Add index for dashboard_id in dashboard_tag" grafana | logger=migrator t=2024-04-25T14:22:16.593430703Z level=info msg="Migration successfully executed" id="Add index for dashboard_id in dashboard_tag" duration=808.851µs grafana | 
logger=migrator t=2024-04-25T14:22:16.597477976Z level=info msg="Executing migration" id="Update dashboard table charset" grafana | logger=migrator t=2024-04-25T14:22:16.597505336Z level=info msg="Migration successfully executed" id="Update dashboard table charset" duration=26.3µs grafana | logger=migrator t=2024-04-25T14:22:16.601447277Z level=info msg="Executing migration" id="Update dashboard_tag table charset" grafana | logger=migrator t=2024-04-25T14:22:16.601474227Z level=info msg="Migration successfully executed" id="Update dashboard_tag table charset" duration=28.01µs grafana | logger=migrator t=2024-04-25T14:22:16.606192269Z level=info msg="Executing migration" id="Add column folder_id in dashboard" grafana | logger=migrator t=2024-04-25T14:22:16.609317279Z level=info msg="Migration successfully executed" id="Add column folder_id in dashboard" duration=3.12445ms grafana | logger=migrator t=2024-04-25T14:22:16.612950926Z level=info msg="Executing migration" id="Add column isFolder in dashboard" grafana | logger=migrator t=2024-04-25T14:22:16.615342097Z level=info msg="Migration successfully executed" id="Add column isFolder in dashboard" duration=2.376441ms grafana | logger=migrator t=2024-04-25T14:22:16.620380273Z level=info msg="Executing migration" id="Add column has_acl in dashboard" grafana | logger=migrator t=2024-04-25T14:22:16.622401079Z level=info msg="Migration successfully executed" id="Add column has_acl in dashboard" duration=2.020156ms grafana | logger=migrator t=2024-04-25T14:22:16.626907457Z level=info msg="Executing migration" id="Add column uid in dashboard" grafana | logger=migrator t=2024-04-25T14:22:16.628935834Z level=info msg="Migration successfully executed" id="Add column uid in dashboard" duration=2.027587ms grafana | logger=migrator t=2024-04-25T14:22:16.632146435Z level=info msg="Executing migration" id="Update uid column values in dashboard" grafana | logger=migrator t=2024-04-25T14:22:16.632392718Z level=info msg="Migration 
successfully executed" id="Update uid column values in dashboard" duration=246.723µs grafana | logger=migrator t=2024-04-25T14:22:16.635518819Z level=info msg="Executing migration" id="Add unique index dashboard_org_id_uid" grafana | logger=migrator t=2024-04-25T14:22:16.63631217Z level=info msg="Migration successfully executed" id="Add unique index dashboard_org_id_uid" duration=793.181µs grafana | logger=migrator t=2024-04-25T14:22:16.639696733Z level=info msg="Executing migration" id="Remove unique index org_id_slug" grafana | logger=migrator t=2024-04-25T14:22:16.641352674Z level=info msg="Migration successfully executed" id="Remove unique index org_id_slug" duration=1.656091ms grafana | logger=migrator t=2024-04-25T14:22:16.646011955Z level=info msg="Executing migration" id="Update dashboard title length" grafana | logger=migrator t=2024-04-25T14:22:16.646049426Z level=info msg="Migration successfully executed" id="Update dashboard title length" duration=38.35µs grafana | logger=migrator t=2024-04-25T14:22:16.650054657Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_title_folder_id" grafana | logger=migrator t=2024-04-25T14:22:16.650908099Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_title_folder_id" duration=853.222µs grafana | logger=migrator t=2024-04-25T14:22:16.653939898Z level=info msg="Executing migration" id="create dashboard_provisioning" grafana | logger=migrator t=2024-04-25T14:22:16.654677648Z level=info msg="Migration successfully executed" id="create dashboard_provisioning" duration=737.11µs grafana | logger=migrator t=2024-04-25T14:22:16.659040945Z level=info msg="Executing migration" id="Rename table dashboard_provisioning to dashboard_provisioning_tmp_qwerty - v1" grafana | logger=migrator t=2024-04-25T14:22:16.664538826Z level=info msg="Migration successfully executed" id="Rename table dashboard_provisioning to dashboard_provisioning_tmp_qwerty - v1" 
duration=5.497421ms grafana | logger=migrator t=2024-04-25T14:22:16.668370395Z level=info msg="Executing migration" id="create dashboard_provisioning v2" grafana | logger=migrator t=2024-04-25T14:22:16.669114925Z level=info msg="Migration successfully executed" id="create dashboard_provisioning v2" duration=745.179µs grafana | logger=migrator t=2024-04-25T14:22:16.672280536Z level=info msg="Executing migration" id="create index IDX_dashboard_provisioning_dashboard_id - v2" grafana | logger=migrator t=2024-04-25T14:22:16.673087127Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_provisioning_dashboard_id - v2" duration=806.422µs grafana | logger=migrator t=2024-04-25T14:22:16.678284144Z level=info msg="Executing migration" id="create index IDX_dashboard_provisioning_dashboard_id_name - v2" grafana | logger=migrator t=2024-04-25T14:22:16.679050254Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_provisioning_dashboard_id_name - v2" duration=765.99µs grafana | logger=migrator t=2024-04-25T14:22:16.683475171Z level=info msg="Executing migration" id="copy dashboard_provisioning v1 to v2" grafana | logger=migrator t=2024-04-25T14:22:16.683768226Z level=info msg="Migration successfully executed" id="copy dashboard_provisioning v1 to v2" duration=290.965µs grafana | logger=migrator t=2024-04-25T14:22:16.686568462Z level=info msg="Executing migration" id="drop dashboard_provisioning_tmp_qwerty" grafana | logger=migrator t=2024-04-25T14:22:16.687110559Z level=info msg="Migration successfully executed" id="drop dashboard_provisioning_tmp_qwerty" duration=541.987µs grafana | logger=migrator t=2024-04-25T14:22:16.691595027Z level=info msg="Executing migration" id="Add check_sum column" grafana | logger=migrator t=2024-04-25T14:22:16.693757765Z level=info msg="Migration successfully executed" id="Add check_sum column" duration=2.162308ms grafana | logger=migrator t=2024-04-25T14:22:16.698940892Z level=info 
msg="Executing migration" id="Add index for dashboard_title" grafana | logger=migrator t=2024-04-25T14:22:16.699746702Z level=info msg="Migration successfully executed" id="Add index for dashboard_title" duration=805.36µs policy-api | Waiting for mariadb port 3306... policy-api | mariadb (172.17.0.3:3306) open policy-api | Waiting for policy-db-migrator port 6824... policy-api | policy-db-migrator (172.17.0.6:6824) open policy-api | Policy api config file: /opt/app/policy/api/etc/apiParameters.yaml policy-api | policy-api | . ____ _ __ _ _ policy-api | /\\ / ___'_ __ _ _(_)_ __ __ _ \ \ \ \ policy-api | ( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \ policy-api | \\/ ___)| |_)| | | | | || (_| | ) ) ) ) policy-api | ' |____| .__|_| |_|_| |_\__, | / / / / policy-api | =========|_|==============|___/=/_/_/_/ policy-api | :: Spring Boot :: (v3.1.10) policy-api | policy-api | [2024-04-25T14:22:36.612+00:00|INFO|Version|background-preinit] HV000001: Hibernate Validator 8.0.1.Final policy-api | [2024-04-25T14:22:36.673+00:00|INFO|PolicyApiApplication|main] Starting PolicyApiApplication using Java 17.0.11 with PID 34 (/app/api.jar started by policy in /opt/app/policy/api/bin) policy-api | [2024-04-25T14:22:36.674+00:00|INFO|PolicyApiApplication|main] No active profile set, falling back to 1 default profile: "default" policy-api | [2024-04-25T14:22:38.523+00:00|INFO|RepositoryConfigurationDelegate|main] Bootstrapping Spring Data JPA repositories in DEFAULT mode. policy-api | [2024-04-25T14:22:38.600+00:00|INFO|RepositoryConfigurationDelegate|main] Finished Spring Data repository scanning in 68 ms. Found 6 JPA repository interfaces. policy-api | [2024-04-25T14:22:38.992+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. 
Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.api.main.exception.ServiceExceptionHandler policy-api | [2024-04-25T14:22:38.992+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.api.main.exception.ServiceExceptionHandler policy-api | [2024-04-25T14:22:39.634+00:00|INFO|TomcatWebServer|main] Tomcat initialized with port(s): 6969 (http) policy-api | [2024-04-25T14:22:39.644+00:00|INFO|Http11NioProtocol|main] Initializing ProtocolHandler ["http-nio-6969"] policy-api | [2024-04-25T14:22:39.646+00:00|INFO|StandardService|main] Starting service [Tomcat] policy-api | [2024-04-25T14:22:39.646+00:00|INFO|StandardEngine|main] Starting Servlet engine: [Apache Tomcat/10.1.19] policy-api | [2024-04-25T14:22:39.743+00:00|INFO|[/policy/api/v1]|main] Initializing Spring embedded WebApplicationContext policy-api | [2024-04-25T14:22:39.744+00:00|INFO|ServletWebServerApplicationContext|main] Root WebApplicationContext: initialization completed in 3004 ms policy-api | [2024-04-25T14:22:40.170+00:00|INFO|LogHelper|main] HHH000204: Processing PersistenceUnitInfo [name: default] policy-api | [2024-04-25T14:22:40.248+00:00|INFO|Version|main] HHH000412: Hibernate ORM core version 6.3.2.Final policy-api | [2024-04-25T14:22:40.301+00:00|INFO|RegionFactoryInitiator|main] HHH000026: Second-level cache disabled policy-api | [2024-04-25T14:22:40.600+00:00|INFO|SpringPersistenceUnitInfo|main] No LoadTimeWeaver setup: ignoring JPA class transformer policy-api | [2024-04-25T14:22:40.630+00:00|INFO|HikariDataSource|main] HikariPool-1 - Starting... policy-api | [2024-04-25T14:22:40.742+00:00|INFO|HikariPool|main] HikariPool-1 - Added connection org.mariadb.jdbc.Connection@7718a40f policy-api | [2024-04-25T14:22:40.744+00:00|INFO|HikariDataSource|main] HikariPool-1 - Start completed. 
policy-api | [2024-04-25T14:22:42.831+00:00|INFO|JtaPlatformInitiator|main] HHH000489: No JTA platform available (set 'hibernate.transaction.jta.platform' to enable JTA platform integration) policy-api | [2024-04-25T14:22:42.835+00:00|INFO|LocalContainerEntityManagerFactoryBean|main] Initialized JPA EntityManagerFactory for persistence unit 'default' policy-api | [2024-04-25T14:22:43.798+00:00|WARN|ApiDatabaseInitializer|main] Detected multi-versioned type: policytypes/onap.policies.monitoring.tcagen2.v2.yaml policy-api | [2024-04-25T14:22:44.638+00:00|INFO|ApiDatabaseInitializer|main] Multi-versioned Service Template [onap.policies.Monitoring, onap.policies.monitoring.tcagen2] policy-api | [2024-04-25T14:22:45.734+00:00|WARN|JpaBaseConfiguration$JpaWebConfiguration|main] spring.jpa.open-in-view is enabled by default. Therefore, database queries may be performed during view rendering. Explicitly configure spring.jpa.open-in-view to disable this warning policy-api | [2024-04-25T14:22:45.953+00:00|INFO|DefaultSecurityFilterChain|main] Will secure any request with [org.springframework.security.web.session.DisableEncodeUrlFilter@9b43134, org.springframework.security.web.context.request.async.WebAsyncManagerIntegrationFilter@1ae2b0d0, org.springframework.security.web.context.SecurityContextHolderFilter@1e4cf0e5, org.springframework.security.web.header.HeaderWriterFilter@7f930614, org.springframework.security.web.authentication.logout.LogoutFilter@72e6e93, org.springframework.security.web.authentication.www.BasicAuthenticationFilter@1aef48f0, org.springframework.security.web.savedrequest.RequestCacheAwareFilter@12919b7b, org.springframework.security.web.servletapi.SecurityContextHolderAwareRequestFilter@3033e54c, org.springframework.security.web.authentication.AnonymousAuthenticationFilter@631c244c, org.springframework.security.web.access.ExceptionTranslationFilter@7d6d93f9, org.springframework.security.web.access.intercept.AuthorizationFilter@750190d0] policy-api | 
[2024-04-25T14:22:46.770+00:00|INFO|EndpointLinksResolver|main] Exposing 3 endpoint(s) beneath base path '' policy-api | [2024-04-25T14:22:46.860+00:00|INFO|Http11NioProtocol|main] Starting ProtocolHandler ["http-nio-6969"] policy-api | [2024-04-25T14:22:46.887+00:00|INFO|TomcatWebServer|main] Tomcat started on port(s): 6969 (http) with context path '/policy/api/v1' policy-api | [2024-04-25T14:22:46.909+00:00|INFO|PolicyApiApplication|main] Started PolicyApiApplication in 11.235 seconds (process running for 11.834) policy-api | [2024-04-25T14:23:02.778+00:00|INFO|[/policy/api/v1]|http-nio-6969-exec-2] Initializing Spring DispatcherServlet 'dispatcherServlet' policy-api | [2024-04-25T14:23:02.778+00:00|INFO|DispatcherServlet|http-nio-6969-exec-2] Initializing Servlet 'dispatcherServlet' policy-api | [2024-04-25T14:23:02.779+00:00|INFO|DispatcherServlet|http-nio-6969-exec-2] Completed initialization in 1 ms policy-api | [2024-04-25T14:23:03.143+00:00|INFO|OrderedServiceImpl|http-nio-6969-exec-2] ***** OrderedServiceImpl implementers: policy-api | [] policy-db-migrator | Waiting for mariadb port 3306... policy-db-migrator | nc: connect to mariadb (172.17.0.3) port 3306 (tcp) failed: Connection refused policy-db-migrator | nc: connect to mariadb (172.17.0.3) port 3306 (tcp) failed: Connection refused policy-db-migrator | nc: connect to mariadb (172.17.0.3) port 3306 (tcp) failed: Connection refused policy-db-migrator | nc: connect to mariadb (172.17.0.3) port 3306 (tcp) failed: Connection refused policy-db-migrator | nc: connect to mariadb (172.17.0.3) port 3306 (tcp) failed: Connection refused policy-db-migrator | nc: connect to mariadb (172.17.0.3) port 3306 (tcp) failed: Connection refused policy-db-migrator | nc: connect to mariadb (172.17.0.3) port 3306 (tcp) failed: Connection refused policy-db-migrator | Connection to mariadb (172.17.0.3) 3306 port [tcp/mysql] succeeded! 
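The Grafana migrator lines interleaved through this log each report a Go-style `duration=` field in mixed units (µs and ms). When triaging slow CSIT runs it can help to total them; the helper below is a hypothetical sketch for that, not part of Grafana or the CSIT tooling:

```python
import re

# Unit factors to milliseconds for the duration suffixes seen in the log.
_UNITS = {"µs": 1e-3, "us": 1e-3, "ms": 1.0, "s": 1000.0}

def parse_duration_ms(text: str) -> float:
    """Convert a duration string like '701.559µs' or '1.796934ms' to ms."""
    m = re.fullmatch(r"([0-9.]+)(µs|us|ms|s)", text)
    if not m:
        raise ValueError(f"unrecognized duration: {text!r}")
    return float(m.group(1)) * _UNITS[m.group(2)]

def total_migration_ms(log_lines) -> float:
    """Sum the duration= fields on 'Migration successfully executed' lines."""
    total = 0.0
    for line in log_lines:
        if "Migration successfully executed" in line:
            m = re.search(r"duration=([0-9.]+(?:µs|us|ms|s))", line)
            if m:
                total += parse_duration_ms(m.group(1))
    return total
```

Feeding it the migrator lines above would show that the bulk of schema-migration time comes from a handful of ALTER/rename steps in the multi-millisecond range, while index creations are typically under a millisecond.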
policy-db-migrator | 321 blocks
policy-db-migrator | Preparing upgrade release version: 0800
policy-db-migrator | Preparing upgrade release version: 0900
policy-db-migrator | Preparing upgrade release version: 1000
policy-db-migrator | Preparing upgrade release version: 1100
policy-db-migrator | Preparing upgrade release version: 1200
policy-db-migrator | Preparing upgrade release version: 1300
policy-db-migrator | Done
policy-db-migrator | name version
policy-db-migrator | policyadmin 0
policy-db-migrator | policyadmin: upgrade available: 0 -> 1300
policy-db-migrator | upgrade: 0 -> 1300
policy-db-migrator |
policy-db-migrator | > upgrade 0100-jpapdpgroup_properties.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpgroup_properties (name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, PROPERTIES VARCHAR(255) DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) DEFAULT NULL)
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0110-jpapdpstatistics_enginestats.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpstatistics_enginestats (AVERAGEEXECUTIONTIME DOUBLE DEFAULT NULL, ENGINEID VARCHAR(255) DEFAULT NULL, ENGINETIMESTAMP BIGINT DEFAULT NULL, ENGINEWORKERSTATE INT DEFAULT NULL, EVENTCOUNT BIGINT DEFAULT NULL, LASTENTERTIME BIGINT DEFAULT NULL, LASTEXECUTIONTIME BIGINT DEFAULT NULL, LASTSTART BIGINT DEFAULT NULL, UPTIME BIGINT DEFAULT NULL, timeStamp datetime DEFAULT NULL, name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL)
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0120-jpapdpsubgroup_policies.sql
policy-db-migrator | --------------
grafana | logger=migrator t=2024-04-25T14:22:16.703279568Z level=info msg="Executing migration" id="delete tags for deleted dashboards"
grafana | logger=migrator t=2024-04-25T14:22:16.703479271Z level=info msg="Migration successfully executed" id="delete tags for deleted dashboards" duration=199.573µs
grafana | logger=migrator t=2024-04-25T14:22:16.707794257Z level=info msg="Executing migration" id="delete stars for deleted dashboards"
grafana | logger=migrator t=2024-04-25T14:22:16.707959049Z level=info msg="Migration successfully executed" id="delete stars for deleted dashboards" duration=164.632µs
grafana | logger=migrator t=2024-04-25T14:22:16.712287176Z level=info msg="Executing migration" id="Add index for dashboard_is_folder"
grafana | logger=migrator t=2024-04-25T14:22:16.713074846Z level=info msg="Migration successfully executed" id="Add index for dashboard_is_folder" duration=786.139µs
grafana | logger=migrator t=2024-04-25T14:22:16.717502763Z level=info msg="Executing migration" id="Add isPublic for dashboard"
grafana | logger=migrator t=2024-04-25T14:22:16.71961885Z level=info msg="Migration successfully executed" id="Add isPublic for dashboard" duration=2.115557ms
grafana | logger=migrator t=2024-04-25T14:22:16.723089726Z level=info msg="Executing migration" id="create data_source table"
grafana | logger=migrator t=2024-04-25T14:22:16.723993807Z level=info msg="Migration successfully executed" id="create data_source table" duration=904.541µs
grafana | logger=migrator t=2024-04-25T14:22:16.783046213Z level=info msg="Executing migration" id="add index data_source.account_id"
grafana | logger=migrator t=2024-04-25T14:22:16.784047626Z level=info msg="Migration successfully executed" id="add index data_source.account_id" duration=1.004433ms
grafana | logger=migrator t=2024-04-25T14:22:16.816503568Z level=info msg="Executing migration" id="add unique index data_source.account_id_name"
grafana | logger=migrator t=2024-04-25T14:22:16.817274897Z level=info msg="Migration successfully executed" id="add unique index data_source.account_id_name" duration=773.98µs
grafana | logger=migrator t=2024-04-25T14:22:16.822269552Z level=info msg="Executing migration" id="drop index IDX_data_source_account_id - v1"
grafana | logger=migrator t=2024-04-25T14:22:16.82290926Z level=info msg="Migration successfully executed" id="drop index IDX_data_source_account_id - v1" duration=639.818µs
grafana | logger=migrator t=2024-04-25T14:22:16.828752596Z level=info msg="Executing migration" id="drop index UQE_data_source_account_id_name - v1"
grafana | logger=migrator t=2024-04-25T14:22:16.830112934Z level=info msg="Migration successfully executed" id="drop index UQE_data_source_account_id_name - v1" duration=1.357958ms
grafana | logger=migrator t=2024-04-25T14:22:16.837243276Z level=info msg="Executing migration" id="Rename table data_source to data_source_v1 - v1"
grafana | logger=migrator t=2024-04-25T14:22:16.845410332Z level=info msg="Migration successfully executed" id="Rename table data_source to data_source_v1 - v1" duration=8.167506ms
grafana | logger=migrator t=2024-04-25T14:22:16.85213011Z level=info msg="Executing migration" id="create data_source table v2"
grafana | logger=migrator t=2024-04-25T14:22:16.853148883Z level=info msg="Migration successfully executed" id="create data_source table v2" duration=1.018323ms
grafana | logger=migrator t=2024-04-25T14:22:16.857188565Z level=info msg="Executing migration" id="create index IDX_data_source_org_id - v2"
grafana | logger=migrator t=2024-04-25T14:22:16.858158878Z level=info msg="Migration successfully executed" id="create index IDX_data_source_org_id - v2" duration=970.173µs
grafana | logger=migrator t=2024-04-25T14:22:16.972147616Z level=info msg="Executing migration" id="create index UQE_data_source_org_id_name - v2"
grafana | logger=migrator t=2024-04-25T14:22:16.973350532Z level=info msg="Migration successfully executed" id="create index UQE_data_source_org_id_name - v2" duration=1.202396ms
grafana | logger=migrator t=2024-04-25T14:22:16.979738555Z level=info msg="Executing migration" id="Drop old table data_source_v1 #2"
grafana | logger=migrator t=2024-04-25T14:22:16.980421484Z level=info msg="Migration successfully executed" id="Drop old table data_source_v1 #2" duration=682.949µs
grafana | logger=migrator t=2024-04-25T14:22:16.984775671Z level=info msg="Executing migration" id="Add column with_credentials"
grafana | logger=migrator t=2024-04-25T14:22:16.987601518Z level=info msg="Migration successfully executed" id="Add column with_credentials" duration=2.825207ms
grafana | logger=migrator t=2024-04-25T14:22:16.994870772Z level=info msg="Executing migration" id="Add secure json data column"
grafana | logger=migrator t=2024-04-25T14:22:16.997410404Z level=info msg="Migration successfully executed" id="Add secure json data column" duration=2.538812ms
grafana | logger=migrator t=2024-04-25T14:22:17.006774287Z level=info msg="Executing migration" id="Update data_source table charset"
grafana | logger=migrator t=2024-04-25T14:22:17.006918739Z level=info msg="Migration successfully executed" id="Update data_source table charset" duration=145.392µs
grafana | logger=migrator t=2024-04-25T14:22:17.010634547Z level=info msg="Executing migration" id="Update initial version to 1"
grafana | logger=migrator t=2024-04-25T14:22:17.010973241Z level=info msg="Migration successfully executed" id="Update initial version to 1" duration=337.394µs
grafana | logger=migrator t=2024-04-25T14:22:17.015242508Z level=info msg="Executing migration" id="Add read_only data column"
grafana | logger=migrator t=2024-04-25T14:22:17.019459582Z level=info msg="Migration successfully executed" id="Add read_only data column" duration=4.214914ms
grafana | logger=migrator t=2024-04-25T14:22:17.026407804Z level=info msg="Executing migration" id="Migrate logging ds to loki ds"
grafana | logger=migrator t=2024-04-25T14:22:17.026707028Z level=info msg="Migration successfully executed" id="Migrate logging ds to loki ds" duration=300.064µs
grafana | logger=migrator t=2024-04-25T14:22:17.034050534Z level=info msg="Executing migration" id="Update json_data with nulls"
grafana | logger=migrator t=2024-04-25T14:22:17.034298187Z level=info msg="Migration successfully executed" id="Update json_data with nulls" duration=249.933µs
grafana | logger=migrator t=2024-04-25T14:22:17.03836577Z level=info msg="Executing migration" id="Add uid column"
grafana | logger=migrator t=2024-04-25T14:22:17.044342668Z level=info msg="Migration successfully executed" id="Add uid column" duration=5.973828ms
grafana | logger=migrator t=2024-04-25T14:22:17.048920299Z level=info msg="Executing migration" id="Update uid value"
grafana | logger=migrator t=2024-04-25T14:22:17.049213712Z level=info msg="Migration successfully executed" id="Update uid value" duration=296.423µs
grafana | logger=migrator t=2024-04-25T14:22:17.058755157Z level=info msg="Executing migration" id="Add unique index datasource_org_id_uid"
grafana | logger=migrator t=2024-04-25T14:22:17.05966337Z level=info msg="Migration successfully executed" id="Add unique index datasource_org_id_uid" duration=908.473µs
grafana | logger=migrator t=2024-04-25T14:22:17.065639787Z level=info msg="Executing migration" id="add unique index datasource_org_id_is_default"
grafana | logger=migrator t=2024-04-25T14:22:17.066401267Z level=info msg="Migration successfully executed" id="add unique index datasource_org_id_is_default" duration=760.61µs
grafana | logger=migrator t=2024-04-25T14:22:17.072889872Z level=info msg="Executing migration" id="create api_key table"
grafana | logger=migrator t=2024-04-25T14:22:17.073709473Z level=info msg="Migration successfully executed" id="create api_key table" duration=819.211µs
grafana | logger=migrator t=2024-04-25T14:22:17.078985013Z level=info msg="Executing migration" id="add index api_key.account_id"
grafana | logger=migrator t=2024-04-25T14:22:17.08034339Z level=info msg="Migration successfully executed" id="add index api_key.account_id" duration=1.358587ms
kafka | [2024-04-25 14:22:22,416] INFO Registered signal handlers for TERM, INT, HUP (org.apache.kafka.common.utils.LoggingSignalHandler)
kafka | [2024-04-25 14:22:22,417] INFO starting (kafka.server.KafkaServer)
kafka | [2024-04-25 14:22:22,418] INFO Connecting to zookeeper on zookeeper:2181 (kafka.server.KafkaServer)
kafka | [2024-04-25 14:22:22,430] INFO [ZooKeeperClient Kafka server] Initializing a new session to zookeeper:2181. (kafka.zookeeper.ZooKeeperClient)
kafka | [2024-04-25 14:22:22,434] INFO Client environment:zookeeper.version=3.8.4-9316c2a7a97e1666d8f4593f34dd6fc36ecc436c, built on 2024-02-12 22:16 UTC (org.apache.zookeeper.ZooKeeper)
kafka | [2024-04-25 14:22:22,434] INFO Client environment:host.name=09cfce7a987a (org.apache.zookeeper.ZooKeeper)
kafka | [2024-04-25 14:22:22,434] INFO Client environment:java.version=11.0.22 (org.apache.zookeeper.ZooKeeper)
kafka | [2024-04-25 14:22:22,434] INFO Client environment:java.vendor=Azul Systems, Inc. (org.apache.zookeeper.ZooKeeper)
kafka | [2024-04-25 14:22:22,434] INFO Client environment:java.home=/usr/lib/jvm/java-11-zulu-openjdk-ca (org.apache.zookeeper.ZooKeeper)
kafka | [2024-04-25 14:22:22,434] INFO Client
environment:java.class.path=/usr/bin/../share/java/kafka/scala-logging_2.13-3.9.4.jar:/usr/bin/../share/java/kafka/jersey-common-2.39.1.jar:/usr/bin/../share/java/kafka/swagger-annotations-2.2.8.jar:/usr/bin/../share/java/kafka/connect-json-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/kafka-storage-api-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/commons-validator-1.7.jar:/usr/bin/../share/java/kafka/javax.servlet-api-3.1.0.jar:/usr/bin/../share/java/kafka/aopalliance-repackaged-2.6.1.jar:/usr/bin/../share/java/kafka/netty-transport-4.1.100.Final.jar:/usr/bin/../share/java/kafka/rocksdbjni-7.9.2.jar:/usr/bin/../share/java/kafka/jetty-http-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/kafka-log4j-appender-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jackson-annotations-2.13.5.jar:/usr/bin/../share/java/kafka/commons-io-2.11.0.jar:/usr/bin/../share/java/kafka/kafka-clients-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/javax.activation-api-1.2.0.jar:/usr/bin/../share/java/kafka/jetty-security-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/commons-cli-1.4.jar:/usr/bin/../share/java/kafka/jetty-server-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/slf4j-reload4j-1.7.36.jar:/usr/bin/../share/java/kafka/scala-reflect-2.13.11.jar:/usr/bin/../share/java/kafka/kafka-server-common-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/kafka-group-coordinator-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/netty-buffer-4.1.100.Final.jar:/usr/bin/../share/java/kafka/connect-runtime-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/trogdor-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/connect-api-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jakarta.ws.rs-api-2.1.6.jar:/usr/bin/../share/java/kafka/jakarta.annotation-api-1.3.5.jar:/usr/bin/../share/java/kafka/kafka-streams-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/scala-java8-compat_2.13-1.0.2.jar:/usr/bin/../share/java/kafka/javax.ws.rs-api-2.1.1.jar:/usr/bin/../share/java/kafka/zookeeper-jute-3.8.4.jar:/usr/bin/../share/java/kafka/netty-resolver-4.1.100.Fi
nal.jar:/usr/bin/../share/java/kafka/hk2-api-2.6.1.jar:/usr/bin/../share/java/kafka/kafka-storage-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/zstd-jni-1.5.5-1.jar:/usr/bin/../share/java/kafka/jackson-dataformat-csv-2.13.5.jar:/usr/bin/../share/java/kafka/kafka.jar:/usr/bin/../share/java/kafka/scala-library-2.13.11.jar:/usr/bin/../share/java/kafka/jakarta.inject-2.6.1.jar:/usr/bin/../share/java/kafka/jakarta.xml.bind-api-2.3.3.jar:/usr/bin/../share/java/kafka/jetty-continuation-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/connect-transforms-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jose4j-0.9.4.jar:/usr/bin/../share/java/kafka/hk2-locator-2.6.1.jar:/usr/bin/../share/java/kafka/reflections-0.10.2.jar:/usr/bin/../share/java/kafka/slf4j-api-1.7.36.jar:/usr/bin/../share/java/kafka/paranamer-2.8.jar:/usr/bin/../share/java/kafka/commons-beanutils-1.9.4.jar:/usr/bin/../share/java/kafka/jaxb-api-2.3.1.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-2.39.1.jar:/usr/bin/../share/java/kafka/hk2-utils-2.6.1.jar:/usr/bin/../share/java/kafka/jackson-module-scala_2.13-2.13.5.jar:/usr/bin/../share/java/kafka/reload4j-1.2.25.jar:/usr/bin/../share/java/kafka/connect-mirror-client-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jetty-servlet-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/jackson-core-2.13.5.jar:/usr/bin/../share/java/kafka/jersey-hk2-2.39.1.jar:/usr/bin/../share/java/kafka/jackson-databind-2.13.5.jar:/usr/bin/../share/java/kafka/jetty-servlets-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/jersey-client-2.39.1.jar:/usr/bin/../share/java/kafka/osgi-resource-locator-1.0.3.jar:/usr/bin/../share/java/kafka/commons-digester-2.1.jar:/usr/bin/../share/java/kafka/netty-transport-native-epoll-4.1.100.Final.jar:/usr/bin/../share/java/kafka/argparse4j-0.7.0.jar:/usr/bin/../share/java/kafka/connect-mirror-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jackson-datatype-jdk8-2.13.5.jar:/usr/bin/../share/java/kafka/audience-annotations-0.12.0.jar:/usr/bin/../share/java/ka
fka/jackson-module-jaxb-annotations-2.13.5.jar:/usr/bin/../share/java/kafka/kafka-raft-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/javax.annotation-api-1.3.2.jar:/usr/bin/../share/java/kafka/maven-artifact-3.8.8.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-json-provider-2.13.5.jar:/usr/bin/../share/java/kafka/jakarta.validation-api-2.0.2.jar:/usr/bin/../share/java/kafka/zookeeper-3.8.4.jar:/usr/bin/../share/java/kafka/jersey-server-2.39.1.jar:/usr/bin/../share/java/kafka/commons-lang3-3.8.1.jar:/usr/bin/../share/java/kafka/jopt-simple-5.0.4.jar:/usr/bin/../share/java/kafka/error_prone_annotations-2.10.0.jar:/usr/bin/../share/java/kafka/lz4-java-1.8.0.jar:/usr/bin/../share/java/kafka/jetty-util-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/kafka-tools-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jakarta.activation-api-1.2.2.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-core-2.39.1.jar:/usr/bin/../share/java/kafka/checker-qual-3.19.0.jar:/usr/bin/../share/java/kafka/kafka-metadata-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/pcollections-4.0.1.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-base-2.13.5.jar:/usr/bin/../share/java/kafka/commons-logging-1.2.jar:/usr/bin/../share/java/kafka/jsr305-3.0.2.jar:/usr/bin/../share/java/kafka/jetty-io-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/netty-codec-4.1.100.Final.jar:/usr/bin/../share/java/kafka/netty-transport-native-unix-common-4.1.100.Final.jar:/usr/bin/../share/java/kafka/scala-collection-compat_2.13-2.10.0.jar:/usr/bin/../share/java/kafka/metrics-core-2.2.0.jar:/usr/bin/../share/java/kafka/kafka-streams-test-utils-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/kafka-streams-examples-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/netty-handler-4.1.100.Final.jar:/usr/bin/../share/java/kafka/commons-collections-3.2.2.jar:/usr/bin/../share/java/kafka/javassist-3.29.2-GA.jar:/usr/bin/../share/java/kafka/caffeine-2.9.3.jar:/usr/bin/../share/java/kafka/plexus-utils-3.3.1.jar:/usr/bin/../share/java/kafka/kafka-tool
s-api-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/activation-1.1.1.jar:/usr/bin/../share/java/kafka/netty-common-4.1.100.Final.jar:/usr/bin/../share/java/kafka/kafka-streams-scala_2.13-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/metrics-core-4.1.12.1.jar:/usr/bin/../share/java/kafka/jline-3.25.1.jar:/usr/bin/../share/java/kafka/jetty-client-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/kafka_2.13-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/connect-basic-auth-extension-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jetty-util-ajax-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/snappy-java-1.1.10.5.jar:/usr/bin/../share/java/kafka/kafka-shell-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/netty-transport-classes-epoll-4.1.100.Final.jar:/usr/bin/../share/java/confluent-telemetry/* (org.apache.zookeeper.ZooKeeper)
kafka | [2024-04-25 14:22:22,434] INFO Client environment:java.library.path=/usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.ZooKeeper)
grafana | logger=migrator t=2024-04-25T14:22:17.087648906Z level=info msg="Executing migration" id="add index api_key.key"
grafana | logger=migrator t=2024-04-25T14:22:17.088981253Z level=info msg="Migration successfully executed" id="add index api_key.key" duration=1.332007ms
grafana | logger=migrator t=2024-04-25T14:22:17.093480523Z level=info msg="Executing migration" id="add index api_key.account_id_name"
grafana | logger=migrator t=2024-04-25T14:22:17.094796459Z level=info msg="Migration successfully executed" id="add index api_key.account_id_name" duration=1.318496ms
grafana | logger=migrator t=2024-04-25T14:22:17.099052825Z level=info msg="Executing migration" id="drop index IDX_api_key_account_id - v1"
grafana | logger=migrator t=2024-04-25T14:22:17.100035848Z level=info msg="Migration successfully executed" id="drop index IDX_api_key_account_id - v1" duration=985.903µs
grafana | logger=migrator t=2024-04-25T14:22:17.103864468Z level=info msg="Executing migration" id="drop index UQE_api_key_key - v1"
grafana | logger=migrator t=2024-04-25T14:22:17.104615838Z level=info msg="Migration successfully executed" id="drop index UQE_api_key_key - v1" duration=751.81µs
grafana | logger=migrator t=2024-04-25T14:22:17.112592573Z level=info msg="Executing migration" id="drop index UQE_api_key_account_id_name - v1"
grafana | logger=migrator t=2024-04-25T14:22:17.113995921Z level=info msg="Migration successfully executed" id="drop index UQE_api_key_account_id_name - v1" duration=1.407198ms
grafana | logger=migrator t=2024-04-25T14:22:17.123459505Z level=info msg="Executing migration" id="Rename table api_key to api_key_v1 - v1"
grafana | logger=migrator t=2024-04-25T14:22:17.133520686Z level=info msg="Migration successfully executed" id="Rename table api_key to api_key_v1 - v1" duration=10.057152ms
grafana | logger=migrator t=2024-04-25T14:22:17.140341725Z level=info msg="Executing migration" id="create api_key table v2"
grafana | logger=migrator t=2024-04-25T14:22:17.140932884Z level=info msg="Migration successfully executed" id="create api_key table v2" duration=591.509µs
grafana | logger=migrator t=2024-04-25T14:22:17.145680085Z level=info msg="Executing migration" id="create index IDX_api_key_org_id - v2"
grafana | logger=migrator t=2024-04-25T14:22:17.146441286Z level=info msg="Migration successfully executed" id="create index IDX_api_key_org_id - v2" duration=758.751µs
grafana | logger=migrator t=2024-04-25T14:22:17.153556679Z level=info msg="Executing migration" id="create index UQE_api_key_key - v2"
grafana | logger=migrator t=2024-04-25T14:22:17.154719905Z level=info msg="Migration successfully executed" id="create index UQE_api_key_key - v2" duration=1.163326ms
grafana | logger=migrator t=2024-04-25T14:22:17.159622168Z level=info msg="Executing migration" id="create index UQE_api_key_org_id_name - v2"
grafana | logger=migrator t=2024-04-25T14:22:17.160594052Z level=info msg="Migration successfully executed" id="create index UQE_api_key_org_id_name - v2" duration=972.784µs
grafana | logger=migrator t=2024-04-25T14:22:17.165041879Z level=info msg="Executing migration" id="copy api_key v1 to v2"
grafana | logger=migrator t=2024-04-25T14:22:17.165410914Z level=info msg="Migration successfully executed" id="copy api_key v1 to v2" duration=369.535µs
grafana | logger=migrator t=2024-04-25T14:22:17.171655976Z level=info msg="Executing migration" id="Drop old table api_key_v1"
grafana | logger=migrator t=2024-04-25T14:22:17.172611609Z level=info msg="Migration successfully executed" id="Drop old table api_key_v1" duration=959.672µs
grafana | logger=migrator t=2024-04-25T14:22:17.177827497Z level=info msg="Executing migration" id="Update api_key table charset"
grafana | logger=migrator t=2024-04-25T14:22:17.177880358Z level=info msg="Migration successfully executed" id="Update api_key table charset" duration=49.551µs
grafana | logger=migrator t=2024-04-25T14:22:17.183708034Z level=info msg="Executing migration" id="Add expires to api_key table"
grafana | logger=migrator t=2024-04-25T14:22:17.186317448Z level=info msg="Migration successfully executed" id="Add expires to api_key table" duration=2.610404ms
grafana | logger=migrator t=2024-04-25T14:22:17.191881131Z level=info msg="Executing migration" id="Add service account foreign key"
grafana | logger=migrator t=2024-04-25T14:22:17.194332914Z level=info msg="Migration successfully executed" id="Add service account foreign key" duration=2.451653ms
grafana | logger=migrator t=2024-04-25T14:22:17.197647677Z level=info msg="Executing migration" id="set service account foreign key to nil if 0"
grafana | logger=migrator t=2024-04-25T14:22:17.197823929Z level=info msg="Migration successfully executed" id="set service account foreign key to nil if 0" duration=176.752µs
grafana | logger=migrator t=2024-04-25T14:22:17.204177152Z level=info msg="Executing migration" id="Add last_used_at to api_key table"
grafana | logger=migrator t=2024-04-25T14:22:17.207663538Z level=info msg="Migration successfully executed" id="Add last_used_at to api_key table" duration=3.485536ms
grafana | logger=migrator t=2024-04-25T14:22:17.210993442Z level=info msg="Executing migration" id="Add is_revoked column to api_key table"
grafana | logger=migrator t=2024-04-25T14:22:17.213607916Z level=info msg="Migration successfully executed" id="Add is_revoked column to api_key table" duration=2.613884ms
grafana | logger=migrator t=2024-04-25T14:22:17.216653426Z level=info msg="Executing migration" id="create dashboard_snapshot table v4"
grafana | logger=migrator t=2024-04-25T14:22:17.217420916Z level=info msg="Migration successfully executed" id="create dashboard_snapshot table v4" duration=767.36µs
grafana | logger=migrator t=2024-04-25T14:22:17.222795336Z level=info msg="Executing migration" id="drop table dashboard_snapshot_v4 #1"
grafana | logger=migrator t=2024-04-25T14:22:17.223478475Z level=info msg="Migration successfully executed" id="drop table dashboard_snapshot_v4 #1" duration=683.159µs
grafana | logger=migrator t=2024-04-25T14:22:17.226979601Z level=info msg="Executing migration" id="create dashboard_snapshot table v5 #2"
grafana | logger=migrator t=2024-04-25T14:22:17.22846685Z level=info msg="Migration successfully executed" id="create dashboard_snapshot table v5 #2" duration=1.48844ms
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpsubgroup_policies (name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, parentLocalName VARCHAR(120) DEFAULT NULL, localName VARCHAR(120) DEFAULT NULL, parentKeyVersion VARCHAR(15) DEFAULT NULL, parentKeyName VARCHAR(120) DEFAULT NULL)
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0130-jpapdpsubgroup_properties.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpsubgroup_properties (parentLocalName VARCHAR(120) DEFAULT NULL, localName VARCHAR(120) DEFAULT NULL, parentKeyVersion VARCHAR(15) DEFAULT NULL, parentKeyName VARCHAR(120) DEFAULT NULL, PROPERTIES VARCHAR(255) DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) DEFAULT NULL)
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0140-jpapdpsubgroup_supportedpolicytypes.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpsubgroup_supportedpolicytypes (name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, parentLocalName VARCHAR(120) DEFAULT NULL, localName VARCHAR(120) DEFAULT NULL, parentKeyVersion VARCHAR(15) DEFAULT NULL, parentKeyName VARCHAR(120) DEFAULT NULL)
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0150-jpatoscacapabilityassignment_attributes.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilityassignment_attributes (name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, ATTRIBUTES LONGTEXT DEFAULT NULL, ATTRIBUTES_KEY VARCHAR(255) DEFAULT NULL)
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0160-jpatoscacapabilityassignment_metadata.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilityassignment_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL)
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0170-jpatoscacapabilityassignment_occurrences.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilityassignment_occurrences (name VARCHAR(120) NULL, version VARCHAR(20) NULL, OCCURRENCES INT DEFAULT NULL)
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0180-jpatoscacapabilityassignment_properties.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilityassignment_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGTEXT NULL, PROPERTIES_KEY VARCHAR(255) NULL)
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0190-jpatoscacapabilitytype_metadata.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilitytype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL)
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
grafana | logger=migrator t=2024-04-25T14:22:17.231884645Z level=info msg="Executing migration" id="create index UQE_dashboard_snapshot_key - v5"
grafana | logger=migrator t=2024-04-25T14:22:17.233351635Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_snapshot_key - v5" duration=1.46655ms
grafana | logger=migrator t=2024-04-25T14:22:17.239485405Z level=info msg="Executing migration" id="create index UQE_dashboard_snapshot_delete_key - v5"
grafana | logger=migrator t=2024-04-25T14:22:17.240428867Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_snapshot_delete_key - v5" duration=943.382µs
grafana | logger=migrator t=2024-04-25T14:22:17.243812321Z level=info msg="Executing migration" id="create index IDX_dashboard_snapshot_user_id - v5"
grafana | logger=migrator t=2024-04-25T14:22:17.245085458Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_snapshot_user_id - v5" duration=1.271707ms
grafana | logger=migrator t=2024-04-25T14:22:17.248471842Z level=info msg="Executing migration" id="alter dashboard_snapshot to mediumtext v2"
grafana | logger=migrator t=2024-04-25T14:22:17.248668725Z level=info msg="Migration successfully executed" id="alter dashboard_snapshot to mediumtext v2" duration=196.153µs
grafana |
logger=migrator t=2024-04-25T14:22:17.254511101Z level=info msg="Executing migration" id="Update dashboard_snapshot table charset"
grafana | logger=migrator t=2024-04-25T14:22:17.254539682Z level=info msg="Migration successfully executed" id="Update dashboard_snapshot table charset" duration=29.341µs
policy-apex-pdp | Waiting for mariadb port 3306...
policy-apex-pdp | mariadb (172.17.0.3:3306) open
policy-apex-pdp | Waiting for kafka port 9092...
policy-apex-pdp | kafka (172.17.0.8:9092) open
policy-apex-pdp | Waiting for pap port 6969...
policy-apex-pdp | pap (172.17.0.10:6969) open
policy-apex-pdp | apexApps.sh: running application 'onappf' with command 'java -Dlogback.configurationFile=/opt/app/policy/apex-pdp/etc/logback.xml -cp /opt/app/policy/apex-pdp/etc:/opt/app/policy/apex-pdp/etc/hazelcast:/opt/app/policy/apex-pdp/etc/infinispan:/opt/app/policy/apex-pdp/lib/* -Djavax.net.ssl.keyStore=/opt/app/policy/apex-pdp/etc/ssl/policy-keystore -Djavax.net.ssl.keyStorePassword=Pol1cy_0nap -Djavax.net.ssl.trustStore=/opt/app/policy/apex-pdp/etc/ssl/policy-truststore -Djavax.net.ssl.trustStorePassword=Pol1cy_0nap -Dlogback.configurationFile=/opt/app/policy/apex-pdp/etc/logback.xml -Dhazelcast.config=/opt/app/policy/apex-pdp/etc/hazelcast.xml -Dhazelcast.mancenter.enabled=false org.onap.policy.apex.services.onappf.ApexStarterMain -c /opt/app/policy/apex-pdp/etc/onappf/config/OnapPfConfig.json'
policy-apex-pdp | [2024-04-25T14:22:58.829+00:00|INFO|ApexStarterMain|main] In ApexStarter with parameters [-c, /opt/app/policy/apex-pdp/etc/onappf/config/OnapPfConfig.json]
policy-apex-pdp | [2024-04-25T14:22:59.037+00:00|INFO|ConsumerConfig|main] ConsumerConfig values:
policy-apex-pdp | allow.auto.create.topics = true
policy-apex-pdp | auto.commit.interval.ms = 5000
policy-apex-pdp | auto.include.jmx.reporter = true
policy-apex-pdp | auto.offset.reset = latest
policy-apex-pdp | bootstrap.servers = [kafka:9092]
policy-apex-pdp | check.crcs = true
policy-apex-pdp | client.dns.lookup = use_all_dns_ips
policy-apex-pdp | client.id = consumer-5f0ab5d6-63b3-4b5a-a200-3d330f0096ce-1
policy-apex-pdp | client.rack =
policy-apex-pdp | connections.max.idle.ms = 540000
policy-apex-pdp | default.api.timeout.ms = 60000
policy-apex-pdp | enable.auto.commit = true
policy-apex-pdp | exclude.internal.topics = true
policy-apex-pdp | fetch.max.bytes = 52428800
policy-apex-pdp | fetch.max.wait.ms = 500
policy-apex-pdp | fetch.min.bytes = 1
policy-apex-pdp | group.id = 5f0ab5d6-63b3-4b5a-a200-3d330f0096ce
policy-apex-pdp | group.instance.id = null
policy-apex-pdp | heartbeat.interval.ms = 3000
policy-apex-pdp | interceptor.classes = []
policy-apex-pdp | internal.leave.group.on.close = true
policy-apex-pdp | internal.throw.on.fetch.stable.offset.unsupported = false
policy-apex-pdp | isolation.level = read_uncommitted
policy-apex-pdp | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
policy-apex-pdp | max.partition.fetch.bytes = 1048576
policy-apex-pdp | max.poll.interval.ms = 300000
policy-apex-pdp | max.poll.records = 500
policy-apex-pdp | metadata.max.age.ms = 300000
policy-apex-pdp | metric.reporters = []
kafka | [2024-04-25 14:22:22,434] INFO Client environment:java.io.tmpdir=/tmp (org.apache.zookeeper.ZooKeeper)
kafka | [2024-04-25 14:22:22,434] INFO Client environment:java.compiler= (org.apache.zookeeper.ZooKeeper)
kafka | [2024-04-25 14:22:22,434] INFO Client environment:os.name=Linux (org.apache.zookeeper.ZooKeeper)
kafka | [2024-04-25 14:22:22,434] INFO Client environment:os.arch=amd64 (org.apache.zookeeper.ZooKeeper)
kafka | [2024-04-25 14:22:22,434] INFO Client environment:os.version=4.15.0-192-generic (org.apache.zookeeper.ZooKeeper)
kafka | [2024-04-25 14:22:22,434] INFO Client environment:user.name=appuser (org.apache.zookeeper.ZooKeeper)
kafka | [2024-04-25 14:22:22,434] INFO Client environment:user.home=/home/appuser (org.apache.zookeeper.ZooKeeper)
kafka | [2024-04-25 14:22:22,434] INFO Client environment:user.dir=/home/appuser (org.apache.zookeeper.ZooKeeper)
kafka | [2024-04-25 14:22:22,435] INFO Client environment:os.memory.free=1008MB (org.apache.zookeeper.ZooKeeper)
kafka | [2024-04-25 14:22:22,435] INFO Client environment:os.memory.max=1024MB (org.apache.zookeeper.ZooKeeper)
kafka | [2024-04-25 14:22:22,435] INFO Client environment:os.memory.total=1024MB (org.apache.zookeeper.ZooKeeper)
kafka | [2024-04-25 14:22:22,436] INFO Initiating client connection, connectString=zookeeper:2181 sessionTimeout=18000 watcher=kafka.zookeeper.ZooKeeperClient$ZooKeeperClientWatcher$@66746f57 (org.apache.zookeeper.ZooKeeper)
kafka | [2024-04-25 14:22:22,440] INFO jute.maxbuffer value is 4194304 Bytes (org.apache.zookeeper.ClientCnxnSocket)
kafka | [2024-04-25 14:22:22,445] INFO zookeeper.request.timeout value is 0. feature enabled=false (org.apache.zookeeper.ClientCnxn)
kafka | [2024-04-25 14:22:22,447] INFO [ZooKeeperClient Kafka server] Waiting until connected. (kafka.zookeeper.ZooKeeperClient)
kafka | [2024-04-25 14:22:22,452] INFO Opening socket connection to server zookeeper/172.17.0.4:2181. (org.apache.zookeeper.ClientCnxn)
kafka | [2024-04-25 14:22:22,458] INFO Socket connection established, initiating session, client: /172.17.0.8:53856, server: zookeeper/172.17.0.4:2181 (org.apache.zookeeper.ClientCnxn)
kafka | [2024-04-25 14:22:22,522] INFO Session establishment complete on server zookeeper/172.17.0.4:2181, session id = 0x1000005a2800001, negotiated timeout = 18000 (org.apache.zookeeper.ClientCnxn)
kafka | [2024-04-25 14:22:22,527] INFO [ZooKeeperClient Kafka server] Connected. (kafka.zookeeper.ZooKeeperClient)
kafka | [2024-04-25 14:22:23,481] INFO Cluster ID = lFyKLv7sTJO7XXtTZrPgZw (kafka.server.KafkaServer)
kafka | [2024-04-25 14:22:23,485] WARN No meta.properties file under dir /var/lib/kafka/data/meta.properties (kafka.server.BrokerMetadataCheckpoint)
kafka | [2024-04-25 14:22:23,532] INFO KafkaConfig values:
kafka | advertised.listeners = PLAINTEXT://kafka:9092,PLAINTEXT_HOST://localhost:29092
kafka | alter.config.policy.class.name = null
kafka | alter.log.dirs.replication.quota.window.num = 11
kafka | alter.log.dirs.replication.quota.window.size.seconds = 1
kafka | authorizer.class.name =
kafka | auto.create.topics.enable = true
kafka | auto.include.jmx.reporter = true
kafka | auto.leader.rebalance.enable = true
kafka | background.threads = 10
kafka | broker.heartbeat.interval.ms = 2000
kafka | broker.id = 1
kafka | broker.id.generation.enable = true
kafka | broker.rack = null
kafka | broker.session.timeout.ms = 9000
kafka | client.quota.callback.class = null
kafka | compression.type = producer
kafka | connection.failed.authentication.delay.ms = 100
policy-pap | Waiting for mariadb port 3306...
policy-pap | mariadb (172.17.0.3:3306) open
policy-pap | Waiting for kafka port 9092...
policy-pap | kafka (172.17.0.8:9092) open
policy-pap | Waiting for api port 6969...
policy-pap | api (172.17.0.7:6969) open
policy-pap | Policy pap config file: /opt/app/policy/pap/etc/papParameters.yaml
policy-pap | PDP group configuration file: /opt/app/policy/pap/etc/mounted/groups.json
policy-pap |
policy-pap | .
____          _            __ _ _
policy-pap |  /\\ / ___'_ __ _ _(_)_ __  __ _ \ \ \ \
policy-pap | ( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \
policy-pap |  \\/  ___)| |_)| | | | | || (_| |  ) ) ) )
policy-pap |   '  |____| .__|_| |_|_| |_\__, | / / / /
policy-pap |  =========|_|==============|___/=/_/_/_/
policy-pap |  :: Spring Boot ::                (v3.1.10)
policy-pap |
policy-pap | [2024-04-25T14:22:49.260+00:00|INFO|Version|background-preinit] HV000001: Hibernate Validator 8.0.1.Final
policy-pap | [2024-04-25T14:22:49.312+00:00|INFO|PolicyPapApplication|main] Starting PolicyPapApplication using Java 17.0.11 with PID 41 (/app/pap.jar started by policy in /opt/app/policy/pap/bin)
policy-pap | [2024-04-25T14:22:49.313+00:00|INFO|PolicyPapApplication|main] No active profile set, falling back to 1 default profile: "default"
policy-pap | [2024-04-25T14:22:51.175+00:00|INFO|RepositoryConfigurationDelegate|main] Bootstrapping Spring Data JPA repositories in DEFAULT mode.
policy-pap | [2024-04-25T14:22:51.271+00:00|INFO|RepositoryConfigurationDelegate|main] Finished Spring Data repository scanning in 86 ms. Found 7 JPA repository interfaces.
policy-pap | [2024-04-25T14:22:51.702+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.exception.ServiceExceptionHandler
policy-pap | [2024-04-25T14:22:51.703+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution.
Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.exception.ServiceExceptionHandler policy-pap | [2024-04-25T14:22:52.272+00:00|INFO|TomcatWebServer|main] Tomcat initialized with port(s): 6969 (http) policy-pap | [2024-04-25T14:22:52.281+00:00|INFO|Http11NioProtocol|main] Initializing ProtocolHandler ["http-nio-6969"] policy-pap | [2024-04-25T14:22:52.283+00:00|INFO|StandardService|main] Starting service [Tomcat] policy-pap | [2024-04-25T14:22:52.283+00:00|INFO|StandardEngine|main] Starting Servlet engine: [Apache Tomcat/10.1.19] policy-pap | [2024-04-25T14:22:52.376+00:00|INFO|[/policy/pap/v1]|main] Initializing Spring embedded WebApplicationContext policy-pap | [2024-04-25T14:22:52.377+00:00|INFO|ServletWebServerApplicationContext|main] Root WebApplicationContext: initialization completed in 3001 ms policy-pap | [2024-04-25T14:22:52.775+00:00|INFO|LogHelper|main] HHH000204: Processing PersistenceUnitInfo [name: default] policy-pap | [2024-04-25T14:22:52.827+00:00|INFO|Version|main] HHH000412: Hibernate ORM core version 5.6.15.Final policy-pap | [2024-04-25T14:22:53.212+00:00|INFO|HikariDataSource|main] HikariPool-1 - Starting... policy-pap | [2024-04-25T14:22:53.310+00:00|INFO|HikariPool|main] HikariPool-1 - Added connection org.mariadb.jdbc.Connection@fd9ebde policy-pap | [2024-04-25T14:22:53.312+00:00|INFO|HikariDataSource|main] HikariPool-1 - Start completed. 
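The repeated "Waiting for &lt;service&gt; port &lt;port&gt;..." / "&lt;service&gt; (&lt;ip&gt;:&lt;port&gt;) open" lines near the top of this log come from a startup probe that blocks each container until its dependencies accept TCP connections. A minimal sketch of such a probe, assuming nothing about the real shell helper baked into the images:

```python
import socket
import time

def wait_for_port(host: str, port: int, retries: int = 30, delay: float = 1.0) -> bool:
    """Block until host:port accepts a TCP connection, or give up.

    Sketch only: the containers in this log use their own shell helper;
    this mirrors the observable behaviour, not the actual implementation.
    """
    for _ in range(retries):
        try:
            with socket.create_connection((host, port), timeout=2.0):
                print(f"{host} ({port}) open")
                return True
        except OSError:
            # not listening yet (refused / unreachable); back off and retry
            time.sleep(delay)
    return False
```

In the log above, mariadb, kafka, and pap are probed in sequence, so apex-pdp only launches its main Java process once the whole dependency chain is reachable.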
policy-pap | [2024-04-25T14:22:53.341+00:00|INFO|Dialect|main] HHH000400: Using dialect: org.hibernate.dialect.MariaDB106Dialect policy-pap | [2024-04-25T14:22:54.933+00:00|INFO|JtaPlatformInitiator|main] HHH000490: Using JtaPlatform implementation: [org.hibernate.engine.transaction.jta.platform.internal.NoJtaPlatform] policy-pap | [2024-04-25T14:22:54.942+00:00|INFO|LocalContainerEntityManagerFactoryBean|main] Initialized JPA EntityManagerFactory for persistence unit 'default' policy-pap | [2024-04-25T14:22:55.404+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.repository.PdpGroupRepository policy-pap | [2024-04-25T14:22:55.848+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.repository.PolicyStatusRepository policy-pap | [2024-04-25T14:22:55.999+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. 
Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.repository.PolicyAuditRepository kafka | connections.max.idle.ms = 600000 grafana | logger=migrator t=2024-04-25T14:22:17.257237357Z level=info msg="Executing migration" id="Add column external_delete_url to dashboard_snapshots table" policy-apex-pdp | metrics.num.samples = 2 policy-db-migrator | > upgrade 0200-jpatoscacapabilitytype_properties.sql policy-pap | [2024-04-25T14:22:56.273+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: prometheus | ts=2024-04-25T14:22:14.321Z caller=main.go:573 level=info msg="No time or size retention was set so using the default time retention" duration=15d kafka | connections.max.reauth.ms = 0 grafana | logger=migrator t=2024-04-25T14:22:17.260189496Z level=info msg="Migration successfully executed" id="Add column external_delete_url to dashboard_snapshots table" duration=2.951639ms policy-apex-pdp | metrics.recording.level = INFO simulator | Policy simulator config file: /opt/app/policy/simulators/etc/mounted/simParameters.json policy-db-migrator | -------------- policy-pap | allow.auto.create.topics = true prometheus | ts=2024-04-25T14:22:14.321Z caller=main.go:617 level=info msg="Starting Prometheus Server" mode=server version="(version=2.51.2, branch=HEAD, revision=b4c0ab52c3e9b940ab803581ddae9b3d9a452337)" kafka | control.plane.listener.name = null grafana | logger=migrator t=2024-04-25T14:22:17.263831994Z level=info msg="Executing migration" id="Add encrypted dashboard json column" zookeeper | ===> User policy-apex-pdp | metrics.sample.window.ms = 30000 simulator | overriding logback.xml policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilitytype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL) policy-db-migrator | -------------- prometheus | ts=2024-04-25T14:22:14.321Z caller=main.go:622 level=info 
build_context="(go=go1.22.2, platform=linux/amd64, user=root@b63f02a423d9, date=20240410-14:05:54, tags=netgo,builtinassets,stringlabels)" kafka | controlled.shutdown.enable = true grafana | logger=migrator t=2024-04-25T14:22:17.266501199Z level=info msg="Migration successfully executed" id="Add encrypted dashboard json column" duration=2.668905ms zookeeper | uid=1000(appuser) gid=1000(appuser) groups=1000(appuser) policy-apex-pdp | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] simulator | 2024-04-25 14:22:12,670 INFO replacing 'HOST_NAME' with simulator in /opt/app/policy/simulators/etc/mounted/simParameters.json policy-pap | auto.commit.interval.ms = 5000 policy-db-migrator | prometheus | ts=2024-04-25T14:22:14.321Z caller=main.go:623 level=info host_details="(Linux 4.15.0-192-generic #203-Ubuntu SMP Wed Aug 10 17:40:03 UTC 2022 x86_64 prometheus (none))" kafka | controlled.shutdown.max.retries = 3 grafana | logger=migrator t=2024-04-25T14:22:17.273637862Z level=info msg="Executing migration" id="Change dashboard_encrypted column to MEDIUMBLOB" zookeeper | ===> Configuring ... policy-apex-pdp | receive.buffer.bytes = 65536 simulator | 2024-04-25 14:22:12,729 INFO org.onap.policy.models.simulators starting policy-pap | auto.include.jmx.reporter = true policy-db-migrator | prometheus | ts=2024-04-25T14:22:14.321Z caller=main.go:624 level=info fd_limits="(soft=1048576, hard=1048576)" kafka | controlled.shutdown.retry.backoff.ms = 5000 grafana | logger=migrator t=2024-04-25T14:22:17.273753024Z level=info msg="Migration successfully executed" id="Change dashboard_encrypted column to MEDIUMBLOB" duration=116.001µs zookeeper | ===> Running preflight checks ... 
policy-apex-pdp | reconnect.backoff.max.ms = 1000 simulator | 2024-04-25 14:22:12,730 INFO org.onap.policy.models.simulators starting CDS gRPC Server Properties policy-pap | auto.offset.reset = latest policy-db-migrator | > upgrade 0210-jpatoscadatatype_constraints.sql prometheus | ts=2024-04-25T14:22:14.321Z caller=main.go:625 level=info vm_limits="(soft=unlimited, hard=unlimited)" grafana | logger=migrator t=2024-04-25T14:22:17.277443421Z level=info msg="Executing migration" id="create quota table v1" zookeeper | ===> Check if /var/lib/zookeeper/data is writable ... policy-apex-pdp | reconnect.backoff.ms = 50 simulator | 2024-04-25 14:22:12,916 INFO org.onap.policy.models.simulators starting org.onap.policy.simulators.AaiSimulatorJaxRs_RESOURCE_LOCATION policy-pap | bootstrap.servers = [kafka:9092] policy-db-migrator | -------------- kafka | controller.listener.names = null prometheus | ts=2024-04-25T14:22:14.324Z caller=web.go:568 level=info component=web msg="Start listening for connections" address=0.0.0.0:9090 grafana | logger=migrator t=2024-04-25T14:22:17.278282163Z level=info msg="Migration successfully executed" id="create quota table v1" duration=838.502µs grafana | logger=migrator t=2024-04-25T14:22:17.283303549Z level=info msg="Executing migration" id="create index UQE_quota_org_id_user_id_target - v1" policy-apex-pdp | request.timeout.ms = 30000 zookeeper | ===> Check if /var/lib/zookeeper/log is writable ... policy-pap | check.crcs = true policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscadatatype_constraints (name VARCHAR(120) NULL, version VARCHAR(20) NULL, CONSTRAINTS VARCHAR(255) NULL) kafka | controller.quorum.append.linger.ms = 25 kafka | controller.quorum.election.backoff.max.ms = 1000 simulator | 2024-04-25 14:22:12,917 INFO org.onap.policy.models.simulators starting A&AI simulator zookeeper | ===> Launching ... 
policy-pap | client.dns.lookup = use_all_dns_ips policy-db-migrator | -------------- kafka | controller.quorum.election.timeout.ms = 1000 kafka | controller.quorum.fetch.timeout.ms = 2000 policy-apex-pdp | retry.backoff.ms = 100 grafana | logger=migrator t=2024-04-25T14:22:17.284113609Z level=info msg="Migration successfully executed" id="create index UQE_quota_org_id_user_id_target - v1" duration=810.24µs simulator | 2024-04-25 14:22:13,012 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-2e61d218==org.glassfish.jersey.servlet.ServletContainer@60d118f1{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=A&AI simulator, host=0.0.0.0, port=6666, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@64a8c844{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@3f6db3fb{/,null,STOPPED}, connector=A&AI simulator@6f152006{HTTP/1.1, (http/1.1)}{0.0.0.0:6666}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-2e61d218==org.glassfish.jersey.servlet.ServletContainer@60d118f1{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: WAITED-START zookeeper | ===> Launching zookeeper ... 
policy-pap | client.id = consumer-b957469a-2969-4bff-8555-1bfe3e4d4da0-1 policy-db-migrator | kafka | controller.quorum.request.timeout.ms = 2000 kafka | controller.quorum.retry.backoff.ms = 20 policy-apex-pdp | sasl.client.callback.handler.class = null grafana | logger=migrator t=2024-04-25T14:22:17.29028375Z level=info msg="Executing migration" id="Update quota table charset" simulator | 2024-04-25 14:22:13,023 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-2e61d218==org.glassfish.jersey.servlet.ServletContainer@60d118f1{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=A&AI simulator, host=0.0.0.0, port=6666, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@64a8c844{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@3f6db3fb{/,null,STOPPED}, connector=A&AI simulator@6f152006{HTTP/1.1, (http/1.1)}{0.0.0.0:6666}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-2e61d218==org.glassfish.jersey.servlet.ServletContainer@60d118f1{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING zookeeper | [2024-04-25 14:22:16,671] INFO Reading configuration from: /etc/kafka/zookeeper.properties (org.apache.zookeeper.server.quorum.QuorumPeerConfig) policy-pap | client.rack = policy-db-migrator | prometheus | ts=2024-04-25T14:22:14.325Z caller=main.go:1129 level=info msg="Starting TSDB ..." 
kafka | controller.quorum.voters = [] policy-apex-pdp | sasl.jaas.config = null grafana | logger=migrator t=2024-04-25T14:22:17.29031391Z level=info msg="Migration successfully executed" id="Update quota table charset" duration=28.02µs simulator | 2024-04-25 14:22:13,025 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-2e61d218==org.glassfish.jersey.servlet.ServletContainer@60d118f1{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=A&AI simulator, host=0.0.0.0, port=6666, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@64a8c844{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@3f6db3fb{/,null,STOPPED}, connector=A&AI simulator@6f152006{HTTP/1.1, (http/1.1)}{0.0.0.0:6666}, jettyThread=Thread[A&AI simulator-6666,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-2e61d218==org.glassfish.jersey.servlet.ServletContainer@60d118f1{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING zookeeper | [2024-04-25 14:22:16,678] INFO clientPortAddress is 0.0.0.0:2181 (org.apache.zookeeper.server.quorum.QuorumPeerConfig) policy-pap | connections.max.idle.ms = 540000 policy-db-migrator | > upgrade 0220-jpatoscadatatype_metadata.sql prometheus | ts=2024-04-25T14:22:14.327Z caller=tls_config.go:313 level=info component=web msg="Listening on" address=[::]:9090 kafka | controller.quota.window.num = 11 policy-apex-pdp | sasl.kerberos.kinit.cmd = /usr/bin/kinit grafana | logger=migrator t=2024-04-25T14:22:17.294423414Z level=info msg="Executing migration" id="create plugin_setting table" simulator | 2024-04-25 14:22:13,029 INFO jetty-11.0.20; built: 2024-01-29T21:04:22.394Z; git: 922f8dc188f7011e60d0361de585fd4ac4d63064; jvm 17.0.11+9-alpine-r0 zookeeper | [2024-04-25 14:22:16,678] INFO secureClientPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig) policy-pap | 
default.api.timeout.ms = 60000 policy-db-migrator | -------------- prometheus | ts=2024-04-25T14:22:14.327Z caller=tls_config.go:316 level=info component=web msg="TLS is disabled." http2=false address=[::]:9090 kafka | controller.quota.window.size.seconds = 1 policy-apex-pdp | sasl.kerberos.min.time.before.relogin = 60000 grafana | logger=migrator t=2024-04-25T14:22:17.295029103Z level=info msg="Migration successfully executed" id="create plugin_setting table" duration=606.299µs simulator | 2024-04-25 14:22:13,086 INFO Session workerName=node0 zookeeper | [2024-04-25 14:22:16,678] INFO observerMasterPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig) policy-pap | enable.auto.commit = true policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscadatatype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) prometheus | ts=2024-04-25T14:22:14.331Z caller=head.go:616 level=info component=tsdb msg="Replaying on-disk memory mappable chunks if any" kafka | controller.socket.timeout.ms = 30000 policy-apex-pdp | sasl.kerberos.service.name = null grafana | logger=migrator t=2024-04-25T14:22:17.299278998Z level=info msg="Executing migration" id="create index UQE_plugin_setting_org_id_plugin_id - v1" simulator | 2024-04-25 14:22:13,599 INFO Using GSON for REST calls zookeeper | [2024-04-25 14:22:16,678] INFO metricsProvider.className is org.apache.zookeeper.metrics.impl.DefaultMetricsProvider (org.apache.zookeeper.server.quorum.QuorumPeerConfig) policy-pap | exclude.internal.topics = true policy-db-migrator | -------------- prometheus | ts=2024-04-25T14:22:14.331Z caller=head.go:698 level=info component=tsdb msg="On-disk memory mappable chunks replay completed" duration=1.85µs kafka | create.topic.policy.class.name = null policy-apex-pdp | sasl.kerberos.ticket.renew.jitter = 0.05 grafana | logger=migrator t=2024-04-25T14:22:17.30013049Z level=info msg="Migration successfully executed" 
id="create index UQE_plugin_setting_org_id_plugin_id - v1" duration=845.502µs simulator | 2024-04-25 14:22:13,716 INFO Started o.e.j.s.ServletContextHandler@3f6db3fb{/,null,AVAILABLE} zookeeper | [2024-04-25 14:22:16,680] INFO autopurge.snapRetainCount set to 3 (org.apache.zookeeper.server.DatadirCleanupManager) policy-pap | fetch.max.bytes = 52428800 policy-db-migrator | prometheus | ts=2024-04-25T14:22:14.331Z caller=head.go:706 level=info component=tsdb msg="Replaying WAL, this may take a while" kafka | default.replication.factor = 1 policy-apex-pdp | sasl.kerberos.ticket.renew.window.factor = 0.8 grafana | logger=migrator t=2024-04-25T14:22:17.305042283Z level=info msg="Executing migration" id="Add column plugin_version to plugin_settings" simulator | 2024-04-25 14:22:13,729 INFO Started A&AI simulator@6f152006{HTTP/1.1, (http/1.1)}{0.0.0.0:6666} zookeeper | [2024-04-25 14:22:16,680] INFO autopurge.purgeInterval set to 0 (org.apache.zookeeper.server.DatadirCleanupManager) policy-pap | fetch.max.wait.ms = 500 policy-db-migrator | prometheus | ts=2024-04-25T14:22:14.331Z caller=head.go:778 level=info component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0 kafka | delegation.token.expiry.check.interval.ms = 3600000 policy-apex-pdp | sasl.login.callback.handler.class = null grafana | logger=migrator t=2024-04-25T14:22:17.31016328Z level=info msg="Migration successfully executed" id="Add column plugin_version to plugin_settings" duration=5.120507ms simulator | 2024-04-25 14:22:13,737 INFO Started Server@64a8c844{STARTING}[11.0.20,sto=0] @1551ms zookeeper | [2024-04-25 14:22:16,680] INFO Purge task is not scheduled. 
(org.apache.zookeeper.server.DatadirCleanupManager) policy-pap | fetch.min.bytes = 1 policy-db-migrator | > upgrade 0230-jpatoscadatatype_properties.sql prometheus | ts=2024-04-25T14:22:14.331Z caller=head.go:815 level=info component=tsdb msg="WAL replay completed" checkpoint_replay_duration=21.55µs wal_replay_duration=381.195µs wbl_replay_duration=190ns total_replay_duration=423.826µs kafka | delegation.token.expiry.time.ms = 86400000 policy-apex-pdp | sasl.login.class = null grafana | logger=migrator t=2024-04-25T14:22:17.313815648Z level=info msg="Executing migration" id="Update plugin_setting table charset" simulator | 2024-04-25 14:22:13,737 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-2e61d218==org.glassfish.jersey.servlet.ServletContainer@60d118f1{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}}, swaggerId=null, toString()=JettyServletServer(name=A&AI simulator, host=0.0.0.0, port=6666, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@64a8c844{STARTED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@3f6db3fb{/,null,AVAILABLE}, connector=A&AI simulator@6f152006{HTTP/1.1, (http/1.1)}{0.0.0.0:6666}, jettyThread=Thread[A&AI simulator-6666,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-2e61d218==org.glassfish.jersey.servlet.ServletContainer@60d118f1{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}})]: pending time is 4288 ms. 
zookeeper | [2024-04-25 14:22:16,680] WARN Either no config or no quorum defined in config, running in standalone mode (org.apache.zookeeper.server.quorum.QuorumPeerMain) policy-pap | group.id = b957469a-2969-4bff-8555-1bfe3e4d4da0 policy-db-migrator | -------------- prometheus | ts=2024-04-25T14:22:14.333Z caller=main.go:1150 level=info fs_type=EXT4_SUPER_MAGIC kafka | delegation.token.master.key = null policy-apex-pdp | sasl.login.connect.timeout.ms = null grafana | logger=migrator t=2024-04-25T14:22:17.313837638Z level=info msg="Migration successfully executed" id="Update plugin_setting table charset" duration=22.97µs simulator | 2024-04-25 14:22:13,746 INFO org.onap.policy.models.simulators starting SDNC simulator zookeeper | [2024-04-25 14:22:16,681] INFO Log4j 1.2 jmx support not found; jmx disabled. (org.apache.zookeeper.jmx.ManagedUtil) policy-pap | group.instance.id = null policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscadatatype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL) prometheus | ts=2024-04-25T14:22:14.333Z caller=main.go:1153 level=info msg="TSDB started" kafka | delegation.token.max.lifetime.ms = 604800000 policy-apex-pdp | sasl.login.read.timeout.ms = null grafana | logger=migrator t=2024-04-25T14:22:17.31695937Z level=info msg="Executing migration" id="create session table" simulator | 2024-04-25 14:22:13,749 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-bf1ec20==org.glassfish.jersey.servlet.ServletContainer@636d8afb{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SDNC simulator, host=0.0.0.0, port=6668, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@70efb718{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@b70da4c{/,null,STOPPED}, connector=SDNC simulator@c5ee75e{HTTP/1.1, 
(http/1.1)}{0.0.0.0:6668}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-bf1ec20==org.glassfish.jersey.servlet.ServletContainer@636d8afb{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: WAITED-START zookeeper | [2024-04-25 14:22:16,682] INFO Reading configuration from: /etc/kafka/zookeeper.properties (org.apache.zookeeper.server.quorum.QuorumPeerConfig) policy-pap | heartbeat.interval.ms = 3000 policy-db-migrator | -------------- prometheus | ts=2024-04-25T14:22:14.333Z caller=main.go:1335 level=info msg="Loading configuration file" filename=/etc/prometheus/prometheus.yml kafka | delegation.token.secret.key = null policy-apex-pdp | sasl.login.refresh.buffer.seconds = 300 grafana | logger=migrator t=2024-04-25T14:22:17.31776314Z level=info msg="Migration successfully executed" id="create session table" duration=803.55µs simulator | 2024-04-25 14:22:13,750 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-bf1ec20==org.glassfish.jersey.servlet.ServletContainer@636d8afb{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SDNC simulator, host=0.0.0.0, port=6668, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@70efb718{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@b70da4c{/,null,STOPPED}, connector=SDNC simulator@c5ee75e{HTTP/1.1, (http/1.1)}{0.0.0.0:6668}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-bf1ec20==org.glassfish.jersey.servlet.ServletContainer@636d8afb{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING zookeeper | [2024-04-25 14:22:16,682] INFO clientPortAddress is 0.0.0.0:2181 (org.apache.zookeeper.server.quorum.QuorumPeerConfig) policy-pap | interceptor.classes = [] policy-db-migrator | prometheus | ts=2024-04-25T14:22:14.334Z caller=main.go:1372 level=info msg="Completed loading of 
configuration file" filename=/etc/prometheus/prometheus.yml totalDuration=738.919µs db_storage=1.18µs remote_storage=1.62µs web_handler=250ns query_engine=610ns scrape=208.233µs scrape_sd=114.681µs notify=20.15µs notify_sd=6.62µs rules=1.18µs tracing=4.43µs kafka | delete.records.purgatory.purge.interval.requests = 1 policy-apex-pdp | sasl.login.refresh.min.period.seconds = 60 grafana | logger=migrator t=2024-04-25T14:22:17.324039382Z level=info msg="Executing migration" id="Drop old table playlist table" simulator | 2024-04-25 14:22:13,759 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-bf1ec20==org.glassfish.jersey.servlet.ServletContainer@636d8afb{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SDNC simulator, host=0.0.0.0, port=6668, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@70efb718{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@b70da4c{/,null,STOPPED}, connector=SDNC simulator@c5ee75e{HTTP/1.1, (http/1.1)}{0.0.0.0:6668}, jettyThread=Thread[SDNC simulator-6668,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-bf1ec20==org.glassfish.jersey.servlet.ServletContainer@636d8afb{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING zookeeper | [2024-04-25 14:22:16,682] INFO secureClientPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig) policy-pap | internal.leave.group.on.close = true policy-db-migrator | prometheus | ts=2024-04-25T14:22:14.334Z caller=main.go:1114 level=info msg="Server is ready to receive web requests." 
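The policy-db-migrator fragments interleaved above step through numbered upgrade scripts (0200-jpatoscacapabilitytype_properties.sql onward), each a CREATE TABLE IF NOT EXISTS for a JPA mapping table. As a sketch, the 0220 statement copied from the log also parses under SQLite, which is a quick way to inspect the column layout without a MariaDB instance (the real target is MariaDB, where VARCHAR/LONGBLOB carry their actual semantics):

```python
import sqlite3

# DDL copied verbatim from the 0220-jpatoscadatatype_metadata.sql step above
DDL = (
    "CREATE TABLE IF NOT EXISTS jpatoscadatatype_metadata ("
    "name VARCHAR(120) NULL, version VARCHAR(20) NULL, "
    "METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL)"
)

conn = sqlite3.connect(":memory:")  # throwaway in-memory database
conn.execute(DDL)
# PRAGMA table_info returns one row per column; index 1 is the column name
cols = [row[1] for row in conn.execute("PRAGMA table_info(jpatoscadatatype_metadata)")]
print(cols)
```

The other jpatosca* scripts in the log follow the same shape: nullable name/version key columns plus one or two payload columns.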
kafka | delete.topic.enable = true policy-apex-pdp | sasl.login.refresh.window.factor = 0.8 grafana | logger=migrator t=2024-04-25T14:22:17.324169694Z level=info msg="Migration successfully executed" id="Drop old table playlist table" duration=129.222µs simulator | 2024-04-25 14:22:13,760 INFO jetty-11.0.20; built: 2024-01-29T21:04:22.394Z; git: 922f8dc188f7011e60d0361de585fd4ac4d63064; jvm 17.0.11+9-alpine-r0 zookeeper | [2024-04-25 14:22:16,682] INFO observerMasterPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig) policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false policy-db-migrator | > upgrade 0240-jpatoscanodetemplate_metadata.sql prometheus | ts=2024-04-25T14:22:14.334Z caller=manager.go:163 level=info component="rule manager" msg="Starting rule manager..." kafka | early.start.listeners = null policy-apex-pdp | sasl.login.refresh.window.jitter = 0.05 grafana | logger=migrator t=2024-04-25T14:22:17.327633649Z level=info msg="Executing migration" id="Drop old table playlist_item table" simulator | 2024-04-25 14:22:13,769 INFO Session workerName=node0 zookeeper | [2024-04-25 14:22:16,682] INFO metricsProvider.className is org.apache.zookeeper.metrics.impl.DefaultMetricsProvider (org.apache.zookeeper.server.quorum.QuorumPeerConfig) policy-pap | isolation.level = read_uncommitted policy-db-migrator | -------------- kafka | fetch.max.bytes = 57671680 kafka | fetch.purgatory.purge.interval.requests = 1000 grafana | logger=migrator t=2024-04-25T14:22:17.327763781Z level=info msg="Migration successfully executed" id="Drop old table playlist_item table" duration=130.832µs simulator | 2024-04-25 14:22:13,851 INFO Using GSON for REST calls zookeeper | [2024-04-25 14:22:16,682] INFO Starting server (org.apache.zookeeper.server.ZooKeeperServerMain) policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscanodetemplate_metadata (name 
VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) policy-apex-pdp | sasl.login.retry.backoff.max.ms = 10000 kafka | group.consumer.assignors = [org.apache.kafka.coordinator.group.assignor.RangeAssignor] grafana | logger=migrator t=2024-04-25T14:22:17.331270297Z level=info msg="Executing migration" id="create playlist table v2" simulator | 2024-04-25 14:22:13,864 INFO Started o.e.j.s.ServletContextHandler@b70da4c{/,null,AVAILABLE} zookeeper | [2024-04-25 14:22:16,693] INFO ServerMetrics initialized with provider org.apache.zookeeper.metrics.impl.DefaultMetricsProvider@77eca502 (org.apache.zookeeper.server.ServerMetrics) policy-pap | max.partition.fetch.bytes = 1048576 policy-db-migrator | -------------- policy-apex-pdp | sasl.login.retry.backoff.ms = 100 kafka | group.consumer.heartbeat.interval.ms = 5000 grafana | logger=migrator t=2024-04-25T14:22:17.332582794Z level=info msg="Migration successfully executed" id="create playlist table v2" duration=1.316117ms simulator | 2024-04-25 14:22:13,865 INFO Started SDNC simulator@c5ee75e{HTTP/1.1, (http/1.1)}{0.0.0.0:6668} zookeeper | [2024-04-25 14:22:16,695] INFO ACL digest algorithm is: SHA1 (org.apache.zookeeper.server.auth.DigestAuthenticationProvider) policy-pap | max.poll.interval.ms = 300000 policy-db-migrator | policy-apex-pdp | sasl.mechanism = GSSAPI kafka | group.consumer.max.heartbeat.interval.ms = 15000 grafana | logger=migrator t=2024-04-25T14:22:17.481236071Z level=info msg="Executing migration" id="create playlist item table v2" simulator | 2024-04-25 14:22:13,866 INFO Started Server@70efb718{STARTING}[11.0.20,sto=0] @1680ms zookeeper | [2024-04-25 14:22:16,695] INFO zookeeper.DigestAuthenticationProvider.enabled = true (org.apache.zookeeper.server.auth.DigestAuthenticationProvider) policy-pap | max.poll.records = 500 policy-db-migrator | policy-apex-pdp | sasl.oauthbearer.clock.skew.seconds = 30 kafka | group.consumer.max.session.timeout.ms = 60000 
grafana | logger=migrator t=2024-04-25T14:22:17.4826573Z level=info msg="Migration successfully executed" id="create playlist item table v2" duration=1.421689ms simulator | 2024-04-25 14:22:13,866 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-bf1ec20==org.glassfish.jersey.servlet.ServletContainer@636d8afb{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}}, swaggerId=null, toString()=JettyServletServer(name=SDNC simulator, host=0.0.0.0, port=6668, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@70efb718{STARTED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@b70da4c{/,null,AVAILABLE}, connector=SDNC simulator@c5ee75e{HTTP/1.1, (http/1.1)}{0.0.0.0:6668}, jettyThread=Thread[SDNC simulator-6668,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-bf1ec20==org.glassfish.jersey.servlet.ServletContainer@636d8afb{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}})]: pending time is 4893 ms. 
zookeeper | [2024-04-25 14:22:16,697] INFO zookeeper.snapshot.trust.empty : false (org.apache.zookeeper.server.persistence.FileTxnSnapLog) policy-pap | metadata.max.age.ms = 300000 policy-db-migrator | > upgrade 0250-jpatoscanodetemplate_properties.sql policy-apex-pdp | sasl.oauthbearer.expected.audience = null kafka | group.consumer.max.size = 2147483647 grafana | logger=migrator t=2024-04-25T14:22:17.494011439Z level=info msg="Executing migration" id="Update playlist table charset" simulator | 2024-04-25 14:22:13,867 INFO org.onap.policy.models.simulators starting SO simulator zookeeper | [2024-04-25 14:22:16,706] INFO (org.apache.zookeeper.server.ZooKeeperServer) policy-pap | metric.reporters = [] policy-db-migrator | -------------- policy-apex-pdp | sasl.oauthbearer.expected.issuer = null kafka | group.consumer.min.heartbeat.interval.ms = 5000 grafana | logger=migrator t=2024-04-25T14:22:17.494050839Z level=info msg="Migration successfully executed" id="Update playlist table charset" duration=40.73µs simulator | 2024-04-25 14:22:13,878 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-4e70a728==org.glassfish.jersey.servlet.ServletContainer@3238e994{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SO simulator, host=0.0.0.0, port=6669, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@b7838a9{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@696f0212{/,null,STOPPED}, connector=SO simulator@4e858e0a{HTTP/1.1, (http/1.1)}{0.0.0.0:6669}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-4e70a728==org.glassfish.jersey.servlet.ServletContainer@3238e994{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: WAITED-START zookeeper | [2024-04-25 14:22:16,706] INFO ______ _ (org.apache.zookeeper.server.ZooKeeperServer) policy-pap | metrics.num.samples = 2 policy-db-migrator | 
CREATE TABLE IF NOT EXISTS jpatoscanodetemplate_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGTEXT NULL, PROPERTIES_KEY VARCHAR(255) NULL) policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 kafka | group.consumer.min.session.timeout.ms = 45000 grafana | logger=migrator t=2024-04-25T14:22:17.498897192Z level=info msg="Executing migration" id="Update playlist_item table charset" simulator | 2024-04-25 14:22:13,879 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-4e70a728==org.glassfish.jersey.servlet.ServletContainer@3238e994{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SO simulator, host=0.0.0.0, port=6669, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@b7838a9{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@696f0212{/,null,STOPPED}, connector=SO simulator@4e858e0a{HTTP/1.1, (http/1.1)}{0.0.0.0:6669}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-4e70a728==org.glassfish.jersey.servlet.ServletContainer@3238e994{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING zookeeper | [2024-04-25 14:22:16,706] INFO |___ / | | (org.apache.zookeeper.server.ZooKeeperServer) policy-pap | metrics.recording.level = INFO policy-db-migrator | -------------- policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 kafka | group.consumer.session.timeout.ms = 45000 grafana | logger=migrator t=2024-04-25T14:22:17.498936633Z level=info msg="Migration successfully executed" id="Update playlist_item table charset" duration=40.911µs simulator | 2024-04-25 14:22:13,880 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-4e70a728==org.glassfish.jersey.servlet.ServletContainer@3238e994{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, 
toString()=JettyServletServer(name=SO simulator, host=0.0.0.0, port=6669, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@b7838a9{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@696f0212{/,null,STOPPED}, connector=SO simulator@4e858e0a{HTTP/1.1, (http/1.1)}{0.0.0.0:6669}, jettyThread=Thread[SO simulator-6669,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-4e70a728==org.glassfish.jersey.servlet.ServletContainer@3238e994{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING zookeeper | [2024-04-25 14:22:16,706] INFO / / ___ ___ | | __ ___ ___ _ __ ___ _ __ (org.apache.zookeeper.server.ZooKeeperServer) policy-pap | metrics.sample.window.ms = 30000 policy-db-migrator | policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 kafka | group.coordinator.new.enable = false grafana | logger=migrator t=2024-04-25T14:22:17.503506693Z level=info msg="Executing migration" id="Add playlist column created_at" simulator | 2024-04-25 14:22:13,880 INFO jetty-11.0.20; built: 2024-01-29T21:04:22.394Z; git: 922f8dc188f7011e60d0361de585fd4ac4d63064; jvm 17.0.11+9-alpine-r0 zookeeper | [2024-04-25 14:22:16,706] INFO / / / _ \ / _ \ | |/ / / _ \ / _ \ | '_ \ / _ \ | '__| (org.apache.zookeeper.server.ZooKeeperServer) policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] policy-db-migrator | policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.url = null kafka | group.coordinator.threads = 1 grafana | logger=migrator t=2024-04-25T14:22:17.508535049Z level=info msg="Migration successfully executed" id="Add playlist column created_at" duration=5.029166ms simulator | 2024-04-25 14:22:13,921 INFO Session workerName=node0 zookeeper | [2024-04-25 14:22:16,706] INFO / /__ | (_) | | (_) | | < | __/ | __/ | |_) | | __/ | | (org.apache.zookeeper.server.ZooKeeperServer) policy-pap | 
receive.buffer.bytes = 65536 policy-db-migrator | > upgrade 0260-jpatoscanodetype_metadata.sql policy-apex-pdp | sasl.oauthbearer.scope.claim.name = scope kafka | group.initial.rebalance.delay.ms = 3000 grafana | logger=migrator t=2024-04-25T14:22:17.513402123Z level=info msg="Executing migration" id="Add playlist column updated_at" simulator | 2024-04-25 14:22:13,990 INFO Using GSON for REST calls zookeeper | [2024-04-25 14:22:16,706] INFO /_____| \___/ \___/ |_|\_\ \___| \___| | .__/ \___| |_| (org.apache.zookeeper.server.ZooKeeperServer) policy-pap | reconnect.backoff.max.ms = 1000 policy-db-migrator | -------------- policy-apex-pdp | sasl.oauthbearer.sub.claim.name = sub kafka | group.max.session.timeout.ms = 1800000 grafana | logger=migrator t=2024-04-25T14:22:17.516756587Z level=info msg="Migration successfully executed" id="Add playlist column updated_at" duration=3.354044ms simulator | 2024-04-25 14:22:14,002 INFO Started o.e.j.s.ServletContextHandler@696f0212{/,null,AVAILABLE} zookeeper | [2024-04-25 14:22:16,707] INFO | | (org.apache.zookeeper.server.ZooKeeperServer) policy-pap | reconnect.backoff.ms = 50 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscanodetype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) policy-apex-pdp | sasl.oauthbearer.token.endpoint.url = null kafka | group.max.size = 2147483647 grafana | logger=migrator t=2024-04-25T14:22:17.527402516Z level=info msg="Executing migration" id="drop preferences table v2" simulator | 2024-04-25 14:22:14,003 INFO Started SO simulator@4e858e0a{HTTP/1.1, (http/1.1)}{0.0.0.0:6669} zookeeper | [2024-04-25 14:22:16,707] INFO |_| (org.apache.zookeeper.server.ZooKeeperServer) policy-pap | request.timeout.ms = 30000 policy-db-migrator | -------------- policy-apex-pdp | security.protocol = PLAINTEXT kafka | group.min.session.timeout.ms = 6000 grafana | logger=migrator t=2024-04-25T14:22:17.52772091Z level=info msg="Migration 
successfully executed" id="drop preferences table v2" duration=317.574µs simulator | 2024-04-25 14:22:14,004 INFO Started Server@b7838a9{STARTING}[11.0.20,sto=0] @1818ms zookeeper | [2024-04-25 14:22:16,707] INFO (org.apache.zookeeper.server.ZooKeeperServer) policy-pap | retry.backoff.ms = 100 policy-db-migrator | policy-apex-pdp | security.providers = null kafka | initial.broker.registration.timeout.ms = 60000 grafana | logger=migrator t=2024-04-25T14:22:17.535233728Z level=info msg="Executing migration" id="drop preferences table v3" simulator | 2024-04-25 14:22:14,004 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-4e70a728==org.glassfish.jersey.servlet.ServletContainer@3238e994{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}}, swaggerId=null, toString()=JettyServletServer(name=SO simulator, host=0.0.0.0, port=6669, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@b7838a9{STARTED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@696f0212{/,null,AVAILABLE}, connector=SO simulator@4e858e0a{HTTP/1.1, (http/1.1)}{0.0.0.0:6669}, jettyThread=Thread[SO simulator-6669,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-4e70a728==org.glassfish.jersey.servlet.ServletContainer@3238e994{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}})]: pending time is 4875 ms. 
zookeeper | [2024-04-25 14:22:16,708] INFO Server environment:zookeeper.version=3.8.4-9316c2a7a97e1666d8f4593f34dd6fc36ecc436c, built on 2024-02-12 22:16 UTC (org.apache.zookeeper.server.ZooKeeperServer) policy-pap | sasl.client.callback.handler.class = null policy-db-migrator | policy-apex-pdp | send.buffer.bytes = 131072 kafka | inter.broker.listener.name = PLAINTEXT grafana | logger=migrator t=2024-04-25T14:22:17.535519823Z level=info msg="Migration successfully executed" id="drop preferences table v3" duration=280.765µs simulator | 2024-04-25 14:22:14,018 INFO org.onap.policy.models.simulators starting VFC simulator zookeeper | [2024-04-25 14:22:16,708] INFO Server environment:host.name=db21b226f583 (org.apache.zookeeper.server.ZooKeeperServer) policy-pap | sasl.jaas.config = null policy-db-migrator | > upgrade 0270-jpatoscanodetype_properties.sql policy-apex-pdp | session.timeout.ms = 45000 kafka | inter.broker.protocol.version = 3.6-IV2 grafana | logger=migrator t=2024-04-25T14:22:17.539561175Z level=info msg="Executing migration" id="create preferences table v3" simulator | 2024-04-25 14:22:14,020 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-47a86fbb==org.glassfish.jersey.servlet.ServletContainer@f8e3b478{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=VFC simulator, host=0.0.0.0, port=6670, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@f478a81{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@19553973{/,null,STOPPED}, connector=VFC simulator@1c3146bc{HTTP/1.1, (http/1.1)}{0.0.0.0:6670}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-47a86fbb==org.glassfish.jersey.servlet.ServletContainer@f8e3b478{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: WAITED-START zookeeper | [2024-04-25 14:22:16,708] INFO Server environment:java.version=11.0.22 
(org.apache.zookeeper.server.ZooKeeperServer) policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit policy-db-migrator | -------------- policy-apex-pdp | socket.connection.setup.timeout.max.ms = 30000 kafka | kafka.metrics.polling.interval.secs = 10 grafana | logger=migrator t=2024-04-25T14:22:17.540959883Z level=info msg="Migration successfully executed" id="create preferences table v3" duration=1.399818ms simulator | 2024-04-25 14:22:14,020 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-47a86fbb==org.glassfish.jersey.servlet.ServletContainer@f8e3b478{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=VFC simulator, host=0.0.0.0, port=6670, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@f478a81{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@19553973{/,null,STOPPED}, connector=VFC simulator@1c3146bc{HTTP/1.1, (http/1.1)}{0.0.0.0:6670}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-47a86fbb==org.glassfish.jersey.servlet.ServletContainer@f8e3b478{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING zookeeper | [2024-04-25 14:22:16,708] INFO Server environment:java.vendor=Azul Systems, Inc. 
(org.apache.zookeeper.server.ZooKeeperServer) policy-pap | sasl.kerberos.min.time.before.relogin = 60000 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscanodetype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL) policy-apex-pdp | socket.connection.setup.timeout.ms = 10000 kafka | kafka.metrics.reporters = [] grafana | logger=migrator t=2024-04-25T14:22:17.545566494Z level=info msg="Executing migration" id="Update preferences table charset" simulator | 2024-04-25 14:22:14,021 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-47a86fbb==org.glassfish.jersey.servlet.ServletContainer@f8e3b478{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=VFC simulator, host=0.0.0.0, port=6670, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@f478a81{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@19553973{/,null,STOPPED}, connector=VFC simulator@1c3146bc{HTTP/1.1, (http/1.1)}{0.0.0.0:6670}, jettyThread=Thread[VFC simulator-6670,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-47a86fbb==org.glassfish.jersey.servlet.ServletContainer@f8e3b478{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING zookeeper | [2024-04-25 14:22:16,708] INFO Server environment:java.home=/usr/lib/jvm/java-11-zulu-openjdk-ca (org.apache.zookeeper.server.ZooKeeperServer) policy-pap | sasl.kerberos.service.name = null policy-db-migrator | -------------- policy-apex-pdp | ssl.cipher.suites = null kafka | leader.imbalance.check.interval.seconds = 300 grafana | logger=migrator t=2024-04-25T14:22:17.545594824Z level=info msg="Migration successfully executed" id="Update preferences table charset" duration=30.23µs simulator | 2024-04-25 14:22:14,022 INFO jetty-11.0.20; built: 2024-01-29T21:04:22.394Z; git: 
922f8dc188f7011e60d0361de585fd4ac4d63064; jvm 17.0.11+9-alpine-r0 zookeeper | [2024-04-25 14:22:16,709] INFO Server environment:java.class.path=/usr/bin/../share/java/kafka/scala-logging_2.13-3.9.4.jar:/usr/bin/../share/java/kafka/jersey-common-2.39.1.jar:/usr/bin/../share/java/kafka/swagger-annotations-2.2.8.jar:/usr/bin/../share/java/kafka/connect-json-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/kafka-storage-api-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/commons-validator-1.7.jar:/usr/bin/../share/java/kafka/javax.servlet-api-3.1.0.jar:/usr/bin/../share/java/kafka/aopalliance-repackaged-2.6.1.jar:/usr/bin/../share/java/kafka/netty-transport-4.1.100.Final.jar:/usr/bin/../share/java/kafka/rocksdbjni-7.9.2.jar:/usr/bin/../share/java/kafka/jetty-http-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/kafka-log4j-appender-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jackson-annotations-2.13.5.jar:/usr/bin/../share/java/kafka/commons-io-2.11.0.jar:/usr/bin/../share/java/kafka/kafka-clients-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/javax.activation-api-1.2.0.jar:/usr/bin/../share/java/kafka/jetty-security-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/commons-cli-1.4.jar:/usr/bin/../share/java/kafka/jetty-server-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/slf4j-reload4j-1.7.36.jar:/usr/bin/../share/java/kafka/scala-reflect-2.13.11.jar:/usr/bin/../share/java/kafka/kafka-server-common-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/kafka-group-coordinator-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/netty-buffer-4.1.100.Final.jar:/usr/bin/../share/java/kafka/connect-runtime-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/trogdor-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/connect-api-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jakarta.ws.rs-api-2.1.6.jar:/usr/bin/../share/java/kafka/jakarta.annotation-api-1.3.5.jar:/usr/bin/../share/java/kafka/kafka-streams-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/scala-java8-compat_2.13-1.0.2.jar:/usr/bin/../share/java/kafka/javax.ws.rs-api-2.
1.1.jar:/usr/bin/../share/java/kafka/zookeeper-jute-3.8.4.jar:/usr/bin/../share/java/kafka/netty-resolver-4.1.100.Final.jar:/usr/bin/../share/java/kafka/hk2-api-2.6.1.jar:/usr/bin/../share/java/kafka/kafka-storage-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/zstd-jni-1.5.5-1.jar:/usr/bin/../share/java/kafka/jackson-dataformat-csv-2.13.5.jar:/usr/bin/../share/java/kafka/kafka.jar:/usr/bin/../share/java/kafka/scala-library-2.13.11.jar:/usr/bin/../share/java/kafka/jakarta.inject-2.6.1.jar:/usr/bin/../share/java/kafka/jakarta.xml.bind-api-2.3.3.jar:/usr/bin/../share/java/kafka/jetty-continuation-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/connect-transforms-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jose4j-0.9.4.jar:/usr/bin/../share/java/kafka/hk2-locator-2.6.1.jar:/usr/bin/../share/java/kafka/reflections-0.10.2.jar:/usr/bin/../share/java/kafka/slf4j-api-1.7.36.jar:/usr/bin/../share/java/kafka/paranamer-2.8.jar:/usr/bin/../share/java/kafka/commons-beanutils-1.9.4.jar:/usr/bin/../share/java/kafka/jaxb-api-2.3.1.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-2.39.1.jar:/usr/bin/../share/java/kafka/hk2-utils-2.6.1.jar:/usr/bin/../share/java/kafka/jackson-module-scala_2.13-2.13.5.jar:/usr/bin/../share/java/kafka/reload4j-1.2.25.jar:/usr/bin/../share/java/kafka/connect-mirror-client-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jetty-servlet-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/jackson-core-2.13.5.jar:/usr/bin/../share/java/kafka/jersey-hk2-2.39.1.jar:/usr/bin/../share/java/kafka/jackson-databind-2.13.5.jar:/usr/bin/../share/java/kafka/jetty-servlets-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/jersey-client-2.39.1.jar:/usr/bin/../share/java/kafka/osgi-resource-locator-1.0.3.jar:/usr/bin/../share/java/kafka/commons-digester-2.1.jar:/usr/bin/../share/java/kafka/netty-transport-native-epoll-4.1.100.Final.jar:/usr/bin/../share/java/kafka/argparse4j-0.7.0.jar:/usr/bin/../share/java/kafka/connect-mirror-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jac
kson-datatype-jdk8-2.13.5.jar:/usr/bin/../share/java/kafka/audience-annotations-0.12.0.jar:/usr/bin/../share/java/kafka/jackson-module-jaxb-annotations-2.13.5.jar:/usr/bin/../share/java/kafka/kafka-raft-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/javax.annotation-api-1.3.2.jar:/usr/bin/../share/java/kafka/maven-artifact-3.8.8.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-json-provider-2.13.5.jar:/usr/bin/../share/java/kafka/jakarta.validation-api-2.0.2.jar:/usr/bin/../share/java/kafka/zookeeper-3.8.4.jar:/usr/bin/../share/java/kafka/jersey-server-2.39.1.jar:/usr/bin/../share/java/kafka/commons-lang3-3.8.1.jar:/usr/bin/../share/java/kafka/jopt-simple-5.0.4.jar:/usr/bin/../share/java/kafka/error_prone_annotations-2.10.0.jar:/usr/bin/../share/java/kafka/lz4-java-1.8.0.jar:/usr/bin/../share/java/kafka/jetty-util-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/kafka-tools-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jakarta.activation-api-1.2.2.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-core-2.39.1.jar:/usr/bin/../share/java/kafka/checker-qual-3.19.0.jar:/usr/bin/../share/java/kafka/kafka-metadata-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/pcollections-4.0.1.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-base-2.13.5.jar:/usr/bin/../share/java/kafka/commons-logging-1.2.jar:/usr/bin/../share/java/kafka/jsr305-3.0.2.jar:/usr/bin/../share/java/kafka/jetty-io-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/netty-codec-4.1.100.Final.jar:/usr/bin/../share/java/kafka/netty-transport-native-unix-common-4.1.100.Final.jar:/usr/bin/../share/java/kafka/scala-collection-compat_2.13-2.10.0.jar:/usr/bin/../share/java/kafka/metrics-core-2.2.0.jar:/usr/bin/../share/java/kafka/kafka-streams-test-utils-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/kafka-streams-examples-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/netty-handler-4.1.100.Final.jar:/usr/bin/../share/java/kafka/commons-collections-3.2.2.jar:/usr/bin/../share/java/kafka/javassist-3.29.2-GA.jar:/usr/bin/../share/java/
kafka/caffeine-2.9.3.jar:/usr/bin/../share/java/kafka/plexus-utils-3.3.1.jar:/usr/bin/../share/java/kafka/kafka-tools-api-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/activation-1.1.1.jar:/usr/bin/../share/java/kafka/netty-common-4.1.100.Final.jar:/usr/bin/../share/java/kafka/kafka-streams-scala_2.13-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/metrics-core-4.1.12.1.jar:/usr/bin/../share/java/kafka/jline-3.25.1.jar:/usr/bin/../share/java/kafka/jetty-client-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/kafka_2.13-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/connect-basic-auth-extension-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jetty-util-ajax-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/snappy-java-1.1.10.5.jar:/usr/bin/../share/java/kafka/kafka-shell-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/netty-transport-classes-epoll-4.1.100.Final.jar:/usr/bin/../share/java/confluent-telemetry/* (org.apache.zookeeper.server.ZooKeeperServer) policy-db-migrator | policy-apex-pdp | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] kafka | leader.imbalance.per.broker.percentage = 10 grafana | logger=migrator t=2024-04-25T14:22:17.550340897Z level=info msg="Executing migration" id="Add column team_id in preferences" simulator | 2024-04-25 14:22:14,025 INFO Session workerName=node0 zookeeper | [2024-04-25 14:22:16,709] INFO Server environment:java.library.path=/usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.server.ZooKeeperServer) policy-apex-pdp | ssl.endpoint.identification.algorithm = https kafka | listener.security.protocol.map = PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT policy-db-migrator | grafana | logger=migrator t=2024-04-25T14:22:17.55517615Z level=info msg="Migration successfully executed" id="Add column team_id in preferences" duration=4.832473ms simulator | 2024-04-25 14:22:14,089 INFO Using GSON for REST calls zookeeper | [2024-04-25 14:22:16,709] INFO Server environment:java.io.tmpdir=/tmp (org.apache.zookeeper.server.ZooKeeperServer) 
policy-apex-pdp | ssl.engine.factory.class = null kafka | listeners = PLAINTEXT://0.0.0.0:9092,PLAINTEXT_HOST://0.0.0.0:29092 policy-db-migrator | > upgrade 0280-jpatoscapolicy_metadata.sql grafana | logger=migrator t=2024-04-25T14:22:17.55898167Z level=info msg="Executing migration" id="Update team_id column values in preferences" simulator | 2024-04-25 14:22:14,097 INFO Started o.e.j.s.ServletContextHandler@19553973{/,null,AVAILABLE} zookeeper | [2024-04-25 14:22:16,709] INFO Server environment:java.compiler= (org.apache.zookeeper.server.ZooKeeperServer) policy-apex-pdp | ssl.key.password = null kafka | log.cleaner.backoff.ms = 15000 policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-25T14:22:17.559316064Z level=info msg="Migration successfully executed" id="Update team_id column values in preferences" duration=335.284µs simulator | 2024-04-25 14:22:14,098 INFO Started VFC simulator@1c3146bc{HTTP/1.1, (http/1.1)}{0.0.0.0:6670} zookeeper | [2024-04-25 14:22:16,709] INFO Server environment:os.name=Linux (org.apache.zookeeper.server.ZooKeeperServer) policy-apex-pdp | ssl.keymanager.algorithm = SunX509 kafka | log.cleaner.dedupe.buffer.size = 134217728 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicy_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) grafana | logger=migrator t=2024-04-25T14:22:17.562798209Z level=info msg="Executing migration" id="Add column week_start in preferences" simulator | 2024-04-25 14:22:14,098 INFO Started Server@f478a81{STARTING}[11.0.20,sto=0] @1912ms zookeeper | [2024-04-25 14:22:16,709] INFO Server environment:os.arch=amd64 (org.apache.zookeeper.server.ZooKeeperServer) policy-apex-pdp | ssl.keystore.certificate.chain = null kafka | log.cleaner.delete.retention.ms = 86400000 policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-25T14:22:17.566088482Z level=info msg="Migration successfully executed" id="Add column 
week_start in preferences" duration=3.289573ms zookeeper | [2024-04-25 14:22:16,709] INFO Server environment:os.version=4.15.0-192-generic (org.apache.zookeeper.server.ZooKeeperServer) simulator | 2024-04-25 14:22:14,098 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-47a86fbb==org.glassfish.jersey.servlet.ServletContainer@f8e3b478{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}}, swaggerId=null, toString()=JettyServletServer(name=VFC simulator, host=0.0.0.0, port=6670, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@f478a81{STARTED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@19553973{/,null,AVAILABLE}, connector=VFC simulator@1c3146bc{HTTP/1.1, (http/1.1)}{0.0.0.0:6670}, jettyThread=Thread[VFC simulator-6670,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-47a86fbb==org.glassfish.jersey.servlet.ServletContainer@f8e3b478{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}})]: pending time is 4923 ms. 
policy-apex-pdp | ssl.keystore.key = null kafka | log.cleaner.enable = true policy-db-migrator | grafana | logger=migrator t=2024-04-25T14:22:17.57500286Z level=info msg="Executing migration" id="Add column preferences.json_data" zookeeper | [2024-04-25 14:22:16,709] INFO Server environment:user.name=appuser (org.apache.zookeeper.server.ZooKeeperServer) simulator | 2024-04-25 14:22:14,099 INFO org.onap.policy.models.simulators started policy-apex-pdp | ssl.keystore.location = null kafka | log.cleaner.io.buffer.load.factor = 0.9 policy-db-migrator | policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 grafana | logger=migrator t=2024-04-25T14:22:17.579579369Z level=info msg="Migration successfully executed" id="Add column preferences.json_data" duration=4.574199ms zookeeper | [2024-04-25 14:22:16,709] INFO Server environment:user.home=/home/appuser (org.apache.zookeeper.server.ZooKeeperServer) policy-apex-pdp | ssl.keystore.password = null kafka | log.cleaner.io.buffer.size = 524288 policy-db-migrator | > upgrade 0290-jpatoscapolicy_properties.sql grafana | logger=migrator t=2024-04-25T14:22:17.585067281Z level=info msg="Executing migration" id="alter preferences.json_data to mediumtext v1" zookeeper | [2024-04-25 14:22:16,709] INFO Server environment:user.dir=/home/appuser (org.apache.zookeeper.server.ZooKeeperServer) policy-apex-pdp | ssl.keystore.type = JKS kafka | log.cleaner.io.max.bytes.per.second = 1.7976931348623157E308 policy-db-migrator | -------------- policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 grafana | logger=migrator t=2024-04-25T14:22:17.585135402Z level=info msg="Migration successfully executed" id="alter preferences.json_data to mediumtext v1" duration=68.641µs zookeeper | [2024-04-25 14:22:16,709] INFO Server environment:os.memory.free=491MB (org.apache.zookeeper.server.ZooKeeperServer) policy-apex-pdp | ssl.protocol = TLSv1.3 kafka | log.cleaner.max.compaction.lag.ms = 9223372036854775807 policy-db-migrator | CREATE TABLE IF NOT 
EXISTS jpatoscapolicy_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGTEXT NULL, PROPERTIES_KEY VARCHAR(255) NULL) policy-pap | sasl.login.callback.handler.class = null grafana | logger=migrator t=2024-04-25T14:22:17.589558111Z level=info msg="Executing migration" id="Add preferences index org_id" zookeeper | [2024-04-25 14:22:16,710] INFO Server environment:os.memory.max=512MB (org.apache.zookeeper.server.ZooKeeperServer) policy-apex-pdp | ssl.provider = null kafka | log.cleaner.min.cleanable.ratio = 0.5 policy-db-migrator | -------------- policy-pap | sasl.login.class = null grafana | logger=migrator t=2024-04-25T14:22:17.590591824Z level=info msg="Migration successfully executed" id="Add preferences index org_id" duration=1.033784ms zookeeper | [2024-04-25 14:22:16,710] INFO Server environment:os.memory.total=512MB (org.apache.zookeeper.server.ZooKeeperServer) policy-apex-pdp | ssl.secure.random.implementation = null kafka | log.cleaner.min.compaction.lag.ms = 0 policy-db-migrator | policy-pap | sasl.login.connect.timeout.ms = null grafana | logger=migrator t=2024-04-25T14:22:17.597001538Z level=info msg="Executing migration" id="Add preferences index user_id" zookeeper | [2024-04-25 14:22:16,710] INFO zookeeper.enableEagerACLCheck = false (org.apache.zookeeper.server.ZooKeeperServer) policy-apex-pdp | ssl.trustmanager.algorithm = PKIX kafka | log.cleaner.threads = 1 policy-db-migrator | policy-pap | sasl.login.read.timeout.ms = null grafana | logger=migrator t=2024-04-25T14:22:17.598679399Z level=info msg="Migration successfully executed" id="Add preferences index user_id" duration=1.676751ms zookeeper | [2024-04-25 14:22:16,710] INFO zookeeper.digest.enabled = true (org.apache.zookeeper.server.ZooKeeperServer) policy-apex-pdp | ssl.truststore.certificates = null kafka | log.cleanup.policy = [delete] policy-db-migrator | > upgrade 0300-jpatoscapolicy_targets.sql policy-pap | sasl.login.refresh.buffer.seconds = 300 grafana | 
logger=migrator t=2024-04-25T14:22:17.603380591Z level=info msg="Executing migration" id="create alert table v1" zookeeper | [2024-04-25 14:22:16,710] INFO zookeeper.closeSessionTxn.enabled = true (org.apache.zookeeper.server.ZooKeeperServer) policy-apex-pdp | ssl.truststore.location = null kafka | log.dir = /tmp/kafka-logs policy-db-migrator | -------------- policy-pap | sasl.login.refresh.min.period.seconds = 60 grafana | logger=migrator t=2024-04-25T14:22:17.60482428Z level=info msg="Migration successfully executed" id="create alert table v1" duration=1.441559ms zookeeper | [2024-04-25 14:22:16,710] INFO zookeeper.flushDelay = 0 ms (org.apache.zookeeper.server.ZooKeeperServer) policy-apex-pdp | ssl.truststore.password = null kafka | log.dirs = /var/lib/kafka/data policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicy_targets (name VARCHAR(120) NULL, version VARCHAR(20) NULL) policy-pap | sasl.login.refresh.window.factor = 0.8 grafana | logger=migrator t=2024-04-25T14:22:17.609332189Z level=info msg="Executing migration" id="add index alert org_id & id " zookeeper | [2024-04-25 14:22:16,710] INFO zookeeper.maxWriteQueuePollTime = 0 ms (org.apache.zookeeper.server.ZooKeeperServer) policy-apex-pdp | ssl.truststore.type = JKS kafka | log.flush.interval.messages = 9223372036854775807 policy-db-migrator | -------------- policy-pap | sasl.login.refresh.window.jitter = 0.05 grafana | logger=migrator t=2024-04-25T14:22:17.610481474Z level=info msg="Migration successfully executed" id="add index alert org_id & id " duration=1.146995ms zookeeper | [2024-04-25 14:22:16,710] INFO zookeeper.maxBatchSize=1000 (org.apache.zookeeper.server.ZooKeeperServer) policy-apex-pdp | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer kafka | log.flush.interval.ms = null policy-db-migrator | policy-pap | sasl.login.retry.backoff.max.ms = 10000 grafana | logger=migrator t=2024-04-25T14:22:17.618351277Z level=info msg="Executing migration" id="add 
index alert state" zookeeper | [2024-04-25 14:22:16,711] INFO zookeeper.intBufferStartingSizeBytes = 1024 (org.apache.zookeeper.server.ZooKeeperServer) policy-apex-pdp | kafka | log.flush.offset.checkpoint.interval.ms = 60000 policy-db-migrator | policy-pap | sasl.login.retry.backoff.ms = 100 grafana | logger=migrator t=2024-04-25T14:22:17.62007403Z level=info msg="Migration successfully executed" id="add index alert state" duration=1.722653ms zookeeper | [2024-04-25 14:22:16,711] INFO Weighed connection throttling is disabled (org.apache.zookeeper.server.BlueThrottle) policy-apex-pdp | [2024-04-25T14:22:59.265+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 kafka | log.flush.scheduler.interval.ms = 9223372036854775807 policy-db-migrator | > upgrade 0310-jpatoscapolicytype_metadata.sql policy-pap | sasl.mechanism = GSSAPI grafana | logger=migrator t=2024-04-25T14:22:17.625952457Z level=info msg="Executing migration" id="add index alert dashboard_id" zookeeper | [2024-04-25 14:22:16,712] INFO minSessionTimeout set to 4000 ms (org.apache.zookeeper.server.ZooKeeperServer) policy-apex-pdp | [2024-04-25T14:22:59.266+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 kafka | log.flush.start.offset.checkpoint.interval.ms = 60000 policy-db-migrator | -------------- policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 grafana | logger=migrator t=2024-04-25T14:22:17.626738817Z level=info msg="Migration successfully executed" id="add index alert dashboard_id" duration=786.44µs zookeeper | [2024-04-25 14:22:16,713] INFO maxSessionTimeout set to 40000 ms (org.apache.zookeeper.server.ZooKeeperServer) policy-apex-pdp | [2024-04-25T14:22:59.266+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1714054979264 kafka | log.index.interval.bytes = 4096 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicytype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) policy-pap | 
sasl.oauthbearer.expected.audience = null grafana | logger=migrator t=2024-04-25T14:22:17.630682219Z level=info msg="Executing migration" id="Create alert_rule_tag table v1" zookeeper | [2024-04-25 14:22:16,714] INFO getData response cache size is initialized with value 400. (org.apache.zookeeper.server.ResponseCache) policy-apex-pdp | [2024-04-25T14:22:59.268+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-5f0ab5d6-63b3-4b5a-a200-3d330f0096ce-1, groupId=5f0ab5d6-63b3-4b5a-a200-3d330f0096ce] Subscribed to topic(s): policy-pdp-pap kafka | log.index.size.max.bytes = 10485760 policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-25T14:22:17.631330788Z level=info msg="Migration successfully executed" id="Create alert_rule_tag table v1" duration=648.599µs zookeeper | [2024-04-25 14:22:16,714] INFO getChildren response cache size is initialized with value 400. (org.apache.zookeeper.server.ResponseCache) policy-apex-pdp | [2024-04-25T14:22:59.279+00:00|INFO|ServiceManager|main] service manager starting kafka | log.local.retention.bytes = -2 policy-db-migrator | grafana | logger=migrator t=2024-04-25T14:22:17.63688785Z level=info msg="Executing migration" id="Add unique index alert_rule_tag.alert_id_tag_id" zookeeper | [2024-04-25 14:22:16,714] INFO zookeeper.pathStats.slotCapacity = 60 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) policy-apex-pdp | [2024-04-25T14:22:59.280+00:00|INFO|ServiceManager|main] service manager starting topics kafka | log.local.retention.ms = -2 policy-db-migrator | grafana | logger=migrator t=2024-04-25T14:22:17.638559052Z level=info msg="Migration successfully executed" id="Add unique index alert_rule_tag.alert_id_tag_id" duration=1.666511ms zookeeper | [2024-04-25 14:22:16,715] INFO zookeeper.pathStats.slotDuration = 15 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) policy-apex-pdp | [2024-04-25T14:22:59.281+00:00|INFO|SingleThreadedBusTopicSource|main] 
SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=5f0ab5d6-63b3-4b5a-a200-3d330f0096ce, consumerInstance=policy-apex-pdp, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: starting kafka | log.message.downconversion.enable = true policy-db-migrator | > upgrade 0320-jpatoscapolicytype_properties.sql grafana | logger=migrator t=2024-04-25T14:22:17.643226923Z level=info msg="Executing migration" id="drop index UQE_alert_rule_tag_alert_id_tag_id - v1" zookeeper | [2024-04-25 14:22:16,715] INFO zookeeper.pathStats.maxDepth = 6 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) policy-pap | sasl.oauthbearer.expected.issuer = null policy-apex-pdp | [2024-04-25T14:22:59.301+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: kafka | log.message.format.version = 3.0-IV1 policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-25T14:22:17.644454909Z level=info msg="Migration successfully executed" id="drop index UQE_alert_rule_tag_alert_id_tag_id - v1" duration=1.229376ms zookeeper | [2024-04-25 14:22:16,715] INFO zookeeper.pathStats.initialDelay = 5 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 policy-apex-pdp | allow.auto.create.topics = true kafka | log.message.timestamp.after.max.ms = 9223372036854775807 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicytype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL) grafana | logger=migrator 
t=2024-04-25T14:22:17.648495992Z level=info msg="Executing migration" id="Rename table alert_rule_tag to alert_rule_tag_v1 - v1" zookeeper | [2024-04-25 14:22:16,715] INFO zookeeper.pathStats.delay = 5 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 policy-apex-pdp | auto.commit.interval.ms = 5000 kafka | log.message.timestamp.before.max.ms = 9223372036854775807 policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-25T14:22:17.661499443Z level=info msg="Migration successfully executed" id="Rename table alert_rule_tag to alert_rule_tag_v1 - v1" duration=13.002711ms policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 zookeeper | [2024-04-25 14:22:16,715] INFO zookeeper.pathStats.enabled = false (org.apache.zookeeper.server.util.RequestPathMetricsCollector) policy-apex-pdp | auto.include.jmx.reporter = true policy-db-migrator | grafana | logger=migrator t=2024-04-25T14:22:17.667069626Z level=info msg="Executing migration" id="Create alert_rule_tag table v2" policy-pap | sasl.oauthbearer.jwks.endpoint.url = null kafka | log.message.timestamp.difference.max.ms = 9223372036854775807 zookeeper | [2024-04-25 14:22:16,717] INFO The max bytes for all large requests are set to 104857600 (org.apache.zookeeper.server.ZooKeeperServer) policy-apex-pdp | auto.offset.reset = latest policy-db-migrator | grafana | logger=migrator t=2024-04-25T14:22:17.667799675Z level=info msg="Migration successfully executed" id="Create alert_rule_tag table v2" duration=729.529µs policy-pap | sasl.oauthbearer.scope.claim.name = scope kafka | log.message.timestamp.type = CreateTime zookeeper | [2024-04-25 14:22:16,717] INFO The large request threshold is set to -1 (org.apache.zookeeper.server.ZooKeeperServer) policy-apex-pdp | bootstrap.servers = [kafka:9092] policy-db-migrator | > upgrade 0330-jpatoscapolicytype_targets.sql grafana | logger=migrator 
t=2024-04-25T14:22:17.671041147Z level=info msg="Executing migration" id="create index UQE_alert_rule_tag_alert_id_tag_id - Add unique index alert_rule_tag.alert_id_tag_id V2" kafka | log.preallocate = false zookeeper | [2024-04-25 14:22:16,718] INFO zookeeper.enforce.auth.enabled = false (org.apache.zookeeper.server.AuthenticationHelper) policy-apex-pdp | check.crcs = true policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-25T14:22:17.67199321Z level=info msg="Migration successfully executed" id="create index UQE_alert_rule_tag_alert_id_tag_id - Add unique index alert_rule_tag.alert_id_tag_id V2" duration=951.753µs policy-pap | sasl.oauthbearer.sub.claim.name = sub kafka | log.retention.bytes = -1 zookeeper | [2024-04-25 14:22:16,718] INFO zookeeper.enforce.auth.schemes = [] (org.apache.zookeeper.server.AuthenticationHelper) policy-apex-pdp | client.dns.lookup = use_all_dns_ips policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicytype_targets (name VARCHAR(120) NULL, version VARCHAR(20) NULL) grafana | logger=migrator t=2024-04-25T14:22:17.676159275Z level=info msg="Executing migration" id="copy alert_rule_tag v1 to v2" policy-pap | sasl.oauthbearer.token.endpoint.url = null kafka | log.retention.check.interval.ms = 300000 zookeeper | [2024-04-25 14:22:16,718] INFO Created server with tickTime 2000 ms minSessionTimeout 4000 ms maxSessionTimeout 40000 ms clientPortListenBacklog -1 datadir /var/lib/zookeeper/log/version-2 snapdir /var/lib/zookeeper/data/version-2 (org.apache.zookeeper.server.ZooKeeperServer) policy-apex-pdp | client.id = consumer-5f0ab5d6-63b3-4b5a-a200-3d330f0096ce-2 policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-25T14:22:17.676669361Z level=info msg="Migration successfully executed" id="copy alert_rule_tag v1 to v2" duration=509.276µs policy-pap | security.protocol = PLAINTEXT kafka | log.retention.hours = 168 zookeeper | [2024-04-25 14:22:16,736] INFO Logging initialized @471ms to 
org.eclipse.jetty.util.log.Slf4jLog (org.eclipse.jetty.util.log) policy-apex-pdp | client.rack = policy-db-migrator | grafana | logger=migrator t=2024-04-25T14:22:17.683224307Z level=info msg="Executing migration" id="drop table alert_rule_tag_v1" policy-pap | security.providers = null kafka | log.retention.minutes = null zookeeper | [2024-04-25 14:22:16,821] WARN o.e.j.s.ServletContextHandler@6d5620ce{/,null,STOPPED} contextPath ends with /* (org.eclipse.jetty.server.handler.ContextHandler) policy-db-migrator | policy-apex-pdp | connections.max.idle.ms = 540000 policy-apex-pdp | default.api.timeout.ms = 60000 policy-pap | send.buffer.bytes = 131072 kafka | log.retention.ms = null zookeeper | [2024-04-25 14:22:16,822] WARN Empty contextPath (org.eclipse.jetty.server.handler.ContextHandler) policy-db-migrator | > upgrade 0340-jpatoscapolicytype_triggers.sql policy-apex-pdp | enable.auto.commit = true policy-apex-pdp | exclude.internal.topics = true policy-pap | session.timeout.ms = 45000 kafka | log.roll.hours = 168 zookeeper | [2024-04-25 14:22:16,839] INFO jetty-9.4.54.v20240208; built: 2024-02-08T19:42:39.027Z; git: cef3fbd6d736a21e7d541a5db490381d95a2047d; jvm 11.0.22+7-LTS (org.eclipse.jetty.server.Server) policy-db-migrator | -------------- policy-apex-pdp | fetch.max.bytes = 52428800 policy-apex-pdp | fetch.max.wait.ms = 500 policy-pap | socket.connection.setup.timeout.max.ms = 30000 kafka | log.roll.jitter.hours = 0 zookeeper | [2024-04-25 14:22:16,869] INFO DefaultSessionIdManager workerName=node0 (org.eclipse.jetty.server.session) policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicytype_triggers (name VARCHAR(120) NULL, version VARCHAR(20) NULL, TRIGGERS VARCHAR(255) NULL) policy-apex-pdp | fetch.min.bytes = 1 policy-apex-pdp | group.id = 5f0ab5d6-63b3-4b5a-a200-3d330f0096ce policy-pap | socket.connection.setup.timeout.ms = 10000 kafka | log.roll.jitter.ms = null zookeeper | [2024-04-25 14:22:16,869] INFO No SessionScavenger set, using defaults 
(org.eclipse.jetty.server.session) policy-db-migrator | -------------- policy-apex-pdp | group.instance.id = null policy-apex-pdp | heartbeat.interval.ms = 3000 policy-pap | ssl.cipher.suites = null kafka | log.roll.ms = null zookeeper | [2024-04-25 14:22:16,870] INFO node0 Scavenging every 600000ms (org.eclipse.jetty.server.session) policy-db-migrator | policy-apex-pdp | interceptor.classes = [] policy-apex-pdp | internal.leave.group.on.close = true policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] kafka | log.segment.bytes = 1073741824 zookeeper | [2024-04-25 14:22:16,872] WARN ServletContext@o.e.j.s.ServletContextHandler@6d5620ce{/,null,STARTING} has uncovered http methods for path: /* (org.eclipse.jetty.security.SecurityHandler) policy-db-migrator | policy-apex-pdp | internal.throw.on.fetch.stable.offset.unsupported = false policy-apex-pdp | isolation.level = read_uncommitted policy-pap | ssl.endpoint.identification.algorithm = https kafka | log.segment.delete.delay.ms = 60000 zookeeper | [2024-04-25 14:22:16,879] INFO Started o.e.j.s.ServletContextHandler@6d5620ce{/,null,AVAILABLE} (org.eclipse.jetty.server.handler.ContextHandler) policy-db-migrator | > upgrade 0350-jpatoscaproperty_constraints.sql policy-apex-pdp | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-apex-pdp | max.partition.fetch.bytes = 1048576 policy-pap | ssl.engine.factory.class = null kafka | max.connection.creation.rate = 2147483647 zookeeper | [2024-04-25 14:22:16,890] INFO Started ServerConnector@4d1bf319{HTTP/1.1, (http/1.1)}{0.0.0.0:8080} (org.eclipse.jetty.server.AbstractConnector) policy-db-migrator | -------------- policy-apex-pdp | max.poll.interval.ms = 300000 policy-apex-pdp | max.poll.records = 500 policy-pap | ssl.key.password = null kafka | max.connections = 2147483647 zookeeper | [2024-04-25 14:22:16,890] INFO Started @626ms (org.eclipse.jetty.server.Server) policy-db-migrator | CREATE TABLE IF NOT EXISTS 
jpatoscaproperty_constraints (parentLocalName VARCHAR(120) NULL, localName VARCHAR(120) NULL, parentKeyVersion VARCHAR(15) NULL, parentKeyName VARCHAR(120) NULL, CONSTRAINTS VARCHAR(255) NULL) policy-apex-pdp | metadata.max.age.ms = 300000 policy-apex-pdp | metric.reporters = [] policy-pap | ssl.keymanager.algorithm = SunX509 kafka | max.connections.per.ip = 2147483647 zookeeper | [2024-04-25 14:22:16,890] INFO Started AdminServer on address 0.0.0.0, port 8080 and command URL /commands (org.apache.zookeeper.server.admin.JettyAdminServer) policy-db-migrator | -------------- policy-apex-pdp | metrics.num.samples = 2 policy-apex-pdp | metrics.recording.level = INFO kafka | max.connections.per.ip.overrides = zookeeper | [2024-04-25 14:22:16,895] INFO Using org.apache.zookeeper.server.NIOServerCnxnFactory as server connection factory (org.apache.zookeeper.server.ServerCnxnFactory) policy-db-migrator | policy-apex-pdp | metrics.sample.window.ms = 30000 policy-apex-pdp | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] policy-pap | ssl.keystore.certificate.chain = null kafka | max.incremental.fetch.session.cache.slots = 1000 zookeeper | [2024-04-25 14:22:16,896] WARN maxCnxns is not configured, using default value 0. (org.apache.zookeeper.server.ServerCnxnFactory) policy-db-migrator | policy-apex-pdp | receive.buffer.bytes = 65536 policy-apex-pdp | reconnect.backoff.max.ms = 1000 policy-pap | ssl.keystore.key = null kafka | message.max.bytes = 1048588 zookeeper | [2024-04-25 14:22:16,898] INFO Configuring NIO connection handler with 10s sessionless connection timeout, 2 selector thread(s), 16 worker threads, and 64 kB direct buffers. 
(org.apache.zookeeper.server.NIOServerCnxnFactory) policy-db-migrator | > upgrade 0360-jpatoscaproperty_metadata.sql policy-apex-pdp | reconnect.backoff.ms = 50 policy-apex-pdp | request.timeout.ms = 30000 policy-pap | ssl.keystore.location = null kafka | metadata.log.dir = null zookeeper | [2024-04-25 14:22:16,899] INFO binding to port 0.0.0.0/0.0.0.0:2181 (org.apache.zookeeper.server.NIOServerCnxnFactory) policy-db-migrator | -------------- policy-apex-pdp | retry.backoff.ms = 100 policy-apex-pdp | sasl.client.callback.handler.class = null policy-pap | ssl.keystore.password = null kafka | metadata.log.max.record.bytes.between.snapshots = 20971520 zookeeper | [2024-04-25 14:22:16,915] INFO Using org.apache.zookeeper.server.watch.WatchManager as watch manager (org.apache.zookeeper.server.watch.WatchManagerFactory) policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscaproperty_metadata (parentLocalName VARCHAR(120) NULL, localName VARCHAR(120) NULL, parentKeyVersion VARCHAR(15) NULL, parentKeyName VARCHAR(120) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) policy-apex-pdp | sasl.jaas.config = null policy-apex-pdp | sasl.kerberos.kinit.cmd = /usr/bin/kinit policy-pap | ssl.keystore.type = JKS kafka | metadata.log.max.snapshot.interval.ms = 3600000 zookeeper | [2024-04-25 14:22:16,916] INFO Using org.apache.zookeeper.server.watch.WatchManager as watch manager (org.apache.zookeeper.server.watch.WatchManagerFactory) policy-db-migrator | -------------- policy-apex-pdp | sasl.kerberos.min.time.before.relogin = 60000 grafana | logger=migrator t=2024-04-25T14:22:17.683923197Z level=info msg="Migration successfully executed" id="drop table alert_rule_tag_v1" duration=698.28µs policy-pap | ssl.protocol = TLSv1.3 kafka | metadata.log.segment.bytes = 1073741824 zookeeper | [2024-04-25 14:22:16,917] INFO zookeeper.snapshotSizeFactor = 0.33 (org.apache.zookeeper.server.ZKDatabase) policy-db-migrator | grafana | logger=migrator t=2024-04-25T14:22:17.687762857Z 
level=info msg="Executing migration" id="create alert_notification table v1" grafana | logger=migrator t=2024-04-25T14:22:17.688602027Z level=info msg="Migration successfully executed" id="create alert_notification table v1" duration=838.76µs policy-apex-pdp | sasl.kerberos.service.name = null kafka | metadata.log.segment.min.bytes = 8388608 zookeeper | [2024-04-25 14:22:16,917] INFO zookeeper.commitLogCount=500 (org.apache.zookeeper.server.ZKDatabase) policy-db-migrator | policy-pap | ssl.provider = null grafana | logger=migrator t=2024-04-25T14:22:17.695501028Z level=info msg="Executing migration" id="Add column is_default" policy-apex-pdp | sasl.kerberos.ticket.renew.jitter = 0.05 kafka | metadata.log.segment.ms = 604800000 zookeeper | [2024-04-25 14:22:16,961] INFO zookeeper.snapshot.compression.method = CHECKED (org.apache.zookeeper.server.persistence.SnapStream) policy-db-migrator | > upgrade 0370-jpatoscarelationshiptype_metadata.sql policy-pap | ssl.secure.random.implementation = null grafana | logger=migrator t=2024-04-25T14:22:17.699526061Z level=info msg="Migration successfully executed" id="Add column is_default" duration=4.025703ms policy-apex-pdp | sasl.kerberos.ticket.renew.window.factor = 0.8 kafka | metadata.max.idle.interval.ms = 500 zookeeper | [2024-04-25 14:22:16,961] INFO Snapshotting: 0x0 to /var/lib/zookeeper/data/version-2/snapshot.0 (org.apache.zookeeper.server.persistence.FileTxnSnapLog) policy-db-migrator | -------------- policy-pap | ssl.trustmanager.algorithm = PKIX grafana | logger=migrator t=2024-04-25T14:22:17.705370537Z level=info msg="Executing migration" id="Add column frequency" policy-apex-pdp | sasl.login.callback.handler.class = null kafka | metadata.max.retention.bytes = 104857600 zookeeper | [2024-04-25 14:22:16,966] INFO Snapshot loaded in 48 ms, highest zxid is 0x0, digest is 1371985504 (org.apache.zookeeper.server.ZKDatabase) policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarelationshiptype_metadata (name 
VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) policy-pap | ssl.truststore.certificates = null grafana | logger=migrator t=2024-04-25T14:22:17.708937403Z level=info msg="Migration successfully executed" id="Add column frequency" duration=3.566316ms policy-apex-pdp | sasl.login.class = null kafka | metadata.max.retention.ms = 604800000 zookeeper | [2024-04-25 14:22:16,967] INFO Snapshotting: 0x0 to /var/lib/zookeeper/data/version-2/snapshot.0 (org.apache.zookeeper.server.persistence.FileTxnSnapLog) policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-25T14:22:17.712492951Z level=info msg="Executing migration" id="Add column send_reminder" kafka | metric.reporters = [] zookeeper | [2024-04-25 14:22:16,967] INFO Snapshot taken in 1 ms (org.apache.zookeeper.server.ZooKeeperServer) policy-db-migrator | policy-apex-pdp | sasl.login.connect.timeout.ms = null policy-pap | ssl.truststore.location = null grafana | logger=migrator t=2024-04-25T14:22:17.716141298Z level=info msg="Migration successfully executed" id="Add column send_reminder" duration=3.647627ms kafka | metrics.num.samples = 2 zookeeper | [2024-04-25 14:22:16,984] INFO PrepRequestProcessor (sid:0) started, reconfigEnabled=false (org.apache.zookeeper.server.PrepRequestProcessor) policy-db-migrator | policy-apex-pdp | sasl.login.read.timeout.ms = null grafana | logger=migrator t=2024-04-25T14:22:17.719739065Z level=info msg="Executing migration" id="Add column disable_resolve_message" zookeeper | [2024-04-25 14:22:16,985] INFO zookeeper.request_throttler.shutdownTimeout = 10000 ms (org.apache.zookeeper.server.RequestThrottler) policy-db-migrator | > upgrade 0380-jpatoscarelationshiptype_properties.sql policy-apex-pdp | sasl.login.refresh.buffer.seconds = 300 kafka | metrics.recording.level = INFO grafana | logger=migrator t=2024-04-25T14:22:17.723324393Z level=info msg="Migration successfully executed" id="Add column 
disable_resolve_message" duration=3.582318ms policy-db-migrator | -------------- policy-apex-pdp | sasl.login.refresh.min.period.seconds = 60 kafka | metrics.sample.window.ms = 30000 zookeeper | [2024-04-25 14:22:17,002] INFO Using checkIntervalMs=60000 maxPerMinute=10000 maxNeverUsedIntervalMs=0 (org.apache.zookeeper.server.ContainerManager) grafana | logger=migrator t=2024-04-25T14:22:17.728004504Z level=info msg="Executing migration" id="add index alert_notification org_id & name" policy-apex-pdp | sasl.login.refresh.window.factor = 0.8 kafka | min.insync.replicas = 1 zookeeper | [2024-04-25 14:22:17,003] INFO ZooKeeper audit is disabled. (org.apache.zookeeper.audit.ZKAuditProvider) policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarelationshiptype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL) grafana | logger=migrator t=2024-04-25T14:22:17.728965107Z level=info msg="Migration successfully executed" id="add index alert_notification org_id & name" duration=960.044µs policy-apex-pdp | sasl.login.refresh.window.jitter = 0.05 kafka | node.id = 1 zookeeper | [2024-04-25 14:22:21,025] INFO Creating new log file: log.1 (org.apache.zookeeper.server.persistence.FileTxnLog) policy-db-migrator | -------------- policy-apex-pdp | sasl.login.retry.backoff.max.ms = 10000 kafka | num.io.threads = 8 grafana | logger=migrator t=2024-04-25T14:22:17.735873337Z level=info msg="Executing migration" id="Update alert table charset" policy-pap | ssl.truststore.password = null policy-db-migrator | policy-db-migrator | kafka | num.network.threads = 3 grafana | logger=migrator t=2024-04-25T14:22:17.735915867Z level=info msg="Migration successfully executed" id="Update alert table charset" duration=41.35µs policy-pap | ssl.truststore.type = JKS policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer kafka | num.partitions = 1 policy-apex-pdp | 
sasl.login.retry.backoff.ms = 100 grafana | logger=migrator t=2024-04-25T14:22:17.742858679Z level=info msg="Executing migration" id="Update alert_notification table charset" policy-db-migrator | > upgrade 0390-jpatoscarequirement_metadata.sql policy-pap | kafka | num.recovery.threads.per.data.dir = 1 policy-apex-pdp | sasl.mechanism = GSSAPI grafana | logger=migrator t=2024-04-25T14:22:17.742899459Z level=info msg="Migration successfully executed" id="Update alert_notification table charset" duration=42.391µs policy-db-migrator | -------------- policy-pap | [2024-04-25T14:22:56.439+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 kafka | num.replica.alter.log.dirs.threads = null policy-apex-pdp | sasl.oauthbearer.clock.skew.seconds = 30 grafana | logger=migrator t=2024-04-25T14:22:17.748567493Z level=info msg="Executing migration" id="create notification_journal table v1" policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarequirement_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) policy-pap | [2024-04-25T14:22:56.439+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 kafka | num.replica.fetchers = 1 policy-apex-pdp | sasl.oauthbearer.expected.audience = null grafana | logger=migrator t=2024-04-25T14:22:17.74986554Z level=info msg="Migration successfully executed" id="create notification_journal table v1" duration=1.297327ms policy-db-migrator | -------------- policy-pap | [2024-04-25T14:22:56.439+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1714054976437 kafka | offset.metadata.max.bytes = 4096 policy-apex-pdp | sasl.oauthbearer.expected.issuer = null grafana | logger=migrator t=2024-04-25T14:22:17.756141012Z level=info msg="Executing migration" id="add index notification_journal org_id & alert_id & notifier_id" policy-db-migrator | policy-pap | [2024-04-25T14:22:56.441+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-b957469a-2969-4bff-8555-1bfe3e4d4da0-1, 
groupId=b957469a-2969-4bff-8555-1bfe3e4d4da0] Subscribed to topic(s): policy-pdp-pap kafka | offsets.commit.required.acks = -1 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 grafana | logger=migrator t=2024-04-25T14:22:17.757737423Z level=info msg="Migration successfully executed" id="add index notification_journal org_id & alert_id & notifier_id" duration=1.596481ms policy-db-migrator | policy-pap | [2024-04-25T14:22:56.442+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: kafka | offsets.commit.timeout.ms = 5000 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 grafana | logger=migrator t=2024-04-25T14:22:17.762852641Z level=info msg="Executing migration" id="drop alert_notification_journal" policy-db-migrator | > upgrade 0400-jpatoscarequirement_occurrences.sql policy-pap | allow.auto.create.topics = true kafka | offsets.load.buffer.size = 5242880 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 policy-db-migrator | -------------- policy-pap | auto.commit.interval.ms = 5000 grafana | logger=migrator t=2024-04-25T14:22:17.763799003Z level=info msg="Migration successfully executed" id="drop alert_notification_journal" duration=945.762µs kafka | offsets.retention.check.interval.ms = 600000 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.url = null policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarequirement_occurrences (name VARCHAR(120) NULL, version VARCHAR(20) NULL, OCCURRENCES INT DEFAULT NULL) policy-pap | auto.include.jmx.reporter = true grafana | logger=migrator t=2024-04-25T14:22:17.767284308Z level=info msg="Executing migration" id="create alert_notification_state table v1" kafka | offsets.retention.minutes = 10080 policy-apex-pdp | sasl.oauthbearer.scope.claim.name = scope policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-25T14:22:17.768331862Z level=info msg="Migration successfully executed" id="create alert_notification_state table v1" 
duration=1.047574ms policy-apex-pdp | sasl.oauthbearer.sub.claim.name = sub policy-db-migrator | kafka | offsets.topic.compression.codec = 0 kafka | offsets.topic.num.partitions = 50 kafka | offsets.topic.replication.factor = 1 grafana | logger=migrator t=2024-04-25T14:22:17.776418238Z level=info msg="Executing migration" id="add index alert_notification_state org_id & alert_id & notifier_id" policy-apex-pdp | sasl.oauthbearer.token.endpoint.url = null policy-pap | auto.offset.reset = latest policy-pap | bootstrap.servers = [kafka:9092] grafana | logger=migrator t=2024-04-25T14:22:17.778028599Z level=info msg="Migration successfully executed" id="add index alert_notification_state org_id & alert_id & notifier_id" duration=1.608981ms policy-db-migrator | policy-apex-pdp | security.protocol = PLAINTEXT kafka | offsets.topic.segment.bytes = 104857600 policy-pap | check.crcs = true grafana | logger=migrator t=2024-04-25T14:22:17.78428747Z level=info msg="Executing migration" id="Add for to alert table" policy-db-migrator | > upgrade 0410-jpatoscarequirement_properties.sql policy-apex-pdp | security.providers = null kafka | password.encoder.cipher.algorithm = AES/CBC/PKCS5Padding policy-pap | client.dns.lookup = use_all_dns_ips grafana | logger=migrator t=2024-04-25T14:22:17.788113331Z level=info msg="Migration successfully executed" id="Add for to alert table" duration=3.825451ms policy-db-migrator | -------------- policy-apex-pdp | send.buffer.bytes = 131072 kafka | password.encoder.iterations = 4096 policy-pap | client.id = consumer-policy-pap-2 grafana | logger=migrator t=2024-04-25T14:22:17.792571779Z level=info msg="Executing migration" id="Add column uid in alert_notification" policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarequirement_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGTEXT NULL, PROPERTIES_KEY VARCHAR(255) NULL) policy-apex-pdp | session.timeout.ms = 45000 kafka | password.encoder.key.length = 128 policy-pap | 
client.rack = grafana | logger=migrator t=2024-04-25T14:22:17.796324229Z level=info msg="Migration successfully executed" id="Add column uid in alert_notification" duration=3.7503ms policy-db-migrator | -------------- policy-apex-pdp | socket.connection.setup.timeout.max.ms = 30000 kafka | password.encoder.keyfactory.algorithm = null grafana | logger=migrator t=2024-04-25T14:22:17.801949193Z level=info msg="Executing migration" id="Update uid column values in alert_notification" policy-apex-pdp | socket.connection.setup.timeout.ms = 10000 kafka | password.encoder.old.secret = null policy-db-migrator | grafana | logger=migrator t=2024-04-25T14:22:17.802321017Z level=info msg="Migration successfully executed" id="Update uid column values in alert_notification" duration=371.654µs kafka | password.encoder.secret = null policy-db-migrator | policy-apex-pdp | ssl.cipher.suites = null grafana | logger=migrator t=2024-04-25T14:22:17.80636075Z level=info msg="Executing migration" id="Add unique index alert_notification_org_id_uid" policy-db-migrator | > upgrade 0420-jpatoscaservicetemplate_metadata.sql policy-apex-pdp | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] kafka | principal.builder.class = class org.apache.kafka.common.security.authenticator.DefaultKafkaPrincipalBuilder grafana | logger=migrator t=2024-04-25T14:22:17.807997282Z level=info msg="Migration successfully executed" id="Add unique index alert_notification_org_id_uid" duration=1.636272ms policy-apex-pdp | ssl.endpoint.identification.algorithm = https kafka | process.roles = [] policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-25T14:22:17.815281916Z level=info msg="Executing migration" id="Remove unique index org_id_name" kafka | producer.id.expiration.check.interval.ms = 600000 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscaservicetemplate_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) policy-apex-pdp | 
ssl.engine.factory.class = null policy-pap | connections.max.idle.ms = 540000 grafana | logger=migrator t=2024-04-25T14:22:17.817154871Z level=info msg="Migration successfully executed" id="Remove unique index org_id_name" duration=1.872405ms kafka | producer.id.expiration.ms = 86400000 policy-db-migrator | -------------- policy-apex-pdp | ssl.key.password = null grafana | logger=migrator t=2024-04-25T14:22:17.824015812Z level=info msg="Executing migration" id="Add column secure_settings in alert_notification" policy-db-migrator | policy-apex-pdp | ssl.keymanager.algorithm = SunX509 kafka | producer.purgatory.purge.interval.requests = 1000 grafana | logger=migrator t=2024-04-25T14:22:17.827815901Z level=info msg="Migration successfully executed" id="Add column secure_settings in alert_notification" duration=3.800319ms policy-apex-pdp | ssl.keystore.certificate.chain = null kafka | queued.max.request.bytes = -1 policy-db-migrator | policy-pap | default.api.timeout.ms = 60000 grafana | logger=migrator t=2024-04-25T14:22:17.910022778Z level=info msg="Executing migration" id="alter alert.settings to mediumtext" policy-apex-pdp | ssl.keystore.key = null kafka | queued.max.requests = 500 policy-db-migrator | > upgrade 0430-jpatoscatopologytemplate_inputs.sql policy-pap | enable.auto.commit = true grafana | logger=migrator t=2024-04-25T14:22:17.910102929Z level=info msg="Migration successfully executed" id="alter alert.settings to mediumtext" duration=81.591µs policy-apex-pdp | ssl.keystore.location = null kafka | quota.window.num = 11 policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-25T14:22:17.918493569Z level=info msg="Executing migration" id="Add non-unique index alert_notification_state_alert_id" kafka | quota.window.size.seconds = 1 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscatopologytemplate_inputs (parentLocalName VARCHAR(120) NULL, localName VARCHAR(120) NULL, parentKeyVersion VARCHAR(15) NULL, parentKeyName VARCHAR(120) NULL, 
INPUTS LONGBLOB DEFAULT NULL, INPUTS_KEY VARCHAR(255) NULL) policy-apex-pdp | ssl.keystore.password = null grafana | logger=migrator t=2024-04-25T14:22:17.919876857Z level=info msg="Migration successfully executed" id="Add non-unique index alert_notification_state_alert_id" duration=1.397449ms kafka | remote.log.index.file.cache.total.size.bytes = 1073741824 policy-db-migrator | -------------- policy-apex-pdp | ssl.keystore.type = JKS grafana | logger=migrator t=2024-04-25T14:22:17.927621939Z level=info msg="Executing migration" id="Add non-unique index alert_rule_tag_alert_id" kafka | remote.log.manager.task.interval.ms = 30000 policy-db-migrator | policy-apex-pdp | ssl.protocol = TLSv1.3 policy-pap | exclude.internal.topics = true grafana | logger=migrator t=2024-04-25T14:22:17.928588971Z level=info msg="Migration successfully executed" id="Add non-unique index alert_rule_tag_alert_id" duration=962.622µs kafka | remote.log.manager.task.retry.backoff.max.ms = 30000 policy-db-migrator | policy-apex-pdp | ssl.provider = null policy-pap | fetch.max.bytes = 52428800 grafana | logger=migrator t=2024-04-25T14:22:17.936666347Z level=info msg="Executing migration" id="Drop old annotation table v4" kafka | remote.log.manager.task.retry.backoff.ms = 500 policy-db-migrator | > upgrade 0440-pdpgroup_pdpsubgroup.sql policy-apex-pdp | ssl.secure.random.implementation = null policy-pap | fetch.max.wait.ms = 500 grafana | logger=migrator t=2024-04-25T14:22:17.937011622Z level=info msg="Migration successfully executed" id="Drop old annotation table v4" duration=345.155µs kafka | remote.log.manager.task.retry.jitter = 0.2 policy-db-migrator | -------------- policy-apex-pdp | ssl.trustmanager.algorithm = PKIX policy-pap | fetch.min.bytes = 1 policy-pap | group.id = policy-pap kafka | remote.log.manager.thread.pool.size = 10 policy-pap | group.instance.id = null policy-apex-pdp | ssl.truststore.certificates = null grafana | logger=migrator t=2024-04-25T14:22:17.942014697Z level=info 
msg="Executing migration" id="create annotation table v5" kafka | remote.log.metadata.custom.metadata.max.bytes = 128 policy-apex-pdp | ssl.truststore.location = null policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpgroup_pdpsubgroup (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_PDPGROUP_PDPSUBGROUP (name, version, parentLocalName, localName, parentKeyVersion, parentKeyName)) grafana | logger=migrator t=2024-04-25T14:22:17.943085061Z level=info msg="Migration successfully executed" id="create annotation table v5" duration=1.069824ms kafka | remote.log.metadata.manager.class.name = org.apache.kafka.server.log.remote.metadata.storage.TopicBasedRemoteLogMetadataManager policy-apex-pdp | ssl.truststore.password = null policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-25T14:22:17.949769408Z level=info msg="Executing migration" id="add index annotation 0 v3" kafka | remote.log.metadata.manager.class.path = null policy-pap | heartbeat.interval.ms = 3000 policy-apex-pdp | ssl.truststore.type = JKS policy-db-migrator | grafana | logger=migrator t=2024-04-25T14:22:17.951226928Z level=info msg="Migration successfully executed" id="add index annotation 0 v3" duration=1.45996ms kafka | remote.log.metadata.manager.impl.prefix = rlmm.config. 
policy-pap | interceptor.classes = [] policy-apex-pdp | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-db-migrator | grafana | logger=migrator t=2024-04-25T14:22:17.957407888Z level=info msg="Executing migration" id="add index annotation 1 v3" kafka | remote.log.metadata.manager.listener.name = null policy-pap | internal.leave.group.on.close = true policy-apex-pdp | policy-db-migrator | > upgrade 0450-pdpgroup.sql kafka | remote.log.reader.max.pending.tasks = 100 policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false policy-apex-pdp | [2024-04-25T14:22:59.311+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 grafana | logger=migrator t=2024-04-25T14:22:17.95904122Z level=info msg="Migration successfully executed" id="add index annotation 1 v3" duration=1.630602ms policy-db-migrator | -------------- kafka | remote.log.reader.threads = 10 policy-pap | isolation.level = read_uncommitted policy-apex-pdp | [2024-04-25T14:22:59.311+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 grafana | logger=migrator t=2024-04-25T14:22:17.965890999Z level=info msg="Executing migration" id="add index annotation 2 v3" policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpgroup (`DESCRIPTION` VARCHAR(255) NULL, PDPGROUPSTATE INT DEFAULT NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_PDPGROUP (name, version)) kafka | remote.log.storage.manager.class.name = null policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-apex-pdp | [2024-04-25T14:22:59.311+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1714054979310 grafana | logger=migrator t=2024-04-25T14:22:17.971381021Z level=info msg="Migration successfully executed" id="add index annotation 2 v3" duration=5.486192ms policy-db-migrator | -------------- policy-pap | max.partition.fetch.bytes = 1048576 policy-apex-pdp | [2024-04-25T14:22:59.311+00:00|INFO|KafkaConsumer|main] [Consumer 
clientId=consumer-5f0ab5d6-63b3-4b5a-a200-3d330f0096ce-2, groupId=5f0ab5d6-63b3-4b5a-a200-3d330f0096ce] Subscribed to topic(s): policy-pdp-pap grafana | logger=migrator t=2024-04-25T14:22:17.982072492Z level=info msg="Executing migration" id="add index annotation 3 v3" kafka | remote.log.storage.manager.class.path = null policy-db-migrator | policy-pap | max.poll.interval.ms = 300000 policy-apex-pdp | [2024-04-25T14:22:59.313+00:00|INFO|InlineBusTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=a70cbd7d-fac1-4b6c-9376-616c76b1b351, alive=false, publisher=null]]: starting grafana | logger=migrator t=2024-04-25T14:22:17.983183636Z level=info msg="Migration successfully executed" id="add index annotation 3 v3" duration=1.114764ms kafka | remote.log.storage.manager.impl.prefix = rsm.config. policy-db-migrator | policy-pap | max.poll.records = 500 policy-apex-pdp | [2024-04-25T14:22:59.326+00:00|INFO|ProducerConfig|main] ProducerConfig values: grafana | logger=migrator t=2024-04-25T14:22:17.98806602Z level=info msg="Executing migration" id="add index annotation 4 v3" kafka | remote.log.storage.system.enable = false policy-db-migrator | > upgrade 0460-pdppolicystatus.sql policy-pap | metadata.max.age.ms = 300000 policy-apex-pdp | acks = -1 grafana | logger=migrator t=2024-04-25T14:22:17.989727122Z level=info msg="Migration successfully executed" id="add index annotation 4 v3" duration=1.660242ms kafka | replica.fetch.backoff.ms = 1000 policy-db-migrator | -------------- policy-pap | metric.reporters = [] policy-apex-pdp | auto.include.jmx.reporter = true grafana | logger=migrator t=2024-04-25T14:22:17.996005764Z level=info msg="Executing migration" id="Update annotation table charset" kafka | replica.fetch.max.bytes = 1048576 policy-db-migrator | CREATE TABLE IF NOT EXISTS pdppolicystatus (DEPLOY BOOLEAN DEFAULT 0, PDPGROUP VARCHAR(255) DEFAULT NULL, PDPTYPE VARCHAR(255) DEFAULT NULL, STATE INT DEFAULT 
NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, PRIMARY KEY PK_PDPPOLICYSTATUS (parentLocalName, localName, parentKeyVersion, parentKeyName)) policy-pap | metrics.num.samples = 2 policy-apex-pdp | batch.size = 16384 grafana | logger=migrator t=2024-04-25T14:22:17.996035434Z level=info msg="Migration successfully executed" id="Update annotation table charset" duration=38.1µs kafka | replica.fetch.min.bytes = 1 policy-db-migrator | -------------- policy-apex-pdp | bootstrap.servers = [kafka:9092] grafana | logger=migrator t=2024-04-25T14:22:17.999256317Z level=info msg="Executing migration" id="Add column region_id to annotation table" kafka | replica.fetch.response.max.bytes = 10485760 policy-db-migrator | grafana | logger=migrator t=2024-04-25T14:22:18.0032603Z level=info msg="Migration successfully executed" id="Add column region_id to annotation table" duration=4.003723ms kafka | replica.fetch.wait.max.ms = 500 policy-apex-pdp | buffer.memory = 33554432 policy-db-migrator | grafana | logger=migrator t=2024-04-25T14:22:18.009804197Z level=info msg="Executing migration" id="Drop category_id index" policy-apex-pdp | client.dns.lookup = use_all_dns_ips kafka | replica.high.watermark.checkpoint.interval.ms = 5000 grafana | logger=migrator t=2024-04-25T14:22:18.011107844Z level=info msg="Migration successfully executed" id="Drop category_id index" duration=1.304987ms policy-apex-pdp | client.id = producer-1 policy-db-migrator | > upgrade 0470-pdp.sql policy-pap | metrics.recording.level = INFO kafka | replica.lag.time.max.ms = 30000 grafana | logger=migrator t=2024-04-25T14:22:18.015878337Z level=info msg="Executing migration" id="Add column tags to annotation table" policy-apex-pdp | compression.type = none policy-db-migrator | -------------- policy-pap | metrics.sample.window.ms = 30000 kafka | 
replica.selector.class = null grafana | logger=migrator t=2024-04-25T14:22:18.021999678Z level=info msg="Migration successfully executed" id="Add column tags to annotation table" duration=6.120991ms policy-apex-pdp | connections.max.idle.ms = 540000 policy-db-migrator | CREATE TABLE IF NOT EXISTS pdp (HEALTHY INT DEFAULT NULL, MESSAGE VARCHAR(255) DEFAULT NULL, PDPSTATE INT DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_PDP (parentLocalName, localName, parentKeyVersion, parentKeyName)) policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] kafka | replica.socket.receive.buffer.bytes = 65536 grafana | logger=migrator t=2024-04-25T14:22:18.028651537Z level=info msg="Executing migration" id="Create annotation_tag table v2" policy-apex-pdp | delivery.timeout.ms = 120000 policy-db-migrator | -------------- policy-pap | receive.buffer.bytes = 65536 kafka | replica.socket.timeout.ms = 30000 grafana | logger=migrator t=2024-04-25T14:22:18.029716251Z level=info msg="Migration successfully executed" id="Create annotation_tag table v2" duration=1.064003ms policy-apex-pdp | enable.idempotence = true policy-db-migrator | policy-pap | reconnect.backoff.max.ms = 1000 kafka | replication.quota.window.num = 11 grafana | logger=migrator t=2024-04-25T14:22:18.032759831Z level=info msg="Executing migration" id="Add unique index annotation_tag.annotation_id_tag_id" policy-apex-pdp | interceptor.classes = [] policy-db-migrator | policy-pap | reconnect.backoff.ms = 50 kafka | replication.quota.window.size.seconds = 1 grafana | logger=migrator t=2024-04-25T14:22:18.033625762Z level=info msg="Migration successfully executed" id="Add unique index annotation_tag.annotation_id_tag_id" duration=866.121µs policy-apex-pdp | key.serializer = class 
org.apache.kafka.common.serialization.StringSerializer policy-db-migrator | > upgrade 0480-pdpstatistics.sql policy-pap | request.timeout.ms = 30000 kafka | request.timeout.ms = 30000 grafana | logger=migrator t=2024-04-25T14:22:18.168294207Z level=info msg="Executing migration" id="drop index UQE_annotation_tag_annotation_id_tag_id - v2" policy-apex-pdp | linger.ms = 0 policy-db-migrator | -------------- policy-pap | retry.backoff.ms = 100 kafka | reserved.broker.max.id = 1000 grafana | logger=migrator t=2024-04-25T14:22:18.169658016Z level=info msg="Migration successfully executed" id="drop index UQE_annotation_tag_annotation_id_tag_id - v2" duration=1.367678ms policy-apex-pdp | max.block.ms = 60000 policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpstatistics (PDPGROUPNAME VARCHAR(120) NULL, PDPSUBGROUPNAME VARCHAR(120) NULL, POLICYDEPLOYCOUNT BIGINT DEFAULT NULL, POLICYDEPLOYFAILCOUNT BIGINT DEFAULT NULL, POLICYDEPLOYSUCCESSCOUNT BIGINT DEFAULT NULL, POLICYEXECUTEDCOUNT BIGINT DEFAULT NULL, POLICYEXECUTEDFAILCOUNT BIGINT DEFAULT NULL, POLICYEXECUTEDSUCCESSCOUNT BIGINT DEFAULT NULL, timeStamp datetime NOT NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_PDPSTATISTICS (timeStamp, name, version)) policy-pap | sasl.client.callback.handler.class = null kafka | sasl.client.callback.handler.class = null policy-apex-pdp | max.in.flight.requests.per.connection = 5 grafana | logger=migrator t=2024-04-25T14:22:18.256572738Z level=info msg="Executing migration" id="Rename table annotation_tag to annotation_tag_v2 - v2" policy-db-migrator | -------------- kafka | sasl.enabled.mechanisms = [GSSAPI] policy-apex-pdp | max.request.size = 1048576 grafana | logger=migrator t=2024-04-25T14:22:18.271208362Z level=info msg="Migration successfully executed" id="Rename table annotation_tag to annotation_tag_v2 - v2" duration=14.637284ms policy-db-migrator | kafka | sasl.jaas.config = null policy-apex-pdp | metadata.max.age.ms = 300000 grafana | 
logger=migrator t=2024-04-25T14:22:18.274491055Z level=info msg="Executing migration" id="Create annotation_tag table v3" policy-db-migrator | kafka | sasl.kerberos.kinit.cmd = /usr/bin/kinit policy-apex-pdp | metadata.max.idle.ms = 300000 grafana | logger=migrator t=2024-04-25T14:22:18.274980792Z level=info msg="Migration successfully executed" id="Create annotation_tag table v3" duration=489.667µs policy-db-migrator | > upgrade 0490-pdpsubgroup_pdp.sql kafka | sasl.kerberos.min.time.before.relogin = 60000 policy-apex-pdp | metric.reporters = [] grafana | logger=migrator t=2024-04-25T14:22:18.326777679Z level=info msg="Executing migration" id="create index UQE_annotation_tag_annotation_id_tag_id - Add unique index annotation_tag.annotation_id_tag_id V3" policy-db-migrator | -------------- kafka | sasl.kerberos.principal.to.local.rules = [DEFAULT] policy-apex-pdp | metrics.num.samples = 2 grafana | logger=migrator t=2024-04-25T14:22:18.327382636Z level=info msg="Migration successfully executed" id="create index UQE_annotation_tag_annotation_id_tag_id - Add unique index annotation_tag.annotation_id_tag_id V3" duration=606.477µs kafka | sasl.kerberos.service.name = null policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpsubgroup_pdp (pdpParentKeyName VARCHAR(120) NOT NULL, pdpParentKeyVersion VARCHAR(15) NOT NULL, pdpParentLocalName VARCHAR(120) NOT NULL, pdpLocalName VARCHAR(120) NOT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_PDPSUBGROUP_PDP (pdpParentKeyName, pdpParentKeyVersion, pdpParentLocalName, pdpLocalName, parentLocalName, localName, parentKeyVersion, parentKeyName)) policy-apex-pdp | metrics.recording.level = INFO grafana | logger=migrator t=2024-04-25T14:22:18.33600386Z level=info msg="Executing migration" id="copy annotation_tag v2 to v3" kafka | sasl.kerberos.ticket.renew.jitter = 0.05 policy-db-migrator | -------------- 
policy-apex-pdp | metrics.sample.window.ms = 30000 grafana | logger=migrator t=2024-04-25T14:22:18.336455046Z level=info msg="Migration successfully executed" id="copy annotation_tag v2 to v3" duration=451.866µs kafka | sasl.kerberos.ticket.renew.window.factor = 0.8 policy-db-migrator | policy-apex-pdp | partitioner.adaptive.partitioning.enable = true grafana | logger=migrator t=2024-04-25T14:22:18.405059806Z level=info msg="Executing migration" id="drop table annotation_tag_v2" kafka | sasl.login.callback.handler.class = null policy-db-migrator | policy-apex-pdp | partitioner.availability.timeout.ms = 0 grafana | logger=migrator t=2024-04-25T14:22:18.405545903Z level=info msg="Migration successfully executed" id="drop table annotation_tag_v2" duration=489.227µs kafka | sasl.login.class = null policy-db-migrator | > upgrade 0500-pdpsubgroup.sql policy-apex-pdp | partitioner.class = null grafana | logger=migrator t=2024-04-25T14:22:18.475039614Z level=info msg="Executing migration" id="Update alert annotations and set TEXT to empty" kafka | sasl.login.connect.timeout.ms = null policy-db-migrator | -------------- policy-apex-pdp | partitioner.ignore.keys = false grafana | logger=migrator t=2024-04-25T14:22:18.47546397Z level=info msg="Migration successfully executed" id="Update alert annotations and set TEXT to empty" duration=426.756µs kafka | sasl.login.read.timeout.ms = null policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpsubgroup (CURRENTINSTANCECOUNT INT DEFAULT NULL, DESIREDINSTANCECOUNT INT DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_PDPSUBGROUP (parentLocalName, localName, parentKeyVersion, parentKeyName)) policy-apex-pdp | receive.buffer.bytes = 32768 grafana | logger=migrator t=2024-04-25T14:22:18.6459479Z level=info msg="Executing migration" id="Add created time to annotation table" policy-pap | sasl.jaas.config = null 
kafka | sasl.login.refresh.buffer.seconds = 300 policy-db-migrator | -------------- policy-apex-pdp | reconnect.backoff.max.ms = 1000 grafana | logger=migrator t=2024-04-25T14:22:18.652470306Z level=info msg="Migration successfully executed" id="Add created time to annotation table" duration=6.526376ms policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit kafka | sasl.login.refresh.min.period.seconds = 60 policy-db-migrator | policy-apex-pdp | reconnect.backoff.ms = 50 grafana | logger=migrator t=2024-04-25T14:22:18.666099137Z level=info msg="Executing migration" id="Add updated time to annotation table" policy-pap | sasl.kerberos.min.time.before.relogin = 60000 kafka | sasl.login.refresh.window.factor = 0.8 policy-db-migrator | policy-apex-pdp | request.timeout.ms = 30000 grafana | logger=migrator t=2024-04-25T14:22:18.673197392Z level=info msg="Migration successfully executed" id="Add updated time to annotation table" duration=7.097285ms policy-pap | sasl.kerberos.service.name = null kafka | sasl.login.refresh.window.jitter = 0.05 policy-db-migrator | > upgrade 0510-toscacapabilityassignment.sql policy-apex-pdp | retries = 2147483647 grafana | logger=migrator t=2024-04-25T14:22:18.937590497Z level=info msg="Executing migration" id="Add index for created in annotation table" policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 kafka | sasl.login.retry.backoff.max.ms = 10000 policy-db-migrator | -------------- policy-apex-pdp | retry.backoff.ms = 100 grafana | logger=migrator t=2024-04-25T14:22:18.939515612Z level=info msg="Migration successfully executed" id="Add index for created in annotation table" duration=1.924375ms policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 kafka | sasl.login.retry.backoff.ms = 100 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilityassignment (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, 
type_name VARCHAR(255) NULL, type_version VARCHAR(255) NULL, PRIMARY KEY PK_TOSCACAPABILITYASSIGNMENT(name, version)) policy-apex-pdp | sasl.client.callback.handler.class = null grafana | logger=migrator t=2024-04-25T14:22:19.033146107Z level=info msg="Executing migration" id="Add index for updated in annotation table" policy-pap | sasl.login.callback.handler.class = null kafka | sasl.mechanism.controller.protocol = GSSAPI policy-db-migrator | -------------- policy-apex-pdp | sasl.jaas.config = null grafana | logger=migrator t=2024-04-25T14:22:19.034433745Z level=info msg="Migration successfully executed" id="Add index for updated in annotation table" duration=1.290078ms policy-pap | sasl.login.class = null kafka | sasl.mechanism.inter.broker.protocol = GSSAPI policy-db-migrator | policy-apex-pdp | sasl.kerberos.kinit.cmd = /usr/bin/kinit grafana | logger=migrator t=2024-04-25T14:22:19.129061077Z level=info msg="Executing migration" id="Convert existing annotations from seconds to milliseconds" policy-pap | sasl.login.connect.timeout.ms = null kafka | sasl.oauthbearer.clock.skew.seconds = 30 policy-db-migrator | policy-apex-pdp | sasl.kerberos.min.time.before.relogin = 60000 grafana | logger=migrator t=2024-04-25T14:22:19.129372612Z level=info msg="Migration successfully executed" id="Convert existing annotations from seconds to milliseconds" duration=313.095µs policy-pap | sasl.login.read.timeout.ms = null kafka | sasl.oauthbearer.expected.audience = null policy-db-migrator | > upgrade 0520-toscacapabilityassignments.sql policy-apex-pdp | sasl.kerberos.service.name = null grafana | logger=migrator t=2024-04-25T14:22:19.165349412Z level=info msg="Executing migration" id="Add epoch_end column" kafka | sasl.oauthbearer.expected.issuer = null policy-db-migrator | -------------- policy-apex-pdp | sasl.kerberos.ticket.renew.jitter = 0.05 grafana | logger=migrator t=2024-04-25T14:22:19.168540004Z level=info msg="Migration successfully executed" id="Add epoch_end column" 
duration=3.192942ms policy-pap | sasl.login.refresh.buffer.seconds = 300 kafka | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilityassignments (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCACAPABILITYASSIGNMENTS (name, version)) policy-apex-pdp | sasl.kerberos.ticket.renew.window.factor = 0.8 grafana | logger=migrator t=2024-04-25T14:22:19.275608805Z level=info msg="Executing migration" id="Add index for epoch_end" policy-pap | sasl.login.refresh.min.period.seconds = 60 kafka | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 policy-db-migrator | -------------- policy-apex-pdp | sasl.login.callback.handler.class = null grafana | logger=migrator t=2024-04-25T14:22:19.276831791Z level=info msg="Migration successfully executed" id="Add index for epoch_end" duration=1.225196ms policy-pap | sasl.login.refresh.window.factor = 0.8 kafka | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 policy-db-migrator | policy-apex-pdp | sasl.login.class = null grafana | logger=migrator t=2024-04-25T14:22:19.310283147Z level=info msg="Executing migration" id="Make epoch_end the same as epoch" policy-pap | sasl.login.refresh.window.jitter = 0.05 kafka | sasl.oauthbearer.jwks.endpoint.url = null policy-db-migrator | policy-apex-pdp | sasl.login.connect.timeout.ms = null grafana | logger=migrator t=2024-04-25T14:22:19.310427509Z level=info msg="Migration successfully executed" id="Make epoch_end the same as epoch" duration=144.932µs policy-pap | sasl.login.retry.backoff.max.ms = 10000 kafka | sasl.oauthbearer.scope.claim.name = scope policy-db-migrator | > upgrade 0530-toscacapabilityassignments_toscacapabilityassignment.sql policy-apex-pdp | sasl.login.read.timeout.ms = null grafana | logger=migrator t=2024-04-25T14:22:19.314483504Z level=info msg="Executing migration" id="Move region to single row" policy-pap | sasl.login.retry.backoff.ms = 100 kafka | 
sasl.oauthbearer.sub.claim.name = sub policy-db-migrator | -------------- policy-apex-pdp | sasl.login.refresh.buffer.seconds = 300 grafana | logger=migrator t=2024-04-25T14:22:19.315047921Z level=info msg="Migration successfully executed" id="Move region to single row" duration=564.797µs kafka | sasl.oauthbearer.token.endpoint.url = null policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilityassignments_toscacapabilityassignment (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCACAPABILITYASSIGNMENTS_TOSCACAPABILITYASSIGNMENT (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) policy-apex-pdp | sasl.login.refresh.min.period.seconds = 60 grafana | logger=migrator t=2024-04-25T14:22:19.343227917Z level=info msg="Executing migration" id="Remove index org_id_epoch from annotation table" kafka | sasl.server.callback.handler.class = null policy-db-migrator | -------------- policy-apex-pdp | sasl.login.refresh.window.factor = 0.8 grafana | logger=migrator t=2024-04-25T14:22:19.344583075Z level=info msg="Migration successfully executed" id="Remove index org_id_epoch from annotation table" duration=1.355788ms policy-pap | sasl.mechanism = GSSAPI kafka | sasl.server.max.receive.size = 524288 policy-apex-pdp | sasl.login.refresh.window.jitter = 0.05 policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 kafka | security.inter.broker.protocol = PLAINTEXT policy-db-migrator | grafana | logger=migrator t=2024-04-25T14:22:19.351473798Z level=info msg="Executing migration" id="Remove index org_id_dashboard_id_panel_id_epoch from annotation table" policy-apex-pdp | sasl.login.retry.backoff.max.ms = 10000 policy-pap | sasl.oauthbearer.expected.audience = null kafka | security.providers = null policy-db-migrator | grafana | 
logger=migrator t=2024-04-25T14:22:19.352095336Z level=info msg="Migration successfully executed" id="Remove index org_id_dashboard_id_panel_id_epoch from annotation table" duration=621.868µs policy-apex-pdp | sasl.login.retry.backoff.ms = 100 policy-pap | sasl.oauthbearer.expected.issuer = null kafka | server.max.startup.time.ms = 9223372036854775807 policy-db-migrator | > upgrade 0540-toscacapabilitytype.sql grafana | logger=migrator t=2024-04-25T14:22:19.501669813Z level=info msg="Executing migration" id="Add index for org_id_dashboard_id_epoch_end_epoch on annotation table" policy-apex-pdp | sasl.mechanism = GSSAPI policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 kafka | socket.connection.setup.timeout.max.ms = 30000 policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-25T14:22:19.503240314Z level=info msg="Migration successfully executed" id="Add index for org_id_dashboard_id_epoch_end_epoch on annotation table" duration=1.571901ms policy-apex-pdp | sasl.oauthbearer.clock.skew.seconds = 30 kafka | socket.connection.setup.timeout.ms = 10000 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilitytype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCACAPABILITYTYPE (name, version)) grafana | logger=migrator t=2024-04-25T14:22:19.58828865Z level=info msg="Executing migration" id="Add index for org_id_epoch_end_epoch on annotation table" policy-apex-pdp | sasl.oauthbearer.expected.audience = null policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 kafka | socket.listen.backlog.size = 50 policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-25T14:22:19.58978241Z level=info msg="Migration successfully executed" id="Add index for org_id_epoch_end_epoch on annotation table" duration=1.49404ms policy-apex-pdp | sasl.oauthbearer.expected.issuer = null 
policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 kafka | socket.receive.buffer.bytes = 102400 policy-db-migrator | grafana | logger=migrator t=2024-04-25T14:22:19.754128605Z level=info msg="Executing migration" id="Remove index org_id_epoch_epoch_end from annotation table" policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null kafka | socket.request.max.bytes = 104857600 policy-db-migrator | grafana | logger=migrator t=2024-04-25T14:22:19.755666115Z level=info msg="Migration successfully executed" id="Remove index org_id_epoch_epoch_end from annotation table" duration=1.53671ms policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 policy-pap | sasl.oauthbearer.scope.claim.name = scope kafka | socket.send.buffer.bytes = 102400 policy-db-migrator | > upgrade 0550-toscacapabilitytypes.sql grafana | logger=migrator t=2024-04-25T14:22:19.945855405Z level=info msg="Executing migration" id="Add index for alert_id on annotation table" policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 kafka | ssl.cipher.suites = [] policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-25T14:22:19.946672885Z level=info msg="Migration successfully executed" id="Add index for alert_id on annotation table" duration=819.38µs policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.url = null kafka | ssl.client.auth = none policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilitytypes (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCACAPABILITYTYPES (name, version)) grafana | logger=migrator t=2024-04-25T14:22:20.294906396Z level=info msg="Executing migration" id="Increase tags column to length 4096" policy-apex-pdp | sasl.oauthbearer.scope.claim.name = scope policy-pap | sasl.oauthbearer.sub.claim.name = sub kafka | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] policy-db-migrator | -------------- grafana | logger=migrator 
t=2024-04-25T14:22:20.295115049Z level=info msg="Migration successfully executed" id="Increase tags column to length 4096" duration=212.433µs
policy-apex-pdp | sasl.oauthbearer.sub.claim.name = sub
policy-pap | sasl.oauthbearer.token.endpoint.url = null
kafka | ssl.endpoint.identification.algorithm = https
policy-db-migrator |
grafana | logger=migrator t=2024-04-25T14:22:20.383367017Z level=info msg="Executing migration" id="create test_data table"
policy-apex-pdp | sasl.oauthbearer.token.endpoint.url = null
policy-pap | security.protocol = PLAINTEXT
kafka | ssl.engine.factory.class = null
policy-db-migrator |
grafana | logger=migrator t=2024-04-25T14:22:20.384966529Z level=info msg="Migration successfully executed" id="create test_data table" duration=1.600632ms
kafka | ssl.key.password = null
policy-apex-pdp | security.protocol = PLAINTEXT
policy-db-migrator | > upgrade 0560-toscacapabilitytypes_toscacapabilitytype.sql
grafana | logger=migrator t=2024-04-25T14:22:20.402019856Z level=info msg="Executing migration" id="create dashboard_version table v1"
policy-pap | security.providers = null
kafka | ssl.keymanager.algorithm = SunX509
policy-apex-pdp | security.providers = null
policy-db-migrator | --------------
grafana | logger=migrator t=2024-04-25T14:22:20.403349624Z level=info msg="Migration successfully executed" id="create dashboard_version table v1" duration=1.329178ms
policy-pap | send.buffer.bytes = 131072
kafka | ssl.keystore.certificate.chain = null
policy-apex-pdp | send.buffer.bytes = 131072
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilitytypes_toscacapabilitytype (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCACAPABILITYTYPES_TOSCACAPABILITYTYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion))
grafana | logger=migrator t=2024-04-25T14:22:20.411905728Z level=info msg="Executing migration" id="add index dashboard_version.dashboard_id"
policy-pap | session.timeout.ms = 45000
kafka | ssl.keystore.key = null
policy-apex-pdp | socket.connection.setup.timeout.max.ms = 30000
policy-db-migrator | --------------
grafana | logger=migrator t=2024-04-25T14:22:20.413388368Z level=info msg="Migration successfully executed" id="add index dashboard_version.dashboard_id" duration=1.47934ms
policy-pap | socket.connection.setup.timeout.max.ms = 30000
kafka | ssl.keystore.location = null
policy-apex-pdp | socket.connection.setup.timeout.ms = 10000
policy-db-migrator |
kafka | ssl.keystore.password = null
grafana | logger=migrator t=2024-04-25T14:22:20.420397441Z level=info msg="Executing migration" id="add unique index dashboard_version.dashboard_id and dashboard_version.version"
policy-apex-pdp | ssl.cipher.suites = null
policy-db-migrator |
policy-pap | socket.connection.setup.timeout.ms = 10000
kafka | ssl.keystore.type = JKS
grafana | logger=migrator t=2024-04-25T14:22:20.421879651Z level=info msg="Migration successfully executed" id="add unique index dashboard_version.dashboard_id and dashboard_version.version" duration=1.48422ms
policy-apex-pdp | ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
policy-db-migrator | > upgrade 0570-toscadatatype.sql
policy-pap | ssl.cipher.suites = null
kafka | ssl.principal.mapping.rules = DEFAULT
grafana | logger=migrator t=2024-04-25T14:22:20.514334686Z level=info msg="Executing migration" id="Set dashboard version to 1 where 0"
policy-apex-pdp | ssl.endpoint.identification.algorithm = https
policy-db-migrator | --------------
policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
kafka | ssl.protocol = TLSv1.3
grafana | logger=migrator t=2024-04-25T14:22:20.514846312Z level=info msg="Migration successfully executed" id="Set dashboard version to 1 where 0" duration=507.006µs
policy-apex-pdp | ssl.engine.factory.class = null
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscadatatype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCADATATYPE (name, version))
policy-pap | ssl.endpoint.identification.algorithm = https
kafka | ssl.provider = null
grafana | logger=migrator t=2024-04-25T14:22:20.616054114Z level=info msg="Executing migration" id="save existing dashboard data in dashboard_version table v1"
policy-apex-pdp | ssl.key.password = null
policy-db-migrator | --------------
kafka | ssl.secure.random.implementation = null
grafana | logger=migrator t=2024-04-25T14:22:20.616875986Z level=info msg="Migration successfully executed" id="save existing dashboard data in dashboard_version table v1" duration=825.342µs
policy-apex-pdp | ssl.keymanager.algorithm = SunX509
policy-db-migrator |
grafana | logger=migrator t=2024-04-25T14:22:20.934975852Z level=info msg="Executing migration" id="alter dashboard_version.data to mediumtext v1"
policy-apex-pdp | ssl.keystore.certificate.chain = null
kafka | ssl.trustmanager.algorithm = PKIX
policy-db-migrator |
policy-apex-pdp | ssl.keystore.key = null
kafka | ssl.truststore.certificates = null
grafana | logger=migrator t=2024-04-25T14:22:20.935145614Z level=info msg="Migration successfully executed" id="alter dashboard_version.data to mediumtext v1" duration=172.242µs
policy-db-migrator | > upgrade 0580-toscadatatypes.sql
kafka | ssl.truststore.location = null
grafana | logger=migrator t=2024-04-25T14:22:20.986205936Z level=info msg="Executing migration" id="create team table"
kafka | ssl.truststore.password = null
grafana | logger=migrator t=2024-04-25T14:22:20.987080469Z level=info msg="Migration successfully executed" id="create team table" duration=877.123µs
policy-apex-pdp | ssl.keystore.location = null
policy-db-migrator | --------------
policy-pap | ssl.engine.factory.class = null
kafka | ssl.truststore.type = JKS
grafana | logger=migrator t=2024-04-25T14:22:21.14848923Z level=info msg="Executing migration" id="add index team.org_id"
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscadatatypes (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCADATATYPES (name, version))
policy-pap | ssl.key.password = null
kafka | transaction.abort.timed.out.transaction.cleanup.interval.ms = 10000
policy-apex-pdp | ssl.keystore.password = null
grafana | logger=migrator t=2024-04-25T14:22:21.150365946Z level=info msg="Migration successfully executed" id="add index team.org_id" duration=1.879926ms
kafka | transaction.max.timeout.ms = 900000
grafana | logger=migrator t=2024-04-25T14:22:21.374630527Z level=info msg="Executing migration" id="add unique index team_org_id_name"
policy-db-migrator | --------------
policy-apex-pdp | ssl.keystore.type = JKS
kafka | transaction.partition.verification.enable = true
grafana | logger=migrator t=2024-04-25T14:22:21.376715705Z level=info msg="Migration successfully executed" id="add unique index team_org_id_name" duration=2.088588ms
policy-db-migrator |
policy-pap | ssl.keymanager.algorithm = SunX509
policy-apex-pdp | ssl.protocol = TLSv1.3
kafka | transaction.remove.expired.transaction.cleanup.interval.ms = 3600000
grafana | logger=migrator t=2024-04-25T14:22:21.569643621Z level=info msg="Executing migration" id="Add column uid in team"
policy-db-migrator |
policy-pap | ssl.keystore.certificate.chain = null
policy-apex-pdp | ssl.provider = null
kafka | transaction.state.log.load.buffer.size = 5242880
grafana | logger=migrator t=2024-04-25T14:22:21.573349971Z level=info msg="Migration successfully executed" id="Add column uid in team" duration=3.70972ms
policy-db-migrator | > upgrade 0590-toscadatatypes_toscadatatype.sql
policy-pap | ssl.keystore.key = null
policy-apex-pdp | ssl.secure.random.implementation = null
kafka | transaction.state.log.min.isr = 2
grafana | logger=migrator t=2024-04-25T14:22:21.736205567Z level=info msg="Executing migration" id="Update uid column values in team"
policy-db-migrator | --------------
policy-pap | ssl.keystore.location = null
policy-apex-pdp | ssl.trustmanager.algorithm = PKIX
kafka | transaction.state.log.num.partitions = 50
grafana | logger=migrator t=2024-04-25T14:22:21.736491111Z level=info msg="Migration successfully executed" id="Update uid column values in team" duration=288.934µs
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscadatatypes_toscadatatype (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCADATATYPES_TOSCADATATYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion))
policy-pap | ssl.keystore.password = null
policy-apex-pdp | ssl.truststore.certificates = null
kafka | transaction.state.log.replication.factor = 3
grafana | logger=migrator t=2024-04-25T14:22:21.782949893Z level=info msg="Executing migration" id="Add unique index team_org_id_uid"
policy-db-migrator | --------------
policy-pap | ssl.keystore.type = JKS
policy-apex-pdp | ssl.truststore.location = null
kafka | transaction.state.log.segment.bytes = 104857600
grafana | logger=migrator t=2024-04-25T14:22:21.783918026Z level=info msg="Migration successfully executed" id="Add unique index team_org_id_uid" duration=974.363µs
policy-db-migrator |
policy-pap | ssl.protocol = TLSv1.3
policy-apex-pdp | ssl.truststore.password = null
kafka | transactional.id.expiration.ms = 604800000
grafana | logger=migrator t=2024-04-25T14:22:21.919521862Z level=info msg="Executing migration" id="create team member table"
policy-db-migrator |
policy-pap | ssl.provider = null
policy-apex-pdp | ssl.truststore.type = JKS
kafka | unclean.leader.election.enable = false
grafana | logger=migrator t=2024-04-25T14:22:21.920569096Z level=info msg="Migration successfully executed" id="create team member table" duration=1.049544ms
policy-db-migrator | > upgrade 0600-toscanodetemplate.sql
policy-pap | ssl.secure.random.implementation = null
policy-apex-pdp | transaction.timeout.ms = 60000
kafka | unstable.api.versions.enable = false
grafana | logger=migrator t=2024-04-25T14:22:21.992652896Z level=info msg="Executing migration" id="add index team_member.org_id"
policy-db-migrator | --------------
policy-pap | ssl.trustmanager.algorithm = PKIX
policy-apex-pdp | transactional.id = null
kafka | zookeeper.clientCnxnSocket = null
grafana | logger=migrator t=2024-04-25T14:22:21.993797072Z level=info msg="Migration successfully executed" id="add index team_member.org_id" duration=1.144326ms
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetemplate (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, type_name VARCHAR(255) NULL, type_version VARCHAR(255) NULL, capabilitiesName VARCHAR(120) NULL, capabilitiesVersion VARCHAR(20) NULL, requirementsVersion VARCHAR(20) NULL, requirementsName VARCHAR(120) NULL, PRIMARY KEY PK_TOSCANODETEMPLATE (name, version))
policy-pap | ssl.truststore.certificates = null
policy-apex-pdp | value.serializer = class org.apache.kafka.common.serialization.StringSerializer
kafka | zookeeper.connect = zookeeper:2181
grafana | logger=migrator t=2024-04-25T14:22:22.280481552Z level=info msg="Executing migration" id="add unique index team_member_org_id_team_id_user_id"
policy-db-migrator | --------------
policy-pap | ssl.truststore.location = null
policy-apex-pdp |
kafka | zookeeper.connection.timeout.ms = null
grafana | logger=migrator t=2024-04-25T14:22:22.282138355Z level=info msg="Migration successfully executed" id="add unique index team_member_org_id_team_id_user_id" duration=1.659303ms
policy-db-migrator |
policy-pap | ssl.truststore.password = null
policy-apex-pdp | [2024-04-25T14:22:59.335+00:00|INFO|KafkaProducer|main] [Producer clientId=producer-1] Instantiated an idempotent producer.
kafka | zookeeper.max.in.flight.requests = 10
grafana | logger=migrator t=2024-04-25T14:22:22.409369795Z level=info msg="Executing migration" id="add index team_member.team_id"
policy-db-migrator |
policy-pap | ssl.truststore.type = JKS
policy-apex-pdp | [2024-04-25T14:22:59.351+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1
kafka | zookeeper.metadata.migration.enable = false
grafana | logger=migrator t=2024-04-25T14:22:22.411007818Z level=info msg="Migration successfully executed" id="add index team_member.team_id" duration=1.590392ms
policy-db-migrator | > upgrade 0610-toscanodetemplates.sql
policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
policy-apex-pdp | [2024-04-25T14:22:59.351+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5
kafka | zookeeper.metadata.migration.min.batch.size = 200
grafana | logger=migrator t=2024-04-25T14:22:22.43394493Z level=info msg="Executing migration" id="Add column email to team table"
policy-db-migrator | --------------
policy-pap |
policy-apex-pdp | [2024-04-25T14:22:59.351+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1714054979351
kafka | zookeeper.session.timeout.ms = 18000
grafana | logger=migrator t=2024-04-25T14:22:22.441721096Z level=info msg="Migration successfully executed" id="Add column email to team table" duration=7.774156ms
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetemplates (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCANODETEMPLATES (name, version))
policy-pap | [2024-04-25T14:22:56.448+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1
policy-apex-pdp | [2024-04-25T14:22:59.352+00:00|INFO|InlineKafkaTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=a70cbd7d-fac1-4b6c-9376-616c76b1b351, alive=false, publisher=KafkaPublisherWrapper []]]: KAFKA SINK created
kafka | zookeeper.set.acl = false
policy-db-migrator | --------------
policy-pap | [2024-04-25T14:22:56.448+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5
policy-apex-pdp | [2024-04-25T14:22:59.352+00:00|INFO|ServiceManager|main] service manager starting set alive
grafana | logger=migrator t=2024-04-25T14:22:22.546012275Z level=info msg="Executing migration" id="Add column external to team_member table"
kafka | zookeeper.ssl.cipher.suites = null
policy-db-migrator |
policy-pap | [2024-04-25T14:22:56.448+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1714054976448
policy-apex-pdp | [2024-04-25T14:22:59.352+00:00|INFO|ServiceManager|main] service manager starting register pdp status context object
grafana | logger=migrator t=2024-04-25T14:22:22.553680899Z level=info msg="Migration successfully executed" id="Add column external to team_member table" duration=7.669314ms
kafka | zookeeper.ssl.client.enable = false
policy-db-migrator |
policy-pap | [2024-04-25T14:22:56.449+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-policy-pap-2, groupId=policy-pap] Subscribed to topic(s): policy-pdp-pap
policy-apex-pdp | [2024-04-25T14:22:59.354+00:00|INFO|ServiceManager|main] service manager starting topic sinks
grafana | logger=migrator t=2024-04-25T14:22:22.621281089Z level=info msg="Executing migration" id="Add column permission to team_member table"
kafka | zookeeper.ssl.crl.enable = false
policy-pap | [2024-04-25T14:22:56.768+00:00|INFO|PapDatabaseInitializer|main] Created initial pdpGroup in DB - PdpGroups(groups=[PdpGroup(name=defaultGroup, description=The default group that registers all supported policy types and pdps., pdpGroupState=ACTIVE, properties=null, pdpSubgroups=[PdpSubGroup(pdpType=apex, supportedPolicyTypes=[onap.policies.controlloop.operational.common.Apex 1.0.0, onap.policies.native.Apex 1.0.0], policies=[], currentInstanceCount=0, desiredInstanceCount=1, properties=null, pdpInstances=null)])]) from /opt/app/policy/pap/etc/mounted/groups.json
policy-apex-pdp | [2024-04-25T14:22:59.354+00:00|INFO|ServiceManager|main] service manager starting Pdp Status publisher
grafana | logger=migrator t=2024-04-25T14:22:22.628527768Z level=info msg="Migration successfully executed" id="Add column permission to team_member table" duration=7.247089ms
policy-db-migrator | > upgrade 0620-toscanodetemplates_toscanodetemplate.sql
kafka | zookeeper.ssl.enabled.protocols = null
policy-apex-pdp | [2024-04-25T14:22:59.364+00:00|INFO|ServiceManager|main] service manager starting Register pdp update listener
grafana | logger=migrator t=2024-04-25T14:22:22.857564503Z level=info msg="Executing migration" id="create dashboard acl table"
policy-pap | [2024-04-25T14:22:56.921+00:00|WARN|JpaBaseConfiguration$JpaWebConfiguration|main] spring.jpa.open-in-view is enabled by default. Therefore, database queries may be performed during view rendering. Explicitly configure spring.jpa.open-in-view to disable this warning
policy-db-migrator | --------------
kafka | zookeeper.ssl.endpoint.identification.algorithm = HTTPS
policy-apex-pdp | [2024-04-25T14:22:59.365+00:00|INFO|ServiceManager|main] service manager starting Register pdp state change request dispatcher
grafana | logger=migrator t=2024-04-25T14:22:22.859239566Z level=info msg="Migration successfully executed" id="create dashboard acl table" duration=1.678113ms
policy-pap | [2024-04-25T14:22:57.157+00:00|INFO|DefaultSecurityFilterChain|main] Will secure any request with [org.springframework.security.web.session.DisableEncodeUrlFilter@6a3a56de, org.springframework.security.web.context.request.async.WebAsyncManagerIntegrationFilter@2ed84be9, org.springframework.security.web.context.SecurityContextHolderFilter@23d23d98, org.springframework.security.web.header.HeaderWriterFilter@7d483ebe, org.springframework.security.web.authentication.logout.LogoutFilter@762f8ff6, org.springframework.security.web.authentication.www.BasicAuthenticationFilter@5e34a84b, org.springframework.security.web.savedrequest.RequestCacheAwareFilter@40db6136, org.springframework.security.web.servletapi.SecurityContextHolderAwareRequestFilter@6ee1ddcf, org.springframework.security.web.authentication.AnonymousAuthenticationFilter@400e741, org.springframework.security.web.access.ExceptionTranslationFilter@21ba0d33, org.springframework.security.web.access.intercept.AuthorizationFilter@522f0bb8]
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetemplates_toscanodetemplate (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCANODETEMPLATES_TOSCANODETEMPLATE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion))
kafka | zookeeper.ssl.keystore.location = null
policy-apex-pdp | [2024-04-25T14:22:59.365+00:00|INFO|ServiceManager|main] service manager starting Message Dispatcher
grafana | logger=migrator t=2024-04-25T14:22:22.964660521Z level=info msg="Executing migration" id="add index dashboard_acl_dashboard_id"
policy-pap | [2024-04-25T14:22:57.920+00:00|INFO|EndpointLinksResolver|main] Exposing 3 endpoint(s) beneath base path ''
policy-db-migrator | --------------
kafka | zookeeper.ssl.keystore.password = null
policy-apex-pdp | [2024-04-25T14:22:59.365+00:00|INFO|TopicBase|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=5f0ab5d6-63b3-4b5a-a200-3d330f0096ce, consumerInstance=policy-apex-pdp, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-pdp-pap,5,main], topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: registering org.onap.policy.common.endpoints.listeners.MessageTypeDispatcher@60a2630a
grafana | logger=migrator t=2024-04-25T14:22:22.96612215Z level=info msg="Migration successfully executed" id="add index dashboard_acl_dashboard_id" duration=1.462359ms
policy-pap | [2024-04-25T14:22:58.040+00:00|INFO|Http11NioProtocol|main] Starting ProtocolHandler ["http-nio-6969"]
policy-db-migrator |
kafka | zookeeper.ssl.keystore.type = null
policy-apex-pdp | [2024-04-25T14:22:59.365+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=5f0ab5d6-63b3-4b5a-a200-3d330f0096ce, consumerInstance=policy-apex-pdp, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-pdp-pap,5,main], topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: register: start not attempted
grafana | logger=migrator t=2024-04-25T14:22:23.187095517Z level=info msg="Executing migration" id="add unique index dashboard_acl_dashboard_id_user_id"
policy-pap | [2024-04-25T14:22:58.068+00:00|INFO|TomcatWebServer|main] Tomcat started on port(s): 6969 (http) with context path '/policy/pap/v1'
policy-db-migrator |
kafka | zookeeper.ssl.ocsp.enable = false
policy-apex-pdp | [2024-04-25T14:22:59.365+00:00|INFO|ServiceManager|main] service manager starting Create REST server
grafana | logger=migrator t=2024-04-25T14:22:23.188647088Z level=info msg="Migration successfully executed" id="add unique index dashboard_acl_dashboard_id_user_id" duration=1.551661ms
policy-pap | [2024-04-25T14:22:58.084+00:00|INFO|ServiceManager|main] Policy PAP starting
policy-db-migrator | > upgrade 0630-toscanodetype.sql
kafka | zookeeper.ssl.protocol = TLSv1.2
policy-apex-pdp | [2024-04-25T14:22:59.395+00:00|INFO|OrderedServiceImpl|Timer-0] ***** OrderedServiceImpl implementers:
grafana | logger=migrator t=2024-04-25T14:22:23.305546197Z level=info msg="Executing migration" id="add unique index dashboard_acl_dashboard_id_team_id"
policy-pap | [2024-04-25T14:22:58.084+00:00|INFO|ServiceManager|main] Policy PAP starting Meter Registry
policy-db-migrator | --------------
kafka | zookeeper.ssl.truststore.location = null
policy-apex-pdp | []
grafana | logger=migrator t=2024-04-25T14:22:23.30643788Z level=info msg="Migration successfully executed" id="add unique index dashboard_acl_dashboard_id_team_id" duration=894.293µs
policy-pap | [2024-04-25T14:22:58.084+00:00|INFO|ServiceManager|main] Policy PAP starting PAP parameters
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, requirementsVersion VARCHAR(20) NULL, requirementsName VARCHAR(120) NULL, PRIMARY KEY PK_TOSCANODETYPE (name, version))
kafka | zookeeper.ssl.truststore.password = null
policy-apex-pdp | [2024-04-25T14:22:59.397+00:00|INFO|network|Timer-0] [OUT|KAFKA|policy-pdp-pap]
grafana | logger=migrator t=2024-04-25T14:22:23.427460176Z level=info msg="Executing migration" id="add index dashboard_acl_user_id"
policy-pap | [2024-04-25T14:22:58.085+00:00|INFO|ServiceManager|main] Policy PAP starting Pdp Heartbeat Listener
policy-db-migrator | --------------
kafka | zookeeper.ssl.truststore.type = null
policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"4fafdadb-f031-4653-ad75-cc11e2020b8b","timestampMs":1714054979366,"name":"apex-80274cf2-35d0-404d-a495-e62b89ee6834","pdpGroup":"defaultGroup"}
grafana | logger=migrator t=2024-04-25T14:22:23.428402738Z level=info msg="Migration successfully executed" id="add index dashboard_acl_user_id" duration=948.722µs
policy-pap | [2024-04-25T14:22:58.085+00:00|INFO|ServiceManager|main] Policy PAP starting Response Request ID Dispatcher
policy-db-migrator |
kafka | (kafka.server.KafkaConfig)
policy-apex-pdp | [2024-04-25T14:22:59.652+00:00|INFO|ServiceManager|main] service manager starting Rest Server
grafana | logger=migrator t=2024-04-25T14:22:23.487689935Z level=info msg="Executing migration" id="add index dashboard_acl_team_id"
policy-pap | [2024-04-25T14:22:58.085+00:00|INFO|ServiceManager|main] Policy PAP starting Heartbeat Request ID Dispatcher
policy-db-migrator |
kafka | [2024-04-25 14:22:23,560] INFO [ThrottledChannelReaper-Fetch]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
policy-apex-pdp | [2024-04-25T14:22:59.653+00:00|INFO|ServiceManager|main] service manager starting
grafana | logger=migrator t=2024-04-25T14:22:23.489328748Z level=info msg="Migration successfully executed" id="add index dashboard_acl_team_id" duration=1.641973ms
policy-pap | [2024-04-25T14:22:58.085+00:00|INFO|ServiceManager|main] Policy PAP starting Response Message Dispatcher
policy-db-migrator | > upgrade 0640-toscanodetypes.sql
kafka | [2024-04-25 14:22:23,561] INFO [ThrottledChannelReaper-Produce]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
policy-apex-pdp | [2024-04-25T14:22:59.653+00:00|INFO|ServiceManager|main] service manager starting REST RestServerParameters
grafana | logger=migrator t=2024-04-25T14:22:23.832055249Z level=info msg="Executing migration" id="add index dashboard_acl_org_id_role"
policy-pap | [2024-04-25T14:22:58.087+00:00|INFO|TopicBase|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=b957469a-2969-4bff-8555-1bfe3e4d4da0, consumerInstance=policy-pap, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: registering org.onap.policy.common.endpoints.listeners.MessageTypeDispatcher@206b959c
policy-db-migrator | --------------
kafka | [2024-04-25 14:22:23,562] INFO [ThrottledChannelReaper-Request]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
policy-apex-pdp | [2024-04-25T14:22:59.653+00:00|INFO|JettyServletServer|main] JettyJerseyServer [Jerseyservlets={/metrics=io.prometheus.client.servlet.jakarta.exporter.MetricsServlet-72b16078==io.prometheus.client.servlet.jakarta.exporter.MetricsServlet@aa16c20f{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:null,STOPPED}, /*=org.glassfish.jersey.servlet.ServletContainer-d78795==org.glassfish.jersey.servlet.ServletContainer@b1764d3c{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=swagger-6969, toString()=JettyServletServer(name=RestServerParameters, host=0.0.0.0, port=6969, sniHostCheck=false, user=policyadmin, password=zb!XztG34, contextPath=/, jettyServer=Server@72c927f1{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@1ac85b0c{/,null,STOPPED}, connector=RestServerParameters@63c5efee{HTTP/1.1, (http/1.1)}{0.0.0.0:6969}, jettyThread=null, servlets={/metrics=io.prometheus.client.servlet.jakarta.exporter.MetricsServlet-72b16078==io.prometheus.client.servlet.jakarta.exporter.MetricsServlet@aa16c20f{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:null,STOPPED}, /*=org.glassfish.jersey.servlet.ServletContainer-d78795==org.glassfish.jersey.servlet.ServletContainer@b1764d3c{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING
grafana | logger=migrator t=2024-04-25T14:22:23.833287187Z level=info msg="Migration successfully executed" id="add index dashboard_acl_org_id_role" duration=1.234838ms
policy-pap | [2024-04-25T14:22:58.098+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=b957469a-2969-4bff-8555-1bfe3e4d4da0, consumerInstance=policy-pap, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetypes (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCANODETYPES (name, version))
kafka | [2024-04-25 14:22:23,566] INFO [ThrottledChannelReaper-ControllerMutation]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
policy-apex-pdp | [2024-04-25T14:22:59.662+00:00|INFO|ServiceManager|main] service manager started
grafana | logger=migrator t=2024-04-25T14:22:23.884575504Z level=info msg="Executing migration" id="add index dashboard_permission"
policy-pap | [2024-04-25T14:22:58.098+00:00|INFO|ConsumerConfig|main] ConsumerConfig values:
policy-db-migrator | --------------
kafka | [2024-04-25 14:22:23,592] INFO Loading logs from log dirs ArraySeq(/var/lib/kafka/data) (kafka.log.LogManager)
policy-apex-pdp | [2024-04-25T14:22:59.662+00:00|INFO|ServiceManager|main] service manager started
grafana | logger=migrator t=2024-04-25T14:22:23.885467836Z level=info msg="Migration successfully executed" id="add index dashboard_permission" duration=895.772µs
policy-pap | allow.auto.create.topics = true
policy-db-migrator |
kafka | [2024-04-25 14:22:23,597] INFO No logs found to be loaded in /var/lib/kafka/data (kafka.log.LogManager)
policy-apex-pdp | [2024-04-25T14:22:59.662+00:00|INFO|ApexStarterMain|main] Started policy-apex-pdp service successfully.
grafana | logger=migrator t=2024-04-25T14:22:24.208060224Z level=info msg="Executing migration" id="save default acl rules in dashboard_acl table"
policy-pap | auto.commit.interval.ms = 5000
policy-db-migrator |
kafka | [2024-04-25 14:22:23,604] INFO Loaded 0 logs in 12ms (kafka.log.LogManager)
grafana | logger=migrator t=2024-04-25T14:22:24.208975627Z level=info msg="Migration successfully executed" id="save default acl rules in dashboard_acl table" duration=919.503µs
policy-pap | auto.include.jmx.reporter = true
policy-apex-pdp | [2024-04-25T14:22:59.662+00:00|INFO|JettyServletServer|RestServerParameters-6969] JettyJerseyServer [Jerseyservlets={/metrics=io.prometheus.client.servlet.jakarta.exporter.MetricsServlet-72b16078==io.prometheus.client.servlet.jakarta.exporter.MetricsServlet@aa16c20f{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:null,STOPPED}, /*=org.glassfish.jersey.servlet.ServletContainer-d78795==org.glassfish.jersey.servlet.ServletContainer@b1764d3c{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=swagger-6969, toString()=JettyServletServer(name=RestServerParameters, host=0.0.0.0, port=6969, sniHostCheck=false, user=policyadmin, password=zb!XztG34, contextPath=/, jettyServer=Server@72c927f1{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@1ac85b0c{/,null,STOPPED}, connector=RestServerParameters@63c5efee{HTTP/1.1, (http/1.1)}{0.0.0.0:6969}, jettyThread=Thread[RestServerParameters-6969,5,main], servlets={/metrics=io.prometheus.client.servlet.jakarta.exporter.MetricsServlet-72b16078==io.prometheus.client.servlet.jakarta.exporter.MetricsServlet@aa16c20f{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:null,STOPPED}, /*=org.glassfish.jersey.servlet.ServletContainer-d78795==org.glassfish.jersey.servlet.ServletContainer@b1764d3c{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING
policy-db-migrator | > upgrade 0650-toscanodetypes_toscanodetype.sql
kafka | [2024-04-25 14:22:23,605] INFO Starting log cleanup with a period of 300000 ms. (kafka.log.LogManager)
policy-pap | auto.offset.reset = latest
grafana | logger=migrator t=2024-04-25T14:22:24.349454657Z level=info msg="Executing migration" id="delete acl rules for deleted dashboards and folders"
policy-apex-pdp | [2024-04-25T14:22:59.851+00:00|INFO|Metadata|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Cluster ID: lFyKLv7sTJO7XXtTZrPgZw
policy-db-migrator | --------------
kafka | [2024-04-25 14:22:23,606] INFO Starting log flusher with a default period of 9223372036854775807 ms. (kafka.log.LogManager)
policy-pap | bootstrap.servers = [kafka:9092]
grafana | logger=migrator t=2024-04-25T14:22:24.349932654Z level=info msg="Migration successfully executed" id="delete acl rules for deleted dashboards and folders" duration=481.687µs
policy-apex-pdp | [2024-04-25T14:22:59.851+00:00|INFO|Metadata|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-5f0ab5d6-63b3-4b5a-a200-3d330f0096ce-2, groupId=5f0ab5d6-63b3-4b5a-a200-3d330f0096ce] Cluster ID: lFyKLv7sTJO7XXtTZrPgZw
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetypes_toscanodetype (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCANODETYPES_TOSCANODETYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion))
kafka | [2024-04-25 14:22:23,616] INFO Starting the log cleaner (kafka.log.LogCleaner)
policy-pap | check.crcs = true
grafana | logger=migrator t=2024-04-25T14:22:24.629822061Z level=info msg="Executing migration" id="create tag table"
policy-apex-pdp | [2024-04-25T14:22:59.853+00:00|INFO|TransactionManager|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] ProducerId set to 2 with epoch 0
policy-db-migrator | --------------
kafka | [2024-04-25 14:22:23,657] INFO [kafka-log-cleaner-thread-0]: Starting (kafka.log.LogCleaner$CleanerThread)
policy-pap | client.dns.lookup = use_all_dns_ips
grafana | logger=migrator t=2024-04-25T14:22:24.631077298Z level=info msg="Migration successfully executed" id="create tag table" duration=1.258266ms
policy-apex-pdp | [2024-04-25T14:22:59.853+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-5f0ab5d6-63b3-4b5a-a200-3d330f0096ce-2, groupId=5f0ab5d6-63b3-4b5a-a200-3d330f0096ce] Discovered group coordinator kafka:9092 (id: 2147483646 rack: null)
policy-db-migrator |
kafka | [2024-04-25 14:22:23,672] INFO [feature-zk-node-event-process-thread]: Starting (kafka.server.FinalizedFeatureChangeListener$ChangeNotificationProcessorThread)
policy-pap | client.id = consumer-b957469a-2969-4bff-8555-1bfe3e4d4da0-3
grafana | logger=migrator t=2024-04-25T14:22:24.810730572Z level=info msg="Executing migration" id="add index tag.key_value"
policy-apex-pdp | [2024-04-25T14:22:59.861+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-5f0ab5d6-63b3-4b5a-a200-3d330f0096ce-2, groupId=5f0ab5d6-63b3-4b5a-a200-3d330f0096ce] (Re-)joining group
policy-db-migrator |
kafka | [2024-04-25 14:22:23,681] INFO Feature ZK node at path: /feature does not exist (kafka.server.FinalizedFeatureChangeListener)
policy-pap | client.rack =
grafana | logger=migrator t=2024-04-25T14:22:24.812406334Z level=info msg="Migration successfully executed" id="add index tag.key_value" duration=1.677592ms
policy-apex-pdp | [2024-04-25T14:22:59.880+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-5f0ab5d6-63b3-4b5a-a200-3d330f0096ce-2, groupId=5f0ab5d6-63b3-4b5a-a200-3d330f0096ce] Request joining group due to: need to re-join with the given member-id: consumer-5f0ab5d6-63b3-4b5a-a200-3d330f0096ce-2-35a6a127-a715-4075-8b68-3fa09af1055b
policy-db-migrator | > upgrade 0660-toscaparameter.sql
kafka | [2024-04-25 14:22:23,713] INFO [zk-broker-1-to-controller-forwarding-channel-manager]: Starting (kafka.server.BrokerToControllerRequestThread)
policy-pap | connections.max.idle.ms = 540000
grafana | logger=migrator t=2024-04-25T14:22:24.888649441Z level=info msg="Executing migration" id="create login attempt table"
policy-apex-pdp | [2024-04-25T14:22:59.880+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-5f0ab5d6-63b3-4b5a-a200-3d330f0096ce-2, groupId=5f0ab5d6-63b3-4b5a-a200-3d330f0096ce] Request joining group due to: rebalance failed due to 'The group member needs to have a valid member id before actually entering a consumer group.' (MemberIdRequiredException)
policy-db-migrator | --------------
kafka | [2024-04-25 14:22:24,139] INFO Updated connection-accept-rate max connection creation rate to 2147483647 (kafka.network.ConnectionQuotas)
policy-pap | default.api.timeout.ms = 60000
grafana | logger=migrator t=2024-04-25T14:22:24.889956049Z level=info msg="Migration successfully executed" id="create login attempt table" duration=1.309448ms
policy-apex-pdp | [2024-04-25T14:22:59.880+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-5f0ab5d6-63b3-4b5a-a200-3d330f0096ce-2, groupId=5f0ab5d6-63b3-4b5a-a200-3d330f0096ce] (Re-)joining group
kafka | [2024-04-25 14:22:24,157] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1] Created data-plane acceptor and processors for endpoint : ListenerName(PLAINTEXT) (kafka.network.SocketServer)
policy-pap | enable.auto.commit = true
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscaparameter (VALUE VARCHAR(255) NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCAPARAMETER (parentLocalName, localName, parentKeyVersion, parentKeyName))
grafana | logger=migrator t=2024-04-25T14:22:24.9400011Z level=info msg="Executing migration" id="add index login_attempt.username"
policy-apex-pdp | [2024-04-25T14:23:00.311+00:00|INFO|GsonMessageBodyHandler|RestServerParameters-6969] Using GSON for REST calls
kafka | [2024-04-25 14:22:24,157] INFO Updated connection-accept-rate max connection creation rate to 2147483647 (kafka.network.ConnectionQuotas)
policy-pap | exclude.internal.topics = true
policy-db-migrator | --------------
grafana | logger=migrator t=2024-04-25T14:22:24.941134695Z level=info msg="Migration successfully executed" id="add index login_attempt.username" duration=1.159145ms
policy-apex-pdp | [2024-04-25T14:23:00.313+00:00|INFO|YamlMessageBodyHandler|RestServerParameters-6969] Accepting YAML for REST calls
policy-pap | fetch.max.bytes = 52428800
kafka | [2024-04-25 14:22:24,162] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1] Created data-plane acceptor and processors for endpoint : ListenerName(PLAINTEXT_HOST) (kafka.network.SocketServer)
policy-db-migrator |
grafana | logger=migrator t=2024-04-25T14:22:25.073182731Z level=info msg="Executing migration" id="drop index IDX_login_attempt_username - v1"
policy-apex-pdp | [2024-04-25T14:23:02.908+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-5f0ab5d6-63b3-4b5a-a200-3d330f0096ce-2, groupId=5f0ab5d6-63b3-4b5a-a200-3d330f0096ce] Successfully joined group with generation Generation{generationId=1, memberId='consumer-5f0ab5d6-63b3-4b5a-a200-3d330f0096ce-2-35a6a127-a715-4075-8b68-3fa09af1055b', protocol='range'}
policy-pap | fetch.max.wait.ms = 500
kafka | [2024-04-25 14:22:24,177] INFO [zk-broker-1-to-controller-alter-partition-channel-manager]: Starting (kafka.server.BrokerToControllerRequestThread)
policy-db-migrator |
grafana | logger=migrator t=2024-04-25T14:22:25.074638741Z level=info msg="Migration successfully executed" id="drop index IDX_login_attempt_username - v1" duration=1.45931ms
policy-apex-pdp |
[2024-04-25T14:23:02.918+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-5f0ab5d6-63b3-4b5a-a200-3d330f0096ce-2, groupId=5f0ab5d6-63b3-4b5a-a200-3d330f0096ce] Finished assignment for group at generation 1: {consumer-5f0ab5d6-63b3-4b5a-a200-3d330f0096ce-2-35a6a127-a715-4075-8b68-3fa09af1055b=Assignment(partitions=[policy-pdp-pap-0])} policy-pap | fetch.min.bytes = 1 kafka | [2024-04-25 14:22:24,198] INFO [ExpirationReaper-1-Produce]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) policy-db-migrator | > upgrade 0670-toscapolicies.sql grafana | logger=migrator t=2024-04-25T14:22:25.119058185Z level=info msg="Executing migration" id="Rename table login_attempt to login_attempt_tmp_qwerty - v1" policy-apex-pdp | [2024-04-25T14:23:02.947+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-5f0ab5d6-63b3-4b5a-a200-3d330f0096ce-2, groupId=5f0ab5d6-63b3-4b5a-a200-3d330f0096ce] Successfully synced group in generation Generation{generationId=1, memberId='consumer-5f0ab5d6-63b3-4b5a-a200-3d330f0096ce-2-35a6a127-a715-4075-8b68-3fa09af1055b', protocol='range'} kafka | [2024-04-25 14:22:24,199] INFO [ExpirationReaper-1-Fetch]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-25T14:22:25.137471405Z level=info msg="Migration successfully executed" id="Rename table login_attempt to login_attempt_tmp_qwerty - v1" duration=18.41676ms policy-pap | group.id = b957469a-2969-4bff-8555-1bfe3e4d4da0 policy-apex-pdp | [2024-04-25T14:23:02.948+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-5f0ab5d6-63b3-4b5a-a200-3d330f0096ce-2, groupId=5f0ab5d6-63b3-4b5a-a200-3d330f0096ce] Notifying assignor about the new Assignment(partitions=[policy-pdp-pap-0]) kafka | [2024-04-25 14:22:24,200] INFO [ExpirationReaper-1-DeleteRecords]: Starting 
(kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicies (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCAPOLICIES (name, version)) policy-db-migrator | -------------- policy-pap | group.instance.id = null kafka | [2024-04-25 14:22:24,200] INFO [ExpirationReaper-1-ElectLeader]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) policy-db-migrator | policy-apex-pdp | [2024-04-25T14:23:02.951+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-5f0ab5d6-63b3-4b5a-a200-3d330f0096ce-2, groupId=5f0ab5d6-63b3-4b5a-a200-3d330f0096ce] Adding newly assigned partitions: policy-pdp-pap-0 policy-pap | heartbeat.interval.ms = 3000 kafka | [2024-04-25 14:22:24,201] INFO [ExpirationReaper-1-RemoteFetch]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) kafka | [2024-04-25 14:22:24,212] INFO [LogDirFailureHandler]: Starting (kafka.server.ReplicaManager$LogDirFailureHandler) policy-apex-pdp | [2024-04-25T14:23:02.993+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-5f0ab5d6-63b3-4b5a-a200-3d330f0096ce-2, groupId=5f0ab5d6-63b3-4b5a-a200-3d330f0096ce] Found no committed offset for partition policy-pdp-pap-0 policy-pap | interceptor.classes = [] kafka | [2024-04-25 14:22:24,213] INFO [AddPartitionsToTxnSenderThread-1]: Starting (kafka.server.AddPartitionsToTxnManager) policy-apex-pdp | [2024-04-25T14:23:03.034+00:00|INFO|SubscriptionState|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-5f0ab5d6-63b3-4b5a-a200-3d330f0096ce-2, groupId=5f0ab5d6-63b3-4b5a-a200-3d330f0096ce] Resetting offset for partition policy-pdp-pap-0 to position FetchPosition{offset=1, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[kafka:9092 (id: 1 rack: null)], epoch=0}}. 
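The policy-apex-pdp consumer above reports `Discovered group coordinator kafka:9092 (id: 2147483646 rack: null)`. That id is not a real broker id: the Kafka client derives a synthetic node id for the coordinator connection as `Integer.MAX_VALUE - broker.id`, which keeps the coordinator socket separate from the regular fetch connections. A quick check against this run's single broker (registered at `/brokers/ids/1`):

```python
# The Kafka Java client builds a dedicated Node for the group coordinator
# whose id is Integer.MAX_VALUE minus the broker id, so the coordinator
# connection stays distinct from the data-plane connection pool.
INT_MAX = 2**31 - 1          # Java Integer.MAX_VALUE = 2147483647
broker_id = 1                # this run registers a single broker at /brokers/ids/1

coordinator_node_id = INT_MAX - broker_id
print(coordinator_node_id)   # -> 2147483646, matching the log line above
```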
policy-pap | internal.leave.group.on.close = true grafana | logger=migrator t=2024-04-25T14:22:25.333286838Z level=info msg="Executing migration" id="create login_attempt v2" policy-apex-pdp | [2024-04-25T14:23:19.364+00:00|INFO|network|Timer-0] [OUT|KAFKA|policy-pdp-pap] policy-db-migrator | kafka | [2024-04-25 14:22:24,245] INFO Creating /brokers/ids/1 (is it secure? false) (kafka.zk.KafkaZkClient) grafana | logger=migrator t=2024-04-25T14:22:25.334505855Z level=info msg="Migration successfully executed" id="create login_attempt v2" duration=1.219667ms policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"550bf7bf-5870-4f3b-a328-6d7a1c64d750","timestampMs":1714054999364,"name":"apex-80274cf2-35d0-404d-a495-e62b89ee6834","pdpGroup":"defaultGroup"} policy-db-migrator | > upgrade 0680-toscapolicies_toscapolicy.sql policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false kafka | [2024-04-25 14:22:24,282] INFO Stat of the created znode at /brokers/ids/1 is: 27,27,1714054944255,1714054944255,1,0,0,72057618239062017,258,0,27 grafana | logger=migrator t=2024-04-25T14:22:25.415761071Z level=info msg="Executing migration" id="create index IDX_login_attempt_username - v2" policy-apex-pdp | [2024-04-25T14:23:19.386+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-db-migrator | -------------- policy-pap | isolation.level = read_uncommitted kafka | (kafka.zk.KafkaZkClient) grafana | logger=migrator t=2024-04-25T14:22:25.41718971Z level=info msg="Migration successfully executed" id="create index IDX_login_attempt_username - v2" duration=1.428979ms policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"550bf7bf-5870-4f3b-a328-6d7a1c64d750","timestampMs":1714054999364,"name":"apex-80274cf2-35d0-404d-a495-e62b89ee6834","pdpGroup":"defaultGroup"} 
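The PDP heartbeat repeated above is a small JSON document published on the `policy-pdp-pap` topic. A minimal sketch (plain stdlib, payload copied from the log lines above) that extracts the fields PAP keys on and confirms that `timestampMs` lines up with the `14:23:19.364` timestamp of the log entry:

```python
import json
from datetime import datetime, timezone

# PDP_STATUS heartbeat exactly as logged by policy-apex-pdp above.
heartbeat = json.loads(
    '{"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY",'
    '"description":"Pdp Heartbeat","messageName":"PDP_STATUS",'
    '"requestId":"550bf7bf-5870-4f3b-a328-6d7a1c64d750",'
    '"timestampMs":1714054999364,'
    '"name":"apex-80274cf2-35d0-404d-a495-e62b89ee6834",'
    '"pdpGroup":"defaultGroup"}'
)

# timestampMs is epoch milliseconds; it matches the time the log entry was emitted.
sent_at = datetime.fromtimestamp(heartbeat["timestampMs"] / 1000, tz=timezone.utc)
print(heartbeat["messageName"], heartbeat["state"],
      sent_at.isoformat(timespec="milliseconds"))
# -> PDP_STATUS PASSIVE 2024-04-25T14:23:19.364+00:00
```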
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicies_toscapolicy (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCAPOLICIES_TOSCAPOLICY (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer kafka | [2024-04-25 14:22:24,284] INFO Registered broker 1 at path /brokers/ids/1 with addresses: PLAINTEXT://kafka:9092,PLAINTEXT_HOST://localhost:29092, czxid (broker epoch): 27 (kafka.zk.KafkaZkClient) grafana | logger=migrator t=2024-04-25T14:22:25.574774232Z level=info msg="Executing migration" id="copy login_attempt v1 to v2" policy-apex-pdp | [2024-04-25T14:23:19.389+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS policy-db-migrator | -------------- policy-pap | max.partition.fetch.bytes = 1048576 kafka | [2024-04-25 14:22:24,547] INFO [ControllerEventThread controllerId=1] Starting (kafka.controller.ControllerEventManager$ControllerEventThread) grafana | logger=migrator t=2024-04-25T14:22:25.57524874Z level=info msg="Migration successfully executed" id="copy login_attempt v1 to v2" duration=475.688µs policy-apex-pdp | [2024-04-25T14:23:19.540+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-db-migrator | policy-pap | max.poll.interval.ms = 300000 kafka | [2024-04-25 14:22:24,553] INFO [ExpirationReaper-1-topic]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) grafana | logger=migrator t=2024-04-25T14:22:25.644076065Z level=info msg="Executing migration" id="drop login_attempt_tmp_qwerty" policy-apex-pdp | 
{"source":"pap-b43ecee3-a99c-4739-8071-6199e9c3e680","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"messageName":"PDP_UPDATE","requestId":"f38f5279-b344-4d66-86a2-21ebfb9d4e55","timestampMs":1714054999480,"name":"apex-80274cf2-35d0-404d-a495-e62b89ee6834","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-db-migrator | policy-pap | max.poll.records = 500 kafka | [2024-04-25 14:22:24,563] INFO [ExpirationReaper-1-Heartbeat]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) grafana | logger=migrator t=2024-04-25T14:22:25.645133609Z level=info msg="Migration successfully executed" id="drop login_attempt_tmp_qwerty" duration=1.057914ms policy-apex-pdp | [2024-04-25T14:23:19.558+00:00|WARN|Registry|KAFKA-source-policy-pdp-pap] replacing previously registered: object:pdp/status/publisher policy-db-migrator | > upgrade 0690-toscapolicy.sql policy-pap | metadata.max.age.ms = 300000 kafka | [2024-04-25 14:22:24,564] INFO [ExpirationReaper-1-Rebalance]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) grafana | logger=migrator t=2024-04-25T14:22:25.785680131Z level=info msg="Executing migration" id="create user auth table" policy-apex-pdp | [2024-04-25T14:23:19.559+00:00|INFO|network|Timer-1] [OUT|KAFKA|policy-pdp-pap] policy-db-migrator | -------------- policy-pap | metric.reporters = [] kafka | [2024-04-25 14:22:24,580] INFO [GroupCoordinator 1]: Starting up. 
(kafka.coordinator.group.GroupCoordinator) grafana | logger=migrator t=2024-04-25T14:22:25.78708183Z level=info msg="Migration successfully executed" id="create user auth table" duration=1.403769ms policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"e4357cfc-69c4-4da1-be8c-522d64c2326f","timestampMs":1714054999558,"name":"apex-80274cf2-35d0-404d-a495-e62b89ee6834","pdpGroup":"defaultGroup"} policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicy (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, type_name VARCHAR(255) NULL, type_version VARCHAR(255) NULL, PRIMARY KEY PK_TOSCAPOLICY (name, version)) policy-pap | metrics.num.samples = 2 kafka | [2024-04-25 14:22:24,636] INFO [GroupCoordinator 1]: Startup complete. (kafka.coordinator.group.GroupCoordinator) grafana | logger=migrator t=2024-04-25T14:22:25.847737355Z level=info msg="Executing migration" id="create index IDX_user_auth_auth_module_auth_id - v1" policy-apex-pdp | [2024-04-25T14:23:19.559+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [OUT|KAFKA|policy-pdp-pap] policy-db-migrator | -------------- policy-pap | metrics.recording.level = INFO kafka | [2024-04-25 14:22:24,633] INFO Successfully created /controller_epoch with initial epoch 0 (kafka.zk.KafkaZkClient) grafana | logger=migrator t=2024-04-25T14:22:25.849242735Z level=info msg="Migration successfully executed" id="create index IDX_user_auth_auth_module_auth_id - v1" duration=1.50427ms policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"f38f5279-b344-4d66-86a2-21ebfb9d4e55","responseStatus":"SUCCESS","responseMessage":"Pdp update 
successful."},"messageName":"PDP_STATUS","requestId":"c12a1405-928e-4481-a978-984303d383c8","timestampMs":1714054999559,"name":"apex-80274cf2-35d0-404d-a495-e62b89ee6834","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-db-migrator | policy-pap | metrics.sample.window.ms = 30000 kafka | [2024-04-25 14:22:24,658] INFO [TransactionCoordinator id=1] Starting up. (kafka.coordinator.transaction.TransactionCoordinator) grafana | logger=migrator t=2024-04-25T14:22:26.072863256Z level=info msg="Executing migration" id="alter user_auth.auth_id to length 190" policy-apex-pdp | [2024-04-25T14:23:19.568+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-db-migrator | policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] kafka | [2024-04-25 14:22:24,748] INFO [Controller id=1] 1 successfully elected as the controller. Epoch incremented to 1 and epoch zk version is now 1 (kafka.controller.KafkaController) grafana | logger=migrator t=2024-04-25T14:22:26.072978298Z level=info msg="Migration successfully executed" id="alter user_auth.auth_id to length 190" duration=118.462µs policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"e4357cfc-69c4-4da1-be8c-522d64c2326f","timestampMs":1714054999558,"name":"apex-80274cf2-35d0-404d-a495-e62b89ee6834","pdpGroup":"defaultGroup"} policy-db-migrator | > upgrade 0700-toscapolicytype.sql policy-pap | receive.buffer.bytes = 65536 kafka | [2024-04-25 14:22:24,750] INFO [TxnMarkerSenderThread-1]: Starting (kafka.coordinator.transaction.TransactionMarkerChannelManager) grafana | logger=migrator t=2024-04-25T14:22:26.214608074Z level=info msg="Executing migration" id="Add OAuth access token to user_auth" policy-apex-pdp | [2024-04-25T14:23:19.568+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] 
discarding event of type PDP_STATUS policy-db-migrator | -------------- policy-pap | reconnect.backoff.max.ms = 1000 kafka | [2024-04-25 14:22:24,750] INFO [TransactionCoordinator id=1] Startup complete. (kafka.coordinator.transaction.TransactionCoordinator) grafana | logger=migrator t=2024-04-25T14:22:26.222962568Z level=info msg="Migration successfully executed" id="Add OAuth access token to user_auth" duration=8.359554ms policy-apex-pdp | [2024-04-25T14:23:19.568+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicytype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCAPOLICYTYPE (name, version)) policy-pap | reconnect.backoff.ms = 50 kafka | [2024-04-25 14:22:24,753] INFO [Controller id=1] Creating FeatureZNode at path: /feature with contents: FeatureZNode(2,Enabled,Map()) (kafka.controller.KafkaController) grafana | logger=migrator t=2024-04-25T14:22:26.300958099Z level=info msg="Executing migration" id="Add OAuth refresh token to user_auth" policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"f38f5279-b344-4d66-86a2-21ebfb9d4e55","responseStatus":"SUCCESS","responseMessage":"Pdp update successful."},"messageName":"PDP_STATUS","requestId":"c12a1405-928e-4481-a978-984303d383c8","timestampMs":1714054999559,"name":"apex-80274cf2-35d0-404d-a495-e62b89ee6834","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-pap | request.timeout.ms = 30000 kafka | [2024-04-25 14:22:24,774] INFO Feature ZK node created at path: /feature (kafka.server.FinalizedFeatureChangeListener) grafana | logger=migrator t=2024-04-25T14:22:26.310057202Z level=info msg="Migration successfully executed" id="Add OAuth refresh token to user_auth" 
duration=9.103793ms policy-db-migrator | -------------- policy-pap | retry.backoff.ms = 100 kafka | [2024-04-25 14:22:24,797] INFO [ExpirationReaper-1-AlterAcls]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) grafana | logger=migrator t=2024-04-25T14:22:26.511610983Z level=info msg="Executing migration" id="Add OAuth token type to user_auth" policy-apex-pdp | [2024-04-25T14:23:19.569+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS policy-db-migrator | policy-pap | sasl.client.callback.handler.class = null kafka | [2024-04-25 14:22:24,800] INFO [MetadataCache brokerId=1] Updated cache from existing None to latest Features(version=3.6-IV2, finalizedFeatures={}, finalizedFeaturesEpoch=0). (kafka.server.metadata.ZkMetadataCache) grafana | logger=migrator t=2024-04-25T14:22:26.521191013Z level=info msg="Migration successfully executed" id="Add OAuth token type to user_auth" duration=9.57982ms policy-apex-pdp | [2024-04-25T14:23:19.593+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-db-migrator | policy-pap | sasl.jaas.config = null policy-db-migrator | > upgrade 0710-toscapolicytypes.sql kafka | [2024-04-25 14:22:24,803] INFO [Controller id=1] Registering handlers (kafka.controller.KafkaController) policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit policy-apex-pdp | {"source":"pap-b43ecee3-a99c-4739-8071-6199e9c3e680","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"ecd1b796-0fe4-44b0-a7d5-d9c405fda44a","timestampMs":1714054999481,"name":"apex-80274cf2-35d0-404d-a495-e62b89ee6834","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} grafana | logger=migrator t=2024-04-25T14:22:26.603621404Z level=info msg="Executing migration" id="Add OAuth expiry to user_auth" policy-db-migrator | -------------- kafka | [2024-04-25 14:22:24,808] INFO [Controller id=1] Deleting log dir event notifications (kafka.controller.KafkaController) policy-apex-pdp | 
[2024-04-25T14:23:19.595+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [OUT|KAFKA|policy-pdp-pap] grafana | logger=migrator t=2024-04-25T14:22:26.612753449Z level=info msg="Migration successfully executed" id="Add OAuth expiry to user_auth" duration=9.137105ms policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicytypes (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCAPOLICYTYPES (name, version)) policy-pap | sasl.kerberos.min.time.before.relogin = 60000 kafka | [2024-04-25 14:22:24,813] INFO [Controller id=1] Deleting isr change notifications (kafka.controller.KafkaController) grafana | logger=migrator t=2024-04-25T14:22:26.835530297Z level=info msg="Executing migration" id="Add index to user_id column in user_auth" policy-db-migrator | -------------- policy-pap | sasl.kerberos.service.name = null kafka | [2024-04-25 14:22:24,817] INFO [Controller id=1] Initializing controller context (kafka.controller.KafkaController) policy-apex-pdp | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpStateChange","policies":[],"response":{"responseTo":"ecd1b796-0fe4-44b0-a7d5-d9c405fda44a","responseStatus":"SUCCESS","responseMessage":"State changed to active. 
No policies found."},"messageName":"PDP_STATUS","requestId":"43df1ced-174a-4736-a982-b481c53f90de","timestampMs":1714054999595,"name":"apex-80274cf2-35d0-404d-a495-e62b89ee6834","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} grafana | logger=migrator t=2024-04-25T14:22:26.837352552Z level=info msg="Migration successfully executed" id="Add index to user_id column in user_auth" duration=1.826385ms policy-db-migrator | policy-apex-pdp | [2024-04-25T14:23:19.605+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] grafana | logger=migrator t=2024-04-25T14:22:27.028648263Z level=info msg="Executing migration" id="Add OAuth ID token to user_auth" policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 policy-db-migrator | policy-apex-pdp | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpStateChange","policies":[],"response":{"responseTo":"ecd1b796-0fe4-44b0-a7d5-d9c405fda44a","responseStatus":"SUCCESS","responseMessage":"State changed to active. 
No policies found."},"messageName":"PDP_STATUS","requestId":"43df1ced-174a-4736-a982-b481c53f90de","timestampMs":1714054999595,"name":"apex-80274cf2-35d0-404d-a495-e62b89ee6834","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 kafka | [2024-04-25 14:22:24,830] INFO [Controller id=1] Initialized broker epochs cache: HashMap(1 -> 27) (kafka.controller.KafkaController) grafana | logger=migrator t=2024-04-25T14:22:27.034808998Z level=info msg="Migration successfully executed" id="Add OAuth ID token to user_auth" duration=6.164644ms policy-db-migrator | > upgrade 0720-toscapolicytypes_toscapolicytype.sql policy-apex-pdp | [2024-04-25T14:23:19.606+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS kafka | [2024-04-25 14:22:24,830] INFO [/config/changes-event-process-thread]: Starting (kafka.common.ZkNodeChangeNotificationListener$ChangeEventProcessThread) grafana | logger=migrator t=2024-04-25T14:22:27.218843439Z level=info msg="Executing migration" id="create server_lock table" policy-db-migrator | -------------- policy-pap | sasl.login.callback.handler.class = null policy-apex-pdp | [2024-04-25T14:23:19.703+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicytypes_toscapolicytype (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCAPOLICYTYPES_TOSCAPOLICYTYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) policy-apex-pdp | 
{"source":"pap-b43ecee3-a99c-4739-8071-6199e9c3e680","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"92ed1daf-00dc-46f3-a934-a5b206758853","timestampMs":1714054999658,"name":"apex-80274cf2-35d0-404d-a495-e62b89ee6834","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} kafka | [2024-04-25 14:22:24,838] DEBUG [Controller id=1] Register BrokerModifications handler for Set(1) (kafka.controller.KafkaController) grafana | logger=migrator t=2024-04-25T14:22:27.220508122Z level=info msg="Migration successfully executed" id="create server_lock table" duration=1.663043ms policy-pap | sasl.login.class = null policy-db-migrator | -------------- policy-apex-pdp | [2024-04-25T14:23:19.708+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [OUT|KAFKA|policy-pdp-pap] grafana | logger=migrator t=2024-04-25T14:22:27.348288059Z level=info msg="Executing migration" id="add index server_lock.operation_uid" policy-pap | sasl.login.connect.timeout.ms = null policy-db-migrator | kafka | [2024-04-25 14:22:24,842] DEBUG [Channel manager on controller 1]: Controller 1 trying to connect to broker 1 (kafka.controller.ControllerChannelManager) policy-apex-pdp | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"92ed1daf-00dc-46f3-a934-a5b206758853","responseStatus":"SUCCESS","responseMessage":"Pdp already updated"},"messageName":"PDP_STATUS","requestId":"9306b898-b6a4-4c63-98db-745073d13a5b","timestampMs":1714054999708,"name":"apex-80274cf2-35d0-404d-a495-e62b89ee6834","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} grafana | logger=migrator t=2024-04-25T14:22:27.349867681Z level=info msg="Migration successfully executed" id="add index server_lock.operation_uid" duration=1.580802ms policy-pap | sasl.login.read.timeout.ms = null policy-db-migrator | kafka | [2024-04-25 14:22:24,850] INFO [Controller id=1] Currently active 
brokers in the cluster: Set(1) (kafka.controller.KafkaController) policy-apex-pdp | [2024-04-25T14:23:19.718+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] grafana | logger=migrator t=2024-04-25T14:22:27.434073585Z level=info msg="Executing migration" id="create user auth token table" policy-pap | sasl.login.refresh.buffer.seconds = 300 policy-db-migrator | > upgrade 0730-toscaproperty.sql kafka | [2024-04-25 14:22:24,850] INFO [Controller id=1] Currently shutting brokers in the cluster: HashSet() (kafka.controller.KafkaController) policy-apex-pdp | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"92ed1daf-00dc-46f3-a934-a5b206758853","responseStatus":"SUCCESS","responseMessage":"Pdp already updated"},"messageName":"PDP_STATUS","requestId":"9306b898-b6a4-4c63-98db-745073d13a5b","timestampMs":1714054999708,"name":"apex-80274cf2-35d0-404d-a495-e62b89ee6834","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-pap | sasl.login.refresh.min.period.seconds = 60 policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-25T14:22:27.435641837Z level=info msg="Migration successfully executed" id="create user auth token table" duration=1.568152ms kafka | [2024-04-25 14:22:24,850] INFO [Controller id=1] Current list of topics in the cluster: HashSet() (kafka.controller.KafkaController) policy-apex-pdp | [2024-04-25T14:23:19.718+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS policy-db-migrator | CREATE TABLE IF NOT EXISTS toscaproperty (DEFAULTVALUE VARCHAR(255) DEFAULT NULL, `DESCRIPTION` VARCHAR(255) DEFAULT NULL, ENTRYSCHEMA LONGBLOB DEFAULT NULL, REQUIRED BOOLEAN DEFAULT 0, STATUS INT DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, name VARCHAR(120) DEFAULT NULL, 
version VARCHAR(20) DEFAULT NULL, PRIMARY KEY PK_TOSCAPROPERTY (parentLocalName, localName, parentKeyVersion, parentKeyName)) grafana | logger=migrator t=2024-04-25T14:22:27.627546906Z level=info msg="Executing migration" id="add unique index user_auth_token.auth_token" kafka | [2024-04-25 14:22:24,850] INFO [Controller id=1] Fetching topic deletions in progress (kafka.controller.KafkaController) policy-pap | sasl.login.refresh.window.factor = 0.8 policy-db-migrator | -------------- kafka | [2024-04-25 14:22:24,851] INFO [RequestSendThread controllerId=1] Starting (kafka.controller.RequestSendThread) policy-pap | sasl.login.refresh.window.jitter = 0.05 policy-apex-pdp | [2024-04-25T14:23:56.157+00:00|INFO|RequestLog|qtp739264372-33] 172.17.0.5 - policyadmin [25/Apr/2024:14:23:56 +0000] "GET /metrics HTTP/1.1" 200 10649 "-" "Prometheus/2.51.2" grafana | logger=migrator t=2024-04-25T14:22:27.629710045Z level=info msg="Migration successfully executed" id="add unique index user_auth_token.auth_token" duration=2.164709ms policy-db-migrator | policy-pap | sasl.login.retry.backoff.max.ms = 10000 policy-apex-pdp | [2024-04-25T14:24:56.079+00:00|INFO|RequestLog|qtp739264372-28] 172.17.0.5 - policyadmin [25/Apr/2024:14:24:56 +0000] "GET /metrics HTTP/1.1" 200 10652 "-" "Prometheus/2.51.2" grafana | logger=migrator t=2024-04-25T14:22:27.754748186Z level=info msg="Executing migration" id="add unique index user_auth_token.prev_auth_token" kafka | [2024-04-25 14:22:24,856] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1] Enabling request processing. 
(kafka.network.SocketServer) policy-pap | sasl.login.retry.backoff.ms = 100 grafana | logger=migrator t=2024-04-25T14:22:27.755996263Z level=info msg="Migration successfully executed" id="add unique index user_auth_token.prev_auth_token" duration=1.250237ms policy-db-migrator | kafka | [2024-04-25 14:22:24,857] INFO [Controller id=1] List of topics to be deleted: (kafka.controller.KafkaController) policy-pap | sasl.mechanism = GSSAPI grafana | logger=migrator t=2024-04-25T14:22:27.768220878Z level=info msg="Executing migration" id="add index user_auth_token.user_id" policy-db-migrator | > upgrade 0740-toscarelationshiptype.sql kafka | [2024-04-25 14:22:24,858] INFO [Controller id=1] List of topics ineligible for deletion: (kafka.controller.KafkaController) grafana | logger=migrator t=2024-04-25T14:22:27.769904732Z level=info msg="Migration successfully executed" id="add index user_auth_token.user_id" duration=1.683594ms policy-db-migrator | -------------- policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 kafka | [2024-04-25 14:22:24,858] INFO [Controller id=1] Initializing topic deletion manager (kafka.controller.KafkaController) policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarelationshiptype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCARELATIONSHIPTYPE (name, version)) policy-pap | sasl.oauthbearer.expected.audience = null grafana | logger=migrator t=2024-04-25T14:22:27.778862783Z level=info msg="Executing migration" id="Add revoked_at to the user auth token" kafka | [2024-04-25 14:22:24,858] INFO [Topic Deletion Manager 1] Initializing manager with initial deletions: Set(), initial ineligible deletions: HashSet() (kafka.controller.TopicDeletionManager) policy-pap | sasl.oauthbearer.expected.issuer = null grafana | logger=migrator t=2024-04-25T14:22:27.784301257Z level=info msg="Migration successfully executed" 
id="Add revoked_at to the user auth token" duration=5.437954ms kafka | [2024-04-25 14:22:24,859] INFO [Controller id=1] Sending update metadata request (kafka.controller.KafkaController) grafana | logger=migrator t=2024-04-25T14:22:27.787742644Z level=info msg="Executing migration" id="add index user_auth_token.revoked_at" policy-db-migrator | -------------- kafka | [2024-04-25 14:22:24,862] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet(1) for 0 partitions (state.change.logger) policy-db-migrator | policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 grafana | logger=migrator t=2024-04-25T14:22:27.788754888Z level=info msg="Migration successfully executed" id="add index user_auth_token.revoked_at" duration=1.012254ms kafka | [2024-04-25 14:22:24,862] INFO Awaiting socket connections on 0.0.0.0:29092. (kafka.network.DataPlaneAcceptor) policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 grafana | logger=migrator t=2024-04-25T14:22:27.792796483Z level=info msg="Executing migration" id="create cache_data table" policy-db-migrator | kafka | [2024-04-25 14:22:24,869] INFO Awaiting socket connections on 0.0.0.0:9092. 
(kafka.network.DataPlaneAcceptor) policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 grafana | logger=migrator t=2024-04-25T14:22:27.793644374Z level=info msg="Migration successfully executed" id="create cache_data table" duration=847.141µs policy-db-migrator | > upgrade 0750-toscarelationshiptypes.sql kafka | [2024-04-25 14:22:24,869] INFO [ReplicaStateMachine controllerId=1] Initializing replica state (kafka.controller.ZkReplicaStateMachine) policy-pap | sasl.oauthbearer.jwks.endpoint.url = null grafana | logger=migrator t=2024-04-25T14:22:27.886386716Z level=info msg="Executing migration" id="add unique index cache_data.cache_key" policy-db-migrator | -------------- kafka | [2024-04-25 14:22:24,870] INFO [ReplicaStateMachine controllerId=1] Triggering online replica state changes (kafka.controller.ZkReplicaStateMachine) policy-pap | sasl.oauthbearer.scope.claim.name = scope grafana | logger=migrator t=2024-04-25T14:22:27.88895844Z level=info msg="Migration successfully executed" id="add unique index cache_data.cache_key" duration=2.570954ms policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarelationshiptypes (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCARELATIONSHIPTYPES (name, version)) kafka | [2024-04-25 14:22:24,872] INFO [ReplicaStateMachine controllerId=1] Triggering offline replica state changes (kafka.controller.ZkReplicaStateMachine) policy-pap | sasl.oauthbearer.sub.claim.name = sub grafana | logger=migrator t=2024-04-25T14:22:27.898112135Z level=info msg="Executing migration" id="create short_url table v1" policy-db-migrator | -------------- kafka | [2024-04-25 14:22:24,872] DEBUG [ReplicaStateMachine controllerId=1] Started replica state machine with initial state -> HashMap() (kafka.controller.ZkReplicaStateMachine) policy-pap | sasl.oauthbearer.token.endpoint.url = null grafana | logger=migrator t=2024-04-25T14:22:27.898999887Z level=info msg="Migration successfully executed" id="create short_url 
table v1" duration=887.922µs policy-db-migrator | kafka | [2024-04-25 14:22:24,872] INFO [PartitionStateMachine controllerId=1] Initializing partition state (kafka.controller.ZkPartitionStateMachine) policy-pap | security.protocol = PLAINTEXT grafana | logger=migrator t=2024-04-25T14:22:27.904465581Z level=info msg="Executing migration" id="add index short_url.org_id-uid" policy-db-migrator | grafana | logger=migrator t=2024-04-25T14:22:27.906075633Z level=info msg="Migration successfully executed" id="add index short_url.org_id-uid" duration=1.609752ms kafka | [2024-04-25 14:22:24,878] INFO [RequestSendThread controllerId=1] Controller 1 connected to kafka:9092 (id: 1 rack: null) for sending state change requests (kafka.controller.RequestSendThread) policy-pap | security.providers = null policy-db-migrator | > upgrade 0760-toscarelationshiptypes_toscarelationshiptype.sql grafana | logger=migrator t=2024-04-25T14:22:27.910566504Z level=info msg="Executing migration" id="alter table short_url alter column created_by type to bigint" kafka | [2024-04-25 14:22:24,879] INFO [PartitionStateMachine controllerId=1] Triggering online partition state changes (kafka.controller.ZkPartitionStateMachine) policy-pap | send.buffer.bytes = 131072 policy-db-migrator | -------------- kafka | [2024-04-25 14:22:24,884] DEBUG [PartitionStateMachine controllerId=1] Started partition state machine with initial state -> HashMap() (kafka.controller.ZkPartitionStateMachine) policy-pap | session.timeout.ms = 45000 grafana | logger=migrator t=2024-04-25T14:22:27.910627635Z level=info msg="Migration successfully executed" id="alter table short_url alter column created_by type to bigint" duration=63.401µs policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarelationshiptypes_toscarelationshiptype (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name 
VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCARELATIONSHIPTYPES_TOSCARELATIONSHIPTYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) kafka | [2024-04-25 14:22:24,885] INFO [Controller id=1] Ready to serve as the new controller with epoch 1 (kafka.controller.KafkaController) policy-pap | socket.connection.setup.timeout.max.ms = 30000 grafana | logger=migrator t=2024-04-25T14:22:27.914977504Z level=info msg="Executing migration" id="delete alert_definition table" policy-db-migrator | -------------- kafka | [2024-04-25 14:22:24,892] INFO Kafka version: 7.6.1-ccs (org.apache.kafka.common.utils.AppInfoParser) policy-pap | socket.connection.setup.timeout.ms = 10000 grafana | logger=migrator t=2024-04-25T14:22:27.915054145Z level=info msg="Migration successfully executed" id="delete alert_definition table" duration=76.761µs policy-db-migrator | kafka | [2024-04-25 14:22:24,892] INFO Kafka commitId: 11e81ad2a49db00b1d2b8c731409cd09e563de67 (org.apache.kafka.common.utils.AppInfoParser) policy-pap | ssl.cipher.suites = null grafana | logger=migrator t=2024-04-25T14:22:27.91911313Z level=info msg="Executing migration" id="recreate alert_definition table" policy-db-migrator | kafka | [2024-04-25 14:22:24,892] INFO Kafka startTimeMs: 1714054944883 (org.apache.kafka.common.utils.AppInfoParser) policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] grafana | logger=migrator t=2024-04-25T14:22:27.92052015Z level=info msg="Migration successfully executed" id="recreate alert_definition table" duration=1.403619ms policy-db-migrator | > upgrade 0770-toscarequirement.sql kafka | [2024-04-25 14:22:24,894] INFO [Controller id=1] Partitions undergoing preferred replica election: (kafka.controller.KafkaController) policy-pap | ssl.endpoint.identification.algorithm = https grafana | logger=migrator t=2024-04-25T14:22:27.925827092Z level=info msg="Executing migration" id="add index in alert_definition on org_id and 
title columns" policy-db-migrator | -------------- kafka | [2024-04-25 14:22:24,894] INFO [KafkaServer id=1] started (kafka.server.KafkaServer) policy-pap | ssl.engine.factory.class = null grafana | logger=migrator t=2024-04-25T14:22:27.927392072Z level=info msg="Migration successfully executed" id="add index in alert_definition on org_id and title columns" duration=1.5656ms policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarequirement (CAPABILITY VARCHAR(255) NULL, `DESCRIPTION` VARCHAR(255) NULL, NODE VARCHAR(255) NULL, RELATIONSHIP VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, type_name VARCHAR(255) NULL, type_version VARCHAR(255) NULL, PRIMARY KEY PK_TOSCAREQUIREMENT (name, version)) kafka | [2024-04-25 14:22:24,894] INFO [Controller id=1] Partitions that completed preferred replica election: (kafka.controller.KafkaController) policy-pap | ssl.key.password = null grafana | logger=migrator t=2024-04-25T14:22:27.937281717Z level=info msg="Executing migration" id="add index in alert_definition on org_id and uid columns" policy-db-migrator | -------------- kafka | [2024-04-25 14:22:24,894] INFO [Controller id=1] Skipping preferred replica election for partitions due to topic deletion: (kafka.controller.KafkaController) policy-pap | ssl.keymanager.algorithm = SunX509 grafana | logger=migrator t=2024-04-25T14:22:27.938321371Z level=info msg="Migration successfully executed" id="add index in alert_definition on org_id and uid columns" duration=1.039174ms policy-db-migrator | kafka | [2024-04-25 14:22:24,895] INFO [Controller id=1] Resuming preferred replica election for partitions: (kafka.controller.KafkaController) policy-pap | ssl.keystore.certificate.chain = null grafana | logger=migrator t=2024-04-25T14:22:27.943863246Z level=info msg="Executing migration" id="alter alert_definition table data column to mediumtext in mysql" policy-db-migrator | kafka | 
[2024-04-25 14:22:24,896] INFO [Controller id=1] Starting replica leader election (PREFERRED) for partitions triggered by ZkTriggered (kafka.controller.KafkaController) policy-pap | ssl.keystore.key = null grafana | logger=migrator t=2024-04-25T14:22:27.943954908Z level=info msg="Migration successfully executed" id="alter alert_definition table data column to mediumtext in mysql" duration=92.052µs policy-db-migrator | > upgrade 0780-toscarequirements.sql kafka | [2024-04-25 14:22:24,916] INFO [Controller id=1] Starting the controller scheduler (kafka.controller.KafkaController) policy-pap | ssl.keystore.location = null grafana | logger=migrator t=2024-04-25T14:22:27.948000694Z level=info msg="Executing migration" id="drop index in alert_definition on org_id and title columns" policy-db-migrator | -------------- kafka | [2024-04-25 14:22:24,926] INFO [zk-broker-1-to-controller-forwarding-channel-manager]: Recorded new controller, from now on will use node kafka:9092 (id: 1 rack: null) (kafka.server.BrokerToControllerRequestThread) policy-pap | ssl.keystore.password = null grafana | logger=migrator t=2024-04-25T14:22:27.949504044Z level=info msg="Migration successfully executed" id="drop index in alert_definition on org_id and title columns" duration=1.50572ms policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarequirements (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCAREQUIREMENTS (name, version)) kafka | [2024-04-25 14:22:24,933] TRACE [Controller id=1 epoch=1] Received response UpdateMetadataResponseData(errorCode=0) for request UPDATE_METADATA with correlation id 0 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger) policy-pap | ssl.keystore.type = JKS grafana | logger=migrator t=2024-04-25T14:22:27.953831963Z level=info msg="Executing migration" id="drop index in alert_definition on org_id and uid columns" policy-db-migrator | -------------- kafka | [2024-04-25 14:22:24,981] INFO 
[zk-broker-1-to-controller-alter-partition-channel-manager]: Recorded new controller, from now on will use node kafka:9092 (id: 1 rack: null) (kafka.server.BrokerToControllerRequestThread) policy-pap | ssl.protocol = TLSv1.3 grafana | logger=migrator t=2024-04-25T14:22:27.954751505Z level=info msg="Migration successfully executed" id="drop index in alert_definition on org_id and uid columns" duration=919.502µs policy-db-migrator | kafka | [2024-04-25 14:22:29,917] INFO [Controller id=1] Processing automatic preferred replica leader election (kafka.controller.KafkaController) grafana | logger=migrator t=2024-04-25T14:22:27.959093273Z level=info msg="Executing migration" id="add unique index in alert_definition on org_id and title columns" policy-pap | ssl.provider = null policy-db-migrator | kafka | [2024-04-25 14:22:29,918] TRACE [Controller id=1] Checking need to trigger auto leader balancing (kafka.controller.KafkaController) grafana | logger=migrator t=2024-04-25T14:22:27.960143288Z level=info msg="Migration successfully executed" id="add unique index in alert_definition on org_id and title columns" duration=1.014645ms policy-pap | ssl.secure.random.implementation = null policy-db-migrator | > upgrade 0790-toscarequirements_toscarequirement.sql kafka | [2024-04-25 14:22:58,591] INFO Creating topic policy-pdp-pap with configuration {} and initial partition assignment HashMap(0 -> ArrayBuffer(1)) (kafka.zk.AdminZkClient) grafana | logger=migrator t=2024-04-25T14:22:27.96398409Z level=info msg="Executing migration" id="add unique index in alert_definition on org_id and uid columns" policy-pap | ssl.trustmanager.algorithm = PKIX policy-db-migrator | -------------- kafka | [2024-04-25 14:22:58,598] INFO Creating topic __consumer_offsets with configuration {compression.type=producer, cleanup.policy=compact, segment.bytes=104857600} and initial partition assignment HashMap(0 -> ArrayBuffer(1), 1 -> ArrayBuffer(1), 2 -> ArrayBuffer(1), 3 -> ArrayBuffer(1), 4 -> 
ArrayBuffer(1), 5 -> ArrayBuffer(1), 6 -> ArrayBuffer(1), 7 -> ArrayBuffer(1), 8 -> ArrayBuffer(1), 9 -> ArrayBuffer(1), 10 -> ArrayBuffer(1), 11 -> ArrayBuffer(1), 12 -> ArrayBuffer(1), 13 -> ArrayBuffer(1), 14 -> ArrayBuffer(1), 15 -> ArrayBuffer(1), 16 -> ArrayBuffer(1), 17 -> ArrayBuffer(1), 18 -> ArrayBuffer(1), 19 -> ArrayBuffer(1), 20 -> ArrayBuffer(1), 21 -> ArrayBuffer(1), 22 -> ArrayBuffer(1), 23 -> ArrayBuffer(1), 24 -> ArrayBuffer(1), 25 -> ArrayBuffer(1), 26 -> ArrayBuffer(1), 27 -> ArrayBuffer(1), 28 -> ArrayBuffer(1), 29 -> ArrayBuffer(1), 30 -> ArrayBuffer(1), 31 -> ArrayBuffer(1), 32 -> ArrayBuffer(1), 33 -> ArrayBuffer(1), 34 -> ArrayBuffer(1), 35 -> ArrayBuffer(1), 36 -> ArrayBuffer(1), 37 -> ArrayBuffer(1), 38 -> ArrayBuffer(1), 39 -> ArrayBuffer(1), 40 -> ArrayBuffer(1), 41 -> ArrayBuffer(1), 42 -> ArrayBuffer(1), 43 -> ArrayBuffer(1), 44 -> ArrayBuffer(1), 45 -> ArrayBuffer(1), 46 -> ArrayBuffer(1), 47 -> ArrayBuffer(1), 48 -> ArrayBuffer(1), 49 -> ArrayBuffer(1)) (kafka.zk.AdminZkClient) grafana | logger=migrator t=2024-04-25T14:22:27.965032375Z level=info msg="Migration successfully executed" id="add unique index in alert_definition on org_id and uid columns" duration=1.045985ms policy-pap | ssl.truststore.certificates = null policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarequirements_toscarequirement (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCAREQUIREMENTS_TOSCAREQUIREMENT (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) kafka | [2024-04-25 14:22:58,639] DEBUG [Controller id=1] There is no producerId block yet (Zk path version 0), creating the first block (kafka.controller.KafkaController) grafana | logger=migrator t=2024-04-25T14:22:27.971138258Z level=info msg="Executing 
migration" id="Add column paused in alert_definition" policy-pap | ssl.truststore.location = null policy-db-migrator | -------------- kafka | [2024-04-25 14:22:58,648] INFO [Controller id=1] Acquired new producerId block ProducerIdsBlock(assignedBrokerId=1, firstProducerId=0, size=1000) by writing to Zk with path version 1 (kafka.controller.KafkaController) grafana | logger=migrator t=2024-04-25T14:22:27.978145233Z level=info msg="Migration successfully executed" id="Add column paused in alert_definition" duration=7.006224ms policy-pap | ssl.truststore.password = null policy-db-migrator | kafka | [2024-04-25 14:22:58,665] INFO [Controller id=1] New topics: [Set(policy-pdp-pap)], deleted topics: [HashSet()], new partition replica assignment [Set(TopicIdReplicaAssignment(policy-pdp-pap,Some(UDjaTEkFR6iaxHll2hUQXA),Map(policy-pdp-pap-0 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=))))] (kafka.controller.KafkaController) grafana | logger=migrator t=2024-04-25T14:22:27.982664914Z level=info msg="Executing migration" id="drop alert_definition table" policy-pap | ssl.truststore.type = JKS policy-db-migrator | kafka | [2024-04-25 14:22:58,666] INFO [Controller id=1] New partition creation callback for policy-pdp-pap-0 (kafka.controller.KafkaController) grafana | logger=migrator t=2024-04-25T14:22:27.983350413Z level=info msg="Migration successfully executed" id="drop alert_definition table" duration=683.259µs policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-db-migrator | > upgrade 0800-toscaservicetemplate.sql kafka | [2024-04-25 14:22:58,667] INFO [Controller id=1 epoch=1] Changed partition policy-pdp-pap-0 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) grafana | logger=migrator t=2024-04-25T14:22:27.98676158Z level=info msg="Executing migration" id="delete alert_definition_version table" policy-pap | policy-db-migrator | -------------- kafka | 
[2024-04-25 14:22:58,668] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger) grafana | logger=migrator t=2024-04-25T14:22:27.986836971Z level=info msg="Migration successfully executed" id="delete alert_definition_version table" duration=75.641µs policy-pap | [2024-04-25T14:22:58.104+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscaservicetemplate (`DESCRIPTION` VARCHAR(255) NULL, TOSCADEFINITIONSVERSION VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, capabilityTypesVersion VARCHAR(20) NULL, capabilityTypesName VARCHAR(120) NULL, dataTypesName VARCHAR(120) NULL, dataTypesVersion VARCHAR(20) NULL, nodeTypesVersion VARCHAR(20) NULL, nodeTypesName VARCHAR(120) NULL, policyTypesName VARCHAR(120) NULL, policyTypesVersion VARCHAR(20) NULL, relationshipTypesVersion VARCHAR(20) NULL, relationshipTypesName VARCHAR(120) NULL, topologyTemplateLocalName VARCHAR(120) NULL, topologyTemplateParentKeyName VARCHAR(120) NULL, topologyTemplateParentKeyVersion VARCHAR(15) NULL, topologyTemplateParentLocalName VARCHAR(120) NULL, PRIMARY KEY PK_TOSCASERVICETEMPLATE (name, version)) kafka | [2024-04-25 14:22:58,671] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition policy-pdp-pap-0 from NonExistentReplica to NewReplica (state.change.logger) grafana | logger=migrator t=2024-04-25T14:22:27.989676839Z level=info msg="Executing migration" id="recreate alert_definition_version table" policy-pap | [2024-04-25T14:22:58.104+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 policy-db-migrator | -------------- kafka | [2024-04-25 14:22:58,671] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger) grafana | logger=migrator t=2024-04-25T14:22:27.99115574Z level=info 
msg="Migration successfully executed" id="recreate alert_definition_version table" duration=1.478431ms policy-pap | [2024-04-25T14:22:58.104+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1714054978104 policy-db-migrator | kafka | [2024-04-25 14:22:58,704] INFO [Controller id=1 epoch=1] Changed partition policy-pdp-pap-0 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) grafana | logger=migrator t=2024-04-25T14:22:27.996104847Z level=info msg="Executing migration" id="add index in alert_definition_version table on alert_definition_id and version columns" policy-pap | [2024-04-25T14:22:58.104+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-b957469a-2969-4bff-8555-1bfe3e4d4da0-3, groupId=b957469a-2969-4bff-8555-1bfe3e4d4da0] Subscribed to topic(s): policy-pdp-pap policy-db-migrator | kafka | [2024-04-25 14:22:58,706] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='policy-pdp-pap', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition policy-pdp-pap-0 (state.change.logger) grafana | logger=migrator t=2024-04-25T14:22:27.997275333Z level=info msg="Migration successfully executed" id="add index in alert_definition_version table on alert_definition_id and version columns" duration=1.170886ms policy-pap | [2024-04-25T14:22:58.105+00:00|INFO|ServiceManager|main] Policy PAP starting Heartbeat Message Dispatcher policy-db-migrator | > upgrade 0810-toscatopologytemplate.sql kafka | [2024-04-25 14:22:58,707] INFO [Controller id=1 epoch=1] Sending LeaderAndIsr request to broker 1 with 1 become-leader and 0 become-follower partitions (state.change.logger) grafana | logger=migrator 
t=2024-04-25T14:22:28.000823521Z level=info msg="Executing migration" id="add index in alert_definition_version table on alert_definition_uid and version columns" policy-pap | [2024-04-25T14:22:58.105+00:00|INFO|TopicBase|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=policy-pap, consumerInstance=9514907f-d028-45fc-9240-ae8706efbfe3, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-heartbeat, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: registering org.onap.policy.common.endpoints.listeners.MessageTypeDispatcher@54b35809 policy-db-migrator | -------------- kafka | [2024-04-25 14:22:58,709] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet(1) for 1 partitions (state.change.logger) grafana | logger=migrator t=2024-04-25T14:22:28.001849855Z level=info msg="Migration successfully executed" id="add index in alert_definition_version table on alert_definition_uid and version columns" duration=1.026104ms policy-pap | [2024-04-25T14:22:58.105+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=policy-pap, consumerInstance=9514907f-d028-45fc-9240-ae8706efbfe3, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-heartbeat, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting 
kafka | [2024-04-25 14:22:58,710] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition policy-pdp-pap-0 from NewReplica to OnlineReplica (state.change.logger) grafana | logger=migrator t=2024-04-25T14:22:28.008656347Z level=info msg="Executing migration" id="alter alert_definition_version table data column to mediumtext in mysql" policy-db-migrator | CREATE TABLE IF NOT EXISTS toscatopologytemplate (`description` VARCHAR(255) NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, nodeTemplatessVersion VARCHAR(20) NULL, nodeTemplatesName VARCHAR(120) NULL, policyVersion VARCHAR(20) NULL, policyName VARCHAR(120) NULL, PRIMARY KEY PK_TOSCATOPOLOGYTEMPLATE (parentLocalName, localName, parentKeyVersion, parentKeyName)) policy-pap | [2024-04-25T14:22:58.105+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: kafka | [2024-04-25 14:22:58,710] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger) grafana | logger=migrator t=2024-04-25T14:22:28.00886936Z level=info msg="Migration successfully executed" id="alter alert_definition_version table data column to mediumtext in mysql" duration=212.613µs policy-db-migrator | -------------- policy-pap | allow.auto.create.topics = true kafka | [2024-04-25 14:22:58,714] INFO [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 for 1 partitions (state.change.logger) grafana | logger=migrator t=2024-04-25T14:22:28.015941287Z level=info msg="Executing migration" id="drop alert_definition_version table" policy-db-migrator | policy-pap | auto.commit.interval.ms = 5000 kafka | [2024-04-25 14:22:58,715] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='policy-pdp-pap', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], 
removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) grafana | logger=migrator t=2024-04-25T14:22:28.017111793Z level=info msg="Migration successfully executed" id="drop alert_definition_version table" duration=1.169926ms policy-db-migrator | policy-pap | auto.include.jmx.reporter = true grafana | logger=migrator t=2024-04-25T14:22:28.02060382Z level=info msg="Executing migration" id="create alert_instance table" kafka | [2024-04-25 14:22:58,727] INFO [Controller id=1] New topics: [Set(__consumer_offsets)], deleted topics: [HashSet()], new partition replica assignment [Set(TopicIdReplicaAssignment(__consumer_offsets,Some(Z-ljZKLXR-y1QhXAaAKdbg),HashMap(__consumer_offsets-22 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-30 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-25 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-35 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-37 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-38 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-13 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-8 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-21 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-4 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-27 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-7 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-9 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-46 -> ReplicaAssignment(replicas=1, addingReplicas=, 
removingReplicas=), __consumer_offsets-41 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-33 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-23 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-49 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-47 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-16 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-28 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-31 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-36 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-42 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-3 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-18 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-15 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-24 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-17 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-48 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-19 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-11 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-2 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-43 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-6 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-14 -> 
ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-20 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-0 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-44 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-39 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-12 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-45 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-1 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-5 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-26 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-29 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-34 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-10 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-32 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-40 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=))))] (kafka.controller.KafkaController) policy-db-migrator | > upgrade 0820-toscatrigger.sql policy-pap | auto.offset.reset = latest kafka | [2024-04-25 14:22:58,728] INFO [Controller id=1] New partition creation callback for 
__consumer_offsets-22,__consumer_offsets-30,__consumer_offsets-25,__consumer_offsets-35,__consumer_offsets-37,__consumer_offsets-38,__consumer_offsets-13,__consumer_offsets-8,__consumer_offsets-21,__consumer_offsets-4,__consumer_offsets-27,__consumer_offsets-7,__consumer_offsets-9,__consumer_offsets-46,__consumer_offsets-41,__consumer_offsets-33,__consumer_offsets-23,__consumer_offsets-49,__consumer_offsets-47,__consumer_offsets-16,__consumer_offsets-28,__consumer_offsets-31,__consumer_offsets-36,__consumer_offsets-42,__consumer_offsets-3,__consumer_offsets-18,__consumer_offsets-15,__consumer_offsets-24,__consumer_offsets-17,__consumer_offsets-48,__consumer_offsets-19,__consumer_offsets-11,__consumer_offsets-2,__consumer_offsets-43,__consumer_offsets-6,__consumer_offsets-14,__consumer_offsets-20,__consumer_offsets-0,__consumer_offsets-44,__consumer_offsets-39,__consumer_offsets-12,__consumer_offsets-45,__consumer_offsets-1,__consumer_offsets-5,__consumer_offsets-26,__consumer_offsets-29,__consumer_offsets-34,__consumer_offsets-10,__consumer_offsets-32,__consumer_offsets-40 (kafka.controller.KafkaController) grafana | logger=migrator t=2024-04-25T14:22:28.021584704Z level=info msg="Migration successfully executed" id="create alert_instance table" duration=978.954µs policy-db-migrator | -------------- policy-pap | bootstrap.servers = [kafka:9092] kafka | [2024-04-25 14:22:58,729] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition policy-pdp-pap-0 (state.change.logger) grafana | logger=migrator t=2024-04-25T14:22:28.025659419Z level=info msg="Executing migration" id="add index in alert_instance table on def_org_id, def_uid and current_state columns" policy-db-migrator | CREATE TABLE IF NOT EXISTS toscatrigger (ACTION VARCHAR(255) NULL, toscaCondition LONGBLOB DEFAULT NULL, toscaConstraint LONGBLOB DEFAULT NULL, `DESCRIPTION` VARCHAR(255) NULL, EVALUATIONS INT DEFAULT NULL, 
EVENTTYPE VARCHAR(255) NULL, METHOD VARCHAR(255) NULL, `PERIOD` LONGBLOB DEFAULT NULL, SCHEDULE LONGBLOB DEFAULT NULL, TARGETFILTER LONGBLOB DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_TOSCATRIGGER (parentLocalName, localName, parentKeyVersion, parentKeyName)) policy-pap | check.crcs = true kafka | [2024-04-25 14:22:58,729] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-22 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) grafana | logger=migrator t=2024-04-25T14:22:28.026696743Z level=info msg="Migration successfully executed" id="add index in alert_instance table on def_org_id, def_uid and current_state columns" duration=1.036304ms policy-db-migrator | -------------- policy-pap | client.dns.lookup = use_all_dns_ips kafka | [2024-04-25 14:22:58,732] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-30 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) grafana | logger=migrator t=2024-04-25T14:22:28.029968408Z level=info msg="Executing migration" id="add index in alert_instance table on def_org_id, current_state columns" policy-db-migrator | policy-pap | client.id = consumer-policy-pap-4 kafka | [2024-04-25 14:22:58,732] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-25 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) grafana | logger=migrator t=2024-04-25T14:22:28.031017832Z level=info msg="Migration successfully executed" id="add index in alert_instance table on def_org_id, current_state columns" duration=1.048814ms policy-db-migrator | policy-pap | client.rack = kafka | [2024-04-25 14:22:58,732] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-35 state from NonExistentPartition to NewPartition with assigned replicas 1 
(state.change.logger) grafana | logger=migrator t=2024-04-25T14:22:28.034948846Z level=info msg="Executing migration" id="add column current_state_end to alert_instance" policy-db-migrator | > upgrade 0830-FK_ToscaNodeTemplate_capabilitiesName.sql policy-pap | connections.max.idle.ms = 540000 kafka | [2024-04-25 14:22:58,732] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-37 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) grafana | logger=migrator t=2024-04-25T14:22:28.040855156Z level=info msg="Migration successfully executed" id="add column current_state_end to alert_instance" duration=5.90538ms policy-db-migrator | -------------- policy-pap | default.api.timeout.ms = 60000 kafka | [2024-04-25 14:22:58,732] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-38 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) grafana | logger=migrator t=2024-04-25T14:22:28.047251002Z level=info msg="Executing migration" id="remove index def_org_id, def_uid, current_state on alert_instance" policy-db-migrator | CREATE INDEX FK_ToscaNodeTemplate_capabilitiesName ON toscanodetemplate(capabilitiesName, capabilitiesVersion) policy-pap | enable.auto.commit = true kafka | [2024-04-25 14:22:58,732] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-13 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) grafana | logger=migrator t=2024-04-25T14:22:28.048673531Z level=info msg="Migration successfully executed" id="remove index def_org_id, def_uid, current_state on alert_instance" duration=1.422349ms policy-db-migrator | -------------- policy-pap | exclude.internal.topics = true kafka | [2024-04-25 14:22:58,732] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-8 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) grafana | logger=migrator 
t=2024-04-25T14:22:28.056421827Z level=info msg="Executing migration" id="remove index def_org_id, current_state on alert_instance" policy-db-migrator | policy-pap | fetch.max.bytes = 52428800 kafka | [2024-04-25 14:22:58,733] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-21 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) grafana | logger=migrator t=2024-04-25T14:22:28.05733569Z level=info msg="Migration successfully executed" id="remove index def_org_id, current_state on alert_instance" duration=916.303µs policy-db-migrator | policy-pap | fetch.max.wait.ms = 500 grafana | logger=migrator t=2024-04-25T14:22:28.062027123Z level=info msg="Executing migration" id="rename def_org_id to rule_org_id in alert_instance" kafka | [2024-04-25 14:22:58,733] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-4 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-db-migrator | > upgrade 0840-FK_ToscaNodeTemplate_requirementsName.sql policy-pap | fetch.min.bytes = 1 grafana | logger=migrator t=2024-04-25T14:22:28.091286801Z level=info msg="Migration successfully executed" id="rename def_org_id to rule_org_id in alert_instance" duration=29.259398ms kafka | [2024-04-25 14:22:58,733] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-27 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-db-migrator | -------------- policy-pap | group.id = policy-pap grafana | logger=migrator t=2024-04-25T14:22:28.095510808Z level=info msg="Executing migration" id="rename def_uid to rule_uid in alert_instance" kafka | [2024-04-25 14:22:58,733] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-7 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-db-migrator | CREATE INDEX FK_ToscaNodeTemplate_requirementsName ON 
toscanodetemplate(requirementsName, requirementsVersion) policy-pap | group.instance.id = null grafana | logger=migrator t=2024-04-25T14:22:28.121830377Z level=info msg="Migration successfully executed" id="rename def_uid to rule_uid in alert_instance" duration=26.317579ms kafka | [2024-04-25 14:22:58,733] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-9 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-db-migrator | -------------- policy-pap | heartbeat.interval.ms = 3000 grafana | logger=migrator t=2024-04-25T14:22:28.125681029Z level=info msg="Executing migration" id="add index rule_org_id, rule_uid, current_state on alert_instance" kafka | [2024-04-25 14:22:58,732] INFO [ReplicaFetcherManager on broker 1] Removed fetcher for partitions Set(policy-pdp-pap-0) (kafka.server.ReplicaFetcherManager) policy-db-migrator | policy-pap | interceptor.classes = [] grafana | logger=migrator t=2024-04-25T14:22:28.126355648Z level=info msg="Migration successfully executed" id="add index rule_org_id, rule_uid, current_state on alert_instance" duration=673.699µs kafka | [2024-04-25 14:22:58,733] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-46 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-db-migrator | policy-pap | internal.leave.group.on.close = true grafana | logger=migrator t=2024-04-25T14:22:28.130073159Z level=info msg="Executing migration" id="add index rule_org_id, current_state on alert_instance" kafka | [2024-04-25 14:22:58,733] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-41 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-db-migrator | > upgrade 0850-FK_ToscaNodeType_requirementsName.sql policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false grafana | logger=migrator t=2024-04-25T14:22:28.130803018Z level=info msg="Migration 
successfully executed" id="add index rule_org_id, current_state on alert_instance" duration=728.629µs kafka | [2024-04-25 14:22:58,733] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-33 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-db-migrator | -------------- policy-pap | isolation.level = read_uncommitted grafana | logger=migrator t=2024-04-25T14:22:28.136920581Z level=info msg="Executing migration" id="add current_reason column related to current_state" kafka | [2024-04-25 14:22:58,733] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-23 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-db-migrator | CREATE INDEX FK_ToscaNodeType_requirementsName ON toscanodetype(requirementsName, requirementsVersion) policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer grafana | logger=migrator t=2024-04-25T14:22:28.142546098Z level=info msg="Migration successfully executed" id="add current_reason column related to current_state" duration=5.625167ms kafka | [2024-04-25 14:22:58,733] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-49 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-db-migrator | -------------- policy-pap | max.partition.fetch.bytes = 1048576 grafana | logger=migrator t=2024-04-25T14:22:28.146042685Z level=info msg="Executing migration" id="add result_fingerprint column to alert_instance" kafka | [2024-04-25 14:22:58,733] INFO [Broker id=1] Stopped fetchers as part of LeaderAndIsr request correlationId 1 from controller 1 epoch 1 as part of the become-leader transition for 1 partitions (state.change.logger) policy-db-migrator | policy-pap | max.poll.interval.ms = 300000 grafana | logger=migrator t=2024-04-25T14:22:28.152510344Z level=info msg="Migration successfully executed" id="add result_fingerprint 
column to alert_instance" duration=6.467319ms kafka | [2024-04-25 14:22:58,735] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-47 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-db-migrator | policy-pap | max.poll.records = 500 grafana | logger=migrator t=2024-04-25T14:22:28.159099093Z level=info msg="Executing migration" id="create alert_rule table" kafka | [2024-04-25 14:22:58,735] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-16 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-db-migrator | > upgrade 0860-FK_ToscaServiceTemplate_capabilityTypesName.sql policy-pap | metadata.max.age.ms = 300000 grafana | logger=migrator t=2024-04-25T14:22:28.159978914Z level=info msg="Migration successfully executed" id="create alert_rule table" duration=878.841µs kafka | [2024-04-25 14:22:58,735] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-28 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-db-migrator | -------------- policy-pap | metric.reporters = [] grafana | logger=migrator t=2024-04-25T14:22:28.164842201Z level=info msg="Executing migration" id="add index in alert_rule on org_id and title columns" kafka | [2024-04-25 14:22:58,735] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-31 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-db-migrator | CREATE INDEX FK_ToscaServiceTemplate_capabilityTypesName ON toscaservicetemplate(capabilityTypesName, capabilityTypesVersion) policy-pap | metrics.num.samples = 2 grafana | logger=migrator t=2024-04-25T14:22:28.165784463Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id and title columns" duration=936.962µs kafka | [2024-04-25 14:22:58,736] INFO [Controller id=1 epoch=1] Changed partition 
__consumer_offsets-36 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-db-migrator | -------------- policy-pap | metrics.recording.level = INFO grafana | logger=migrator t=2024-04-25T14:22:28.170937824Z level=info msg="Executing migration" id="add index in alert_rule on org_id and uid columns" kafka | [2024-04-25 14:22:58,736] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-42 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-db-migrator | policy-pap | metrics.sample.window.ms = 30000 grafana | logger=migrator t=2024-04-25T14:22:28.172630497Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id and uid columns" duration=1.689994ms kafka | [2024-04-25 14:22:58,736] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-3 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-db-migrator | policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] grafana | logger=migrator t=2024-04-25T14:22:28.208038578Z level=info msg="Executing migration" id="add index in alert_rule on org_id, namespace_uid, group_uid columns" kafka | [2024-04-25 14:22:58,738] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-18 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-db-migrator | > upgrade 0870-FK_ToscaServiceTemplate_dataTypesName.sql policy-pap | receive.buffer.bytes = 65536 grafana | logger=migrator t=2024-04-25T14:22:28.209934734Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, namespace_uid, group_uid columns" duration=1.893896ms kafka | [2024-04-25 14:22:58,738] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-15 state from 
NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-db-migrator | -------------- policy-pap | reconnect.backoff.max.ms = 1000 grafana | logger=migrator t=2024-04-25T14:22:28.215590421Z level=info msg="Executing migration" id="alter alert_rule table data column to mediumtext in mysql" kafka | [2024-04-25 14:22:58,738] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-24 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-db-migrator | CREATE INDEX FK_ToscaServiceTemplate_dataTypesName ON toscaservicetemplate(dataTypesName, dataTypesVersion) policy-pap | reconnect.backoff.ms = 50 grafana | logger=migrator t=2024-04-25T14:22:28.215657502Z level=info msg="Migration successfully executed" id="alter alert_rule table data column to mediumtext in mysql" duration=67.661µs kafka | [2024-04-25 14:22:58,738] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-17 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-db-migrator | -------------- policy-pap | request.timeout.ms = 30000 grafana | logger=migrator t=2024-04-25T14:22:28.222461454Z level=info msg="Executing migration" id="add column for to alert_rule" kafka | [2024-04-25 14:22:58,738] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-48 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-db-migrator | policy-pap | retry.backoff.ms = 100 grafana | logger=migrator t=2024-04-25T14:22:28.230732686Z level=info msg="Migration successfully executed" id="add column for to alert_rule" duration=8.275432ms kafka | [2024-04-25 14:22:58,738] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-19 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-db-migrator | policy-pap | sasl.client.callback.handler.class = null grafana | 
logger=migrator t=2024-04-25T14:22:28.235485361Z level=info msg="Executing migration" id="add column annotations to alert_rule" kafka | [2024-04-25 14:22:58,738] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-11 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-db-migrator | > upgrade 0880-FK_ToscaServiceTemplate_nodeTypesName.sql policy-pap | sasl.jaas.config = null grafana | logger=migrator t=2024-04-25T14:22:28.239705409Z level=info msg="Migration successfully executed" id="add column annotations to alert_rule" duration=4.219368ms kafka | [2024-04-25 14:22:58,738] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-2 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-db-migrator | -------------- policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit grafana | logger=migrator t=2024-04-25T14:22:28.242954933Z level=info msg="Executing migration" id="add column labels to alert_rule" kafka | [2024-04-25 14:22:58,738] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-43 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-db-migrator | CREATE INDEX FK_ToscaServiceTemplate_nodeTypesName ON toscaservicetemplate(nodeTypesName, nodeTypesVersion) policy-pap | sasl.kerberos.min.time.before.relogin = 60000 grafana | logger=migrator t=2024-04-25T14:22:28.252974999Z level=info msg="Migration successfully executed" id="add column labels to alert_rule" duration=10.013335ms kafka | [2024-04-25 14:22:58,738] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-6 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-db-migrator | -------------- policy-pap | sasl.kerberos.service.name = null policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 kafka | 
[2024-04-25 14:22:58,738] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-14 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-db-migrator | policy-pap | sasl.login.callback.handler.class = null policy-pap | sasl.login.class = null grafana | logger=migrator t=2024-04-25T14:22:28.261503744Z level=info msg="Executing migration" id="remove unique index from alert_rule on org_id, title columns" kafka | [2024-04-25 14:22:58,739] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-20 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-db-migrator | policy-pap | sasl.login.connect.timeout.ms = null policy-pap | sasl.login.read.timeout.ms = null grafana | logger=migrator t=2024-04-25T14:22:28.262716351Z level=info msg="Migration successfully executed" id="remove unique index from alert_rule on org_id, title columns" duration=1.209787ms kafka | [2024-04-25 14:22:58,739] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-0 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-db-migrator | > upgrade 0890-FK_ToscaServiceTemplate_policyTypesName.sql policy-pap | sasl.login.refresh.buffer.seconds = 300 policy-pap | sasl.login.refresh.min.period.seconds = 60 grafana | logger=migrator t=2024-04-25T14:22:28.309485178Z level=info msg="Executing migration" id="add index in alert_rule on org_id, namespase_uid and title columns" kafka | [2024-04-25 14:22:58,739] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-44 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-db-migrator | -------------- policy-pap | sasl.login.refresh.window.factor = 0.8 policy-pap | sasl.login.refresh.window.jitter = 0.05 grafana | logger=migrator t=2024-04-25T14:22:28.311085569Z level=info msg="Migration successfully executed" id="add index in 
alert_rule on org_id, namespase_uid and title columns" duration=1.600381ms kafka | [2024-04-25 14:22:58,739] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-39 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-db-migrator | CREATE INDEX FK_ToscaServiceTemplate_policyTypesName ON toscaservicetemplate(policyTypesName, policyTypesVersion) policy-pap | sasl.login.retry.backoff.max.ms = 10000 policy-pap | sasl.login.retry.backoff.ms = 100 grafana | logger=migrator t=2024-04-25T14:22:28.315509589Z level=info msg="Executing migration" id="add dashboard_uid column to alert_rule" kafka | [2024-04-25 14:22:58,739] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-12 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-db-migrator | -------------- policy-pap | sasl.mechanism = GSSAPI policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 grafana | logger=migrator t=2024-04-25T14:22:28.32593819Z level=info msg="Migration successfully executed" id="add dashboard_uid column to alert_rule" duration=10.429361ms kafka | [2024-04-25 14:22:58,739] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-45 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-db-migrator | policy-pap | sasl.oauthbearer.expected.audience = null policy-pap | sasl.oauthbearer.expected.issuer = null grafana | logger=migrator t=2024-04-25T14:22:28.336770668Z level=info msg="Executing migration" id="add panel_id column to alert_rule" kafka | [2024-04-25 14:22:58,739] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-1 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-db-migrator | policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 grafana | logger=migrator 
t=2024-04-25T14:22:28.343809114Z level=info msg="Migration successfully executed" id="add panel_id column to alert_rule" duration=7.038186ms kafka | [2024-04-25 14:22:58,739] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-5 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-db-migrator | > upgrade 0900-FK_ToscaServiceTemplate_relationshipTypesName.sql policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null kafka | [2024-04-25 14:22:58,739] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-26 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) grafana | logger=migrator t=2024-04-25T14:22:28.348935814Z level=info msg="Executing migration" id="add index in alert_rule on org_id, dashboard_uid and panel_id columns" policy-db-migrator | -------------- policy-pap | sasl.oauthbearer.scope.claim.name = scope policy-pap | sasl.oauthbearer.sub.claim.name = sub kafka | [2024-04-25 14:22:58,739] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-29 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) grafana | logger=migrator t=2024-04-25T14:22:28.349927837Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, dashboard_uid and panel_id columns" duration=991.793µs policy-db-migrator | CREATE INDEX FK_ToscaServiceTemplate_relationshipTypesName ON toscaservicetemplate(relationshipTypesName, relationshipTypesVersion) policy-pap | sasl.oauthbearer.token.endpoint.url = null policy-pap | security.protocol = PLAINTEXT kafka | [2024-04-25 14:22:58,739] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-34 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) grafana | logger=migrator t=2024-04-25T14:22:28.355508323Z level=info msg="Executing 
migration" id="add rule_group_idx column to alert_rule" policy-db-migrator | -------------- policy-pap | security.providers = null grafana | logger=migrator t=2024-04-25T14:22:28.362291035Z level=info msg="Migration successfully executed" id="add rule_group_idx column to alert_rule" duration=6.782742ms kafka | [2024-04-25 14:22:58,740] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-10 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-db-migrator | policy-pap | send.buffer.bytes = 131072 grafana | logger=migrator t=2024-04-25T14:22:28.369672396Z level=info msg="Executing migration" id="add is_paused column to alert_rule table" kafka | [2024-04-25 14:22:58,740] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-32 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-db-migrator | policy-pap | session.timeout.ms = 45000 grafana | logger=migrator t=2024-04-25T14:22:28.376057472Z level=info msg="Migration successfully executed" id="add is_paused column to alert_rule table" duration=6.384226ms kafka | [2024-04-25 14:22:58,740] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-40 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-db-migrator | > upgrade 0910-FK_ToscaTopologyTemplate_nodeTemplatesName.sql policy-pap | socket.connection.setup.timeout.max.ms = 30000 grafana | logger=migrator t=2024-04-25T14:22:28.43542156Z level=info msg="Executing migration" id="fix is_paused column for alert_rule table" kafka | [2024-04-25 14:22:58,740] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger) policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-25T14:22:28.435706613Z level=info msg="Migration successfully executed" id="fix is_paused column for alert_rule table" duration=286.413µs kafka 
| [2024-04-25 14:22:58,743] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-32 from NonExistentReplica to NewReplica (state.change.logger) policy-db-migrator | CREATE INDEX FK_ToscaTopologyTemplate_nodeTemplatesName ON toscatopologytemplate(nodeTemplatesName, nodeTemplatessVersion) grafana | logger=migrator t=2024-04-25T14:22:28.441962678Z level=info msg="Executing migration" id="create alert_rule_version table" kafka | [2024-04-25 14:22:58,743] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-5 from NonExistentReplica to NewReplica (state.change.logger) policy-db-migrator | -------------- policy-pap | socket.connection.setup.timeout.ms = 10000 grafana | logger=migrator t=2024-04-25T14:22:28.443930536Z level=info msg="Migration successfully executed" id="create alert_rule_version table" duration=1.963967ms kafka | [2024-04-25 14:22:58,743] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-44 from NonExistentReplica to NewReplica (state.change.logger) policy-db-migrator | policy-pap | ssl.cipher.suites = null grafana | logger=migrator t=2024-04-25T14:22:28.450675597Z level=info msg="Executing migration" id="add index in alert_rule_version table on rule_org_id, rule_uid and version columns" kafka | [2024-04-25 14:22:58,743] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-48 from NonExistentReplica to NewReplica (state.change.logger) policy-db-migrator | policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] grafana | logger=migrator t=2024-04-25T14:22:28.451828412Z level=info msg="Migration successfully executed" id="add index in alert_rule_version table on rule_org_id, rule_uid and version columns" duration=1.152675ms kafka | [2024-04-25 14:22:58,743] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-46 from NonExistentReplica to NewReplica 
(state.change.logger) policy-db-migrator | > upgrade 0920-FK_ToscaTopologyTemplate_policyName.sql policy-pap | ssl.endpoint.identification.algorithm = https grafana | logger=migrator t=2024-04-25T14:22:28.455800107Z level=info msg="Executing migration" id="add index in alert_rule_version table on rule_org_id, rule_namespace_uid and rule_group columns" kafka | [2024-04-25 14:22:58,743] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-20 from NonExistentReplica to NewReplica (state.change.logger) policy-db-migrator | -------------- kafka | [2024-04-25 14:22:58,743] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-43 from NonExistentReplica to NewReplica (state.change.logger) grafana | logger=migrator t=2024-04-25T14:22:28.456923472Z level=info msg="Migration successfully executed" id="add index in alert_rule_version table on rule_org_id, rule_namespace_uid and rule_group columns" duration=1.123405ms policy-db-migrator | CREATE INDEX FK_ToscaTopologyTemplate_policyName ON toscatopologytemplate(policyName, policyVersion) kafka | [2024-04-25 14:22:58,743] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-24 from NonExistentReplica to NewReplica (state.change.logger) grafana | logger=migrator t=2024-04-25T14:22:28.461750597Z level=info msg="Executing migration" id="alter alert_rule_version table data column to mediumtext in mysql" policy-db-migrator | -------------- policy-pap | ssl.engine.factory.class = null kafka | [2024-04-25 14:22:58,743] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-6 from NonExistentReplica to NewReplica (state.change.logger) grafana | logger=migrator t=2024-04-25T14:22:28.462024161Z level=info msg="Migration successfully executed" id="alter alert_rule_version table data column to mediumtext in mysql" duration=273.304µs policy-pap | ssl.key.password = null policy-db-migrator | 
kafka | [2024-04-25 14:22:58,743] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-18 from NonExistentReplica to NewReplica (state.change.logger) grafana | logger=migrator t=2024-04-25T14:22:28.470554487Z level=info msg="Executing migration" id="add column for to alert_rule_version" policy-pap | ssl.keymanager.algorithm = SunX509 policy-db-migrator | kafka | [2024-04-25 14:22:58,743] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-21 from NonExistentReplica to NewReplica (state.change.logger) grafana | logger=migrator t=2024-04-25T14:22:28.479436347Z level=info msg="Migration successfully executed" id="add column for to alert_rule_version" duration=8.88255ms policy-pap | ssl.keystore.certificate.chain = null kafka | [2024-04-25 14:22:58,744] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-1 from NonExistentReplica to NewReplica (state.change.logger) grafana | logger=migrator t=2024-04-25T14:22:28.485433369Z level=info msg="Executing migration" id="add column annotations to alert_rule_version" policy-db-migrator | > upgrade 0940-PdpPolicyStatus_PdpGroup.sql policy-pap | ssl.keystore.key = null kafka | [2024-04-25 14:22:58,744] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-14 from NonExistentReplica to NewReplica (state.change.logger) grafana | logger=migrator t=2024-04-25T14:22:28.491823746Z level=info msg="Migration successfully executed" id="add column annotations to alert_rule_version" duration=6.376647ms policy-db-migrator | -------------- policy-pap | ssl.keystore.location = null kafka | [2024-04-25 14:22:58,744] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-34 from NonExistentReplica to NewReplica (state.change.logger) grafana | logger=migrator t=2024-04-25T14:22:28.494876428Z level=info msg="Executing migration" id="add column labels to 
alert_rule_version" policy-db-migrator | CREATE INDEX PdpPolicyStatus_PdpGroup ON pdppolicystatus(PDPGROUP) policy-pap | ssl.keystore.password = null kafka | [2024-04-25 14:22:58,744] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-16 from NonExistentReplica to NewReplica (state.change.logger) grafana | logger=migrator t=2024-04-25T14:22:28.502574782Z level=info msg="Migration successfully executed" id="add column labels to alert_rule_version" duration=7.697834ms policy-db-migrator | -------------- policy-pap | ssl.keystore.type = JKS kafka | [2024-04-25 14:22:58,744] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-29 from NonExistentReplica to NewReplica (state.change.logger) grafana | logger=migrator t=2024-04-25T14:22:28.508764057Z level=info msg="Executing migration" id="add rule_group_idx column to alert_rule_version" policy-db-migrator | policy-pap | ssl.protocol = TLSv1.3 kafka | [2024-04-25 14:22:58,744] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-11 from NonExistentReplica to NewReplica (state.change.logger) grafana | logger=migrator t=2024-04-25T14:22:28.515256724Z level=info msg="Migration successfully executed" id="add rule_group_idx column to alert_rule_version" duration=6.488597ms policy-db-migrator | policy-pap | ssl.provider = null kafka | [2024-04-25 14:22:58,744] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-0 from NonExistentReplica to NewReplica (state.change.logger) grafana | logger=migrator t=2024-04-25T14:22:28.518877523Z level=info msg="Executing migration" id="add is_paused column to alert_rule_versions table" policy-db-migrator | > upgrade 0950-TscaServiceTemplatetopologyTemplateParentLocalName.sql policy-pap | ssl.secure.random.implementation = null kafka | [2024-04-25 14:22:58,744] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition 
__consumer_offsets-22 from NonExistentReplica to NewReplica (state.change.logger) grafana | logger=migrator t=2024-04-25T14:22:28.524992486Z level=info msg="Migration successfully executed" id="add is_paused column to alert_rule_versions table" duration=6.110773ms policy-db-migrator | -------------- policy-pap | ssl.trustmanager.algorithm = PKIX kafka | [2024-04-25 14:22:58,744] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-47 from NonExistentReplica to NewReplica (state.change.logger) grafana | logger=migrator t=2024-04-25T14:22:28.528626297Z level=info msg="Executing migration" id="fix is_paused column for alert_rule_version table" policy-db-migrator | CREATE INDEX TscaServiceTemplatetopologyTemplateParentLocalName ON toscaservicetemplate(topologyTemplateParentLocalName, topologyTemplateLocalName, topologyTemplateParentKeyVersion, topologyTemplateParentKeyName) policy-pap | ssl.truststore.certificates = null kafka | [2024-04-25 14:22:58,744] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-36 from NonExistentReplica to NewReplica (state.change.logger) grafana | logger=migrator t=2024-04-25T14:22:28.528686308Z level=info msg="Migration successfully executed" id="fix is_paused column for alert_rule_version table" duration=60.161µs policy-db-migrator | -------------- policy-pap | ssl.truststore.location = null kafka | [2024-04-25 14:22:58,744] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-28 from NonExistentReplica to NewReplica (state.change.logger) grafana | logger=migrator t=2024-04-25T14:22:28.533693226Z level=info msg="Executing migration" id=create_alert_configuration_table policy-db-migrator | policy-pap | ssl.truststore.password = null kafka | [2024-04-25 14:22:58,744] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-42 from NonExistentReplica to NewReplica (state.change.logger) 
grafana | logger=migrator t=2024-04-25T14:22:28.534879412Z level=info msg="Migration successfully executed" id=create_alert_configuration_table duration=1.182366ms policy-db-migrator | policy-pap | ssl.truststore.type = JKS kafka | [2024-04-25 14:22:58,744] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-9 from NonExistentReplica to NewReplica (state.change.logger) grafana | logger=migrator t=2024-04-25T14:22:28.543061723Z level=info msg="Executing migration" id="Add column default in alert_configuration" policy-db-migrator | > upgrade 0960-FK_ToscaNodeTemplate_capabilitiesName.sql policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer kafka | [2024-04-25 14:22:58,744] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-37 from NonExistentReplica to NewReplica (state.change.logger) grafana | logger=migrator t=2024-04-25T14:22:28.553558686Z level=info msg="Migration successfully executed" id="Add column default in alert_configuration" duration=10.499063ms policy-db-migrator | -------------- policy-pap | kafka | [2024-04-25 14:22:58,745] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-13 from NonExistentReplica to NewReplica (state.change.logger) grafana | logger=migrator t=2024-04-25T14:22:28.558106607Z level=info msg="Executing migration" id="alert alert_configuration alertmanager_configuration column from TEXT to MEDIUMTEXT if mysql" policy-db-migrator | ALTER TABLE toscanodetemplate ADD CONSTRAINT FK_ToscaNodeTemplate_capabilitiesName FOREIGN KEY (capabilitiesName, capabilitiesVersion) REFERENCES toscacapabilityassignments (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT policy-pap | [2024-04-25T14:22:58.109+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 kafka | [2024-04-25 14:22:58,745] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-30 from 
NonExistentReplica to NewReplica (state.change.logger) grafana | logger=migrator t=2024-04-25T14:22:28.558153768Z level=info msg="Migration successfully executed" id="alert alert_configuration alertmanager_configuration column from TEXT to MEDIUMTEXT if mysql" duration=47.511µs policy-db-migrator | -------------- policy-pap | [2024-04-25T14:22:58.109+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 kafka | [2024-04-25 14:22:58,745] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-35 from NonExistentReplica to NewReplica (state.change.logger) grafana | logger=migrator t=2024-04-25T14:22:28.563159046Z level=info msg="Executing migration" id="add column org_id in alert_configuration" policy-db-migrator | policy-pap | [2024-04-25T14:22:58.109+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1714054978109 kafka | [2024-04-25 14:22:58,745] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-39 from NonExistentReplica to NewReplica (state.change.logger) grafana | logger=migrator t=2024-04-25T14:22:28.570646938Z level=info msg="Migration successfully executed" id="add column org_id in alert_configuration" duration=7.487032ms policy-db-migrator | policy-pap | [2024-04-25T14:22:58.110+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Subscribed to topic(s): policy-pdp-pap kafka | [2024-04-25 14:22:58,745] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-12 from NonExistentReplica to NewReplica (state.change.logger) grafana | logger=migrator t=2024-04-25T14:22:28.578102759Z level=info msg="Executing migration" id="add index in alert_configuration table on org_id column" policy-db-migrator | > upgrade 0970-FK_ToscaNodeTemplate_requirementsName.sql policy-pap | [2024-04-25T14:22:58.110+00:00|INFO|ServiceManager|main] Policy PAP starting topics kafka | [2024-04-25 14:22:58,745] TRACE [Controller 
id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-27 from NonExistentReplica to NewReplica (state.change.logger) grafana | logger=migrator t=2024-04-25T14:22:28.579242165Z level=info msg="Migration successfully executed" id="add index in alert_configuration table on org_id column" duration=1.138067ms policy-db-migrator | -------------- kafka | [2024-04-25 14:22:58,745] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-45 from NonExistentReplica to NewReplica (state.change.logger) policy-pap | [2024-04-25T14:22:58.110+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=policy-pap, consumerInstance=9514907f-d028-45fc-9240-ae8706efbfe3, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-heartbeat,5,main], topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-heartbeat, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting grafana | logger=migrator t=2024-04-25T14:22:28.58551117Z level=info msg="Executing migration" id="add configuration_hash column to alert_configuration" policy-db-migrator | ALTER TABLE toscanodetemplate ADD CONSTRAINT FK_ToscaNodeTemplate_requirementsName FOREIGN KEY (requirementsName, requirementsVersion) REFERENCES toscarequirements (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT kafka | [2024-04-25 14:22:58,745] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-19 from NonExistentReplica to NewReplica (state.change.logger) policy-pap | [2024-04-25T14:22:58.110+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource 
[getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=b957469a-2969-4bff-8555-1bfe3e4d4da0, consumerInstance=policy-pap, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-pdp-pap,5,main], topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting grafana | logger=migrator t=2024-04-25T14:22:28.59290987Z level=info msg="Migration successfully executed" id="add configuration_hash column to alert_configuration" duration=7.39889ms policy-db-migrator | -------------- kafka | [2024-04-25 14:22:58,745] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-49 from NonExistentReplica to NewReplica (state.change.logger) policy-pap | [2024-04-25T14:22:58.110+00:00|INFO|InlineBusTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=c9f2f9e4-219a-4a9f-8132-76e678fa712c, alive=false, publisher=null]]: starting grafana | logger=migrator t=2024-04-25T14:22:28.597836247Z level=info msg="Executing migration" id=create_ngalert_configuration_table policy-db-migrator | kafka | [2024-04-25 14:22:58,745] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-40 from NonExistentReplica to NewReplica (state.change.logger) policy-pap | [2024-04-25T14:22:58.125+00:00|INFO|ProducerConfig|main] ProducerConfig values: grafana | logger=migrator t=2024-04-25T14:22:28.598698359Z level=info msg="Migration successfully executed" id=create_ngalert_configuration_table duration=861.652µs policy-db-migrator | kafka | [2024-04-25 14:22:58,745] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition 
__consumer_offsets-41 from NonExistentReplica to NewReplica (state.change.logger) policy-pap | acks = -1 grafana | logger=migrator t=2024-04-25T14:22:28.627271718Z level=info msg="Executing migration" id="add index in ngalert_configuration on org_id column" policy-db-migrator | > upgrade 0980-FK_ToscaNodeType_requirementsName.sql kafka | [2024-04-25 14:22:58,745] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-38 from NonExistentReplica to NewReplica (state.change.logger) policy-pap | auto.include.jmx.reporter = true grafana | logger=migrator t=2024-04-25T14:22:28.628955391Z level=info msg="Migration successfully executed" id="add index in ngalert_configuration on org_id column" duration=1.683363ms policy-db-migrator | -------------- kafka | [2024-04-25 14:22:58,745] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-8 from NonExistentReplica to NewReplica (state.change.logger) policy-pap | batch.size = 16384 grafana | logger=migrator t=2024-04-25T14:22:28.660017953Z level=info msg="Executing migration" id="add column send_alerts_to in ngalert_configuration" policy-db-migrator | ALTER TABLE toscanodetype ADD CONSTRAINT FK_ToscaNodeType_requirementsName FOREIGN KEY (requirementsName, requirementsVersion) REFERENCES toscarequirements (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT kafka | [2024-04-25 14:22:58,745] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-7 from NonExistentReplica to NewReplica (state.change.logger) policy-pap | bootstrap.servers = [kafka:9092] grafana | logger=migrator t=2024-04-25T14:22:28.670270782Z level=info msg="Migration successfully executed" id="add column send_alerts_to in ngalert_configuration" duration=10.253529ms policy-db-migrator | -------------- kafka | [2024-04-25 14:22:58,745] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-33 from NonExistentReplica 
to NewReplica (state.change.logger) policy-pap | buffer.memory = 33554432 grafana | logger=migrator t=2024-04-25T14:22:28.675638055Z level=info msg="Executing migration" id="create provenance_type table" policy-db-migrator | kafka | [2024-04-25 14:22:58,745] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-25 from NonExistentReplica to NewReplica (state.change.logger) policy-pap | client.dns.lookup = use_all_dns_ips grafana | logger=migrator t=2024-04-25T14:22:28.676407686Z level=info msg="Migration successfully executed" id="create provenance_type table" duration=770.531µs policy-db-migrator | policy-db-migrator | > upgrade 0990-FK_ToscaServiceTemplate_capabilityTypesName.sql kafka | [2024-04-25 14:22:58,746] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-31 from NonExistentReplica to NewReplica (state.change.logger) policy-pap | client.id = producer-1 policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-25T14:22:28.682113823Z level=info msg="Executing migration" id="add index to uniquify (record_key, record_type, org_id) columns" kafka | [2024-04-25 14:22:58,748] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-23 from NonExistentReplica to NewReplica (state.change.logger) policy-pap | compression.type = none policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT FK_ToscaServiceTemplate_capabilityTypesName FOREIGN KEY (capabilityTypesName, capabilityTypesVersion) REFERENCES toscacapabilitytypes (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT grafana | logger=migrator t=2024-04-25T14:22:28.683097436Z level=info msg="Migration successfully executed" id="add index to uniquify (record_key, record_type, org_id) columns" duration=981.273µs kafka | [2024-04-25 14:22:58,748] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-10 from NonExistentReplica to 
NewReplica (state.change.logger) policy-pap | connections.max.idle.ms = 540000 policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-25T14:22:28.687455255Z level=info msg="Executing migration" id="create alert_image table" kafka | [2024-04-25 14:22:58,748] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-2 from NonExistentReplica to NewReplica (state.change.logger) policy-pap | delivery.timeout.ms = 120000 policy-db-migrator | kafka | [2024-04-25 14:22:58,748] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-17 from NonExistentReplica to NewReplica (state.change.logger) policy-pap | enable.idempotence = true grafana | logger=migrator t=2024-04-25T14:22:28.688397298Z level=info msg="Migration successfully executed" id="create alert_image table" duration=941.833µs policy-pap | interceptor.classes = [] grafana | logger=migrator t=2024-04-25T14:22:28.696986075Z level=info msg="Executing migration" id="add unique index on token to alert_image table" policy-db-migrator | kafka | [2024-04-25 14:22:58,748] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-4 from NonExistentReplica to NewReplica (state.change.logger) policy-pap | key.serializer = class org.apache.kafka.common.serialization.StringSerializer grafana | logger=migrator t=2024-04-25T14:22:28.699168975Z level=info msg="Migration successfully executed" id="add unique index on token to alert_image table" duration=2.18253ms policy-db-migrator | > upgrade 1000-FK_ToscaServiceTemplate_dataTypesName.sql kafka | [2024-04-25 14:22:58,749] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-15 from NonExistentReplica to NewReplica (state.change.logger) policy-pap | linger.ms = 0 grafana | logger=migrator t=2024-04-25T14:22:28.707389427Z level=info msg="Executing migration" id="support longer URLs in alert_image table" policy-db-migrator | 
-------------- policy-pap | max.block.ms = 60000 grafana | logger=migrator t=2024-04-25T14:22:28.707457488Z level=info msg="Migration successfully executed" id="support longer URLs in alert_image table" duration=67.981µs kafka | [2024-04-25 14:22:58,749] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-26 from NonExistentReplica to NewReplica (state.change.logger) policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT FK_ToscaServiceTemplate_dataTypesName FOREIGN KEY (dataTypesName, dataTypesVersion) REFERENCES toscadatatypes (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT policy-pap | max.in.flight.requests.per.connection = 5 grafana | logger=migrator t=2024-04-25T14:22:28.759966691Z level=info msg="Executing migration" id=create_alert_configuration_history_table kafka | [2024-04-25 14:22:58,750] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-3 from NonExistentReplica to NewReplica (state.change.logger) policy-db-migrator | -------------- policy-pap | max.request.size = 1048576 grafana | logger=migrator t=2024-04-25T14:22:28.761577353Z level=info msg="Migration successfully executed" id=create_alert_configuration_history_table duration=1.625742ms kafka | [2024-04-25 14:22:58,750] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger) policy-db-migrator | policy-pap | metadata.max.age.ms = 300000 grafana | logger=migrator t=2024-04-25T14:22:28.768309964Z level=info msg="Executing migration" id="drop non-unique orgID index on alert_configuration" kafka | [2024-04-25 14:22:58,799] INFO [LogLoader partition=policy-pdp-pap-0, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) policy-db-migrator | policy-pap | metadata.max.idle.ms = 300000 grafana | logger=migrator t=2024-04-25T14:22:28.77016014Z level=info msg="Migration successfully 
executed" id="drop non-unique orgID index on alert_configuration" duration=1.850486ms kafka | [2024-04-25 14:22:58,809] INFO Created log for partition policy-pdp-pap-0 in /var/lib/kafka/data/policy-pdp-pap-0 with properties {} (kafka.log.LogManager) policy-db-migrator | > upgrade 1010-FK_ToscaServiceTemplate_nodeTypesName.sql policy-pap | metric.reporters = [] grafana | logger=migrator t=2024-04-25T14:22:28.774265935Z level=info msg="Executing migration" id="drop unique orgID index on alert_configuration if exists" kafka | [2024-04-25 14:22:58,812] INFO [Partition policy-pdp-pap-0 broker=1] No checkpointed highwatermark is found for partition policy-pdp-pap-0 (kafka.cluster.Partition) policy-db-migrator | -------------- policy-pap | metrics.num.samples = 2 grafana | logger=migrator t=2024-04-25T14:22:28.775276009Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop unique orgID index on alert_configuration if exists" kafka | [2024-04-25 14:22:58,813] INFO [Partition policy-pdp-pap-0 broker=1] Log loaded for partition policy-pdp-pap-0 with initial high watermark 0 (kafka.cluster.Partition) policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT FK_ToscaServiceTemplate_nodeTypesName FOREIGN KEY (nodeTypesName, nodeTypesVersion) REFERENCES toscanodetypes (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT policy-pap | metrics.recording.level = INFO grafana | logger=migrator t=2024-04-25T14:22:28.781621395Z level=info msg="Executing migration" id="extract alertmanager configuration history to separate table" kafka | [2024-04-25 14:22:58,814] INFO [Broker id=1] Leader policy-pdp-pap-0 with topic id Some(UDjaTEkFR6iaxHll2hUQXA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) policy-db-migrator | -------------- policy-pap | metrics.sample.window.ms = 30000 grafana | logger=migrator t=2024-04-25T14:22:28.782230734Z level=info msg="Migration successfully executed" id="extract alertmanager configuration history to separate table" duration=608.958µs kafka | [2024-04-25 14:22:58,834] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition policy-pdp-pap-0 (state.change.logger) policy-db-migrator | policy-pap | partitioner.adaptive.partitioning.enable = true grafana | logger=migrator t=2024-04-25T14:22:28.788244946Z level=info msg="Executing migration" id="add unique index on orgID to alert_configuration" kafka | [2024-04-25 14:22:58,842] INFO [Broker id=1] Finished LeaderAndIsr request in 128ms correlationId 1 from controller 1 for 1 partitions (state.change.logger) policy-db-migrator | policy-pap | partitioner.availability.timeout.ms = 0 grafana | logger=migrator t=2024-04-25T14:22:28.789919359Z level=info msg="Migration successfully executed" id="add unique index on orgID to alert_configuration" duration=1.674054ms kafka | [2024-04-25 14:22:58,845] TRACE [Controller id=1 epoch=1] Received response LeaderAndIsrResponseData(errorCode=0, partitionErrors=[], topics=[LeaderAndIsrTopicError(topicId=UDjaTEkFR6iaxHll2hUQXA, partitionErrors=[LeaderAndIsrPartitionError(topicName='', partitionIndex=0, errorCode=0)])]) for request LEADER_AND_ISR with correlation id 1 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger) policy-db-migrator | > upgrade 1020-FK_ToscaServiceTemplate_policyTypesName.sql policy-pap | partitioner.class = null grafana | logger=migrator t=2024-04-25T14:22:28.79369658Z level=info msg="Executing migration" id="add last_applied column to alert_configuration_history" grafana | logger=migrator t=2024-04-25T14:22:28.80031499Z level=info msg="Migration successfully executed" id="add last_applied column to 
alert_configuration_history" duration=6.61775ms kafka | [2024-04-25 14:22:58,854] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='policy-pdp-pap', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition policy-pdp-pap-0 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) policy-db-migrator | -------------- policy-pap | partitioner.ignore.keys = false grafana | logger=migrator t=2024-04-25T14:22:28.805239167Z level=info msg="Executing migration" id="create library_element table v1" kafka | [2024-04-25 14:22:58,855] INFO [Broker id=1] Add 1 partitions and deleted 0 partitions from metadata cache in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT FK_ToscaServiceTemplate_policyTypesName FOREIGN KEY (policyTypesName, policyTypesVersion) REFERENCES toscapolicytypes (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT policy-pap | receive.buffer.bytes = 32768 grafana | logger=migrator t=2024-04-25T14:22:28.806299611Z level=info msg="Migration successfully executed" id="create library_element table v1" duration=1.059674ms policy-db-migrator | -------------- policy-pap | reconnect.backoff.max.ms = 1000 grafana | logger=migrator t=2024-04-25T14:22:28.811715665Z level=info msg="Executing migration" id="add index library_element org_id-folder_id-name-kind" policy-db-migrator | policy-pap | reconnect.backoff.ms = 50 grafana | logger=migrator t=2024-04-25T14:22:28.81283807Z level=info msg="Migration successfully executed" id="add index library_element org_id-folder_id-name-kind" duration=1.122025ms policy-db-migrator | policy-pap | request.timeout.ms = 30000 grafana | logger=migrator t=2024-04-25T14:22:28.818600898Z level=info msg="Executing migration" id="create 
library_element_connection table v1" policy-db-migrator | > upgrade 1030-FK_ToscaServiceTemplate_relationshipTypesName.sql policy-pap | retries = 2147483647 grafana | logger=migrator t=2024-04-25T14:22:28.820759248Z level=info msg="Migration successfully executed" id="create library_element_connection table v1" duration=2.15723ms policy-db-migrator | -------------- policy-pap | retry.backoff.ms = 100 grafana | logger=migrator t=2024-04-25T14:22:28.82757247Z level=info msg="Executing migration" id="add index library_element_connection element_id-kind-connection_id" policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT FK_ToscaServiceTemplate_relationshipTypesName FOREIGN KEY (relationshipTypesName, relationshipTypesVersion) REFERENCES toscarelationshiptypes (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT policy-pap | sasl.client.callback.handler.class = null grafana | logger=migrator t=2024-04-25T14:22:28.828624025Z level=info msg="Migration successfully executed" id="add index library_element_connection element_id-kind-connection_id" duration=1.051265ms policy-db-migrator | -------------- policy-pap | sasl.jaas.config = null grafana | logger=migrator t=2024-04-25T14:22:28.835204214Z level=info msg="Executing migration" id="add unique index library_element org_id_uid" policy-db-migrator | policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit grafana | logger=migrator t=2024-04-25T14:22:28.837647277Z level=info msg="Migration successfully executed" id="add unique index library_element org_id_uid" duration=2.441793ms policy-db-migrator | policy-pap | sasl.kerberos.min.time.before.relogin = 60000 grafana | logger=migrator t=2024-04-25T14:22:28.842287751Z level=info msg="Executing migration" id="increase max description length to 2048" policy-db-migrator | > upgrade 1040-FK_ToscaTopologyTemplate_nodeTemplatesName.sql policy-pap | sasl.kerberos.service.name = null grafana | logger=migrator t=2024-04-25T14:22:28.842327481Z level=info msg="Migration 
successfully executed" id="increase max description length to 2048" duration=40.9µs policy-db-migrator | -------------- policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 grafana | logger=migrator t=2024-04-25T14:22:28.846159593Z level=info msg="Executing migration" id="alter library_element model to mediumtext" policy-db-migrator | ALTER TABLE toscatopologytemplate ADD CONSTRAINT FK_ToscaTopologyTemplate_nodeTemplatesName FOREIGN KEY (nodeTemplatesName, nodeTemplatessVersion) REFERENCES toscanodetemplates (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 grafana | logger=migrator t=2024-04-25T14:22:28.846328065Z level=info msg="Migration successfully executed" id="alter library_element model to mediumtext" duration=167.902µs policy-db-migrator | -------------- policy-pap | sasl.login.callback.handler.class = null grafana | logger=migrator t=2024-04-25T14:22:28.851602727Z level=info msg="Executing migration" id="clone move dashboard alerts to unified alerting" policy-db-migrator | policy-pap | sasl.login.class = null grafana | logger=migrator t=2024-04-25T14:22:28.852217145Z level=info msg="Migration successfully executed" id="clone move dashboard alerts to unified alerting" duration=614.508µs policy-db-migrator | policy-pap | sasl.login.connect.timeout.ms = null grafana | logger=migrator t=2024-04-25T14:22:28.859042858Z level=info msg="Executing migration" id="create data_keys table" policy-db-migrator | > upgrade 1050-FK_ToscaTopologyTemplate_policyName.sql policy-pap | sasl.login.read.timeout.ms = null grafana | logger=migrator t=2024-04-25T14:22:28.860859262Z level=info msg="Migration successfully executed" id="create data_keys table" duration=1.817424ms policy-db-migrator | -------------- policy-pap | sasl.login.refresh.buffer.seconds = 300 grafana | logger=migrator t=2024-04-25T14:22:28.866676371Z level=info msg="Executing migration" id="create secrets table" policy-db-migrator | ALTER TABLE 
toscatopologytemplate ADD CONSTRAINT FK_ToscaTopologyTemplate_policyName FOREIGN KEY (policyName, policyVersion) REFERENCES toscapolicies (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT policy-pap | sasl.login.refresh.min.period.seconds = 60 grafana | logger=migrator t=2024-04-25T14:22:28.867644295Z level=info msg="Migration successfully executed" id="create secrets table" duration=967.964µs policy-db-migrator | -------------- policy-pap | sasl.login.refresh.window.factor = 0.8 grafana | logger=migrator t=2024-04-25T14:22:28.873413874Z level=info msg="Executing migration" id="rename data_keys name column to id" policy-db-migrator | policy-pap | sasl.login.refresh.window.jitter = 0.05 grafana | logger=migrator t=2024-04-25T14:22:28.908104955Z level=info msg="Migration successfully executed" id="rename data_keys name column to id" duration=34.692161ms policy-db-migrator | policy-pap | sasl.login.retry.backoff.max.ms = 10000 grafana | logger=migrator t=2024-04-25T14:22:28.912820229Z level=info msg="Executing migration" id="add name column into data_keys" policy-db-migrator | > upgrade 1060-TscaServiceTemplatetopologyTemplateParentLocalName.sql policy-pap | sasl.login.retry.backoff.ms = 100 grafana | logger=migrator t=2024-04-25T14:22:28.917885698Z level=info msg="Migration successfully executed" id="add name column into data_keys" duration=5.064799ms policy-db-migrator | -------------- policy-pap | sasl.mechanism = GSSAPI grafana | logger=migrator t=2024-04-25T14:22:28.92170627Z level=info msg="Executing migration" id="copy data_keys id column values into name" policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT TscaServiceTemplatetopologyTemplateParentLocalName FOREIGN KEY (topologyTemplateParentLocalName, topologyTemplateLocalName, topologyTemplateParentKeyVersion, topologyTemplateParentKeyName) REFERENCES toscatopologytemplate (parentLocalName, localName, parentKeyVersion, parentKeyName) ON UPDATE RESTRICT ON DELETE RESTRICT policy-pap | 
sasl.oauthbearer.clock.skew.seconds = 30 grafana | logger=migrator t=2024-04-25T14:22:28.921934963Z level=info msg="Migration successfully executed" id="copy data_keys id column values into name" duration=228.043µs policy-db-migrator | -------------- policy-pap | sasl.oauthbearer.expected.audience = null grafana | logger=migrator t=2024-04-25T14:22:28.926455035Z level=info msg="Executing migration" id="rename data_keys name column to label" policy-db-migrator | policy-pap | sasl.oauthbearer.expected.issuer = null grafana | logger=migrator t=2024-04-25T14:22:28.960618089Z level=info msg="Migration successfully executed" id="rename data_keys name column to label" duration=34.162684ms policy-db-migrator | kafka | [2024-04-25 14:22:58,857] TRACE [Controller id=1 epoch=1] Received response UpdateMetadataResponseData(errorCode=0) for request UPDATE_METADATA with correlation id 2 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger) policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 grafana | logger=migrator t=2024-04-25T14:22:28.966450149Z level=info msg="Executing migration" id="rename data_keys id column back to name" policy-db-migrator | > upgrade 0100-pdp.sql kafka | [2024-04-25 14:22:59,013] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-22 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 grafana | logger=migrator t=2024-04-25T14:22:28.99820281Z level=info msg="Migration successfully executed" id="rename data_keys id column back to name" duration=31.752771ms policy-db-migrator | -------------- kafka | [2024-04-25 14:22:59,014] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-30 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, 
leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 policy-db-migrator | ALTER TABLE pdp ADD COLUMN LASTUPDATE datetime NOT NULL DEFAULT CURRENT_TIMESTAMP AFTER HEALTHY grafana | logger=migrator t=2024-04-25T14:22:29.023654895Z level=info msg="Executing migration" id="create kv_store table v1" kafka | [2024-04-25 14:22:59,014] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-25 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-pap | sasl.oauthbearer.jwks.endpoint.url = null policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-25T14:22:29.02545985Z level=info msg="Migration successfully executed" id="create kv_store table v1" duration=1.804735ms kafka | [2024-04-25 14:22:59,014] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-35 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-pap | sasl.oauthbearer.scope.claim.name = scope policy-db-migrator | grafana | logger=migrator t=2024-04-25T14:22:29.034544224Z level=info msg="Executing migration" id="add index kv_store.org_id-namespace-key" kafka | [2024-04-25 14:22:59,014] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-37 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-pap | sasl.oauthbearer.sub.claim.name = sub policy-db-migrator | grafana | 
logger=migrator t=2024-04-25T14:22:29.035670719Z level=info msg="Migration successfully executed" id="add index kv_store.org_id-namespace-key" duration=1.125005ms kafka | [2024-04-25 14:22:59,014] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-38 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-pap | sasl.oauthbearer.token.endpoint.url = null policy-db-migrator | > upgrade 0110-idx_tsidx1.sql grafana | logger=migrator t=2024-04-25T14:22:29.041880894Z level=info msg="Executing migration" id="update dashboard_uid and panel_id from existing annotations" kafka | [2024-04-25 14:22:59,015] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-13 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-pap | security.protocol = PLAINTEXT policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-25T14:22:29.042253459Z level=info msg="Migration successfully executed" id="update dashboard_uid and panel_id from existing annotations" duration=372.385µs kafka | [2024-04-25 14:22:59,015] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-8 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-pap | security.providers = null policy-db-migrator | CREATE INDEX IDX_TSIDX1 ON pdpstatistics(timeStamp, name, version) grafana | logger=migrator t=2024-04-25T14:22:29.046406566Z level=info msg="Executing migration" id="create permission table" kafka | [2024-04-25 14:22:59,015] INFO [Controller id=1 
epoch=1] Changed partition __consumer_offsets-21 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-pap | send.buffer.bytes = 131072 policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-25T14:22:29.047295717Z level=info msg="Migration successfully executed" id="create permission table" duration=888.332µs kafka | [2024-04-25 14:22:59,015] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-4 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-pap | socket.connection.setup.timeout.max.ms = 30000 policy-db-migrator | grafana | logger=migrator t=2024-04-25T14:22:29.05415155Z level=info msg="Executing migration" id="add unique index permission.role_id" kafka | [2024-04-25 14:22:59,015] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-27 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-pap | socket.connection.setup.timeout.ms = 10000 policy-db-migrator | kafka | [2024-04-25 14:22:59,015] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-7 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-pap | ssl.cipher.suites = null grafana | logger=migrator t=2024-04-25T14:22:29.055734442Z level=info msg="Migration successfully executed" id="add unique index permission.role_id" duration=1.586132ms 
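The policy-db-migrator entries above follow a numbered-script convention (`> upgrade 0100-pdp.sql`, `> upgrade 0110-idx_tsidx1.sql`, ...) where upgrades are applied in ascending numeric-prefix order. A minimal, hypothetical sketch of that convention (the directory layout, function name, and sqlite target are illustrative assumptions, not the real policy-db-migrator):

```python
import re
import sqlite3
import tempfile
from pathlib import Path

def apply_upgrades(script_dir: Path, conn: sqlite3.Connection) -> list:
    """Apply *.sql scripts in ascending numeric-prefix order (assumed convention)."""
    scripts = sorted(
        script_dir.glob("*.sql"),
        key=lambda p: int(re.match(r"(\d+)", p.name).group(1)),
    )
    applied = []
    for script in scripts:
        conn.executescript(script.read_text())  # each script is a SQL batch
        applied.append(script.name)
    return applied

# Demo with two toy scripts written out of order on disk:
d = Path(tempfile.mkdtemp())
(d / "0110-idx.sql").write_text("CREATE TABLE idx_demo (x INT);")
(d / "0100-pdp.sql").write_text("CREATE TABLE pdp_demo (x INT);")
order = apply_upgrades(d, sqlite3.connect(":memory:"))
print(order)  # ['0100-pdp.sql', '0110-idx.sql']
```

The numeric prefix, not the filesystem's lexical order of the full name, decides the sequence, which is why the log shows 0100, 0110, 0120, ... strictly ascending.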
policy-db-migrator | > upgrade 0120-pk_pdpstatistics.sql policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] grafana | logger=migrator t=2024-04-25T14:22:29.060501317Z level=info msg="Executing migration" id="add unique index role_id_action_scope" kafka | [2024-04-25 14:22:59,015] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-9 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-pap | ssl.endpoint.identification.algorithm = https grafana | logger=migrator t=2024-04-25T14:22:29.063065642Z level=info msg="Migration successfully executed" id="add unique index role_id_action_scope" duration=2.565685ms policy-db-migrator | -------------- policy-pap | ssl.engine.factory.class = null grafana | logger=migrator t=2024-04-25T14:22:29.074972724Z level=info msg="Executing migration" id="create role table" kafka | [2024-04-25 14:22:59,015] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-46 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-pap | ssl.key.password = null grafana | logger=migrator t=2024-04-25T14:22:29.076062408Z level=info msg="Migration successfully executed" id="create role table" duration=1.086934ms policy-db-migrator | ALTER TABLE pdpstatistics DROP PRIMARY KEY policy-pap | ssl.keymanager.algorithm = SunX509 grafana | logger=migrator t=2024-04-25T14:22:29.082668079Z level=info msg="Executing migration" id="add column display_name" kafka | [2024-04-25 14:22:59,015] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-41 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, 
isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-pap | ssl.keystore.certificate.chain = null grafana | logger=migrator t=2024-04-25T14:22:29.092218068Z level=info msg="Migration successfully executed" id="add column display_name" duration=9.561249ms policy-db-migrator | -------------- policy-pap | ssl.keystore.key = null grafana | logger=migrator t=2024-04-25T14:22:29.097391119Z level=info msg="Executing migration" id="add column group_name" kafka | [2024-04-25 14:22:59,015] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-33 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-pap | ssl.keystore.location = null grafana | logger=migrator t=2024-04-25T14:22:29.104829219Z level=info msg="Migration successfully executed" id="add column group_name" duration=7.43722ms policy-db-migrator | policy-pap | ssl.keystore.password = null grafana | logger=migrator t=2024-04-25T14:22:29.111646772Z level=info msg="Executing migration" id="add index role.org_id" kafka | [2024-04-25 14:22:59,015] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-23 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-db-migrator | policy-pap | ssl.keystore.type = JKS grafana | logger=migrator t=2024-04-25T14:22:29.112792727Z level=info msg="Migration successfully executed" id="add index role.org_id" duration=1.146895ms kafka | [2024-04-25 14:22:59,016] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-49 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, 
isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-db-migrator | > upgrade 0130-pdpstatistics.sql grafana | logger=migrator t=2024-04-25T14:22:29.120569133Z level=info msg="Executing migration" id="add unique index role_org_id_name" kafka | [2024-04-25 14:22:59,016] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-47 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-pap | ssl.protocol = TLSv1.3 policy-db-migrator | -------------- kafka | [2024-04-25 14:22:59,016] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-16 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-pap | ssl.provider = null grafana | logger=migrator t=2024-04-25T14:22:29.122313107Z level=info msg="Migration successfully executed" id="add unique index role_org_id_name" duration=1.743414ms policy-db-migrator | ALTER TABLE pdpstatistics ADD COLUMN POLICYUNDEPLOYCOUNT BIGINT DEFAULT NULL AFTER POLICYEXECUTEDSUCCESSCOUNT, ADD COLUMN POLICYUNDEPLOYFAILCOUNT BIGINT DEFAULT NULL, ADD COLUMN POLICYUNDEPLOYSUCCESSCOUNT BIGINT DEFAULT NULL, ADD COLUMN ID BIGINT NOT NULL policy-pap | ssl.secure.random.implementation = null grafana | logger=migrator t=2024-04-25T14:22:29.126661516Z level=info msg="Executing migration" id="add index role_org_id_uid" kafka | [2024-04-25 14:22:59,016] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-28 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, 
partitionEpoch=0) (state.change.logger) policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-25T14:22:29.127822662Z level=info msg="Migration successfully executed" id="add index role_org_id_uid" duration=1.160946ms kafka | [2024-04-25 14:22:59,016] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-31 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-pap | ssl.trustmanager.algorithm = PKIX policy-db-migrator | kafka | [2024-04-25 14:22:59,016] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-36 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-pap | ssl.truststore.certificates = null grafana | logger=migrator t=2024-04-25T14:22:29.273671505Z level=info msg="Executing migration" id="create team role table" policy-db-migrator | policy-pap | ssl.truststore.location = null grafana | logger=migrator t=2024-04-25T14:22:29.276115658Z level=info msg="Migration successfully executed" id="create team role table" duration=2.445553ms kafka | [2024-04-25 14:22:59,016] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-42 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-db-migrator | > upgrade 0140-pk_pdpstatistics.sql policy-pap | ssl.truststore.password = null grafana | logger=migrator t=2024-04-25T14:22:29.370345988Z level=info msg="Executing migration" id="add index team_role.org_id" policy-db-migrator | -------------- grafana | logger=migrator 
t=2024-04-25T14:22:29.372299765Z level=info msg="Migration successfully executed" id="add index team_role.org_id" duration=1.955917ms kafka | [2024-04-25 14:22:59,016] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-3 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-pap | ssl.truststore.type = JKS policy-db-migrator | UPDATE pdpstatistics as p JOIN (SELECT name, version, timeStamp, ROW_NUMBER() OVER (ORDER BY timeStamp ASC) AS row_num FROM pdpstatistics GROUP BY name, version, timeStamp) AS t ON (p.name=t.name AND p.version=t.version AND p.timeStamp = t.timeStamp) SET p.id=t.row_num kafka | [2024-04-25 14:22:59,017] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-18 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-pap | transaction.timeout.ms = 60000 grafana | logger=migrator t=2024-04-25T14:22:29.437070576Z level=info msg="Executing migration" id="add unique index team_role_org_id_team_id_role_id" policy-db-migrator | -------------- kafka | [2024-04-25 14:22:59,017] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-15 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-pap | transactional.id = null grafana | logger=migrator t=2024-04-25T14:22:29.439394007Z level=info msg="Migration successfully executed" id="add unique index team_role_org_id_team_id_role_id" duration=2.322681ms policy-db-migrator | kafka | [2024-04-25 14:22:59,017] INFO [Controller id=1 epoch=1] Changed 
partition __consumer_offsets-24 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-pap | value.serializer = class org.apache.kafka.common.serialization.StringSerializer grafana | logger=migrator t=2024-04-25T14:22:29.577285032Z level=info msg="Executing migration" id="add index team_role.team_id" policy-db-migrator | -------------- kafka | [2024-04-25 14:22:59,017] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-17 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-pap | grafana | logger=migrator t=2024-04-25T14:22:29.579179108Z level=info msg="Migration successfully executed" id="add index team_role.team_id" duration=1.901137ms policy-db-migrator | ALTER TABLE pdpstatistics ADD CONSTRAINT PK_PDPSTATISTICS PRIMARY KEY (ID, name, version) kafka | [2024-04-25 14:22:59,017] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-48 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-pap | [2024-04-25T14:22:58.138+00:00|INFO|KafkaProducer|main] [Producer clientId=producer-1] Instantiated an idempotent producer. 
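The 0120-0140 migration steps logged above rebuild the `pdpstatistics` primary key: drop the old key, backfill a synthetic `ID` column from `ROW_NUMBER()`, then add the composite constraint `PK_PDPSTATISTICS (ID, name, version)`. A sketch of the ROW_NUMBER() backfill pattern against a toy sqlite schema (the schema and data here are assumptions for illustration; the real migration runs MySQL-flavoured SQL):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE pdpstatistics (name TEXT, version TEXT, timeStamp TEXT, id INTEGER);
INSERT INTO pdpstatistics (name, version, timeStamp) VALUES
  ('pdp-a', '1.0', '2024-04-25T14:00:00'),
  ('pdp-b', '1.0', '2024-04-25T14:01:00');
""")

# Backfill id the way 0140-pk_pdpstatistics.sql does: number rows by
# timeStamp and copy the row number into each matching row.
conn.execute("""
UPDATE pdpstatistics SET id = (
  SELECT row_num FROM (
    SELECT name, version, timeStamp,
           ROW_NUMBER() OVER (ORDER BY timeStamp ASC) AS row_num
    FROM pdpstatistics) t
  WHERE t.name = pdpstatistics.name
    AND t.version = pdpstatistics.version
    AND t.timeStamp = pdpstatistics.timeStamp)
""")

ids = [r[0] for r in conn.execute(
    "SELECT id FROM pdpstatistics ORDER BY timeStamp")]
print(ids)  # [1, 2]
```

Once every row has a unique backfilled `ID`, adding the composite primary key (the `ALTER TABLE pdpstatistics ADD CONSTRAINT PK_PDPSTATISTICS PRIMARY KEY (ID, name, version)` step in the log) can succeed without duplicate-key failures.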
grafana | logger=migrator t=2024-04-25T14:22:29.754719363Z level=info msg="Executing migration" id="create user role table" policy-db-migrator | -------------- kafka | [2024-04-25 14:22:59,017] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-19 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-pap | [2024-04-25T14:22:58.155+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 grafana | logger=migrator t=2024-04-25T14:22:29.756806351Z level=info msg="Migration successfully executed" id="create user role table" duration=2.089718ms policy-db-migrator | kafka | [2024-04-25 14:22:59,017] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-11 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-pap | [2024-04-25T14:22:58.155+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 grafana | logger=migrator t=2024-04-25T14:22:29.865616511Z level=info msg="Executing migration" id="add index user_role.org_id" policy-db-migrator | kafka | [2024-04-25 14:22:59,017] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-2 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-pap | [2024-04-25T14:22:58.155+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1714054978155 grafana | logger=migrator t=2024-04-25T14:22:29.86775684Z level=info msg="Migration successfully executed" id="add index user_role.org_id" duration=2.13367ms policy-db-migrator | > upgrade 0150-pdpstatistics.sql policy-pap | 
[2024-04-25T14:22:58.156+00:00|INFO|InlineKafkaTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=c9f2f9e4-219a-4a9f-8132-76e678fa712c, alive=false, publisher=KafkaPublisherWrapper []]]: KAFKA SINK created kafka | [2024-04-25 14:22:59,017] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-43 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) grafana | logger=migrator t=2024-04-25T14:22:29.89348161Z level=info msg="Executing migration" id="add unique index user_role_org_id_user_id_role_id" policy-db-migrator | -------------- policy-pap | [2024-04-25T14:22:58.156+00:00|INFO|InlineBusTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=9f8b2bf2-7d12-44fd-abe8-8a8ee96c9ee3, alive=false, publisher=null]]: starting kafka | [2024-04-25 14:22:59,017] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-6 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) grafana | logger=migrator t=2024-04-25T14:22:29.895526058Z level=info msg="Migration successfully executed" id="add unique index user_role_org_id_user_id_role_id" duration=2.042178ms policy-db-migrator | ALTER TABLE pdpstatistics MODIFY COLUMN timeStamp datetime(6) NULL policy-pap | [2024-04-25T14:22:58.157+00:00|INFO|ProducerConfig|main] ProducerConfig values: kafka | [2024-04-25 14:22:59,017] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-14 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), 
leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) grafana | logger=migrator t=2024-04-25T14:22:29.99431916Z level=info msg="Executing migration" id="add index user_role.user_id" policy-db-migrator | -------------- policy-pap | acks = -1 kafka | [2024-04-25 14:22:59,017] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-20 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) grafana | logger=migrator t=2024-04-25T14:22:29.996555151Z level=info msg="Migration successfully executed" id="add index user_role.user_id" duration=2.239731ms policy-db-migrator | policy-pap | auto.include.jmx.reporter = true kafka | [2024-04-25 14:22:59,017] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-0 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) grafana | logger=migrator t=2024-04-25T14:22:30.045444995Z level=info msg="Executing migration" id="create builtin role table" policy-db-migrator | policy-pap | batch.size = 16384 kafka | [2024-04-25 14:22:59,017] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-44 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) grafana | logger=migrator t=2024-04-25T14:22:30.047006067Z level=info msg="Migration successfully executed" id="create builtin role table" duration=1.559642ms policy-db-migrator | > upgrade 0160-jpapdpstatistics_enginestats.sql policy-pap | bootstrap.servers = [kafka:9092] kafka | [2024-04-25 14:22:59,017] INFO [Controller id=1 epoch=1] Changed 
partition __consumer_offsets-39 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) grafana | logger=migrator t=2024-04-25T14:22:30.296081852Z level=info msg="Executing migration" id="add index builtin_role.role_id" policy-db-migrator | -------------- policy-pap | buffer.memory = 33554432 kafka | [2024-04-25 14:22:59,017] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-12 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) grafana | logger=migrator t=2024-04-25T14:22:30.297915317Z level=info msg="Migration successfully executed" id="add index builtin_role.role_id" duration=1.834024ms policy-db-migrator | ALTER TABLE jpapdpstatistics_enginestats ADD COLUMN ID BIGINT DEFAULT NULL AFTER UPTIME policy-pap | client.dns.lookup = use_all_dns_ips kafka | [2024-04-25 14:22:59,017] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-45 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) grafana | logger=migrator t=2024-04-25T14:22:30.590822777Z level=info msg="Executing migration" id="add index builtin_role.name" policy-db-migrator | -------------- policy-pap | client.id = producer-2 kafka | [2024-04-25 14:22:59,017] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-1 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) grafana | logger=migrator 
t=2024-04-25T14:22:30.59318455Z level=info msg="Migration successfully executed" id="add index builtin_role.name" duration=2.365422ms policy-db-migrator | policy-pap | compression.type = none kafka | [2024-04-25 14:22:59,017] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-5 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) grafana | logger=migrator t=2024-04-25T14:22:30.660195801Z level=info msg="Executing migration" id="Add column org_id to builtin_role table" policy-db-migrator | policy-pap | connections.max.idle.ms = 540000 kafka | [2024-04-25 14:22:59,018] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-26 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) grafana | logger=migrator t=2024-04-25T14:22:30.671609336Z level=info msg="Migration successfully executed" id="Add column org_id to builtin_role table" duration=11.411695ms policy-db-migrator | > upgrade 0170-jpapdpstatistics_enginestats.sql policy-pap | delivery.timeout.ms = 120000 kafka | [2024-04-25 14:22:59,018] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-29 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) grafana | logger=migrator t=2024-04-25T14:22:30.674687258Z level=info msg="Executing migration" id="add index builtin_role.org_id" policy-db-migrator | -------------- policy-pap | enable.idempotence = true kafka | [2024-04-25 14:22:59,018] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-34 from NewPartition to 
OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) grafana | logger=migrator t=2024-04-25T14:22:30.675784832Z level=info msg="Migration successfully executed" id="add index builtin_role.org_id" duration=1.092984ms policy-db-migrator | UPDATE jpapdpstatistics_enginestats a policy-pap | interceptor.classes = [] grafana | logger=migrator t=2024-04-25T14:22:30.68149837Z level=info msg="Executing migration" id="add unique index builtin_role_org_id_role_id_role" policy-db-migrator | JOIN pdpstatistics b policy-pap | key.serializer = class org.apache.kafka.common.serialization.StringSerializer kafka | [2024-04-25 14:22:59,018] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-10 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) grafana | logger=migrator t=2024-04-25T14:22:30.682668717Z level=info msg="Migration successfully executed" id="add unique index builtin_role_org_id_role_id_role" duration=1.169287ms policy-db-migrator | ON a.name = b.name AND a.version = b.version AND a.timeStamp = b.timeStamp policy-pap | linger.ms = 0 kafka | [2024-04-25 14:22:59,018] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-32 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) grafana | logger=migrator t=2024-04-25T14:22:30.687131657Z level=info msg="Executing migration" id="Remove unique index role_org_id_uid" policy-db-migrator | SET a.id = b.id policy-pap | max.block.ms = 60000 kafka | [2024-04-25 14:22:59,018] INFO [Controller id=1 epoch=1] Changed partition 
__consumer_offsets-40 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-25T14:22:30.688259143Z level=info msg="Migration successfully executed" id="Remove unique index role_org_id_uid" duration=1.127246ms policy-pap | max.in.flight.requests.per.connection = 5 kafka | [2024-04-25 14:22:59,018] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=13, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-13 (state.change.logger) policy-db-migrator | grafana | logger=migrator t=2024-04-25T14:22:30.694781401Z level=info msg="Executing migration" id="add unique index role.uid" policy-pap | max.request.size = 1048576 grafana | logger=migrator t=2024-04-25T14:22:30.696976611Z level=info msg="Migration successfully executed" id="add unique index role.uid" duration=2.20109ms policy-pap | metadata.max.age.ms = 300000 kafka | [2024-04-25 14:22:59,018] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=46, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-46 (state.change.logger) policy-db-migrator | grafana | logger=migrator t=2024-04-25T14:22:30.705194093Z level=info msg="Executing migration" id="create seed assignment table" policy-pap | metadata.max.idle.ms = 300000 kafka | [2024-04-25 14:22:59,018] TRACE [Controller id=1 epoch=1] Sending 
become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=9, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-9 (state.change.logger) policy-db-migrator | > upgrade 0180-jpapdpstatistics_enginestats.sql grafana | logger=migrator t=2024-04-25T14:22:30.707069777Z level=info msg="Migration successfully executed" id="create seed assignment table" duration=1.878054ms policy-pap | metric.reporters = [] kafka | [2024-04-25 14:22:59,018] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=42, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-42 (state.change.logger) policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-25T14:22:30.711862653Z level=info msg="Executing migration" id="add unique index builtin_role_role_name" policy-pap | metrics.num.samples = 2 kafka | [2024-04-25 14:22:59,018] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=21, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-21 (state.change.logger) policy-db-migrator | ALTER TABLE jpapdpstatistics_enginestats DROP COLUMN timeStamp grafana | logger=migrator t=2024-04-25T14:22:30.712950208Z level=info msg="Migration successfully executed" id="add unique index builtin_role_role_name" duration=1.087185ms policy-pap | metrics.recording.level = INFO kafka | [2024-04-25 14:22:59,018] TRACE 
[Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=17, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-17 (state.change.logger) policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-25T14:22:30.717452409Z level=info msg="Executing migration" id="add column hidden to role table" policy-pap | metrics.sample.window.ms = 30000 policy-db-migrator | kafka | [2024-04-25 14:22:59,018] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=30, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-30 (state.change.logger) grafana | logger=migrator t=2024-04-25T14:22:30.725488298Z level=info msg="Migration successfully executed" id="add column hidden to role table" duration=8.035479ms policy-pap | partitioner.adaptive.partitioning.enable = true policy-db-migrator | kafka | [2024-04-25 14:22:59,018] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=26, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-26 (state.change.logger) grafana | logger=migrator t=2024-04-25T14:22:30.731747993Z level=info msg="Executing migration" id="permission kind migration" policy-pap | partitioner.availability.timeout.ms = 0 policy-db-migrator | > upgrade 0190-jpapolicyaudit.sql kafka | [2024-04-25 14:22:59,018] TRACE [Controller id=1 epoch=1] 
Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=5, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-5 (state.change.logger) grafana | logger=migrator t=2024-04-25T14:22:30.743361931Z level=info msg="Migration successfully executed" id="permission kind migration" duration=11.617468ms policy-pap | partitioner.class = null policy-db-migrator | -------------- kafka | [2024-04-25 14:22:59,018] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=38, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-38 (state.change.logger) grafana | logger=migrator t=2024-04-25T14:22:30.748868036Z level=info msg="Executing migration" id="permission attribute migration" policy-pap | partitioner.ignore.keys = false kafka | [2024-04-25 14:22:59,018] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=1, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-1 (state.change.logger) grafana | logger=migrator t=2024-04-25T14:22:30.754569673Z level=info msg="Migration successfully executed" id="permission attribute migration" duration=5.701407ms policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapolicyaudit (ACTION INT DEFAULT NULL, PDPGROUP VARCHAR(255) NULL, PDPTYPE VARCHAR(255) NULL, TIMESTAMP datetime DEFAULT NULL, USER VARCHAR(255) NULL, ID BIGINT NOT NULL, name VARCHAR(120) NOT 
NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_JPAPOLICYAUDIT (ID, name, version)) policy-pap | receive.buffer.bytes = 32768 kafka | [2024-04-25 14:22:59,018] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=34, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-34 (state.change.logger) grafana | logger=migrator t=2024-04-25T14:22:30.758019761Z level=info msg="Executing migration" id="permission identifier migration" policy-db-migrator | -------------- policy-pap | reconnect.backoff.max.ms = 1000 kafka | [2024-04-25 14:22:59,019] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=16, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-16 (state.change.logger) grafana | logger=migrator t=2024-04-25T14:22:30.765874377Z level=info msg="Migration successfully executed" id="permission identifier migration" duration=7.854126ms policy-db-migrator | policy-pap | reconnect.backoff.ms = 50 kafka | [2024-04-25 14:22:59,019] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=45, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-45 (state.change.logger) grafana | logger=migrator t=2024-04-25T14:22:30.809769543Z level=info msg="Executing migration" id="add permission identifier index" policy-db-migrator | kafka | [2024-04-25 
14:22:59,019] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=12, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-12 (state.change.logger) policy-pap | request.timeout.ms = 30000 grafana | logger=migrator t=2024-04-25T14:22:30.812105095Z level=info msg="Migration successfully executed" id="add permission identifier index" duration=2.334852ms policy-db-migrator | > upgrade 0200-JpaPolicyAuditIndex_timestamp.sql kafka | [2024-04-25 14:22:59,019] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=41, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-41 (state.change.logger) policy-pap | retries = 2147483647 grafana | logger=migrator t=2024-04-25T14:22:30.818722645Z level=info msg="Executing migration" id="add permission action scope role_id index" kafka | [2024-04-25 14:22:59,019] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=24, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-24 (state.change.logger) policy-pap | retry.backoff.ms = 100 grafana | logger=migrator t=2024-04-25T14:22:30.820208506Z level=info msg="Migration successfully executed" id="add permission action scope role_id index" duration=1.48467ms policy-db-migrator | -------------- kafka | [2024-04-25 14:22:59,019] TRACE [Controller id=1 epoch=1] 
Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=20, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-20 (state.change.logger) policy-pap | sasl.client.callback.handler.class = null grafana | logger=migrator t=2024-04-25T14:22:30.829196817Z level=info msg="Executing migration" id="remove permission role_id action scope index" policy-db-migrator | CREATE INDEX JpaPolicyAuditIndex_timestamp ON jpapolicyaudit(TIMESTAMP) kafka | [2024-04-25 14:22:59,019] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=49, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-49 (state.change.logger) policy-pap | sasl.jaas.config = null grafana | logger=migrator t=2024-04-25T14:22:30.830304182Z level=info msg="Migration successfully executed" id="remove permission role_id action scope index" duration=1.110265ms policy-db-migrator | -------------- kafka | [2024-04-25 14:22:59,019] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-0 (state.change.logger) policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit grafana | logger=migrator t=2024-04-25T14:22:30.834372088Z level=info msg="Executing migration" id="create query_history table v1" policy-db-migrator | kafka | [2024-04-25 14:22:59,019] TRACE [Controller id=1 epoch=1] 
Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=29, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-29 (state.change.logger) policy-pap | sasl.kerberos.min.time.before.relogin = 60000 grafana | logger=migrator t=2024-04-25T14:22:30.835440022Z level=info msg="Migration successfully executed" id="create query_history table v1" duration=1.067334ms policy-db-migrator | kafka | [2024-04-25 14:22:59,019] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=25, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-25 (state.change.logger) policy-pap | sasl.kerberos.service.name = null grafana | logger=migrator t=2024-04-25T14:22:30.839078362Z level=info msg="Executing migration" id="add index query_history.org_id-created_by-datasource_uid" policy-db-migrator | > upgrade 0210-sequence.sql kafka | [2024-04-25 14:22:59,019] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=8, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-8 (state.change.logger) policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 grafana | logger=migrator t=2024-04-25T14:22:30.840210567Z level=info msg="Migration successfully executed" id="add index query_history.org_id-created_by-datasource_uid" duration=1.131315ms policy-db-migrator | -------------- kafka | [2024-04-25 14:22:59,019] TRACE 
[Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=37, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-37 (state.change.logger) policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 grafana | logger=migrator t=2024-04-25T14:22:30.846932148Z level=info msg="Executing migration" id="alter table query_history alter column created_by type to bigint" policy-db-migrator | CREATE TABLE IF NOT EXISTS sequence (SEQ_NAME VARCHAR(50) NOT NULL, SEQ_COUNT DECIMAL(38) DEFAULT NULL, PRIMARY KEY PK_SEQUENCE (SEQ_NAME)) kafka | [2024-04-25 14:22:59,019] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=4, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-4 (state.change.logger) policy-pap | sasl.login.callback.handler.class = null grafana | logger=migrator t=2024-04-25T14:22:30.84707247Z level=info msg="Migration successfully executed" id="alter table query_history alter column created_by type to bigint" duration=139.802µs policy-db-migrator | -------------- policy-pap | sasl.login.class = null kafka | [2024-04-25 14:22:59,019] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=33, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-33 (state.change.logger) grafana | logger=migrator t=2024-04-25T14:22:30.85067954Z level=info 
msg="Executing migration" id="rbac disabled migrator" policy-db-migrator | policy-pap | sasl.login.connect.timeout.ms = null kafka | [2024-04-25 14:22:59,019] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=15, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-15 (state.change.logger) grafana | logger=migrator t=2024-04-25T14:22:30.85072895Z level=info msg="Migration successfully executed" id="rbac disabled migrator" duration=51µs policy-db-migrator | policy-pap | sasl.login.read.timeout.ms = null kafka | [2024-04-25 14:22:59,019] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=48, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-48 (state.change.logger) grafana | logger=migrator t=2024-04-25T14:22:30.855080929Z level=info msg="Executing migration" id="teams permissions migration" policy-db-migrator | > upgrade 0220-sequence.sql policy-pap | sasl.login.refresh.buffer.seconds = 300 kafka | [2024-04-25 14:22:59,019] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=11, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-11 (state.change.logger) grafana | logger=migrator t=2024-04-25T14:22:30.85584516Z level=info msg="Migration successfully executed" id="teams permissions migration" duration=763.811µs 
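The controller TRACE lines that dominate this section each carry a `LeaderAndIsrPartitionState(...)` record describing one `__consumer_offsets` partition. As a hypothetical helper (not part of Kafka), the payload can be split into its fields; the sample line below is copied verbatim from the log, and records for other partitions differ only in `partitionIndex`:

```python
import re

# Hypothetical helper: parse the LeaderAndIsrPartitionState(...) payload
# from one of the controller TRACE lines above. The sample is copied
# verbatim from the log.
LINE = ("LeaderAndIsrPartitionState(topicName='__consumer_offsets', "
        "partitionIndex=9, controllerEpoch=1, leader=1, leaderEpoch=0, "
        "isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], "
        "removingReplicas=[], isNew=true, leaderRecoveryState=0)")

def parse_partition_state(text: str) -> dict:
    # This simple split works for this record shape because no field
    # value in these lines contains ", ".
    body = re.search(r"LeaderAndIsrPartitionState\((.*)\)", text).group(1)
    return {key: value.strip("'")
            for key, value in (pair.split("=", 1)
                               for pair in body.split(", "))}

state = parse_partition_state(LINE)
print(state["topicName"], state["partitionIndex"])  # __consumer_offsets 9
```

All 50 partitions in this log follow the same become-leader pattern: single replica, `leader=1`, `isr=[1]`, `isNew=true`.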
policy-db-migrator | -------------- policy-pap | sasl.login.refresh.min.period.seconds = 60 kafka | [2024-04-25 14:22:59,019] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=44, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-44 (state.change.logger) grafana | logger=migrator t=2024-04-25T14:22:30.862699943Z level=info msg="Executing migration" id="dashboard permissions" policy-db-migrator | INSERT INTO sequence(SEQ_NAME, SEQ_COUNT) VALUES('SEQ_GEN', (SELECT IFNULL(max(id),0) FROM pdpstatistics)) policy-pap | sasl.login.refresh.window.factor = 0.8 kafka | [2024-04-25 14:22:59,019] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=23, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-23 (state.change.logger) grafana | logger=migrator t=2024-04-25T14:22:30.863863778Z level=info msg="Migration successfully executed" id="dashboard permissions" duration=1.165835ms policy-db-migrator | -------------- policy-pap | sasl.login.refresh.window.jitter = 0.05 kafka | [2024-04-25 14:22:59,019] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=19, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-19 (state.change.logger) grafana | logger=migrator t=2024-04-25T14:22:30.869932701Z level=info msg="Executing migration" id="dashboard 
permissions uid scopes" policy-db-migrator | policy-pap | sasl.login.retry.backoff.max.ms = 10000 kafka | [2024-04-25 14:22:59,020] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=32, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-32 (state.change.logger) grafana | logger=migrator t=2024-04-25T14:22:30.871181248Z level=info msg="Migration successfully executed" id="dashboard permissions uid scopes" duration=1.255007ms policy-db-migrator | policy-pap | sasl.login.retry.backoff.ms = 100 kafka | [2024-04-25 14:22:59,020] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=28, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-28 (state.change.logger) grafana | logger=migrator t=2024-04-25T14:22:30.877456433Z level=info msg="Executing migration" id="drop managed folder create actions" policy-db-migrator | > upgrade 0100-jpatoscapolicy_targets.sql policy-pap | sasl.mechanism = GSSAPI kafka | [2024-04-25 14:22:59,020] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=7, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-7 (state.change.logger) grafana | logger=migrator t=2024-04-25T14:22:30.87794804Z level=info msg="Migration successfully executed" id="drop managed folder create actions" duration=491.976µs 
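The `0220-sequence.sql` step interleaved just above seeds the JPA `sequence` table from the highest existing `pdpstatistics` id, falling back to 0 when that table is empty. A minimal sketch of the same statement against SQLite (types simplified; `IFNULL` behaves the same way in SQLite and MariaDB):

```python
import sqlite3

# Sketch of the 0220-sequence.sql migrator step above, ported to SQLite
# for illustration: seed SEQ_GEN from max(id) in pdpstatistics,
# defaulting to 0 when the table is empty.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE sequence (SEQ_NAME  TEXT NOT NULL PRIMARY KEY,
                       SEQ_COUNT NUMERIC DEFAULT NULL);
CREATE TABLE pdpstatistics (id INTEGER);
INSERT INTO pdpstatistics(id) VALUES (7), (42);
INSERT INTO sequence(SEQ_NAME, SEQ_COUNT)
    VALUES ('SEQ_GEN', (SELECT IFNULL(max(id), 0) FROM pdpstatistics));
""")
seq = conn.execute(
    "SELECT SEQ_COUNT FROM sequence WHERE SEQ_NAME = 'SEQ_GEN'"
).fetchone()[0]
print(seq)  # 42
```

Seeding the counter this way keeps generated ids from colliding with rows that already exist when the schema is upgraded in place.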
policy-db-migrator | -------------- policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 kafka | [2024-04-25 14:22:59,020] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=40, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-40 (state.change.logger) grafana | logger=migrator t=2024-04-25T14:22:30.882910647Z level=info msg="Executing migration" id="alerting notification permissions" policy-db-migrator | ALTER TABLE jpatoscapolicy_targets ADD COLUMN toscaPolicyName VARCHAR(120) NOT NULL, ADD COLUMN toscaPolicyVersion VARCHAR(20) NOT NULL, ADD CONSTRAINT PK_JPATOSCAPOLICY_TARGETS PRIMARY KEY (toscaPolicyName, toscaPolicyVersion) policy-pap | sasl.oauthbearer.expected.audience = null kafka | [2024-04-25 14:22:59,020] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=3, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-3 (state.change.logger) grafana | logger=migrator t=2024-04-25T14:22:30.883459674Z level=info msg="Migration successfully executed" id="alerting notification permissions" duration=548.877µs policy-db-migrator | -------------- policy-pap | sasl.oauthbearer.expected.issuer = null kafka | [2024-04-25 14:22:59,020] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=36, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition 
__consumer_offsets-36 (state.change.logger) grafana | logger=migrator t=2024-04-25T14:22:30.886899842Z level=info msg="Executing migration" id="create query_history_star table v1" policy-db-migrator | policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 kafka | [2024-04-25 14:22:59,020] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=47, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-47 (state.change.logger) grafana | logger=migrator t=2024-04-25T14:22:30.888018477Z level=info msg="Migration successfully executed" id="create query_history_star table v1" duration=1.118115ms policy-db-migrator | policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 kafka | [2024-04-25 14:22:59,020] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=14, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-14 (state.change.logger) grafana | logger=migrator t=2024-04-25T14:22:30.892028721Z level=info msg="Executing migration" id="add index query_history.user_id-query_uid" policy-db-migrator | > upgrade 0110-jpatoscapolicytype_targets.sql policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 kafka | [2024-04-25 14:22:59,020] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=43, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 
for partition __consumer_offsets-43 (state.change.logger) grafana | logger=migrator t=2024-04-25T14:22:30.893903667Z level=info msg="Migration successfully executed" id="add index query_history.user_id-query_uid" duration=1.874556ms policy-db-migrator | -------------- policy-pap | sasl.oauthbearer.jwks.endpoint.url = null kafka | [2024-04-25 14:22:59,020] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=10, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-10 (state.change.logger) grafana | logger=migrator t=2024-04-25T14:22:30.900655029Z level=info msg="Executing migration" id="add column org_id in query_history_star" policy-db-migrator | ALTER TABLE jpatoscapolicytype_targets ADD COLUMN toscaPolicyTypeName VARCHAR(120) NOT NULL, ADD COLUMN toscaPolicyTypeVersion VARCHAR(20) NOT NULL, ADD CONSTRAINT PK_JPATOSCAPOLICYTYPE_TARGETS PRIMARY KEY (toscaPolicyTypeName, toscaPolicyTypeVersion) policy-pap | sasl.oauthbearer.scope.claim.name = scope kafka | [2024-04-25 14:22:59,020] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=22, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-22 (state.change.logger) grafana | logger=migrator t=2024-04-25T14:22:30.909516169Z level=info msg="Migration successfully executed" id="add column org_id in query_history_star" duration=8.8605ms policy-db-migrator | -------------- policy-pap | sasl.oauthbearer.sub.claim.name = sub grafana | logger=migrator t=2024-04-25T14:22:30.913674195Z level=info msg="Executing migration" id="alter table 
query_history_star_mig column user_id type to bigint" policy-db-migrator | policy-pap | sasl.oauthbearer.token.endpoint.url = null kafka | [2024-04-25 14:22:59,020] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=18, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-18 (state.change.logger) grafana | logger=migrator t=2024-04-25T14:22:30.913742406Z level=info msg="Migration successfully executed" id="alter table query_history_star_mig column user_id type to bigint" duration=71.031µs policy-db-migrator | policy-pap | security.protocol = PLAINTEXT kafka | [2024-04-25 14:22:59,020] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=31, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-31 (state.change.logger) policy-db-migrator | > upgrade 0120-toscatrigger.sql policy-pap | security.providers = null grafana | logger=migrator t=2024-04-25T14:22:30.918898996Z level=info msg="Executing migration" id="create correlation table v1" policy-pap | send.buffer.bytes = 131072 grafana | logger=migrator t=2024-04-25T14:22:30.920986655Z level=info msg="Migration successfully executed" id="create correlation table v1" duration=2.070019ms kafka | [2024-04-25 14:22:59,020] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=27, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to 
broker 1 for partition __consumer_offsets-27 (state.change.logger) policy-db-migrator | -------------- policy-pap | socket.connection.setup.timeout.max.ms = 30000 grafana | logger=migrator t=2024-04-25T14:22:30.928568187Z level=info msg="Executing migration" id="add index correlations.uid" kafka | [2024-04-25 14:22:59,020] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=39, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-39 (state.change.logger) policy-db-migrator | DROP TABLE IF EXISTS toscatrigger policy-pap | socket.connection.setup.timeout.ms = 10000 grafana | logger=migrator t=2024-04-25T14:22:30.929679873Z level=info msg="Migration successfully executed" id="add index correlations.uid" duration=1.111756ms kafka | [2024-04-25 14:22:59,020] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=6, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-6 (state.change.logger) policy-db-migrator | -------------- policy-pap | ssl.cipher.suites = null grafana | logger=migrator t=2024-04-25T14:22:30.934058822Z level=info msg="Executing migration" id="add index correlations.source_uid" kafka | [2024-04-25 14:22:59,020] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=35, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-35 
(state.change.logger) policy-db-migrator | policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] kafka | [2024-04-25 14:22:59,020] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=2, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-2 (state.change.logger) grafana | logger=migrator t=2024-04-25T14:22:30.937998226Z level=info msg="Migration successfully executed" id="add index correlations.source_uid" duration=3.934144ms policy-db-migrator | policy-pap | ssl.endpoint.identification.algorithm = https policy-db-migrator | > upgrade 0130-jpatoscapolicytype_triggers.sql grafana | logger=migrator t=2024-04-25T14:22:30.942092651Z level=info msg="Executing migration" id="add correlation config column" kafka | [2024-04-25 14:22:59,020] INFO [Controller id=1 epoch=1] Sending LeaderAndIsr request to broker 1 with 50 become-leader and 0 become-follower partitions (state.change.logger) policy-pap | ssl.engine.factory.class = null policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-25T14:22:30.948678091Z level=info msg="Migration successfully executed" id="add correlation config column" duration=6.58486ms kafka | [2024-04-25 14:22:59,021] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet(1) for 50 partitions (state.change.logger) policy-pap | ssl.key.password = null policy-db-migrator | ALTER TABLE jpatoscapolicytype_triggers MODIFY COLUMN triggers LONGBLOB grafana | logger=migrator t=2024-04-25T14:22:30.953741999Z level=info msg="Executing migration" id="drop index IDX_correlation_uid - v1" kafka | [2024-04-25 14:22:59,023] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-32 from NewReplica to OnlineReplica (state.change.logger) 
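The controller lines above move each replica `from NewReplica to OnlineReplica`. A simplified sketch of that transition check, with state names taken from the log (the real Kafka ReplicaStateMachine has more states and transitions than modelled here):

```python
# Simplified sketch of the replica state transitions logged above.
# Only the states visible in this log plus OfflineReplica are modelled;
# Kafka's actual ReplicaStateMachine is richer.
VALID_TRANSITIONS = {
    "NonExistentReplica": {"NewReplica"},
    "NewReplica": {"OnlineReplica"},
    "OnlineReplica": {"OnlineReplica", "OfflineReplica"},
}

def transition(current: str, target: str) -> str:
    # Refuse any move the transition table does not allow.
    if target not in VALID_TRANSITIONS.get(current, set()):
        raise ValueError(f"illegal transition {current} -> {target}")
    return target

state = "NewReplica"
state = transition(state, "OnlineReplica")  # as in the TRACE lines above
print(state)  # OnlineReplica
```

Gating every state change through a transition table is what lets the controller log (and reject) inconsistent replica states deterministically.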
policy-pap | ssl.keymanager.algorithm = SunX509 policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-25T14:22:30.954813335Z level=info msg="Migration successfully executed" id="drop index IDX_correlation_uid - v1" duration=1.071436ms kafka | [2024-04-25 14:22:59,023] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-5 from NewReplica to OnlineReplica (state.change.logger) policy-pap | ssl.keystore.certificate.chain = null policy-db-migrator | grafana | logger=migrator t=2024-04-25T14:22:30.959183613Z level=info msg="Executing migration" id="drop index IDX_correlation_source_uid - v1" kafka | [2024-04-25 14:22:59,023] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-44 from NewReplica to OnlineReplica (state.change.logger) policy-pap | ssl.keystore.key = null policy-db-migrator | grafana | logger=migrator t=2024-04-25T14:22:30.960275049Z level=info msg="Migration successfully executed" id="drop index IDX_correlation_source_uid - v1" duration=1.094216ms kafka | [2024-04-25 14:22:59,023] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-48 from NewReplica to OnlineReplica (state.change.logger) policy-pap | ssl.keystore.location = null policy-db-migrator | > upgrade 0140-toscaparameter.sql grafana | logger=migrator t=2024-04-25T14:22:30.965082574Z level=info msg="Executing migration" id="Rename table correlation to correlation_tmp_qwerty - v1" kafka | [2024-04-25 14:22:59,023] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-46 from NewReplica to OnlineReplica (state.change.logger) policy-pap | ssl.keystore.password = null policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-25T14:22:30.9890671Z level=info msg="Migration successfully executed" id="Rename table correlation to correlation_tmp_qwerty - v1" duration=23.982506ms kafka | [2024-04-25 14:22:59,023] TRACE 
[Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-20 from NewReplica to OnlineReplica (state.change.logger)
policy-pap | ssl.keystore.type = JKS
policy-db-migrator | DROP TABLE IF EXISTS toscaparameter
grafana | logger=migrator t=2024-04-25T14:22:30.991942709Z level=info msg="Executing migration" id="create correlation v2"
kafka | [2024-04-25 14:22:59,023] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-43 from NewReplica to OnlineReplica (state.change.logger)
policy-pap | ssl.protocol = TLSv1.3
policy-db-migrator | --------------
grafana | logger=migrator t=2024-04-25T14:22:30.992852131Z level=info msg="Migration successfully executed" id="create correlation v2" duration=908.542µs
kafka | [2024-04-25 14:22:59,023] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-24 from NewReplica to OnlineReplica (state.change.logger)
policy-pap | ssl.provider = null
policy-db-migrator |
grafana | logger=migrator t=2024-04-25T14:22:30.995616289Z level=info msg="Executing migration" id="create index IDX_correlation_uid - v2"
kafka | [2024-04-25 14:22:59,023] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-6 from NewReplica to OnlineReplica (state.change.logger)
policy-pap | ssl.secure.random.implementation = null
policy-db-migrator |
grafana | logger=migrator t=2024-04-25T14:22:30.996363249Z level=info msg="Migration successfully executed" id="create index IDX_correlation_uid - v2" duration=746.771µs
kafka | [2024-04-25 14:22:59,023] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-18 from NewReplica to OnlineReplica (state.change.logger)
policy-pap | ssl.trustmanager.algorithm = PKIX
policy-db-migrator | > upgrade 0150-toscaproperty.sql
grafana | logger=migrator t=2024-04-25T14:22:31.001674982Z level=info msg="Executing migration" id="create index IDX_correlation_source_uid - v2"
kafka | [2024-04-25 14:22:59,023] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-21 from NewReplica to OnlineReplica (state.change.logger)
policy-pap | ssl.truststore.certificates = null
policy-db-migrator | --------------
grafana | logger=migrator t=2024-04-25T14:22:31.002510603Z level=info msg="Migration successfully executed" id="create index IDX_correlation_source_uid - v2" duration=835.421µs
kafka | [2024-04-25 14:22:59,023] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-1 from NewReplica to OnlineReplica (state.change.logger)
policy-pap | ssl.truststore.location = null
policy-db-migrator | DROP TABLE IF EXISTS jpatoscaproperty_constraints
grafana | logger=migrator t=2024-04-25T14:22:31.008418593Z level=info msg="Executing migration" id="create index IDX_correlation_org_id - v2"
kafka | [2024-04-25 14:22:59,023] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-14 from NewReplica to OnlineReplica (state.change.logger)
policy-pap | ssl.truststore.password = null
policy-db-migrator | --------------
grafana | logger=migrator t=2024-04-25T14:22:31.009219384Z level=info msg="Migration successfully executed" id="create index IDX_correlation_org_id - v2" duration=800.51µs
kafka | [2024-04-25 14:22:59,023] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-34 from NewReplica to OnlineReplica (state.change.logger)
policy-pap | ssl.truststore.type = JKS
policy-db-migrator |
grafana | logger=migrator t=2024-04-25T14:22:31.014224942Z level=info msg="Executing migration" id="copy correlation v1 to v2"
kafka | [2024-04-25 14:22:59,023] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-16 from NewReplica to OnlineReplica (state.change.logger)
policy-pap | transaction.timeout.ms = 60000
policy-db-migrator | --------------
grafana | logger=migrator t=2024-04-25T14:22:31.014469515Z level=info msg="Migration successfully executed" id="copy correlation v1 to v2" duration=245.043µs
kafka | [2024-04-25 14:22:59,023] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-29 from NewReplica to OnlineReplica (state.change.logger)
policy-pap | transactional.id = null
policy-db-migrator | DROP TABLE IF EXISTS jpatoscaproperty_metadata
grafana | logger=migrator t=2024-04-25T14:22:31.020932582Z level=info msg="Executing migration" id="drop correlation_tmp_qwerty"
kafka | [2024-04-25 14:22:59,023] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-11 from NewReplica to OnlineReplica (state.change.logger)
policy-pap | value.serializer = class org.apache.kafka.common.serialization.StringSerializer
policy-db-migrator | --------------
grafana | logger=migrator t=2024-04-25T14:22:31.022230641Z level=info msg="Migration successfully executed" id="drop correlation_tmp_qwerty" duration=1.296569ms
kafka | [2024-04-25 14:22:59,023] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-0 from NewReplica to OnlineReplica (state.change.logger)
policy-pap |
policy-db-migrator |
grafana | logger=migrator t=2024-04-25T14:22:31.026022322Z level=info msg="Executing migration" id="add provisioning column"
kafka | [2024-04-25 14:22:59,023] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-22 from NewReplica to OnlineReplica (state.change.logger)
policy-pap | [2024-04-25T14:22:58.157+00:00|INFO|KafkaProducer|main] [Producer clientId=producer-2] Instantiated an idempotent producer.
policy-db-migrator | --------------
grafana | logger=migrator t=2024-04-25T14:22:31.037040762Z level=info msg="Migration successfully executed" id="add provisioning column" duration=11.0186ms
kafka | [2024-04-25 14:22:59,023] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-47 from NewReplica to OnlineReplica (state.change.logger)
policy-pap | [2024-04-25T14:22:58.161+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1
policy-db-migrator | DROP TABLE IF EXISTS toscaproperty
grafana | logger=migrator t=2024-04-25T14:22:31.041384371Z level=info msg="Executing migration" id="create entity_events table"
kafka | [2024-04-25 14:22:59,023] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-36 from NewReplica to OnlineReplica (state.change.logger)
policy-pap | [2024-04-25T14:22:58.161+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5
policy-db-migrator | --------------
grafana | logger=migrator t=2024-04-25T14:22:31.042278553Z level=info msg="Migration successfully executed" id="create entity_events table" duration=891.382µs
kafka | [2024-04-25 14:22:59,023] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-28 from NewReplica to OnlineReplica (state.change.logger)
policy-pap | [2024-04-25T14:22:58.161+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1714054978161
policy-db-migrator |
grafana | logger=migrator t=2024-04-25T14:22:31.047127869Z level=info msg="Executing migration" id="create dashboard public config v1"
kafka | [2024-04-25 14:22:59,023] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-42 from NewReplica to OnlineReplica (state.change.logger)
policy-pap | [2024-04-25T14:22:58.161+00:00|INFO|InlineKafkaTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=9f8b2bf2-7d12-44fd-abe8-8a8ee96c9ee3, alive=false, publisher=KafkaPublisherWrapper []]]: KAFKA SINK created
policy-db-migrator |
grafana | logger=migrator t=2024-04-25T14:22:31.048185793Z level=info msg="Migration successfully executed" id="create dashboard public config v1" duration=1.058124ms
kafka | [2024-04-25 14:22:59,024] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-9 from NewReplica to OnlineReplica (state.change.logger)
policy-pap | [2024-04-25T14:22:58.161+00:00|INFO|ServiceManager|main] Policy PAP starting PAP Activator
policy-db-migrator | > upgrade 0160-jpapolicyaudit_pk.sql
grafana | logger=migrator t=2024-04-25T14:22:31.070907702Z level=info msg="Executing migration" id="drop index UQE_dashboard_public_config_uid - v1"
kafka | [2024-04-25 14:22:59,024] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-37 from NewReplica to OnlineReplica (state.change.logger)
policy-pap | [2024-04-25T14:22:58.161+00:00|INFO|ServiceManager|main] Policy PAP starting PDP publisher
policy-db-migrator | --------------
grafana | logger=migrator t=2024-04-25T14:22:31.071683213Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop index UQE_dashboard_public_config_uid - v1"
kafka | [2024-04-25 14:22:59,024] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-13 from NewReplica to OnlineReplica (state.change.logger)
policy-pap | [2024-04-25T14:22:58.163+00:00|INFO|ServiceManager|main] Policy PAP starting Policy Notification publisher
policy-db-migrator | ALTER TABLE jpapolicyaudit DROP PRIMARY KEY
grafana | logger=migrator t=2024-04-25T14:22:31.078348523Z level=info msg="Executing migration" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v1"
kafka | [2024-04-25 14:22:59,024] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-30 from NewReplica to OnlineReplica (state.change.logger)
policy-pap | [2024-04-25T14:22:58.168+00:00|INFO|ServiceManager|main] Policy PAP starting PDP update timers
policy-db-migrator | --------------
grafana | logger=migrator t=2024-04-25T14:22:31.078815709Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v1"
kafka | [2024-04-25 14:22:59,024] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-35 from NewReplica to OnlineReplica (state.change.logger)
policy-pap | [2024-04-25T14:22:58.178+00:00|INFO|ServiceManager|main] Policy PAP starting PDP state-change timers
policy-db-migrator |
grafana | logger=migrator t=2024-04-25T14:22:31.084564197Z level=info msg="Executing migration" id="Drop old dashboard public config table"
kafka | [2024-04-25 14:22:59,024] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-39 from NewReplica to OnlineReplica (state.change.logger)
policy-pap | [2024-04-25T14:22:58.178+00:00|INFO|ServiceManager|main] Policy PAP starting PDP modification lock
policy-db-migrator | --------------
grafana | logger=migrator t=2024-04-25T14:22:31.085862266Z level=info msg="Migration successfully executed" id="Drop old dashboard public config table" duration=1.300019ms
policy-pap | [2024-04-25T14:22:58.178+00:00|INFO|ServiceManager|main] Policy PAP starting PDP modification requests
policy-db-migrator | ALTER TABLE jpapolicyaudit ADD CONSTRAINT PK_JPAPOLICYAUDIT PRIMARY KEY (ID)
kafka | [2024-04-25 14:22:59,024] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-12 from NewReplica to OnlineReplica (state.change.logger)
grafana | logger=migrator t=2024-04-25T14:22:31.089808899Z level=info msg="Executing migration" id="recreate dashboard public config v1"
policy-pap | [2024-04-25T14:22:58.179+00:00|INFO|ServiceManager|main] Policy PAP starting PDP expiration timer
policy-db-migrator | --------------
kafka | [2024-04-25 14:22:59,024] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-27 from NewReplica to OnlineReplica (state.change.logger)
grafana | logger=migrator t=2024-04-25T14:22:31.091539723Z level=info msg="Migration successfully executed" id="recreate dashboard public config v1" duration=1.727114ms
policy-pap | [2024-04-25T14:22:58.180+00:00|INFO|TimerManager|Thread-9] timer manager update started
kafka | [2024-04-25 14:22:59,024] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-45 from NewReplica to OnlineReplica (state.change.logger)
grafana | logger=migrator t=2024-04-25T14:22:31.098912942Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_uid - v1"
policy-pap | [2024-04-25T14:22:58.183+00:00|INFO|ServiceManager|main] Policy PAP started
policy-db-migrator |
kafka | [2024-04-25 14:22:59,024] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-19 from NewReplica to OnlineReplica (state.change.logger)
grafana | logger=migrator t=2024-04-25T14:22:31.100040738Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_uid - v1" duration=1.128086ms
policy-pap | [2024-04-25T14:22:58.186+00:00|INFO|TimerManager|Thread-10] timer manager state-change started
policy-db-migrator |
kafka | [2024-04-25 14:22:59,024] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-49 from NewReplica to OnlineReplica (state.change.logger)
grafana | logger=migrator t=2024-04-25T14:22:31.104696801Z level=info msg="Executing migration" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v1"
policy-pap | [2024-04-25T14:22:58.191+00:00|INFO|PolicyPapApplication|main] Started PolicyPapApplication in 9.626 seconds (process running for 10.325)
policy-db-migrator | > upgrade 0170-pdpstatistics_pk.sql
kafka | [2024-04-25 14:22:59,024] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-40 from NewReplica to OnlineReplica (state.change.logger)
grafana | logger=migrator t=2024-04-25T14:22:31.106614557Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v1" duration=1.917086ms
policy-pap | [2024-04-25T14:22:58.571+00:00|INFO|Metadata|kafka-producer-network-thread | producer-2] [Producer clientId=producer-2] Cluster ID: lFyKLv7sTJO7XXtTZrPgZw
policy-db-migrator | --------------
kafka | [2024-04-25 14:22:59,024] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-41 from NewReplica to OnlineReplica (state.change.logger)
grafana | logger=migrator t=2024-04-25T14:22:31.11271873Z level=info msg="Executing migration" id="drop index UQE_dashboard_public_config_uid - v2"
policy-pap | [2024-04-25T14:22:58.571+00:00|INFO|Metadata|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Cluster ID: lFyKLv7sTJO7XXtTZrPgZw
policy-db-migrator | ALTER TABLE pdpstatistics DROP PRIMARY KEY
kafka | [2024-04-25 14:22:59,024] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-38 from NewReplica to OnlineReplica (state.change.logger)
grafana | logger=migrator t=2024-04-25T14:22:31.114018618Z level=info msg="Migration successfully executed" id="drop index UQE_dashboard_public_config_uid - v2" duration=1.299688ms
policy-pap | [2024-04-25T14:22:58.572+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 2 : {policy-pdp-pap=UNKNOWN_TOPIC_OR_PARTITION}
policy-db-migrator | --------------
kafka | [2024-04-25 14:22:59,024] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-8 from NewReplica to OnlineReplica (state.change.logger)
grafana | logger=migrator t=2024-04-25T14:22:31.120645208Z level=info msg="Executing migration" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v2"
policy-pap | [2024-04-25T14:22:58.573+00:00|INFO|Metadata|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Cluster ID: lFyKLv7sTJO7XXtTZrPgZw
policy-db-migrator |
kafka | [2024-04-25 14:22:59,024] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-7 from NewReplica to OnlineReplica (state.change.logger)
grafana | logger=migrator t=2024-04-25T14:22:31.122331981Z level=info msg="Migration successfully executed" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v2" duration=1.686583ms
policy-pap | [2024-04-25T14:22:58.663+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-b957469a-2969-4bff-8555-1bfe3e4d4da0-3, groupId=b957469a-2969-4bff-8555-1bfe3e4d4da0] Error while fetching metadata with correlation id 2 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
policy-db-migrator | --------------
kafka | [2024-04-25 14:22:59,024] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-33 from NewReplica to OnlineReplica (state.change.logger)
grafana | logger=migrator t=2024-04-25T14:22:31.126963364Z level=info msg="Executing migration" id="Drop public config table"
policy-pap | [2024-04-25T14:22:58.664+00:00|INFO|Metadata|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-b957469a-2969-4bff-8555-1bfe3e4d4da0-3, groupId=b957469a-2969-4bff-8555-1bfe3e4d4da0] Cluster ID: lFyKLv7sTJO7XXtTZrPgZw
policy-db-migrator | ALTER TABLE pdpstatistics ADD CONSTRAINT PK_PDPSTATISTICS PRIMARY KEY (ID)
kafka | [2024-04-25 14:22:59,024] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-25 from NewReplica to OnlineReplica (state.change.logger)
grafana | logger=migrator t=2024-04-25T14:22:31.128263512Z level=info msg="Migration successfully executed" id="Drop public config table" duration=1.299529ms
policy-pap | [2024-04-25T14:22:58.683+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 4 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
policy-db-migrator | --------------
kafka | [2024-04-25 14:22:59,024] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-31 from NewReplica to OnlineReplica (state.change.logger)
grafana | logger=migrator t=2024-04-25T14:22:31.131540336Z level=info msg="Executing migration" id="Recreate dashboard public config v2"
policy-pap | [2024-04-25T14:22:58.702+00:00|INFO|TransactionManager|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] ProducerId set to 0 with epoch 0
policy-db-migrator |
kafka | [2024-04-25 14:22:59,024] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-23 from NewReplica to OnlineReplica (state.change.logger)
grafana | logger=migrator t=2024-04-25T14:22:31.132656671Z level=info msg="Migration successfully executed" id="Recreate dashboard public config v2" duration=1.116115ms
policy-pap | [2024-04-25T14:22:58.702+00:00|INFO|TransactionManager|kafka-producer-network-thread | producer-2] [Producer clientId=producer-2] ProducerId set to 1 with epoch 0
policy-db-migrator |
kafka | [2024-04-25 14:22:59,024] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-10 from NewReplica to OnlineReplica (state.change.logger)
grafana | logger=migrator t=2024-04-25T14:22:31.137325495Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_uid - v2"
policy-pap | [2024-04-25T14:22:58.785+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 6 : {policy-pdp-pap=UNKNOWN_TOPIC_OR_PARTITION}
policy-db-migrator | > upgrade 0180-jpatoscanodetemplate_metadata.sql
kafka | [2024-04-25 14:22:59,024] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-2 from NewReplica to OnlineReplica (state.change.logger)
grafana | logger=migrator t=2024-04-25T14:22:31.138368288Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_uid - v2" duration=1.041793ms
policy-pap | [2024-04-25T14:22:58.846+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-b957469a-2969-4bff-8555-1bfe3e4d4da0-3, groupId=b957469a-2969-4bff-8555-1bfe3e4d4da0] Error while fetching metadata with correlation id 4 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
policy-db-migrator | --------------
kafka | [2024-04-25 14:22:59,024] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-17 from NewReplica to OnlineReplica (state.change.logger)
grafana | logger=migrator t=2024-04-25T14:22:31.142023629Z level=info msg="Executing migration" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v2"
policy-pap | [2024-04-25T14:22:59.836+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Discovered group coordinator kafka:9092 (id: 2147483646 rack: null)
policy-db-migrator | ALTER TABLE jpatoscanodetemplate_metadata MODIFY COLUMN METADATA LONGTEXT
kafka | [2024-04-25 14:22:59,024] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-4 from NewReplica to OnlineReplica (state.change.logger)
grafana | logger=migrator t=2024-04-25T14:22:31.143127543Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v2" duration=1.103684ms
policy-pap | [2024-04-25T14:22:59.842+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] (Re-)joining group
policy-db-migrator | --------------
kafka | [2024-04-25 14:22:59,024] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-15 from NewReplica to OnlineReplica (state.change.logger)
grafana | logger=migrator t=2024-04-25T14:22:31.148226233Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_access_token - v2"
policy-db-migrator |
policy-pap | [2024-04-25T14:22:59.871+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Request joining group due to: need to re-join with the given member-id: consumer-policy-pap-4-7d6a46fb-ce9a-48ca-aab9-0c5f15e0232a
kafka | [2024-04-25 14:22:59,024] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-26 from NewReplica to OnlineReplica (state.change.logger)
grafana | logger=migrator t=2024-04-25T14:22:31.149323197Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_access_token - v2" duration=1.096644ms
policy-db-migrator |
policy-pap | [2024-04-25T14:22:59.872+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Request joining group due to: rebalance failed due to 'The group member needs to have a valid member id before actually entering a consumer group.' (MemberIdRequiredException)
kafka | [2024-04-25 14:22:59,024] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-3 from NewReplica to OnlineReplica (state.change.logger)
grafana | logger=migrator t=2024-04-25T14:22:31.154601869Z level=info msg="Executing migration" id="Rename table dashboard_public_config to dashboard_public - v2"
policy-db-migrator | > upgrade 0100-upgrade.sql
policy-pap | [2024-04-25T14:22:59.872+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] (Re-)joining group
kafka | [2024-04-25 14:22:59,024] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger)
grafana | logger=migrator t=2024-04-25T14:22:31.178869849Z level=info msg="Migration successfully executed" id="Rename table dashboard_public_config to dashboard_public - v2" duration=24.26906ms
policy-db-migrator | --------------
policy-pap | [2024-04-25T14:22:59.886+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-b957469a-2969-4bff-8555-1bfe3e4d4da0-3, groupId=b957469a-2969-4bff-8555-1bfe3e4d4da0] Discovered group coordinator kafka:9092 (id: 2147483646 rack: null)
kafka | [2024-04-25 14:22:59,028] INFO [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 for 50 partitions (state.change.logger)
grafana | logger=migrator t=2024-04-25T14:22:31.181473965Z level=info msg="Executing migration" id="add annotations_enabled column"
policy-db-migrator | select 'upgrade to 1100 completed' as msg
policy-pap | [2024-04-25T14:22:59.887+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-b957469a-2969-4bff-8555-1bfe3e4d4da0-3, groupId=b957469a-2969-4bff-8555-1bfe3e4d4da0] (Re-)joining group
kafka | [2024-04-25 14:22:59,028] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=13, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
grafana | logger=migrator t=2024-04-25T14:22:31.189429343Z level=info msg="Migration successfully executed" id="add annotations_enabled column" duration=7.954568ms
policy-db-migrator | --------------
policy-pap | [2024-04-25T14:22:59.898+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-b957469a-2969-4bff-8555-1bfe3e4d4da0-3, groupId=b957469a-2969-4bff-8555-1bfe3e4d4da0] Request joining group due to: need to re-join with the given member-id: consumer-b957469a-2969-4bff-8555-1bfe3e4d4da0-3-56fea4ad-7919-46ce-ba06-2844c28fe167
kafka | [2024-04-25 14:22:59,028] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=46, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
grafana | logger=migrator t=2024-04-25T14:22:31.196053523Z level=info msg="Executing migration" id="add time_selection_enabled column"
policy-db-migrator |
policy-pap | [2024-04-25T14:22:59.898+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-b957469a-2969-4bff-8555-1bfe3e4d4da0-3, groupId=b957469a-2969-4bff-8555-1bfe3e4d4da0] Request joining group due to: rebalance failed due to 'The group member needs to have a valid member id before actually entering a consumer group.' (MemberIdRequiredException)
kafka | [2024-04-25 14:22:59,028] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=9, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
grafana | logger=migrator t=2024-04-25T14:22:31.204996434Z level=info msg="Migration successfully executed" id="add time_selection_enabled column" duration=8.942281ms
policy-db-migrator | msg
policy-pap | [2024-04-25T14:22:59.898+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-b957469a-2969-4bff-8555-1bfe3e4d4da0-3, groupId=b957469a-2969-4bff-8555-1bfe3e4d4da0] (Re-)joining group
kafka | [2024-04-25 14:22:59,028] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=42, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
grafana | logger=migrator t=2024-04-25T14:22:31.208765676Z level=info msg="Executing migration" id="delete orphaned public dashboards"
policy-db-migrator | upgrade to 1100 completed
policy-pap | [2024-04-25T14:23:02.894+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Successfully joined group with generation Generation{generationId=1, memberId='consumer-policy-pap-4-7d6a46fb-ce9a-48ca-aab9-0c5f15e0232a', protocol='range'}
kafka | [2024-04-25 14:22:59,028] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=21, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
grafana | logger=migrator t=2024-04-25T14:22:31.209007479Z level=info msg="Migration successfully executed" id="delete orphaned public dashboards" duration=239.563µs
policy-db-migrator |
policy-pap | [2024-04-25T14:23:02.904+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-b957469a-2969-4bff-8555-1bfe3e4d4da0-3, groupId=b957469a-2969-4bff-8555-1bfe3e4d4da0] Successfully joined group with generation Generation{generationId=1, memberId='consumer-b957469a-2969-4bff-8555-1bfe3e4d4da0-3-56fea4ad-7919-46ce-ba06-2844c28fe167', protocol='range'}
kafka | [2024-04-25 14:22:59,028] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=17, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
grafana | logger=migrator t=2024-04-25T14:22:31.213303337Z level=info msg="Executing migration" id="add share column"
policy-db-migrator | > upgrade 0100-jpapolicyaudit_renameuser.sql
policy-pap | [2024-04-25T14:23:02.906+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Finished assignment for group at generation 1: {consumer-policy-pap-4-7d6a46fb-ce9a-48ca-aab9-0c5f15e0232a=Assignment(partitions=[policy-pdp-pap-0])}
kafka | [2024-04-25 14:22:59,028] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=30, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
grafana | logger=migrator t=2024-04-25T14:22:31.221704421Z level=info msg="Migration successfully executed" id="add share column" duration=8.400564ms
policy-db-migrator | --------------
policy-pap | [2024-04-25T14:23:02.907+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-b957469a-2969-4bff-8555-1bfe3e4d4da0-3, groupId=b957469a-2969-4bff-8555-1bfe3e4d4da0] Finished assignment for group at generation 1: {consumer-b957469a-2969-4bff-8555-1bfe3e4d4da0-3-56fea4ad-7919-46ce-ba06-2844c28fe167=Assignment(partitions=[policy-pdp-pap-0])}
kafka | [2024-04-25 14:22:59,028] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=26, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
grafana | logger=migrator t=2024-04-25T14:22:31.225080527Z level=info msg="Executing migration" id="backfill empty share column fields with default of public"
policy-db-migrator | ALTER TABLE jpapolicyaudit RENAME COLUMN USER TO USERNAME
policy-pap | [2024-04-25T14:23:02.947+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-b957469a-2969-4bff-8555-1bfe3e4d4da0-3, groupId=b957469a-2969-4bff-8555-1bfe3e4d4da0] Successfully synced group in generation Generation{generationId=1, memberId='consumer-b957469a-2969-4bff-8555-1bfe3e4d4da0-3-56fea4ad-7919-46ce-ba06-2844c28fe167', protocol='range'}
kafka | [2024-04-25 14:22:59,028] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=5, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
grafana | logger=migrator t=2024-04-25T14:22:31.22529331Z level=info msg="Migration successfully executed" id="backfill empty share column fields with default of public" duration=212.083µs
policy-db-migrator | --------------
policy-pap | [2024-04-25T14:23:02.948+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-b957469a-2969-4bff-8555-1bfe3e4d4da0-3, groupId=b957469a-2969-4bff-8555-1bfe3e4d4da0] Notifying assignor about the new Assignment(partitions=[policy-pdp-pap-0])
kafka | [2024-04-25 14:22:59,028] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=38, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
grafana | logger=migrator t=2024-04-25T14:22:31.230386769Z level=info msg="Executing migration" id="create file table"
policy-db-migrator |
policy-pap | [2024-04-25T14:23:02.952+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Successfully synced group in generation Generation{generationId=1, memberId='consumer-policy-pap-4-7d6a46fb-ce9a-48ca-aab9-0c5f15e0232a', protocol='range'}
kafka | [2024-04-25 14:22:59,028] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=1, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
grafana | logger=migrator t=2024-04-25T14:22:31.231329652Z level=info msg="Migration successfully executed" id="create file table" duration=942.323µs
policy-db-migrator |
policy-pap | [2024-04-25T14:23:02.953+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Notifying assignor about the new Assignment(partitions=[policy-pdp-pap-0])
grafana | logger=migrator t=2024-04-25T14:22:31.235144124Z level=info msg="Executing migration" id="file table idx: path natural pk"
policy-db-migrator | > upgrade 0110-idx_tsidx1.sql
policy-pap | [2024-04-25T14:23:02.953+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-b957469a-2969-4bff-8555-1bfe3e4d4da0-3, groupId=b957469a-2969-4bff-8555-1bfe3e4d4da0] Adding newly assigned partitions: policy-pdp-pap-0
kafka | [2024-04-25 14:22:59,028] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=34, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
grafana | logger=migrator t=2024-04-25T14:22:31.236698985Z level=info msg="Migration successfully executed" id="file table idx: path natural pk" duration=1.55297ms
policy-db-migrator | --------------
policy-pap | [2024-04-25T14:23:02.953+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Adding newly assigned partitions: policy-pdp-pap-0
kafka | [2024-04-25 14:22:59,028] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=16, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
grafana | logger=migrator t=2024-04-25T14:22:31.241946166Z level=info msg="Executing migration" id="file table idx: parent_folder_path_hash fast folder retrieval"
policy-db-migrator | DROP INDEX IDX_TSIDX1 ON pdpstatistics
policy-pap | [2024-04-25T14:23:02.996+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Found no committed offset for partition policy-pdp-pap-0
kafka | [2024-04-25 14:22:59,028] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=45, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
grafana | logger=migrator t=2024-04-25T14:22:31.243918493Z level=info msg="Migration successfully executed" id="file table idx: parent_folder_path_hash fast folder retrieval" duration=1.971687ms
policy-db-migrator | --------------
policy-pap | [2024-04-25T14:23:02.997+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-b957469a-2969-4bff-8555-1bfe3e4d4da0-3, groupId=b957469a-2969-4bff-8555-1bfe3e4d4da0] Found no committed offset for partition policy-pdp-pap-0
kafka | [2024-04-25 14:22:59,028] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=12, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
grafana | logger=migrator t=2024-04-25T14:22:31.25030005Z level=info msg="Executing migration" id="create file_meta table"
policy-db-migrator |
policy-pap | [2024-04-25T14:23:03.032+00:00|INFO|SubscriptionState|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Resetting offset for partition policy-pdp-pap-0 to position FetchPosition{offset=1, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[kafka:9092 (id: 1 rack: null)], epoch=0}}.
kafka | [2024-04-25 14:22:59,028] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=41, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) grafana | logger=migrator t=2024-04-25T14:22:31.251080241Z level=info msg="Migration successfully executed" id="create file_meta table" duration=779.872µs policy-db-migrator | -------------- policy-pap | [2024-04-25T14:23:03.032+00:00|INFO|SubscriptionState|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-b957469a-2969-4bff-8555-1bfe3e4d4da0-3, groupId=b957469a-2969-4bff-8555-1bfe3e4d4da0] Resetting offset for partition policy-pdp-pap-0 to position FetchPosition{offset=1, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[kafka:9092 (id: 1 rack: null)], epoch=0}}. kafka | [2024-04-25 14:22:59,028] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=24, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) grafana | logger=migrator t=2024-04-25T14:22:31.258600002Z level=info msg="Executing migration" id="file table idx: path key" policy-db-migrator | CREATE INDEX IDXTSIDX1 ON pdpstatistics(timeStamp, name, version) policy-pap | [2024-04-25T14:23:04.634+00:00|INFO|[/policy/pap/v1]|http-nio-6969-exec-3] Initializing Spring DispatcherServlet 'dispatcherServlet' kafka | [2024-04-25 14:22:59,028] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=20, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], 
removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) grafana | logger=migrator t=2024-04-25T14:22:31.260339826Z level=info msg="Migration successfully executed" id="file table idx: path key" duration=1.739314ms policy-db-migrator | -------------- policy-pap | [2024-04-25T14:23:04.634+00:00|INFO|DispatcherServlet|http-nio-6969-exec-3] Initializing Servlet 'dispatcherServlet' kafka | [2024-04-25 14:22:59,028] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=49, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) grafana | logger=migrator t=2024-04-25T14:22:31.265813241Z level=info msg="Executing migration" id="set path collation in file table" policy-db-migrator | policy-pap | [2024-04-25T14:23:04.637+00:00|INFO|DispatcherServlet|http-nio-6969-exec-3] Completed initialization in 3 ms kafka | [2024-04-25 14:22:59,028] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) grafana | logger=migrator t=2024-04-25T14:22:31.265878732Z level=info msg="Migration successfully executed" id="set path collation in file table" duration=65.911µs policy-db-migrator | policy-pap | [2024-04-25T14:23:19.405+00:00|INFO|OrderedServiceImpl|KAFKA-source-policy-heartbeat] ***** OrderedServiceImpl implementers: kafka | [2024-04-25 14:22:59,028] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=29, controllerEpoch=1, 
leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) grafana | logger=migrator t=2024-04-25T14:22:31.270175269Z level=info msg="Executing migration" id="migrate contents column to mediumblob for MySQL" policy-db-migrator | > upgrade 0120-audit_sequence.sql policy-pap | [] kafka | [2024-04-25 14:22:59,028] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=25, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) grafana | logger=migrator t=2024-04-25T14:22:31.27023964Z level=info msg="Migration successfully executed" id="migrate contents column to mediumblob for MySQL" duration=64.611µs policy-db-migrator | -------------- policy-pap | [2024-04-25T14:23:19.406+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] kafka | [2024-04-25 14:22:59,028] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=8, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) grafana | logger=migrator t=2024-04-25T14:22:31.276429435Z level=info msg="Executing migration" id="managed permissions migration" policy-db-migrator | CREATE TABLE IF NOT EXISTS audit_sequence (SEQ_NAME VARCHAR(50) NOT NULL, SEQ_COUNT DECIMAL(38) DEFAULT NULL, PRIMARY KEY PK_SEQUENCE (SEQ_NAME)) policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp 
Heartbeat","messageName":"PDP_STATUS","requestId":"550bf7bf-5870-4f3b-a328-6d7a1c64d750","timestampMs":1714054999364,"name":"apex-80274cf2-35d0-404d-a495-e62b89ee6834","pdpGroup":"defaultGroup"} kafka | [2024-04-25 14:22:59,029] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=37, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) grafana | logger=migrator t=2024-04-25T14:22:31.277283027Z level=info msg="Migration successfully executed" id="managed permissions migration" duration=856.732µs policy-db-migrator | -------------- policy-pap | [2024-04-25T14:23:19.406+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] kafka | [2024-04-25 14:22:59,029] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=4, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) grafana | logger=migrator t=2024-04-25T14:22:31.282324745Z level=info msg="Executing migration" id="managed folder permissions alert actions migration" policy-db-migrator | policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"550bf7bf-5870-4f3b-a328-6d7a1c64d750","timestampMs":1714054999364,"name":"apex-80274cf2-35d0-404d-a495-e62b89ee6834","pdpGroup":"defaultGroup"} kafka | [2024-04-25 14:22:59,029] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=33, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], 
removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) grafana | logger=migrator t=2024-04-25T14:22:31.28265198Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions migration" duration=327.635µs policy-db-migrator | -------------- policy-pap | [2024-04-25T14:23:19.417+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-pdp-pap] no listeners for autonomous message of type PdpStatus kafka | [2024-04-25 14:22:59,029] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=15, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) grafana | logger=migrator t=2024-04-25T14:22:31.286123787Z level=info msg="Executing migration" id="RBAC action name migrator" policy-db-migrator | INSERT INTO audit_sequence(SEQ_NAME, SEQ_COUNT) VALUES('SEQ_GEN', (SELECT IFNULL(max(id),0) FROM jpapolicyaudit)) policy-pap | [2024-04-25T14:23:19.501+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-80274cf2-35d0-404d-a495-e62b89ee6834 PdpUpdate starting kafka | [2024-04-25 14:22:59,029] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=48, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) grafana | logger=migrator t=2024-04-25T14:22:31.288180825Z level=info msg="Migration successfully executed" id="RBAC action name migrator" duration=2.056719ms policy-db-migrator | -------------- kafka | [2024-04-25 14:22:59,029] TRACE [Broker id=1] Received LeaderAndIsr request 
LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=11, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) policy-pap | [2024-04-25T14:23:19.502+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-80274cf2-35d0-404d-a495-e62b89ee6834 PdpUpdate starting listener grafana | logger=migrator t=2024-04-25T14:22:31.291086084Z level=info msg="Executing migration" id="Add UID column to playlist" policy-db-migrator | kafka | [2024-04-25 14:22:59,029] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=44, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) policy-pap | [2024-04-25T14:23:19.502+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-80274cf2-35d0-404d-a495-e62b89ee6834 PdpUpdate starting timer grafana | logger=migrator t=2024-04-25T14:22:31.300053406Z level=info msg="Migration successfully executed" id="Add UID column to playlist" duration=8.966502ms policy-db-migrator | kafka | [2024-04-25 14:22:59,029] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=23, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) policy-pap | [2024-04-25T14:23:19.503+00:00|INFO|TimerManager|KAFKA-source-policy-heartbeat] update timer registered Timer [name=f38f5279-b344-4d66-86a2-21ebfb9d4e55, expireMs=1714055029503] grafana | logger=migrator t=2024-04-25T14:22:31.305837325Z level=info 
msg="Executing migration" id="Update uid column values in playlist" policy-pap | [2024-04-25T14:23:19.505+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-80274cf2-35d0-404d-a495-e62b89ee6834 PdpUpdate starting enqueue policy-db-migrator | > upgrade 0130-statistics_sequence.sql kafka | [2024-04-25 14:22:59,029] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=19, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) grafana | logger=migrator t=2024-04-25T14:22:31.305997197Z level=info msg="Migration successfully executed" id="Update uid column values in playlist" duration=160.153µs policy-pap | [2024-04-25T14:23:19.505+00:00|INFO|TimerManager|Thread-9] update timer waiting 29998ms Timer [name=f38f5279-b344-4d66-86a2-21ebfb9d4e55, expireMs=1714055029503] policy-db-migrator | -------------- kafka | [2024-04-25 14:22:59,029] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=32, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) grafana | logger=migrator t=2024-04-25T14:22:31.313062573Z level=info msg="Executing migration" id="Add index for uid in playlist" policy-pap | [2024-04-25T14:23:19.505+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-80274cf2-35d0-404d-a495-e62b89ee6834 PdpUpdate started policy-db-migrator | CREATE TABLE IF NOT EXISTS statistics_sequence (SEQ_NAME VARCHAR(50) NOT NULL, SEQ_COUNT DECIMAL(38) DEFAULT NULL, PRIMARY KEY PK_SEQUENCE (SEQ_NAME)) kafka | [2024-04-25 14:22:59,029] TRACE [Broker id=1] Received LeaderAndIsr request 
LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=28, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) grafana | logger=migrator t=2024-04-25T14:22:31.314903748Z level=info msg="Migration successfully executed" id="Add index for uid in playlist" duration=1.844005ms policy-pap | [2024-04-25T14:23:19.507+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap] policy-db-migrator | -------------- kafka | [2024-04-25 14:22:59,029] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=7, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) grafana | logger=migrator t=2024-04-25T14:22:31.319003694Z level=info msg="Executing migration" id="update group index for alert rules" policy-pap | {"source":"pap-b43ecee3-a99c-4739-8071-6199e9c3e680","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"messageName":"PDP_UPDATE","requestId":"f38f5279-b344-4d66-86a2-21ebfb9d4e55","timestampMs":1714054999480,"name":"apex-80274cf2-35d0-404d-a495-e62b89ee6834","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-db-migrator | kafka | [2024-04-25 14:22:59,029] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=40, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) grafana | logger=migrator t=2024-04-25T14:22:31.319693253Z level=info msg="Migration successfully executed" id="update group index for alert 
rules" duration=691.179µs policy-pap | [2024-04-25T14:23:19.544+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-db-migrator | -------------- kafka | [2024-04-25 14:22:59,029] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=3, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) grafana | logger=migrator t=2024-04-25T14:22:31.327708112Z level=info msg="Executing migration" id="managed folder permissions alert actions repeated migration" policy-pap | {"source":"pap-b43ecee3-a99c-4739-8071-6199e9c3e680","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"messageName":"PDP_UPDATE","requestId":"f38f5279-b344-4d66-86a2-21ebfb9d4e55","timestampMs":1714054999480,"name":"apex-80274cf2-35d0-404d-a495-e62b89ee6834","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-db-migrator | INSERT INTO statistics_sequence(SEQ_NAME, SEQ_COUNT) VALUES('SEQ_GEN', (SELECT IFNULL(max(id),0) FROM pdpstatistics)) kafka | [2024-04-25 14:22:59,029] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=36, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) grafana | logger=migrator t=2024-04-25T14:22:31.328038847Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions repeated migration" duration=331.495µs policy-pap | [2024-04-25T14:23:19.545+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_UPDATE policy-db-migrator | -------------- kafka | [2024-04-25 14:22:59,029] TRACE [Broker id=1] 
Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=47, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) grafana | logger=migrator t=2024-04-25T14:22:31.332227424Z level=info msg="Executing migration" id="admin only folder/dashboard permission" policy-pap | [2024-04-25T14:23:19.547+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] policy-db-migrator | kafka | [2024-04-25 14:22:59,029] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=14, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) grafana | logger=migrator t=2024-04-25T14:22:31.332993064Z level=info msg="Migration successfully executed" id="admin only folder/dashboard permission" duration=765.44µs policy-pap | {"source":"pap-b43ecee3-a99c-4739-8071-6199e9c3e680","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"messageName":"PDP_UPDATE","requestId":"f38f5279-b344-4d66-86a2-21ebfb9d4e55","timestampMs":1714054999480,"name":"apex-80274cf2-35d0-404d-a495-e62b89ee6834","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-db-migrator | -------------- kafka | [2024-04-25 14:22:59,030] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=43, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) grafana | logger=migrator t=2024-04-25T14:22:31.336337209Z level=info 
msg="Executing migration" id="add action column to seed_assignment" policy-pap | [2024-04-25T14:23:19.547+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_UPDATE policy-db-migrator | TRUNCATE TABLE sequence kafka | [2024-04-25 14:22:59,030] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=10, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) grafana | logger=migrator t=2024-04-25T14:22:31.345763597Z level=info msg="Migration successfully executed" id="add action column to seed_assignment" duration=9.425678ms policy-pap | [2024-04-25T14:23:19.567+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-db-migrator | -------------- kafka | [2024-04-25 14:22:59,030] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=22, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) grafana | logger=migrator t=2024-04-25T14:22:31.351722119Z level=info msg="Executing migration" id="add scope column to seed_assignment" policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"e4357cfc-69c4-4da1-be8c-522d64c2326f","timestampMs":1714054999558,"name":"apex-80274cf2-35d0-404d-a495-e62b89ee6834","pdpGroup":"defaultGroup"} policy-db-migrator | kafka | [2024-04-25 14:22:59,030] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=18, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], 
partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) grafana | logger=migrator t=2024-04-25T14:22:31.360708371Z level=info msg="Migration successfully executed" id="add scope column to seed_assignment" duration=8.986102ms policy-pap | [2024-04-25T14:23:19.568+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-pdp-pap] no listeners for autonomous message of type PdpStatus policy-db-migrator | kafka | [2024-04-25 14:22:59,030] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=31, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) grafana | logger=migrator t=2024-04-25T14:22:31.367523843Z level=info msg="Executing migration" id="remove unique index builtin_role_role_name before nullable update" policy-pap | [2024-04-25T14:23:19.568+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-db-migrator | > upgrade 0100-pdpstatistics.sql kafka | [2024-04-25 14:22:59,030] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=27, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"f38f5279-b344-4d66-86a2-21ebfb9d4e55","responseStatus":"SUCCESS","responseMessage":"Pdp update 
successful."},"messageName":"PDP_STATUS","requestId":"c12a1405-928e-4481-a978-984303d383c8","timestampMs":1714054999559,"name":"apex-80274cf2-35d0-404d-a495-e62b89ee6834","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} grafana | logger=migrator t=2024-04-25T14:22:31.368281803Z level=info msg="Migration successfully executed" id="remove unique index builtin_role_role_name before nullable update" duration=758.02µs policy-db-migrator | -------------- kafka | [2024-04-25 14:22:59,030] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=39, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) policy-pap | [2024-04-25T14:23:19.569+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-80274cf2-35d0-404d-a495-e62b89ee6834 PdpUpdate stopping grafana | logger=migrator t=2024-04-25T14:22:31.37096214Z level=info msg="Executing migration" id="update seed_assignment role_name column to nullable" policy-db-migrator | DROP INDEX IDXTSIDX1 ON pdpstatistics kafka | [2024-04-25 14:22:59,030] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=6, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) policy-pap | [2024-04-25T14:23:19.570+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-80274cf2-35d0-404d-a495-e62b89ee6834 PdpUpdate stopping enqueue grafana | logger=migrator t=2024-04-25T14:22:31.443782489Z level=info msg="Migration successfully executed" id="update seed_assignment role_name column to nullable" duration=72.821129ms policy-db-migrator | -------------- kafka | [2024-04-25 
14:22:59,030] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=35, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) policy-pap | [2024-04-25T14:23:19.570+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-80274cf2-35d0-404d-a495-e62b89ee6834 PdpUpdate stopping timer grafana | logger=migrator t=2024-04-25T14:22:31.448855778Z level=info msg="Executing migration" id="add unique index builtin_role_name back" policy-db-migrator | kafka | [2024-04-25 14:22:59,030] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=2, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) policy-pap | [2024-04-25T14:23:19.570+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer cancelled Timer [name=f38f5279-b344-4d66-86a2-21ebfb9d4e55, expireMs=1714055029503] grafana | logger=migrator t=2024-04-25T14:22:31.449652399Z level=info msg="Migration successfully executed" id="add unique index builtin_role_name back" duration=796.251µs policy-db-migrator | -------------- kafka | [2024-04-25 14:22:59,042] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-3 (state.change.logger) policy-pap | [2024-04-25T14:23:19.570+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-80274cf2-35d0-404d-a495-e62b89ee6834 PdpUpdate stopping listener grafana | logger=migrator t=2024-04-25T14:22:31.452697681Z level=info msg="Executing migration" id="add unique index builtin_role_action_scope" 
policy-db-migrator | DROP TABLE pdpstatistics kafka | [2024-04-25 14:22:59,042] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-18 (state.change.logger) policy-pap | [2024-04-25T14:23:19.570+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-80274cf2-35d0-404d-a495-e62b89ee6834 PdpUpdate stopped grafana | logger=migrator t=2024-04-25T14:22:31.453517512Z level=info msg="Migration successfully executed" id="add unique index builtin_role_action_scope" duration=819.321µs policy-db-migrator | -------------- kafka | [2024-04-25 14:22:59,042] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-41 (state.change.logger) policy-pap | [2024-04-25T14:23:19.573+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] grafana | logger=migrator t=2024-04-25T14:22:31.459442842Z level=info msg="Executing migration" id="add primary key to seed_assigment" policy-db-migrator | policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"e4357cfc-69c4-4da1-be8c-522d64c2326f","timestampMs":1714054999558,"name":"apex-80274cf2-35d0-404d-a495-e62b89ee6834","pdpGroup":"defaultGroup"} grafana | logger=migrator t=2024-04-25T14:22:31.485638908Z level=info msg="Migration successfully executed" id="add primary key to seed_assigment" duration=26.197416ms policy-db-migrator | kafka | [2024-04-25 14:22:59,042] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-10 (state.change.logger) policy-pap | [2024-04-25T14:23:19.578+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] apex-80274cf2-35d0-404d-a495-e62b89ee6834 PdpUpdate successful grafana | logger=migrator 
t=2024-04-25T14:22:31.580385576Z level=info msg="Executing migration" id="add origin column to seed_assignment" policy-db-migrator | > upgrade 0110-jpapdpstatistics_enginestats.sql kafka | [2024-04-25 14:22:59,042] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-33 (state.change.logger) policy-pap | [2024-04-25T14:23:19.578+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] apex-80274cf2-35d0-404d-a495-e62b89ee6834 start publishing next request grafana | logger=migrator t=2024-04-25T14:22:31.591805261Z level=info msg="Migration successfully executed" id="add origin column to seed_assignment" duration=11.421235ms policy-db-migrator | -------------- kafka | [2024-04-25 14:22:59,042] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-48 (state.change.logger) policy-pap | [2024-04-25T14:23:19.578+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-80274cf2-35d0-404d-a495-e62b89ee6834 PdpStateChange starting grafana | logger=migrator t=2024-04-25T14:22:31.596838729Z level=info msg="Executing migration" id="add origin to plugin seed_assignment" policy-db-migrator | DROP TABLE jpapdpstatistics_enginestats kafka | [2024-04-25 14:22:59,042] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-19 (state.change.logger) policy-pap | [2024-04-25T14:23:19.578+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-80274cf2-35d0-404d-a495-e62b89ee6834 PdpStateChange starting listener grafana | logger=migrator t=2024-04-25T14:22:31.597159824Z level=info msg="Migration successfully executed" id="add origin to plugin seed_assignment" duration=321.355µs policy-db-migrator | -------------- kafka | [2024-04-25 14:22:59,042] TRACE [Broker 
id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-34 (state.change.logger) policy-pap | [2024-04-25T14:23:19.579+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-80274cf2-35d0-404d-a495-e62b89ee6834 PdpStateChange starting timer grafana | logger=migrator t=2024-04-25T14:22:31.603004413Z level=info msg="Executing migration" id="prevent seeding OnCall access" policy-db-migrator | kafka | [2024-04-25 14:22:59,042] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-4 (state.change.logger) policy-pap | [2024-04-25T14:23:19.579+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] state-change timer registered Timer [name=ecd1b796-0fe4-44b0-a7d5-d9c405fda44a, expireMs=1714055029579] grafana | logger=migrator t=2024-04-25T14:22:31.603180335Z level=info msg="Migration successfully executed" id="prevent seeding OnCall access" duration=176.872µs policy-db-migrator | kafka | [2024-04-25 14:22:59,042] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-11 (state.change.logger) policy-pap | [2024-04-25T14:23:19.579+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-80274cf2-35d0-404d-a495-e62b89ee6834 PdpStateChange starting enqueue grafana | logger=migrator t=2024-04-25T14:22:31.609840176Z level=info msg="Executing migration" id="managed folder permissions alert actions repeated fixed migration" policy-db-migrator | > upgrade 0120-statistics_sequence.sql kafka | [2024-04-25 14:22:59,042] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-26 (state.change.logger) policy-pap | 
[2024-04-25T14:23:19.579+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-80274cf2-35d0-404d-a495-e62b89ee6834 PdpStateChange started policy-db-migrator | -------------- kafka | [2024-04-25 14:22:59,042] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-49 (state.change.logger) grafana | logger=migrator t=2024-04-25T14:22:31.610167181Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions repeated fixed migration" duration=325.295µs policy-pap | [2024-04-25T14:23:19.579+00:00|INFO|TimerManager|Thread-10] state-change timer waiting 30000ms Timer [name=ecd1b796-0fe4-44b0-a7d5-d9c405fda44a, expireMs=1714055029579] policy-db-migrator | DROP TABLE statistics_sequence kafka | [2024-04-25 14:22:59,042] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-39 (state.change.logger) grafana | logger=migrator t=2024-04-25T14:22:31.614319217Z level=info msg="Executing migration" id="managed folder permissions library panel actions migration" policy-pap | [2024-04-25T14:23:19.580+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap] policy-db-migrator | -------------- kafka | [2024-04-25 14:22:59,042] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-9 (state.change.logger) grafana | logger=migrator t=2024-04-25T14:22:31.614651501Z level=info msg="Migration successfully executed" id="managed folder permissions library panel actions migration" duration=332.824µs policy-pap | 
{"source":"pap-b43ecee3-a99c-4739-8071-6199e9c3e680","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"ecd1b796-0fe4-44b0-a7d5-d9c405fda44a","timestampMs":1714054999481,"name":"apex-80274cf2-35d0-404d-a495-e62b89ee6834","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-db-migrator | kafka | [2024-04-25 14:22:59,042] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-24 (state.change.logger) grafana | logger=migrator t=2024-04-25T14:22:31.620091605Z level=info msg="Executing migration" id="migrate external alertmanagers to datsourcse" policy-pap | [2024-04-25T14:23:19.681+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-db-migrator | policyadmin: OK: upgrade (1300) kafka | [2024-04-25 14:22:59,043] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-31 (state.change.logger) grafana | logger=migrator t=2024-04-25T14:22:31.62043289Z level=info msg="Migration successfully executed" id="migrate external alertmanagers to datsourcse" duration=341.755µs policy-pap | {"source":"pap-b43ecee3-a99c-4739-8071-6199e9c3e680","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"ecd1b796-0fe4-44b0-a7d5-d9c405fda44a","timestampMs":1714054999481,"name":"apex-80274cf2-35d0-404d-a495-e62b89ee6834","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-db-migrator | name version grafana | logger=migrator t=2024-04-25T14:22:31.624887681Z level=info msg="Executing migration" id="create folder table" policy-pap | [2024-04-25T14:23:19.681+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATE_CHANGE policy-db-migrator | policyadmin 1300 kafka | [2024-04-25 14:22:59,043] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting 
the become-leader transition for partition __consumer_offsets-46 (state.change.logger) grafana | logger=migrator t=2024-04-25T14:22:31.626335121Z level=info msg="Migration successfully executed" id="create folder table" duration=1.447191ms policy-pap | [2024-04-25T14:23:19.686+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-db-migrator | ID script operation from_version to_version tag success atTime kafka | [2024-04-25 14:22:59,043] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-1 (state.change.logger) grafana | logger=migrator t=2024-04-25T14:22:31.632850179Z level=info msg="Executing migration" id="Add index for parent_uid" policy-pap | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpStateChange","policies":[],"response":{"responseTo":"ecd1b796-0fe4-44b0-a7d5-d9c405fda44a","responseStatus":"SUCCESS","responseMessage":"State changed to active. 
No policies found."},"messageName":"PDP_STATUS","requestId":"43df1ced-174a-4736-a982-b481c53f90de","timestampMs":1714054999595,"name":"apex-80274cf2-35d0-404d-a495-e62b89ee6834","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-db-migrator | 1 0100-jpapdpgroup_properties.sql upgrade 0 0800 2504241422170800u 1 2024-04-25 14:22:17 kafka | [2024-04-25 14:22:59,043] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-16 (state.change.logger) grafana | logger=migrator t=2024-04-25T14:22:31.634803456Z level=info msg="Migration successfully executed" id="Add index for parent_uid" duration=1.952947ms policy-db-migrator | 2 0110-jpapdpstatistics_enginestats.sql upgrade 0 0800 2504241422170800u 1 2024-04-25 14:22:17 policy-pap | [2024-04-25T14:23:19.688+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-80274cf2-35d0-404d-a495-e62b89ee6834 PdpStateChange stopping kafka | [2024-04-25 14:22:59,043] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-2 (state.change.logger) grafana | logger=migrator t=2024-04-25T14:22:31.642704292Z level=info msg="Executing migration" id="Add unique index for folder.uid and folder.org_id" policy-db-migrator | 3 0120-jpapdpsubgroup_policies.sql upgrade 0 0800 2504241422170800u 1 2024-04-25 14:22:17 policy-pap | [2024-04-25T14:23:19.688+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] kafka | [2024-04-25 14:22:59,043] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-25 (state.change.logger) grafana | logger=migrator t=2024-04-25T14:22:31.644663109Z level=info msg="Migration successfully executed" id="Add unique index for folder.uid and folder.org_id" duration=1.958347ms 
policy-db-migrator | 4 0130-jpapdpsubgroup_properties.sql upgrade 0 0800 2504241422170800u 1 2024-04-25 14:22:17 policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"f38f5279-b344-4d66-86a2-21ebfb9d4e55","responseStatus":"SUCCESS","responseMessage":"Pdp update successful."},"messageName":"PDP_STATUS","requestId":"c12a1405-928e-4481-a978-984303d383c8","timestampMs":1714054999559,"name":"apex-80274cf2-35d0-404d-a495-e62b89ee6834","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} kafka | [2024-04-25 14:22:59,043] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-40 (state.change.logger) grafana | logger=migrator t=2024-04-25T14:22:31.64914371Z level=info msg="Executing migration" id="Update folder title length" policy-db-migrator | 5 0140-jpapdpsubgroup_supportedpolicytypes.sql upgrade 0 0800 2504241422170800u 1 2024-04-25 14:22:17 policy-pap | [2024-04-25T14:23:19.688+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-80274cf2-35d0-404d-a495-e62b89ee6834 PdpStateChange stopping enqueue kafka | [2024-04-25 14:22:59,043] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-47 (state.change.logger) grafana | logger=migrator t=2024-04-25T14:22:31.649171741Z level=info msg="Migration successfully executed" id="Update folder title length" duration=28.871µs policy-db-migrator | 6 0150-jpatoscacapabilityassignment_attributes.sql upgrade 0 0800 2504241422170800u 1 2024-04-25 14:22:17 policy-pap | [2024-04-25T14:23:19.688+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-80274cf2-35d0-404d-a495-e62b89ee6834 PdpStateChange stopping timer kafka | [2024-04-25 14:22:59,043] TRACE [Broker id=1] Handling LeaderAndIsr request 
correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-17 (state.change.logger) grafana | logger=migrator t=2024-04-25T14:22:31.655622678Z level=info msg="Executing migration" id="Add unique index for folder.title and folder.parent_uid" policy-pap | [2024-04-25T14:23:19.689+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id f38f5279-b344-4d66-86a2-21ebfb9d4e55 kafka | [2024-04-25 14:22:59,043] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-32 (state.change.logger) grafana | logger=migrator t=2024-04-25T14:22:31.656870225Z level=info msg="Migration successfully executed" id="Add unique index for folder.title and folder.parent_uid" duration=1.246237ms policy-db-migrator | 7 0160-jpatoscacapabilityassignment_metadata.sql upgrade 0 0800 2504241422170800u 1 2024-04-25 14:22:17 policy-pap | [2024-04-25T14:23:19.688+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] state-change timer cancelled Timer [name=ecd1b796-0fe4-44b0-a7d5-d9c405fda44a, expireMs=1714055029579] kafka | [2024-04-25 14:22:59,043] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-37 (state.change.logger) grafana | logger=migrator t=2024-04-25T14:22:31.660835519Z level=info msg="Executing migration" id="Remove unique index for folder.title and folder.parent_uid" policy-db-migrator | 8 0170-jpatoscacapabilityassignment_occurrences.sql upgrade 0 0800 2504241422170800u 1 2024-04-25 14:22:17 policy-pap | [2024-04-25T14:23:19.689+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-80274cf2-35d0-404d-a495-e62b89ee6834 PdpStateChange stopping listener kafka | [2024-04-25 14:22:59,043] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 
starting the become-leader transition for partition __consumer_offsets-7 (state.change.logger) grafana | logger=migrator t=2024-04-25T14:22:31.662524892Z level=info msg="Migration successfully executed" id="Remove unique index for folder.title and folder.parent_uid" duration=1.689363ms policy-db-migrator | 9 0180-jpatoscacapabilityassignment_properties.sql upgrade 0 0800 2504241422170800u 1 2024-04-25 14:22:17 policy-pap | [2024-04-25T14:23:19.689+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-80274cf2-35d0-404d-a495-e62b89ee6834 PdpStateChange stopped kafka | [2024-04-25 14:22:59,043] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-22 (state.change.logger) grafana | logger=migrator t=2024-04-25T14:22:31.666562637Z level=info msg="Executing migration" id="Add unique index for title, parent_uid, and org_id" policy-db-migrator | 10 0190-jpatoscacapabilitytype_metadata.sql upgrade 0 0800 2504241422170800u 1 2024-04-25 14:22:17 policy-pap | [2024-04-25T14:23:19.689+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] apex-80274cf2-35d0-404d-a495-e62b89ee6834 PdpStateChange successful kafka | [2024-04-25 14:22:59,043] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-29 (state.change.logger) grafana | logger=migrator t=2024-04-25T14:22:31.667778013Z level=info msg="Migration successfully executed" id="Add unique index for title, parent_uid, and org_id" duration=1.215676ms policy-db-migrator | 11 0200-jpatoscacapabilitytype_properties.sql upgrade 0 0800 2504241422170800u 1 2024-04-25 14:22:17 policy-pap | [2024-04-25T14:23:19.690+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] apex-80274cf2-35d0-404d-a495-e62b89ee6834 start publishing next request kafka | [2024-04-25 14:22:59,043] TRACE [Broker id=1] Handling LeaderAndIsr request 
correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-44 (state.change.logger) grafana | logger=migrator t=2024-04-25T14:22:31.674062389Z level=info msg="Executing migration" id="Sync dashboard and folder table" policy-db-migrator | 12 0210-jpatoscadatatype_constraints.sql upgrade 0 0800 2504241422170800u 1 2024-04-25 14:22:18 policy-pap | [2024-04-25T14:23:19.690+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-80274cf2-35d0-404d-a495-e62b89ee6834 PdpUpdate starting kafka | [2024-04-25 14:22:59,043] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-14 (state.change.logger) grafana | logger=migrator t=2024-04-25T14:22:31.674577556Z level=info msg="Migration successfully executed" id="Sync dashboard and folder table" duration=514.577µs policy-db-migrator | 13 0220-jpatoscadatatype_metadata.sql upgrade 0 0800 2504241422170800u 1 2024-04-25 14:22:18 policy-pap | [2024-04-25T14:23:19.690+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-80274cf2-35d0-404d-a495-e62b89ee6834 PdpUpdate starting listener kafka | [2024-04-25 14:22:59,043] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-23 (state.change.logger) grafana | logger=migrator t=2024-04-25T14:22:31.678738832Z level=info msg="Executing migration" id="Remove ghost folders from the folder table" policy-db-migrator | 14 0230-jpatoscadatatype_properties.sql upgrade 0 0800 2504241422170800u 1 2024-04-25 14:22:18 policy-pap | [2024-04-25T14:23:19.690+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-80274cf2-35d0-404d-a495-e62b89ee6834 PdpUpdate starting timer kafka | [2024-04-25 14:22:59,044] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader 
transition for partition __consumer_offsets-38 (state.change.logger) grafana | logger=migrator t=2024-04-25T14:22:31.679162858Z level=info msg="Migration successfully executed" id="Remove ghost folders from the folder table" duration=423.746µs policy-db-migrator | 15 0240-jpatoscanodetemplate_metadata.sql upgrade 0 0800 2504241422170800u 1 2024-04-25 14:22:18 policy-pap | [2024-04-25T14:23:19.691+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer registered Timer [name=92ed1daf-00dc-46f3-a934-a5b206758853, expireMs=1714055029691] kafka | [2024-04-25 14:22:59,044] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-8 (state.change.logger) grafana | logger=migrator t=2024-04-25T14:22:31.684087915Z level=info msg="Executing migration" id="Remove unique index UQE_folder_uid_org_id" policy-db-migrator | 16 0250-jpatoscanodetemplate_properties.sql upgrade 0 0800 2504241422170800u 1 2024-04-25 14:22:18 policy-pap | [2024-04-25T14:23:19.691+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-80274cf2-35d0-404d-a495-e62b89ee6834 PdpUpdate starting enqueue kafka | [2024-04-25 14:22:59,044] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-45 (state.change.logger) grafana | logger=migrator t=2024-04-25T14:22:31.685809559Z level=info msg="Migration successfully executed" id="Remove unique index UQE_folder_uid_org_id" duration=1.721414ms policy-db-migrator | 17 0260-jpatoscanodetype_metadata.sql upgrade 0 0800 2504241422170800u 1 2024-04-25 14:22:19 policy-pap | [2024-04-25T14:23:19.691+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-80274cf2-35d0-404d-a495-e62b89ee6834 PdpUpdate started kafka | [2024-04-25 14:22:59,044] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the 
become-leader transition for partition __consumer_offsets-15 (state.change.logger) grafana | logger=migrator t=2024-04-25T14:22:31.690980579Z level=info msg="Executing migration" id="Add unique index UQE_folder_org_id_uid" policy-db-migrator | 18 0270-jpatoscanodetype_properties.sql upgrade 0 0800 2504241422170800u 1 2024-04-25 14:22:19 policy-pap | [2024-04-25T14:23:19.692+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap] kafka | [2024-04-25 14:22:59,044] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-30 (state.change.logger) grafana | logger=migrator t=2024-04-25T14:22:31.692384378Z level=info msg="Migration successfully executed" id="Add unique index UQE_folder_org_id_uid" duration=1.402879ms policy-db-migrator | 19 0280-jpatoscapolicy_metadata.sql upgrade 0 0800 2504241422170800u 1 2024-04-25 14:22:19 policy-pap | {"source":"pap-b43ecee3-a99c-4739-8071-6199e9c3e680","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"92ed1daf-00dc-46f3-a934-a5b206758853","timestampMs":1714054999658,"name":"apex-80274cf2-35d0-404d-a495-e62b89ee6834","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} kafka | [2024-04-25 14:22:59,044] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-0 (state.change.logger) grafana | logger=migrator t=2024-04-25T14:22:31.697907743Z level=info msg="Executing migration" id="Remove unique index UQE_folder_title_parent_uid_org_id" policy-db-migrator | 20 0290-jpatoscapolicy_properties.sql upgrade 0 0800 2504241422170800u 1 2024-04-25 14:22:20 policy-pap | [2024-04-25T14:23:19.694+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] kafka | [2024-04-25 14:22:59,044] TRACE [Broker id=1] Handling LeaderAndIsr request 
correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-35 (state.change.logger) grafana | logger=migrator t=2024-04-25T14:22:31.698969747Z level=info msg="Migration successfully executed" id="Remove unique index UQE_folder_title_parent_uid_org_id" duration=1.061924ms policy-db-migrator | 21 0300-jpatoscapolicy_targets.sql upgrade 0 0800 2504241422170800u 1 2024-04-25 14:22:20 policy-pap | {"source":"pap-b43ecee3-a99c-4739-8071-6199e9c3e680","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"ecd1b796-0fe4-44b0-a7d5-d9c405fda44a","timestampMs":1714054999481,"name":"apex-80274cf2-35d0-404d-a495-e62b89ee6834","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} grafana | logger=migrator t=2024-04-25T14:22:31.70210637Z level=info msg="Executing migration" id="Add unique index UQE_folder_org_id_parent_uid_title" kafka | [2024-04-25 14:22:59,044] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-5 (state.change.logger) policy-db-migrator | 22 0310-jpatoscapolicytype_metadata.sql upgrade 0 0800 2504241422170800u 1 2024-04-25 14:22:20 policy-pap | [2024-04-25T14:23:19.694+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_STATE_CHANGE grafana | logger=migrator t=2024-04-25T14:22:31.703338897Z level=info msg="Migration successfully executed" id="Add unique index UQE_folder_org_id_parent_uid_title" duration=1.231867ms kafka | [2024-04-25 14:22:59,044] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-20 (state.change.logger) policy-db-migrator | 23 0320-jpatoscapolicytype_properties.sql upgrade 0 0800 2504241422170800u 1 2024-04-25 14:22:21 policy-pap | [2024-04-25T14:23:19.700+00:00|INFO|network|KAFKA-source-policy-heartbeat] 
[IN|KAFKA|policy-heartbeat] grafana | logger=migrator t=2024-04-25T14:22:31.708184443Z level=info msg="Executing migration" id="Remove index IDX_folder_parent_uid_org_id" kafka | [2024-04-25 14:22:59,044] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-27 (state.change.logger) policy-db-migrator | 24 0330-jpatoscapolicytype_targets.sql upgrade 0 0800 2504241422170800u 1 2024-04-25 14:22:21 policy-pap | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpStateChange","policies":[],"response":{"responseTo":"ecd1b796-0fe4-44b0-a7d5-d9c405fda44a","responseStatus":"SUCCESS","responseMessage":"State changed to active. No policies found."},"messageName":"PDP_STATUS","requestId":"43df1ced-174a-4736-a982-b481c53f90de","timestampMs":1714054999595,"name":"apex-80274cf2-35d0-404d-a495-e62b89ee6834","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} grafana | logger=migrator t=2024-04-25T14:22:31.709995567Z level=info msg="Migration successfully executed" id="Remove index IDX_folder_parent_uid_org_id" duration=1.811844ms kafka | [2024-04-25 14:22:59,044] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-42 (state.change.logger) policy-db-migrator | 25 0340-jpatoscapolicytype_triggers.sql upgrade 0 0800 2504241422170800u 1 2024-04-25 14:22:21 policy-pap | [2024-04-25T14:23:19.700+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id ecd1b796-0fe4-44b0-a7d5-d9c405fda44a grafana | logger=migrator t=2024-04-25T14:22:31.713912801Z level=info msg="Executing migration" id="create anon_device table" kafka | [2024-04-25 14:22:59,044] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition 
__consumer_offsets-12 (state.change.logger) policy-db-migrator | 26 0350-jpatoscaproperty_constraints.sql upgrade 0 0800 2504241422170800u 1 2024-04-25 14:22:22 policy-pap | [2024-04-25T14:23:19.703+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] kafka | [2024-04-25 14:22:59,044] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-21 (state.change.logger) policy-db-migrator | 27 0360-jpatoscaproperty_metadata.sql upgrade 0 0800 2504241422170800u 1 2024-04-25 14:22:22 grafana | logger=migrator t=2024-04-25T14:22:31.715498212Z level=info msg="Migration successfully executed" id="create anon_device table" duration=1.584691ms policy-pap | {"source":"pap-b43ecee3-a99c-4739-8071-6199e9c3e680","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"92ed1daf-00dc-46f3-a934-a5b206758853","timestampMs":1714054999658,"name":"apex-80274cf2-35d0-404d-a495-e62b89ee6834","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} kafka | [2024-04-25 14:22:59,044] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-36 (state.change.logger) policy-db-migrator | 28 0370-jpatoscarelationshiptype_metadata.sql upgrade 0 0800 2504241422170800u 1 2024-04-25 14:22:22 grafana | logger=migrator t=2024-04-25T14:22:31.721516274Z level=info msg="Executing migration" id="add unique index anon_device.device_id" policy-pap | [2024-04-25T14:23:19.703+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] kafka | [2024-04-25 14:22:59,044] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-6 (state.change.logger) policy-db-migrator | 29 
0380-jpatoscarelationshiptype_properties.sql upgrade 0 0800 2504241422170800u 1 2024-04-25 14:22:23 grafana | logger=migrator t=2024-04-25T14:22:31.72271337Z level=info msg="Migration successfully executed" id="add unique index anon_device.device_id" duration=1.194666ms policy-pap | {"source":"pap-b43ecee3-a99c-4739-8071-6199e9c3e680","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"92ed1daf-00dc-46f3-a934-a5b206758853","timestampMs":1714054999658,"name":"apex-80274cf2-35d0-404d-a495-e62b89ee6834","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} grafana | logger=migrator t=2024-04-25T14:22:31.728005192Z level=info msg="Executing migration" id="add index anon_device.updated_at" policy-pap | [2024-04-25T14:23:19.703+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_UPDATE kafka | [2024-04-25 14:22:59,044] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-43 (state.change.logger) policy-db-migrator | 30 0390-jpatoscarequirement_metadata.sql upgrade 0 0800 2504241422170800u 1 2024-04-25 14:22:23 grafana | logger=migrator t=2024-04-25T14:22:31.729246288Z level=info msg="Migration successfully executed" id="add index anon_device.updated_at" duration=1.241186ms policy-pap | [2024-04-25T14:23:19.704+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_UPDATE kafka | [2024-04-25 14:22:59,044] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-13 (state.change.logger) policy-db-migrator | 31 0400-jpatoscarequirement_occurrences.sql upgrade 0 0800 2504241422170800u 1 2024-04-25 14:22:23 grafana | logger=migrator t=2024-04-25T14:22:31.73747377Z level=info msg="Executing migration" id="create 
signing_key table" policy-pap | [2024-04-25T14:23:19.718+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] kafka | [2024-04-25 14:22:59,044] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-28 (state.change.logger) policy-db-migrator | 32 0410-jpatoscarequirement_properties.sql upgrade 0 0800 2504241422170800u 1 2024-04-25 14:22:24 grafana | logger=migrator t=2024-04-25T14:22:31.738548436Z level=info msg="Migration successfully executed" id="create signing_key table" duration=1.016055ms policy-pap | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"92ed1daf-00dc-46f3-a934-a5b206758853","responseStatus":"SUCCESS","responseMessage":"Pdp already updated"},"messageName":"PDP_STATUS","requestId":"9306b898-b6a4-4c63-98db-745073d13a5b","timestampMs":1714054999708,"name":"apex-80274cf2-35d0-404d-a495-e62b89ee6834","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} kafka | [2024-04-25 14:22:59,049] INFO [ReplicaFetcherManager on broker 1] Removed fetcher for partitions HashSet(__consumer_offsets-22, __consumer_offsets-30, __consumer_offsets-25, __consumer_offsets-35, __consumer_offsets-37, __consumer_offsets-38, __consumer_offsets-13, __consumer_offsets-8, __consumer_offsets-21, __consumer_offsets-4, __consumer_offsets-27, __consumer_offsets-7, __consumer_offsets-9, __consumer_offsets-46, __consumer_offsets-41, __consumer_offsets-33, __consumer_offsets-23, __consumer_offsets-49, __consumer_offsets-47, __consumer_offsets-16, __consumer_offsets-28, __consumer_offsets-31, __consumer_offsets-36, __consumer_offsets-42, __consumer_offsets-3, __consumer_offsets-18, __consumer_offsets-15, __consumer_offsets-24, __consumer_offsets-17, __consumer_offsets-48, __consumer_offsets-19, __consumer_offsets-11, __consumer_offsets-2, 
__consumer_offsets-43, __consumer_offsets-6, __consumer_offsets-14, __consumer_offsets-20, __consumer_offsets-0, __consumer_offsets-44, __consumer_offsets-39, __consumer_offsets-12, __consumer_offsets-45, __consumer_offsets-1, __consumer_offsets-5, __consumer_offsets-26, __consumer_offsets-29, __consumer_offsets-34, __consumer_offsets-10, __consumer_offsets-32, __consumer_offsets-40) (kafka.server.ReplicaFetcherManager) policy-db-migrator | 33 0420-jpatoscaservicetemplate_metadata.sql upgrade 0 0800 2504241422170800u 1 2024-04-25 14:22:24 grafana | logger=migrator t=2024-04-25T14:22:31.745157045Z level=info msg="Executing migration" id="add unique index signing_key.key_id" policy-pap | [2024-04-25T14:23:19.719+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] kafka | [2024-04-25 14:22:59,057] INFO [Broker id=1] Stopped fetchers as part of LeaderAndIsr request correlationId 3 from controller 1 epoch 1 as part of the become-leader transition for 50 partitions (state.change.logger) policy-db-migrator | 34 0430-jpatoscatopologytemplate_inputs.sql upgrade 0 0800 2504241422170800u 1 2024-04-25 14:22:24 grafana | logger=migrator t=2024-04-25T14:22:31.746879849Z level=info msg="Migration successfully executed" id="add unique index signing_key.key_id" duration=1.717044ms policy-pap | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"92ed1daf-00dc-46f3-a934-a5b206758853","responseStatus":"SUCCESS","responseMessage":"Pdp already updated"},"messageName":"PDP_STATUS","requestId":"9306b898-b6a4-4c63-98db-745073d13a5b","timestampMs":1714054999708,"name":"apex-80274cf2-35d0-404d-a495-e62b89ee6834","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} kafka | [2024-04-25 14:22:59,068] INFO [LogLoader partition=__consumer_offsets-3, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 
policy-db-migrator | 35 0440-pdpgroup_pdpsubgroup.sql upgrade 0 0800 2504241422170800u 1 2024-04-25 14:22:25 grafana | logger=migrator t=2024-04-25T14:22:31.753821973Z level=info msg="Executing migration" id="set legacy alert migration status in kvstore" policy-pap | [2024-04-25T14:23:19.719+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-80274cf2-35d0-404d-a495-e62b89ee6834 PdpUpdate stopping kafka | [2024-04-25 14:22:59,071] INFO Created log for partition __consumer_offsets-3 in /var/lib/kafka/data/__consumer_offsets-3 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) policy-db-migrator | 36 0450-pdpgroup.sql upgrade 0 0800 2504241422170800u 1 2024-04-25 14:22:25 grafana | logger=migrator t=2024-04-25T14:22:31.754926977Z level=info msg="Migration successfully executed" id="set legacy alert migration status in kvstore" duration=1.104714ms policy-pap | [2024-04-25T14:23:19.719+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id 92ed1daf-00dc-46f3-a934-a5b206758853 kafka | [2024-04-25 14:22:59,071] INFO [Partition __consumer_offsets-3 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-3 (kafka.cluster.Partition) policy-db-migrator | 37 0460-pdppolicystatus.sql upgrade 0 0800 2504241422170800u 1 2024-04-25 14:22:25 grafana | logger=migrator t=2024-04-25T14:22:31.760223009Z level=info msg="Executing migration" id="migrate record of created folders during legacy migration to kvstore" policy-pap | [2024-04-25T14:23:19.719+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-80274cf2-35d0-404d-a495-e62b89ee6834 PdpUpdate stopping enqueue kafka | [2024-04-25 14:22:59,072] INFO [Partition __consumer_offsets-3 broker=1] Log loaded for partition __consumer_offsets-3 with initial high watermark 0 (kafka.cluster.Partition) policy-db-migrator | 38 0470-pdp.sql upgrade 0 0800 2504241422170800u 1 2024-04-25 14:22:26 grafana | 
logger=migrator t=2024-04-25T14:22:31.760666796Z level=info msg="Migration successfully executed" id="migrate record of created folders during legacy migration to kvstore" duration=445.167µs policy-pap | [2024-04-25T14:23:19.719+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-80274cf2-35d0-404d-a495-e62b89ee6834 PdpUpdate stopping timer kafka | [2024-04-25 14:22:59,072] INFO [Broker id=1] Leader __consumer_offsets-3 with topic id Some(Z-ljZKLXR-y1QhXAaAKdbg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) policy-db-migrator | 39 0480-pdpstatistics.sql upgrade 0 0800 2504241422170800u 1 2024-04-25 14:22:26 grafana | logger=migrator t=2024-04-25T14:22:31.764504908Z level=info msg="Executing migration" id="Add folder_uid for dashboard" policy-pap | [2024-04-25T14:23:19.719+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer cancelled Timer [name=92ed1daf-00dc-46f3-a934-a5b206758853, expireMs=1714055029691] kafka | [2024-04-25 14:22:59,084] INFO [LogLoader partition=__consumer_offsets-18, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) policy-db-migrator | 40 0490-pdpsubgroup_pdp.sql upgrade 0 0800 2504241422170800u 1 2024-04-25 14:22:26 grafana | logger=migrator t=2024-04-25T14:22:31.775185723Z level=info msg="Migration successfully executed" id="Add folder_uid for dashboard" duration=10.681435ms kafka | [2024-04-25 14:22:59,085] INFO Created log for partition __consumer_offsets-18 in /var/lib/kafka/data/__consumer_offsets-18 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) policy-pap | [2024-04-25T14:23:19.719+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-80274cf2-35d0-404d-a495-e62b89ee6834 PdpUpdate stopping listener 
policy-db-migrator | 41 0500-pdpsubgroup.sql upgrade 0 0800 2504241422170800u 1 2024-04-25 14:22:27 grafana | logger=migrator t=2024-04-25T14:22:31.778797242Z level=info msg="Executing migration" id="Populate dashboard folder_uid column" kafka | [2024-04-25 14:22:59,085] INFO [Partition __consumer_offsets-18 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-18 (kafka.cluster.Partition) policy-pap | [2024-04-25T14:23:19.719+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-80274cf2-35d0-404d-a495-e62b89ee6834 PdpUpdate stopped policy-db-migrator | 42 0510-toscacapabilityassignment.sql upgrade 0 0800 2504241422170800u 1 2024-04-25 14:22:27 grafana | logger=migrator t=2024-04-25T14:22:31.77935832Z level=info msg="Migration successfully executed" id="Populate dashboard folder_uid column" duration=561.777µs kafka | [2024-04-25 14:22:59,085] INFO [Partition __consumer_offsets-18 broker=1] Log loaded for partition __consumer_offsets-18 with initial high watermark 0 (kafka.cluster.Partition) policy-pap | [2024-04-25T14:23:19.723+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] apex-80274cf2-35d0-404d-a495-e62b89ee6834 PdpUpdate successful policy-db-migrator | 43 0520-toscacapabilityassignments.sql upgrade 0 0800 2504241422170800u 1 2024-04-25 14:22:27 grafana | logger=migrator t=2024-04-25T14:22:31.78531023Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_folder_uid_title" kafka | [2024-04-25 14:22:59,085] INFO [Broker id=1] Leader __consumer_offsets-18 with topic id Some(Z-ljZKLXR-y1QhXAaAKdbg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) policy-pap | [2024-04-25T14:23:19.723+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] apex-80274cf2-35d0-404d-a495-e62b89ee6834 has no more requests policy-db-migrator | 44 0530-toscacapabilityassignments_toscacapabilityassignment.sql upgrade 0 0800 2504241422170800u 1 2024-04-25 14:22:27 grafana | logger=migrator t=2024-04-25T14:22:31.786441866Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_folder_uid_title" duration=1.131206ms kafka | [2024-04-25 14:22:59,094] INFO [LogLoader partition=__consumer_offsets-41, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) policy-pap | [2024-04-25T14:23:25.181+00:00|WARN|NonInjectionManager|pool-2-thread-1] Falling back to injection-less client. policy-db-migrator | 45 0540-toscacapabilitytype.sql upgrade 0 0800 2504241422170800u 1 2024-04-25 14:22:27 grafana | logger=migrator t=2024-04-25T14:22:31.79185952Z level=info msg="Executing migration" id="Delete unique index for dashboard_org_id_folder_id_title" kafka | [2024-04-25 14:22:59,094] INFO Created log for partition __consumer_offsets-41 in /var/lib/kafka/data/__consumer_offsets-41 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) policy-pap | [2024-04-25T14:23:25.229+00:00|INFO|GsonMessageBodyHandler|pool-2-thread-1] Using GSON for REST calls policy-db-migrator | 46 0550-toscacapabilitytypes.sql upgrade 0 0800 2504241422170800u 1 2024-04-25 14:22:27 grafana | logger=migrator t=2024-04-25T14:22:31.794078659Z level=info msg="Migration successfully executed" id="Delete unique index for dashboard_org_id_folder_id_title" duration=2.219169ms kafka | [2024-04-25 14:22:59,094] INFO [Partition __consumer_offsets-41 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-41 (kafka.cluster.Partition) policy-pap | 
[2024-04-25T14:23:25.239+00:00|INFO|GsonMessageBodyHandler|pool-2-thread-1] Using GSON for REST calls policy-db-migrator | 47 0560-toscacapabilitytypes_toscacapabilitytype.sql upgrade 0 0800 2504241422170800u 1 2024-04-25 14:22:28 grafana | logger=migrator t=2024-04-25T14:22:31.797878592Z level=info msg="Executing migration" id="Delete unique index for dashboard_org_id_folder_uid_title" kafka | [2024-04-25 14:22:59,095] INFO [Partition __consumer_offsets-41 broker=1] Log loaded for partition __consumer_offsets-41 with initial high watermark 0 (kafka.cluster.Partition) policy-pap | [2024-04-25T14:23:25.250+00:00|INFO|GsonMessageBodyHandler|pool-2-thread-1] Using GSON for REST calls policy-db-migrator | 48 0570-toscadatatype.sql upgrade 0 0800 2504241422170800u 1 2024-04-25 14:22:28 grafana | logger=migrator t=2024-04-25T14:22:31.799173509Z level=info msg="Migration successfully executed" id="Delete unique index for dashboard_org_id_folder_uid_title" duration=1.295577ms kafka | [2024-04-25 14:22:59,095] INFO [Broker id=1] Leader __consumer_offsets-41 with topic id Some(Z-ljZKLXR-y1QhXAaAKdbg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) policy-pap | [2024-04-25T14:23:25.657+00:00|INFO|SessionData|http-nio-6969-exec-5] unknown group testGroup policy-db-migrator | 49 0580-toscadatatypes.sql upgrade 0 0800 2504241422170800u 1 2024-04-25 14:22:28 grafana | logger=migrator t=2024-04-25T14:22:31.80583793Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_folder_uid_title_is_folder" kafka | [2024-04-25 14:22:59,105] INFO [LogLoader partition=__consumer_offsets-10, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) policy-pap | [2024-04-25T14:23:26.155+00:00|INFO|SessionData|http-nio-6969-exec-5] create cached group testGroup grafana | logger=migrator t=2024-04-25T14:22:31.807220338Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_folder_uid_title_is_folder" duration=1.384118ms kafka | [2024-04-25 14:22:59,106] INFO Created log for partition __consumer_offsets-10 in /var/lib/kafka/data/__consumer_offsets-10 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) policy-pap | [2024-04-25T14:23:26.156+00:00|INFO|SessionData|http-nio-6969-exec-5] creating DB group testGroup policy-db-migrator | 50 0590-toscadatatypes_toscadatatype.sql upgrade 0 0800 2504241422170800u 1 2024-04-25 14:22:28 grafana | logger=migrator t=2024-04-25T14:22:31.811653199Z level=info msg="Executing migration" id="Restore index for dashboard_org_id_folder_id_title" kafka | [2024-04-25 14:22:59,106] INFO [Partition __consumer_offsets-10 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-10 (kafka.cluster.Partition) policy-pap | [2024-04-25T14:23:26.773+00:00|INFO|SessionData|http-nio-6969-exec-10] cache group testGroup policy-db-migrator | 51 0600-toscanodetemplate.sql upgrade 0 0800 2504241422170800u 1 2024-04-25 14:22:28 grafana | logger=migrator t=2024-04-25T14:22:31.812825924Z 
level=info msg="Migration successfully executed" id="Restore index for dashboard_org_id_folder_id_title" duration=1.172345ms policy-pap | [2024-04-25T14:23:26.990+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-10] Registering a deploy for policy onap.restart.tca 1.0.0 kafka | [2024-04-25 14:22:59,106] INFO [Partition __consumer_offsets-10 broker=1] Log loaded for partition __consumer_offsets-10 with initial high watermark 0 (kafka.cluster.Partition) policy-db-migrator | 52 0610-toscanodetemplates.sql upgrade 0 0800 2504241422170800u 1 2024-04-25 14:22:28 grafana | logger=migrator t=2024-04-25T14:22:31.820975835Z level=info msg="Executing migration" id="create sso_setting table" policy-pap | [2024-04-25T14:23:27.081+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-10] Registering a deploy for policy operational.apex.decisionMaker 1.0.0 kafka | [2024-04-25 14:22:59,106] INFO [Broker id=1] Leader __consumer_offsets-10 with topic id Some(Z-ljZKLXR-y1QhXAaAKdbg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) policy-db-migrator | 53 0620-toscanodetemplates_toscanodetemplate.sql upgrade 0 0800 2504241422170800u 1 2024-04-25 14:22:28 grafana | logger=migrator t=2024-04-25T14:22:31.822633368Z level=info msg="Migration successfully executed" id="create sso_setting table" duration=1.654233ms policy-pap | [2024-04-25T14:23:27.081+00:00|INFO|SessionData|http-nio-6969-exec-10] update cached group testGroup kafka | [2024-04-25 14:22:59,112] INFO [LogLoader partition=__consumer_offsets-33, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) policy-db-migrator | 54 0630-toscanodetype.sql upgrade 0 0800 2504241422170800u 1 2024-04-25 14:22:28 grafana | logger=migrator t=2024-04-25T14:22:31.832911677Z level=info msg="Executing migration" id="copy kvstore migration status to each org" policy-pap | [2024-04-25T14:23:27.082+00:00|INFO|SessionData|http-nio-6969-exec-10] updating DB group testGroup kafka | [2024-04-25 14:22:59,112] INFO Created log for partition __consumer_offsets-33 in /var/lib/kafka/data/__consumer_offsets-33 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-04-25 14:22:59,113] INFO [Partition __consumer_offsets-33 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-33 (kafka.cluster.Partition) policy-db-migrator | 55 0640-toscanodetypes.sql upgrade 0 0800 2504241422170800u 1 2024-04-25 14:22:28 grafana | logger=migrator t=2024-04-25T14:22:31.834102644Z level=info msg="Migration successfully executed" id="copy kvstore migration status to each org" duration=1.191997ms policy-pap | [2024-04-25T14:23:27.096+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-10] sending audit records to database: [PolicyAudit(auditId=null, pdpGroup=testGroup, pdpType=pdpTypeA, policy=onap.restart.tca 1.0.0, action=DEPLOYMENT, timestamp=2024-04-25T14:23:26Z, user=policyadmin), 
PolicyAudit(auditId=null, pdpGroup=testGroup, pdpType=pdpTypeC, policy=operational.apex.decisionMaker 1.0.0, action=DEPLOYMENT, timestamp=2024-04-25T14:23:27Z, user=policyadmin)] kafka | [2024-04-25 14:22:59,113] INFO [Partition __consumer_offsets-33 broker=1] Log loaded for partition __consumer_offsets-33 with initial high watermark 0 (kafka.cluster.Partition) grafana | logger=migrator t=2024-04-25T14:22:31.837827474Z level=info msg="Executing migration" id="add back entry for orgid=0 migrated status" policy-pap | [2024-04-25T14:23:27.813+00:00|INFO|SessionData|http-nio-6969-exec-4] cache group testGroup kafka | [2024-04-25 14:22:59,113] INFO [Broker id=1] Leader __consumer_offsets-33 with topic id Some(Z-ljZKLXR-y1QhXAaAKdbg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) policy-pap | [2024-04-25T14:23:27.814+00:00|INFO|PdpGroupDeleteProvider|http-nio-6969-exec-4] remove policy onap.restart.tca 1.0.0 from subgroup testGroup pdpTypeA count=0 policy-db-migrator | 56 0650-toscanodetypes_toscanodetype.sql upgrade 0 0800 2504241422170800u 1 2024-04-25 14:22:28 grafana | logger=migrator t=2024-04-25T14:22:31.83823581Z level=info msg="Migration successfully executed" id="add back entry for orgid=0 migrated status" duration=409.336µs policy-pap | [2024-04-25T14:23:27.814+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-4] Registering an undeploy for policy onap.restart.tca 1.0.0 policy-db-migrator | 57 0660-toscaparameter.sql upgrade 0 0800 2504241422170800u 1 2024-04-25 14:22:28 grafana | logger=migrator t=2024-04-25T14:22:31.842907063Z level=info msg="Executing migration" id="alter kv_store.value to longtext" kafka | [2024-04-25 14:22:59,121] INFO [LogLoader partition=__consumer_offsets-48, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 
policy-pap | [2024-04-25T14:23:27.814+00:00|INFO|SessionData|http-nio-6969-exec-4] update cached group testGroup policy-db-migrator | 58 0670-toscapolicies.sql upgrade 0 0800 2504241422170800u 1 2024-04-25 14:22:28 grafana | logger=migrator t=2024-04-25T14:22:31.843012865Z level=info msg="Migration successfully executed" id="alter kv_store.value to longtext" duration=106.922µs kafka | [2024-04-25 14:22:59,121] INFO Created log for partition __consumer_offsets-48 in /var/lib/kafka/data/__consumer_offsets-48 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) policy-db-migrator | 59 0680-toscapolicies_toscapolicy.sql upgrade 0 0800 2504241422170800u 1 2024-04-25 14:22:28 kafka | [2024-04-25 14:22:59,121] INFO [Partition __consumer_offsets-48 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-48 (kafka.cluster.Partition) policy-pap | [2024-04-25T14:23:27.815+00:00|INFO|SessionData|http-nio-6969-exec-4] updating DB group testGroup grafana | logger=migrator t=2024-04-25T14:22:31.850195922Z level=info msg="Executing migration" id="add notification_settings column to alert_rule table" kafka | [2024-04-25 14:22:59,121] INFO [Partition __consumer_offsets-48 broker=1] Log loaded for partition __consumer_offsets-48 with initial high watermark 0 (kafka.cluster.Partition) policy-pap | [2024-04-25T14:23:27.825+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-4] sending audit records to database: [PolicyAudit(auditId=null, pdpGroup=testGroup, pdpType=pdpTypeA, policy=onap.restart.tca 1.0.0, action=UNDEPLOYMENT, timestamp=2024-04-25T14:23:27Z, user=policyadmin)] policy-db-migrator | 60 0690-toscapolicy.sql upgrade 0 0800 2504241422170800u 1 2024-04-25 14:22:28 grafana | logger=migrator t=2024-04-25T14:22:31.864491607Z level=info msg="Migration successfully executed" id="add notification_settings column to alert_rule table" duration=14.294865ms kafka | [2024-04-25 14:22:59,121] INFO 
[Broker id=1] Leader __consumer_offsets-48 with topic id Some(Z-ljZKLXR-y1QhXAaAKdbg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) policy-pap | [2024-04-25T14:23:28.139+00:00|INFO|SessionData|http-nio-6969-exec-6] cache group defaultGroup policy-db-migrator | 61 0700-toscapolicytype.sql upgrade 0 0800 2504241422170800u 1 2024-04-25 14:22:28 grafana | logger=migrator t=2024-04-25T14:22:31.869696867Z level=info msg="Executing migration" id="add notification_settings column to alert_rule_version table" kafka | [2024-04-25 14:22:59,132] INFO [LogLoader partition=__consumer_offsets-19, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) policy-pap | [2024-04-25T14:23:28.139+00:00|INFO|SessionData|http-nio-6969-exec-6] cache group testGroup policy-db-migrator | 62 0710-toscapolicytypes.sql upgrade 0 0800 2504241422170800u 1 2024-04-25 14:22:28 grafana | logger=migrator t=2024-04-25T14:22:31.88161293Z level=info msg="Migration successfully executed" id="add notification_settings column to alert_rule_version table" duration=11.916833ms kafka | [2024-04-25 14:22:59,132] INFO Created log for partition __consumer_offsets-19 in /var/lib/kafka/data/__consumer_offsets-19 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) policy-pap | [2024-04-25T14:23:28.139+00:00|INFO|PdpGroupDeleteProvider|http-nio-6969-exec-6] remove policy operational.apex.decisionMaker 1.0.0 from subgroup testGroup pdpTypeC count=0 policy-db-migrator | 63 0720-toscapolicytypes_toscapolicytype.sql upgrade 0 0800 2504241422170800u 1 2024-04-25 14:22:28 grafana | logger=migrator t=2024-04-25T14:22:31.88613232Z level=info msg="Executing migration" id="removing scope from alert.instances:read action migration" kafka | 
[2024-04-25 14:22:59,132] INFO [Partition __consumer_offsets-19 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-19 (kafka.cluster.Partition) policy-pap | [2024-04-25T14:23:28.139+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-6] Registering an undeploy for policy operational.apex.decisionMaker 1.0.0 grafana | logger=migrator t=2024-04-25T14:22:31.886468375Z level=info msg="Migration successfully executed" id="removing scope from alert.instances:read action migration" duration=333.775µs kafka | [2024-04-25 14:22:59,132] INFO [Partition __consumer_offsets-19 broker=1] Log loaded for partition __consumer_offsets-19 with initial high watermark 0 (kafka.cluster.Partition) grafana | logger=migrator t=2024-04-25T14:22:31.891437793Z level=info msg="migrations completed" performed=548 skipped=0 duration=15.991739147s policy-db-migrator | 64 0730-toscaproperty.sql upgrade 0 0800 2504241422170800u 1 2024-04-25 14:22:28 policy-pap | [2024-04-25T14:23:28.139+00:00|INFO|SessionData|http-nio-6969-exec-6] update cached group testGroup kafka | [2024-04-25 14:22:59,132] INFO [Broker id=1] Leader __consumer_offsets-19 with topic id Some(Z-ljZKLXR-y1QhXAaAKdbg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) policy-db-migrator | 65 0740-toscarelationshiptype.sql upgrade 0 0800 2504241422170800u 1 2024-04-25 14:22:28 policy-pap | [2024-04-25T14:23:28.139+00:00|INFO|SessionData|http-nio-6969-exec-6] updating DB group testGroup grafana | logger=sqlstore t=2024-04-25T14:22:31.904799854Z level=info msg="Created default admin" user=admin kafka | [2024-04-25 14:22:59,138] INFO [LogLoader partition=__consumer_offsets-34, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) policy-db-migrator | 66 0750-toscarelationshiptypes.sql upgrade 0 0800 2504241422170800u 1 2024-04-25 14:22:28 policy-pap | [2024-04-25T14:23:28.179+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-6] sending audit records to database: [PolicyAudit(auditId=null, pdpGroup=testGroup, pdpType=pdpTypeC, policy=operational.apex.decisionMaker 1.0.0, action=UNDEPLOYMENT, timestamp=2024-04-25T14:23:28Z, user=policyadmin)] grafana | logger=sqlstore t=2024-04-25T14:22:31.904998867Z level=info msg="Created default organization" kafka | [2024-04-25 14:22:59,139] INFO Created log for partition __consumer_offsets-34 in /var/lib/kafka/data/__consumer_offsets-34 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) policy-db-migrator | 67 0760-toscarelationshiptypes_toscarelationshiptype.sql upgrade 0 0800 2504241422170800u 1 2024-04-25 14:22:28 policy-pap | [2024-04-25T14:23:48.798+00:00|INFO|SessionData|http-nio-6969-exec-1] cache group testGroup grafana | logger=secrets t=2024-04-25T14:22:31.909885874Z level=info msg="Envelope encryption state" enabled=true currentprovider=secretKey.v1 kafka | [2024-04-25 14:22:59,139] INFO [Partition __consumer_offsets-34 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-34 (kafka.cluster.Partition) policy-db-migrator | 68 0770-toscarequirement.sql upgrade 0 0800 2504241422170800u 1 2024-04-25 14:22:29 
policy-pap | [2024-04-25T14:23:48.800+00:00|INFO|SessionData|http-nio-6969-exec-1] deleting DB group testGroup grafana | logger=plugin.store t=2024-04-25T14:22:31.929995737Z level=info msg="Loading plugins..." kafka | [2024-04-25 14:22:59,139] INFO [Partition __consumer_offsets-34 broker=1] Log loaded for partition __consumer_offsets-34 with initial high watermark 0 (kafka.cluster.Partition) policy-db-migrator | 69 0780-toscarequirements.sql upgrade 0 0800 2504241422170800u 1 2024-04-25 14:22:29 policy-pap | [2024-04-25T14:23:49.504+00:00|INFO|TimerManager|Thread-9] update timer discarded (expired) Timer [name=f38f5279-b344-4d66-86a2-21ebfb9d4e55, expireMs=1714055029503] grafana | logger=local.finder t=2024-04-25T14:22:31.968964036Z level=warn msg="Skipping finding plugins as directory does not exist" path=/usr/share/grafana/plugins-bundled kafka | [2024-04-25 14:22:59,139] INFO [Broker id=1] Leader __consumer_offsets-34 with topic id Some(Z-ljZKLXR-y1QhXAaAKdbg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) policy-db-migrator | 70 0790-toscarequirements_toscarequirement.sql upgrade 0 0800 2504241422170800u 1 2024-04-25 14:22:29 policy-pap | [2024-04-25T14:23:49.579+00:00|INFO|TimerManager|Thread-10] state-change timer discarded (expired) Timer [name=ecd1b796-0fe4-44b0-a7d5-d9c405fda44a, expireMs=1714055029579] grafana | logger=plugin.store t=2024-04-25T14:22:31.968992887Z level=info msg="Plugins loaded" count=55 duration=38.99622ms kafka | [2024-04-25 14:22:59,145] INFO [LogLoader partition=__consumer_offsets-4, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) policy-db-migrator | 71 0800-toscaservicetemplate.sql upgrade 0 0800 2504241422170800u 1 2024-04-25 14:22:29 grafana | logger=query_data t=2024-04-25T14:22:31.979781384Z level=info msg="Query Service initialization" kafka | [2024-04-25 14:22:59,145] INFO Created log for partition __consumer_offsets-4 in /var/lib/kafka/data/__consumer_offsets-4 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) policy-db-migrator | 72 0810-toscatopologytemplate.sql upgrade 0 0800 2504241422170800u 1 2024-04-25 14:22:29 grafana | logger=live.push_http t=2024-04-25T14:22:31.98616887Z level=info msg="Live Push Gateway initialization" kafka | [2024-04-25 14:22:59,145] INFO [Partition __consumer_offsets-4 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-4 (kafka.cluster.Partition) policy-db-migrator | 73 0820-toscatrigger.sql upgrade 0 0800 2504241422170800u 1 2024-04-25 14:22:29 grafana | logger=ngalert.migration t=2024-04-25T14:22:32.022857438Z level=info msg=Starting kafka | [2024-04-25 14:22:59,145] INFO [Partition __consumer_offsets-4 broker=1] Log loaded for partition __consumer_offsets-4 with initial high watermark 0 (kafka.cluster.Partition) policy-db-migrator | 74 0830-FK_ToscaNodeTemplate_capabilitiesName.sql upgrade 0 0800 
2504241422170800u 1 2024-04-25 14:22:30 grafana | logger=ngalert.migration t=2024-04-25T14:22:32.023620158Z level=info msg="Applying transition" currentType=Legacy desiredType=UnifiedAlerting cleanOnDowngrade=false cleanOnUpgrade=false kafka | [2024-04-25 14:22:59,145] INFO [Broker id=1] Leader __consumer_offsets-4 with topic id Some(Z-ljZKLXR-y1QhXAaAKdbg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) policy-db-migrator | 75 0840-FK_ToscaNodeTemplate_requirementsName.sql upgrade 0 0800 2504241422170800u 1 2024-04-25 14:22:30 grafana | logger=ngalert.migration orgID=1 t=2024-04-25T14:22:32.024346429Z level=info msg="Migrating alerts for organisation" kafka | [2024-04-25 14:22:59,155] INFO [LogLoader partition=__consumer_offsets-11, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) policy-db-migrator | 76 0850-FK_ToscaNodeType_requirementsName.sql upgrade 0 0800 2504241422170800u 1 2024-04-25 14:22:30 grafana | logger=ngalert.migration orgID=1 t=2024-04-25T14:22:32.026051251Z level=info msg="Alerts found to migrate" alerts=0 kafka | [2024-04-25 14:22:59,156] INFO Created log for partition __consumer_offsets-11 in /var/lib/kafka/data/__consumer_offsets-11 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) policy-db-migrator | 77 0860-FK_ToscaServiceTemplate_capabilityTypesName.sql upgrade 0 0800 2504241422170800u 1 2024-04-25 14:22:30 grafana | logger=ngalert.migration t=2024-04-25T14:22:32.028478494Z level=info msg="Completed alerting migration" kafka | [2024-04-25 14:22:59,156] INFO [Partition __consumer_offsets-11 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-11 (kafka.cluster.Partition) policy-db-migrator | 78 
0870-FK_ToscaServiceTemplate_dataTypesName.sql upgrade 0 0800 2504241422170800u 1 2024-04-25 14:22:30
grafana | logger=ngalert.state.manager t=2024-04-25T14:22:32.061878078Z level=info msg="Running in alternative execution of Error/NoData mode"
kafka | [2024-04-25 14:22:59,156] INFO [Partition __consumer_offsets-11 broker=1] Log loaded for partition __consumer_offsets-11 with initial high watermark 0 (kafka.cluster.Partition)
policy-db-migrator | 79 0880-FK_ToscaServiceTemplate_nodeTypesName.sql upgrade 0 0800 2504241422170800u 1 2024-04-25 14:22:30
grafana | logger=infra.usagestats.collector t=2024-04-25T14:22:32.063636933Z level=info msg="registering usage stat providers" usageStatsProvidersLen=2
kafka | [2024-04-25 14:22:59,156] INFO [Broker id=1] Leader __consumer_offsets-11 with topic id Some(Z-ljZKLXR-y1QhXAaAKdbg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
policy-db-migrator | 80 0890-FK_ToscaServiceTemplate_policyTypesName.sql upgrade 0 0800 2504241422170800u 1 2024-04-25 14:22:30
grafana | logger=provisioning.datasources t=2024-04-25T14:22:32.066430991Z level=info msg="inserting datasource from configuration" name=PolicyPrometheus uid=dkSf71fnz
kafka | [2024-04-25 14:22:59,172] INFO [LogLoader partition=__consumer_offsets-26, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
policy-db-migrator | 81 0900-FK_ToscaServiceTemplate_relationshipTypesName.sql upgrade 0 0800 2504241422170800u 1 2024-04-25 14:22:31
grafana | logger=provisioning.alerting t=2024-04-25T14:22:32.083408981Z level=info msg="starting to provision alerting"
kafka | [2024-04-25 14:22:59,173] INFO Created log for partition __consumer_offsets-26 in /var/lib/kafka/data/__consumer_offsets-26 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
policy-db-migrator | 82 0910-FK_ToscaTopologyTemplate_nodeTemplatesName.sql upgrade 0 0800 2504241422170800u 1 2024-04-25 14:22:31
grafana | logger=provisioning.alerting t=2024-04-25T14:22:32.083427021Z level=info msg="finished to provision alerting"
policy-db-migrator | 83 0920-FK_ToscaTopologyTemplate_policyName.sql upgrade 0 0800 2504241422170800u 1 2024-04-25 14:22:31
kafka | [2024-04-25 14:22:59,174] INFO [Partition __consumer_offsets-26 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-26 (kafka.cluster.Partition)
grafana | logger=grafanaStorageLogger t=2024-04-25T14:22:32.084454745Z level=info msg="Storage starting"
policy-db-migrator | 84 0940-PdpPolicyStatus_PdpGroup.sql upgrade 0 0800 2504241422170800u 1 2024-04-25 14:22:31
kafka | [2024-04-25 14:22:59,174] INFO [Partition __consumer_offsets-26 broker=1] Log loaded for partition __consumer_offsets-26 with initial high watermark 0 (kafka.cluster.Partition)
grafana | logger=ngalert.state.manager t=2024-04-25T14:22:32.084431745Z level=info msg="Warming state cache for startup"
policy-db-migrator | 85 0950-TscaServiceTemplatetopologyTemplateParentLocalName.sql upgrade 0 0800 2504241422170800u 1 2024-04-25 14:22:31
kafka | [2024-04-25 14:22:59,174] INFO [Broker id=1] Leader __consumer_offsets-26 with topic id Some(Z-ljZKLXR-y1QhXAaAKdbg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
grafana | logger=ngalert.multiorg.alertmanager t=2024-04-25T14:22:32.086062747Z level=info msg="Starting MultiOrg Alertmanager"
policy-db-migrator | 86 0960-FK_ToscaNodeTemplate_capabilitiesName.sql upgrade 0 0800 2504241422170800u 1 2024-04-25 14:22:31
kafka | [2024-04-25 14:22:59,186] INFO [LogLoader partition=__consumer_offsets-49, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
grafana | logger=http.server t=2024-04-25T14:22:32.087643968Z level=info msg="HTTP Server Listen" address=[::]:3000 protocol=http subUrl= socket=
policy-db-migrator | 87 0970-FK_ToscaNodeTemplate_requirementsName.sql upgrade 0 0800 2504241422170800u 1 2024-04-25 14:22:31
kafka | [2024-04-25 14:22:59,187] INFO Created log for partition __consumer_offsets-49 in /var/lib/kafka/data/__consumer_offsets-49 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
grafana | logger=provisioning.dashboard t=2024-04-25T14:22:32.152673122Z level=info msg="starting to provision dashboards"
policy-db-migrator | 88 0980-FK_ToscaNodeType_requirementsName.sql upgrade 0 0800 2504241422170800u 1 2024-04-25 14:22:31
kafka | [2024-04-25 14:22:59,187] INFO [Partition __consumer_offsets-49 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-49 (kafka.cluster.Partition)
grafana | logger=ngalert.state.manager t=2024-04-25T14:22:32.158436481Z level=info msg="State cache has been initialized" states=0 duration=74.003286ms
policy-db-migrator | 89 0990-FK_ToscaServiceTemplate_capabilityTypesName.sql upgrade 0 0800 2504241422170800u 1 2024-04-25 14:22:31
kafka | [2024-04-25 14:22:59,187] INFO [Partition __consumer_offsets-49 broker=1] Log loaded for partition __consumer_offsets-49 with initial high watermark 0 (kafka.cluster.Partition)
grafana | logger=ngalert.scheduler t=2024-04-25T14:22:32.158499002Z level=info msg="Starting scheduler" tickInterval=10s maxAttempts=1
policy-db-migrator | 90 1000-FK_ToscaServiceTemplate_dataTypesName.sql upgrade 0 0800 2504241422170800u 1 2024-04-25 14:22:31
kafka | [2024-04-25 14:22:59,187] INFO [Broker id=1] Leader __consumer_offsets-49 with topic id Some(Z-ljZKLXR-y1QhXAaAKdbg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
grafana | logger=ticker t=2024-04-25T14:22:32.158581623Z level=info msg=starting first_tick=2024-04-25T14:22:40Z
policy-db-migrator | 91 1010-FK_ToscaServiceTemplate_nodeTypesName.sql upgrade 0 0800 2504241422170800u 1 2024-04-25 14:22:31
kafka | [2024-04-25 14:22:59,203] INFO [LogLoader partition=__consumer_offsets-39, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
grafana | logger=plugins.update.checker t=2024-04-25T14:22:32.202277286Z level=info msg="Update check succeeded" duration=118.430689ms
policy-db-migrator | 92 1020-FK_ToscaServiceTemplate_policyTypesName.sql upgrade 0 0800 2504241422170800u 1 2024-04-25 14:22:31
kafka | [2024-04-25 14:22:59,205] INFO Created log for partition __consumer_offsets-39 in /var/lib/kafka/data/__consumer_offsets-39 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
grafana | logger=sqlstore.transactions t=2024-04-25T14:22:32.236665473Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=0 code="database is locked"
policy-db-migrator | 93 1030-FK_ToscaServiceTemplate_relationshipTypesName.sql upgrade 0 0800 2504241422170800u 1 2024-04-25 14:22:31
kafka | [2024-04-25 14:22:59,205] INFO [Partition __consumer_offsets-39 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-39 (kafka.cluster.Partition)
grafana | logger=grafana.update.checker t=2024-04-25T14:22:32.237954551Z level=info msg="Update check succeeded" duration=154.199075ms
policy-db-migrator | 94 1040-FK_ToscaTopologyTemplate_nodeTemplatesName.sql upgrade 0 0800 2504241422170800u 1 2024-04-25 14:22:32
kafka | [2024-04-25 14:22:59,205] INFO [Partition __consumer_offsets-39 broker=1] Log loaded for partition __consumer_offsets-39 with initial high watermark 0 (kafka.cluster.Partition)
grafana | logger=sqlstore.transactions t=2024-04-25T14:22:32.247269797Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=1 code="database is locked"
policy-db-migrator | 95 1050-FK_ToscaTopologyTemplate_policyName.sql upgrade 0 0800 2504241422170800u 1 2024-04-25 14:22:32
kafka | [2024-04-25 14:22:59,209] INFO [Broker id=1] Leader __consumer_offsets-39 with topic id Some(Z-ljZKLXR-y1QhXAaAKdbg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
grafana | logger=sqlstore.transactions t=2024-04-25T14:22:32.257998013Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=2 code="database is locked"
policy-db-migrator | 96 1060-TscaServiceTemplatetopologyTemplateParentLocalName.sql upgrade 0 0800 2504241422170800u 1 2024-04-25 14:22:32
kafka | [2024-04-25 14:22:59,217] INFO [LogLoader partition=__consumer_offsets-9, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
grafana | logger=sqlstore.transactions t=2024-04-25T14:22:32.268916611Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=3 code="database is locked"
policy-db-migrator | 97 0100-pdp.sql upgrade 0800 0900 2504241422170900u 1 2024-04-25 14:22:32
kafka | [2024-04-25 14:22:59,218] INFO Created log for partition __consumer_offsets-9 in /var/lib/kafka/data/__consumer_offsets-9 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
grafana | logger=sqlstore.transactions t=2024-04-25T14:22:32.280835963Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=4 code="database is locked"
policy-db-migrator | 98 0110-idx_tsidx1.sql upgrade 0800 0900 2504241422170900u 1 2024-04-25 14:22:32
kafka | [2024-04-25 14:22:59,218] INFO [Partition __consumer_offsets-9 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-9 (kafka.cluster.Partition)
grafana | logger=plugin.signature.key_retriever t=2024-04-25T14:22:32.300190946Z level=error msg="Error downloading plugin manifest keys" error="kv set: database is locked"
policy-db-migrator | 99 0120-pk_pdpstatistics.sql upgrade 0800 0900 2504241422170900u 1 2024-04-25 14:22:32
kafka | [2024-04-25 14:22:59,218] INFO [Partition __consumer_offsets-9 broker=1] Log loaded for partition __consumer_offsets-9 with initial high watermark 0 (kafka.cluster.Partition)
grafana | logger=grafana-apiserver t=2024-04-25T14:22:32.337604154Z level=info msg="Adding GroupVersion playlist.grafana.app v0alpha1 to ResourceManager"
policy-db-migrator | 100 0130-pdpstatistics.sql upgrade 0800 0900 2504241422170900u 1 2024-04-25 14:22:32
kafka | [2024-04-25 14:22:59,218] INFO [Broker id=1] Leader __consumer_offsets-9 with topic id Some(Z-ljZKLXR-y1QhXAaAKdbg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
grafana | logger=grafana-apiserver t=2024-04-25T14:22:32.338121862Z level=info msg="Adding GroupVersion featuretoggle.grafana.app v0alpha1 to ResourceManager"
policy-db-migrator | 101 0140-pk_pdpstatistics.sql upgrade 0800 0900 2504241422170900u 1 2024-04-25 14:22:32
kafka | [2024-04-25 14:22:59,230] INFO [LogLoader partition=__consumer_offsets-24, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
grafana | logger=provisioning.dashboard t=2024-04-25T14:22:32.440436222Z level=info msg="finished to provision dashboards"
policy-db-migrator | 102 0150-pdpstatistics.sql upgrade 0800 0900 2504241422170900u 1 2024-04-25 14:22:32
kafka | [2024-04-25 14:22:59,234] INFO Created log for partition __consumer_offsets-24 in /var/lib/kafka/data/__consumer_offsets-24 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
grafana | logger=infra.usagestats t=2024-04-25T14:23:33.097239688Z level=info msg="Usage stats are ready to report"
policy-db-migrator | 103 0160-jpapdpstatistics_enginestats.sql upgrade 0800 0900 2504241422170900u 1 2024-04-25 14:22:32
kafka | [2024-04-25 14:22:59,234] INFO [Partition __consumer_offsets-24 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-24 (kafka.cluster.Partition)
policy-db-migrator | 104 0170-jpapdpstatistics_enginestats.sql upgrade 0800 0900 2504241422170900u 1 2024-04-25 14:22:32
kafka | [2024-04-25 14:22:59,234] INFO [Partition __consumer_offsets-24 broker=1] Log loaded for partition __consumer_offsets-24 with initial high watermark 0 (kafka.cluster.Partition)
policy-db-migrator | 105 0180-jpapdpstatistics_enginestats.sql upgrade 0800 0900 2504241422170900u 1 2024-04-25 14:22:32
kafka | [2024-04-25 14:22:59,234] INFO [Broker id=1] Leader __consumer_offsets-24 with topic id Some(Z-ljZKLXR-y1QhXAaAKdbg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
policy-db-migrator | 106 0190-jpapolicyaudit.sql upgrade 0800 0900 2504241422170900u 1 2024-04-25 14:22:32
kafka | [2024-04-25 14:22:59,246] INFO [LogLoader partition=__consumer_offsets-31, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
policy-db-migrator | 107 0200-JpaPolicyAuditIndex_timestamp.sql upgrade 0800 0900 2504241422170900u 1 2024-04-25 14:22:32
kafka | [2024-04-25 14:22:59,247] INFO Created log for partition __consumer_offsets-31 in /var/lib/kafka/data/__consumer_offsets-31 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
policy-db-migrator | 108 0210-sequence.sql upgrade 0800 0900 2504241422170900u 1 2024-04-25 14:22:32
kafka | [2024-04-25 14:22:59,247] INFO [Partition __consumer_offsets-31 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-31 (kafka.cluster.Partition)
policy-db-migrator | 109 0220-sequence.sql upgrade 0800 0900 2504241422170900u 1 2024-04-25 14:22:32
kafka | [2024-04-25 14:22:59,248] INFO [Partition __consumer_offsets-31 broker=1] Log loaded for partition __consumer_offsets-31 with initial high watermark 0 (kafka.cluster.Partition)
policy-db-migrator | 110 0100-jpatoscapolicy_targets.sql upgrade 0900 1000 2504241422171000u 1 2024-04-25 14:22:32
kafka | [2024-04-25 14:22:59,248] INFO [Broker id=1] Leader __consumer_offsets-31 with topic id Some(Z-ljZKLXR-y1QhXAaAKdbg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
policy-db-migrator | 111 0110-jpatoscapolicytype_targets.sql upgrade 0900 1000 2504241422171000u 1 2024-04-25 14:22:33
kafka | [2024-04-25 14:22:59,255] INFO [LogLoader partition=__consumer_offsets-46, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
policy-db-migrator | 112 0120-toscatrigger.sql upgrade 0900 1000 2504241422171000u 1 2024-04-25 14:22:33
kafka | [2024-04-25 14:22:59,256] INFO Created log for partition __consumer_offsets-46 in /var/lib/kafka/data/__consumer_offsets-46 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
policy-db-migrator | 113 0130-jpatoscapolicytype_triggers.sql upgrade 0900 1000 2504241422171000u 1 2024-04-25 14:22:33
kafka | [2024-04-25 14:22:59,256] INFO [Partition __consumer_offsets-46 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-46 (kafka.cluster.Partition)
policy-db-migrator | 114 0140-toscaparameter.sql upgrade 0900 1000 2504241422171000u 1 2024-04-25 14:22:33
kafka | [2024-04-25 14:22:59,256] INFO [Partition __consumer_offsets-46 broker=1] Log loaded for partition __consumer_offsets-46 with initial high watermark 0 (kafka.cluster.Partition)
policy-db-migrator | 115 0150-toscaproperty.sql upgrade 0900 1000 2504241422171000u 1 2024-04-25 14:22:33
kafka | [2024-04-25 14:22:59,256] INFO [Broker id=1] Leader __consumer_offsets-46 with topic id Some(Z-ljZKLXR-y1QhXAaAKdbg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
policy-db-migrator | 116 0160-jpapolicyaudit_pk.sql upgrade 0900 1000 2504241422171000u 1 2024-04-25 14:22:33
kafka | [2024-04-25 14:22:59,266] INFO [LogLoader partition=__consumer_offsets-1, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
policy-db-migrator | 117 0170-pdpstatistics_pk.sql upgrade 0900 1000 2504241422171000u 1 2024-04-25 14:22:33
kafka | [2024-04-25 14:22:59,267] INFO Created log for partition __consumer_offsets-1 in /var/lib/kafka/data/__consumer_offsets-1 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
policy-db-migrator | 118 0180-jpatoscanodetemplate_metadata.sql upgrade 0900 1000 2504241422171000u 1 2024-04-25 14:22:33
policy-db-migrator | 119 0100-upgrade.sql upgrade 1000 1100 2504241422171100u 1 2024-04-25 14:22:33
kafka | [2024-04-25 14:22:59,267] INFO [Partition __consumer_offsets-1 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-1 (kafka.cluster.Partition)
policy-db-migrator | 120 0100-jpapolicyaudit_renameuser.sql upgrade 1100 1200 2504241422171200u 1 2024-04-25 14:22:33
kafka | [2024-04-25 14:22:59,267] INFO [Partition __consumer_offsets-1 broker=1] Log loaded for partition __consumer_offsets-1 with initial high watermark 0 (kafka.cluster.Partition)
policy-db-migrator | 121 0110-idx_tsidx1.sql upgrade 1100 1200 2504241422171200u 1 2024-04-25 14:22:33
kafka | [2024-04-25 14:22:59,267] INFO [Broker id=1] Leader __consumer_offsets-1 with topic id Some(Z-ljZKLXR-y1QhXAaAKdbg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
policy-db-migrator | 122 0120-audit_sequence.sql upgrade 1100 1200 2504241422171200u 1 2024-04-25 14:22:33
kafka | [2024-04-25 14:22:59,275] INFO [LogLoader partition=__consumer_offsets-16, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
policy-db-migrator | 123 0130-statistics_sequence.sql upgrade 1100 1200 2504241422171200u 1 2024-04-25 14:22:33
kafka | [2024-04-25 14:22:59,276] INFO Created log for partition __consumer_offsets-16 in /var/lib/kafka/data/__consumer_offsets-16 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
policy-db-migrator | 124 0100-pdpstatistics.sql upgrade 1200 1300 2504241422171300u 1 2024-04-25 14:22:34
kafka | [2024-04-25 14:22:59,276] INFO [Partition __consumer_offsets-16 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-16 (kafka.cluster.Partition)
policy-db-migrator | 125 0110-jpapdpstatistics_enginestats.sql upgrade 1200 1300 2504241422171300u 1 2024-04-25 14:22:34
kafka | [2024-04-25 14:22:59,276] INFO [Partition __consumer_offsets-16 broker=1] Log loaded for partition __consumer_offsets-16 with initial high watermark 0 (kafka.cluster.Partition)
policy-db-migrator | 126 0120-statistics_sequence.sql upgrade 1200 1300 2504241422171300u 1 2024-04-25 14:22:34
kafka | [2024-04-25 14:22:59,276] INFO [Broker id=1] Leader __consumer_offsets-16 with topic id Some(Z-ljZKLXR-y1QhXAaAKdbg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
policy-db-migrator | policyadmin: OK @ 1300
kafka | [2024-04-25 14:22:59,283] INFO [LogLoader partition=__consumer_offsets-2, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2024-04-25 14:22:59,284] INFO Created log for partition __consumer_offsets-2 in /var/lib/kafka/data/__consumer_offsets-2 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2024-04-25 14:22:59,284] INFO [Partition __consumer_offsets-2 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-2 (kafka.cluster.Partition)
kafka | [2024-04-25 14:22:59,284] INFO [Partition __consumer_offsets-2 broker=1] Log loaded for partition __consumer_offsets-2 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2024-04-25 14:22:59,284] INFO [Broker id=1] Leader __consumer_offsets-2 with topic id Some(Z-ljZKLXR-y1QhXAaAKdbg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
kafka | [2024-04-25 14:22:59,294] INFO [LogLoader partition=__consumer_offsets-25, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2024-04-25 14:22:59,295] INFO Created log for partition __consumer_offsets-25 in /var/lib/kafka/data/__consumer_offsets-25 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2024-04-25 14:22:59,295] INFO [Partition __consumer_offsets-25 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-25 (kafka.cluster.Partition)
kafka | [2024-04-25 14:22:59,295] INFO [Partition __consumer_offsets-25 broker=1] Log loaded for partition __consumer_offsets-25 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2024-04-25 14:22:59,295] INFO [Broker id=1] Leader __consumer_offsets-25 with topic id Some(Z-ljZKLXR-y1QhXAaAKdbg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
kafka | [2024-04-25 14:22:59,301] INFO [LogLoader partition=__consumer_offsets-40, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2024-04-25 14:22:59,302] INFO Created log for partition __consumer_offsets-40 in /var/lib/kafka/data/__consumer_offsets-40 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2024-04-25 14:22:59,302] INFO [Partition __consumer_offsets-40 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-40 (kafka.cluster.Partition)
kafka | [2024-04-25 14:22:59,302] INFO [Partition __consumer_offsets-40 broker=1] Log loaded for partition __consumer_offsets-40 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2024-04-25 14:22:59,302] INFO [Broker id=1] Leader __consumer_offsets-40 with topic id Some(Z-ljZKLXR-y1QhXAaAKdbg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
kafka | [2024-04-25 14:22:59,359] INFO [LogLoader partition=__consumer_offsets-47, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2024-04-25 14:22:59,360] INFO Created log for partition __consumer_offsets-47 in /var/lib/kafka/data/__consumer_offsets-47 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2024-04-25 14:22:59,360] INFO [Partition __consumer_offsets-47 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-47 (kafka.cluster.Partition)
kafka | [2024-04-25 14:22:59,360] INFO [Partition __consumer_offsets-47 broker=1] Log loaded for partition __consumer_offsets-47 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2024-04-25 14:22:59,360] INFO [Broker id=1] Leader __consumer_offsets-47 with topic id Some(Z-ljZKLXR-y1QhXAaAKdbg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
kafka | [2024-04-25 14:22:59,367] INFO [LogLoader partition=__consumer_offsets-17, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2024-04-25 14:22:59,368] INFO Created log for partition __consumer_offsets-17 in /var/lib/kafka/data/__consumer_offsets-17 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2024-04-25 14:22:59,368] INFO [Partition __consumer_offsets-17 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-17 (kafka.cluster.Partition)
kafka | [2024-04-25 14:22:59,368] INFO [Partition __consumer_offsets-17 broker=1] Log loaded for partition __consumer_offsets-17 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2024-04-25 14:22:59,368] INFO [Broker id=1] Leader __consumer_offsets-17 with topic id Some(Z-ljZKLXR-y1QhXAaAKdbg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
kafka | [2024-04-25 14:22:59,380] INFO [LogLoader partition=__consumer_offsets-32, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2024-04-25 14:22:59,381] INFO Created log for partition __consumer_offsets-32 in /var/lib/kafka/data/__consumer_offsets-32 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2024-04-25 14:22:59,381] INFO [Partition __consumer_offsets-32 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-32 (kafka.cluster.Partition)
kafka | [2024-04-25 14:22:59,381] INFO [Partition __consumer_offsets-32 broker=1] Log loaded for partition __consumer_offsets-32 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2024-04-25 14:22:59,381] INFO [Broker id=1] Leader __consumer_offsets-32 with topic id Some(Z-ljZKLXR-y1QhXAaAKdbg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
kafka | [2024-04-25 14:22:59,450] INFO [LogLoader partition=__consumer_offsets-37, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2024-04-25 14:22:59,451] INFO Created log for partition __consumer_offsets-37 in /var/lib/kafka/data/__consumer_offsets-37 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2024-04-25 14:22:59,451] INFO [Partition __consumer_offsets-37 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-37 (kafka.cluster.Partition)
kafka | [2024-04-25 14:22:59,451] INFO [Partition __consumer_offsets-37 broker=1] Log loaded for partition __consumer_offsets-37 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2024-04-25 14:22:59,451] INFO [Broker id=1] Leader __consumer_offsets-37 with topic id Some(Z-ljZKLXR-y1QhXAaAKdbg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
kafka | [2024-04-25 14:22:59,458] INFO [LogLoader partition=__consumer_offsets-7, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2024-04-25 14:22:59,458] INFO Created log for partition __consumer_offsets-7 in /var/lib/kafka/data/__consumer_offsets-7 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2024-04-25 14:22:59,458] INFO [Partition __consumer_offsets-7 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-7 (kafka.cluster.Partition)
kafka | [2024-04-25 14:22:59,458] INFO [Partition __consumer_offsets-7 broker=1] Log loaded for partition __consumer_offsets-7 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2024-04-25 14:22:59,458] INFO [Broker id=1] Leader __consumer_offsets-7 with topic id Some(Z-ljZKLXR-y1QhXAaAKdbg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
kafka | [2024-04-25 14:22:59,467] INFO [LogLoader partition=__consumer_offsets-22, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2024-04-25 14:22:59,467] INFO Created log for partition __consumer_offsets-22 in /var/lib/kafka/data/__consumer_offsets-22 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2024-04-25 14:22:59,467] INFO [Partition __consumer_offsets-22 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-22 (kafka.cluster.Partition)
kafka | [2024-04-25 14:22:59,467] INFO [Partition __consumer_offsets-22 broker=1] Log loaded for partition __consumer_offsets-22 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2024-04-25 14:22:59,467] INFO [Broker id=1] Leader __consumer_offsets-22 with topic id Some(Z-ljZKLXR-y1QhXAaAKdbg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
kafka | [2024-04-25 14:22:59,477] INFO [LogLoader partition=__consumer_offsets-29, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2024-04-25 14:22:59,477] INFO Created log for partition __consumer_offsets-29 in /var/lib/kafka/data/__consumer_offsets-29 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2024-04-25 14:22:59,478] INFO [Partition __consumer_offsets-29 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-29 (kafka.cluster.Partition)
kafka | [2024-04-25 14:22:59,478] INFO [Partition __consumer_offsets-29 broker=1] Log loaded for partition __consumer_offsets-29 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2024-04-25 14:22:59,478] INFO [Broker id=1] Leader __consumer_offsets-29 with topic id Some(Z-ljZKLXR-y1QhXAaAKdbg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
kafka | [2024-04-25 14:22:59,488] INFO [LogLoader partition=__consumer_offsets-44, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2024-04-25 14:22:59,489] INFO Created log for partition __consumer_offsets-44 in /var/lib/kafka/data/__consumer_offsets-44 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2024-04-25 14:22:59,489] INFO [Partition __consumer_offsets-44 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-44 (kafka.cluster.Partition)
kafka | [2024-04-25 14:22:59,489] INFO [Partition __consumer_offsets-44 broker=1] Log loaded for partition __consumer_offsets-44 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2024-04-25 14:22:59,489] INFO [Broker id=1] Leader __consumer_offsets-44 with topic id Some(Z-ljZKLXR-y1QhXAaAKdbg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
kafka | [2024-04-25 14:22:59,502] INFO [LogLoader partition=__consumer_offsets-14, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2024-04-25 14:22:59,503] INFO Created log for partition __consumer_offsets-14 in /var/lib/kafka/data/__consumer_offsets-14 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2024-04-25 14:22:59,503] INFO [Partition __consumer_offsets-14 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-14 (kafka.cluster.Partition)
kafka | [2024-04-25 14:22:59,503] INFO [Partition __consumer_offsets-14 broker=1] Log loaded for partition __consumer_offsets-14 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2024-04-25 14:22:59,503] INFO [Broker id=1] Leader __consumer_offsets-14 with topic id Some(Z-ljZKLXR-y1QhXAaAKdbg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
kafka | [2024-04-25 14:22:59,510] INFO [LogLoader partition=__consumer_offsets-23, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2024-04-25 14:22:59,511] INFO Created log for partition __consumer_offsets-23 in /var/lib/kafka/data/__consumer_offsets-23 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2024-04-25 14:22:59,511] INFO [Partition __consumer_offsets-23 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-23 (kafka.cluster.Partition)
kafka | [2024-04-25 14:22:59,511] INFO [Partition __consumer_offsets-23 broker=1] Log loaded for partition __consumer_offsets-23 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2024-04-25 14:22:59,511] INFO [Broker id=1] Leader __consumer_offsets-23 with topic id Some(Z-ljZKLXR-y1QhXAaAKdbg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
kafka | [2024-04-25 14:22:59,522] INFO [LogLoader partition=__consumer_offsets-38, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2024-04-25 14:22:59,523] INFO Created log for partition __consumer_offsets-38 in /var/lib/kafka/data/__consumer_offsets-38 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2024-04-25 14:22:59,523] INFO [Partition __consumer_offsets-38 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-38 (kafka.cluster.Partition)
kafka | [2024-04-25 14:22:59,523] INFO [Partition __consumer_offsets-38 broker=1] Log loaded for partition __consumer_offsets-38 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2024-04-25 14:22:59,524] INFO [Broker id=1] Leader __consumer_offsets-38 with topic id Some(Z-ljZKLXR-y1QhXAaAKdbg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
kafka | [2024-04-25 14:22:59,533] INFO [LogLoader partition=__consumer_offsets-8, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2024-04-25 14:22:59,536] INFO Created log for partition __consumer_offsets-8 in /var/lib/kafka/data/__consumer_offsets-8 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2024-04-25 14:22:59,536] INFO [Partition __consumer_offsets-8 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-8 (kafka.cluster.Partition)
kafka | [2024-04-25 14:22:59,536] INFO [Partition __consumer_offsets-8 broker=1] Log loaded for partition __consumer_offsets-8 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2024-04-25 14:22:59,536] INFO [Broker id=1] Leader __consumer_offsets-8 with topic id Some(Z-ljZKLXR-y1QhXAaAKdbg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1.
(state.change.logger) kafka | [2024-04-25 14:22:59,545] INFO [LogLoader partition=__consumer_offsets-45, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-04-25 14:22:59,545] INFO Created log for partition __consumer_offsets-45 in /var/lib/kafka/data/__consumer_offsets-45 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-04-25 14:22:59,545] INFO [Partition __consumer_offsets-45 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-45 (kafka.cluster.Partition) kafka | [2024-04-25 14:22:59,546] INFO [Partition __consumer_offsets-45 broker=1] Log loaded for partition __consumer_offsets-45 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-04-25 14:22:59,546] INFO [Broker id=1] Leader __consumer_offsets-45 with topic id Some(Z-ljZKLXR-y1QhXAaAKdbg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) kafka | [2024-04-25 14:22:59,558] INFO [LogLoader partition=__consumer_offsets-15, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-04-25 14:22:59,559] INFO Created log for partition __consumer_offsets-15 in /var/lib/kafka/data/__consumer_offsets-15 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-04-25 14:22:59,559] INFO [Partition __consumer_offsets-15 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-15 (kafka.cluster.Partition) kafka | [2024-04-25 14:22:59,559] INFO [Partition __consumer_offsets-15 broker=1] Log loaded for partition __consumer_offsets-15 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-04-25 14:22:59,559] INFO [Broker id=1] Leader __consumer_offsets-15 with topic id Some(Z-ljZKLXR-y1QhXAaAKdbg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) kafka | [2024-04-25 14:22:59,570] INFO [LogLoader partition=__consumer_offsets-30, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-04-25 14:22:59,570] INFO Created log for partition __consumer_offsets-30 in /var/lib/kafka/data/__consumer_offsets-30 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-04-25 14:22:59,570] INFO [Partition __consumer_offsets-30 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-30 (kafka.cluster.Partition) kafka | [2024-04-25 14:22:59,571] INFO [Partition __consumer_offsets-30 broker=1] Log loaded for partition __consumer_offsets-30 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-04-25 14:22:59,571] INFO [Broker id=1] Leader __consumer_offsets-30 with topic id Some(Z-ljZKLXR-y1QhXAaAKdbg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) kafka | [2024-04-25 14:22:59,577] INFO [LogLoader partition=__consumer_offsets-0, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-04-25 14:22:59,578] INFO Created log for partition __consumer_offsets-0 in /var/lib/kafka/data/__consumer_offsets-0 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-04-25 14:22:59,578] INFO [Partition __consumer_offsets-0 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-0 (kafka.cluster.Partition) kafka | [2024-04-25 14:22:59,578] INFO [Partition __consumer_offsets-0 broker=1] Log loaded for partition __consumer_offsets-0 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-04-25 14:22:59,583] INFO [Broker id=1] Leader __consumer_offsets-0 with topic id Some(Z-ljZKLXR-y1QhXAaAKdbg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) kafka | [2024-04-25 14:22:59,594] INFO [LogLoader partition=__consumer_offsets-35, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-04-25 14:22:59,595] INFO Created log for partition __consumer_offsets-35 in /var/lib/kafka/data/__consumer_offsets-35 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-04-25 14:22:59,595] INFO [Partition __consumer_offsets-35 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-35 (kafka.cluster.Partition) kafka | [2024-04-25 14:22:59,595] INFO [Partition __consumer_offsets-35 broker=1] Log loaded for partition __consumer_offsets-35 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-04-25 14:22:59,595] INFO [Broker id=1] Leader __consumer_offsets-35 with topic id Some(Z-ljZKLXR-y1QhXAaAKdbg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) kafka | [2024-04-25 14:22:59,610] INFO [LogLoader partition=__consumer_offsets-5, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-04-25 14:22:59,611] INFO Created log for partition __consumer_offsets-5 in /var/lib/kafka/data/__consumer_offsets-5 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-04-25 14:22:59,611] INFO [Partition __consumer_offsets-5 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-5 (kafka.cluster.Partition) kafka | [2024-04-25 14:22:59,611] INFO [Partition __consumer_offsets-5 broker=1] Log loaded for partition __consumer_offsets-5 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-04-25 14:22:59,612] INFO [Broker id=1] Leader __consumer_offsets-5 with topic id Some(Z-ljZKLXR-y1QhXAaAKdbg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) kafka | [2024-04-25 14:22:59,621] INFO [LogLoader partition=__consumer_offsets-20, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-04-25 14:22:59,622] INFO Created log for partition __consumer_offsets-20 in /var/lib/kafka/data/__consumer_offsets-20 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-04-25 14:22:59,622] INFO [Partition __consumer_offsets-20 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-20 (kafka.cluster.Partition) kafka | [2024-04-25 14:22:59,622] INFO [Partition __consumer_offsets-20 broker=1] Log loaded for partition __consumer_offsets-20 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-04-25 14:22:59,622] INFO [Broker id=1] Leader __consumer_offsets-20 with topic id Some(Z-ljZKLXR-y1QhXAaAKdbg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) kafka | [2024-04-25 14:22:59,633] INFO [LogLoader partition=__consumer_offsets-27, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-04-25 14:22:59,633] INFO Created log for partition __consumer_offsets-27 in /var/lib/kafka/data/__consumer_offsets-27 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-04-25 14:22:59,633] INFO [Partition __consumer_offsets-27 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-27 (kafka.cluster.Partition) kafka | [2024-04-25 14:22:59,633] INFO [Partition __consumer_offsets-27 broker=1] Log loaded for partition __consumer_offsets-27 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-04-25 14:22:59,634] INFO [Broker id=1] Leader __consumer_offsets-27 with topic id Some(Z-ljZKLXR-y1QhXAaAKdbg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) kafka | [2024-04-25 14:22:59,641] INFO [LogLoader partition=__consumer_offsets-42, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-04-25 14:22:59,641] INFO Created log for partition __consumer_offsets-42 in /var/lib/kafka/data/__consumer_offsets-42 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-04-25 14:22:59,641] INFO [Partition __consumer_offsets-42 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-42 (kafka.cluster.Partition) kafka | [2024-04-25 14:22:59,641] INFO [Partition __consumer_offsets-42 broker=1] Log loaded for partition __consumer_offsets-42 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-04-25 14:22:59,642] INFO [Broker id=1] Leader __consumer_offsets-42 with topic id Some(Z-ljZKLXR-y1QhXAaAKdbg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) kafka | [2024-04-25 14:22:59,684] INFO [LogLoader partition=__consumer_offsets-12, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-04-25 14:22:59,685] INFO Created log for partition __consumer_offsets-12 in /var/lib/kafka/data/__consumer_offsets-12 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-04-25 14:22:59,685] INFO [Partition __consumer_offsets-12 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-12 (kafka.cluster.Partition) kafka | [2024-04-25 14:22:59,685] INFO [Partition __consumer_offsets-12 broker=1] Log loaded for partition __consumer_offsets-12 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-04-25 14:22:59,685] INFO [Broker id=1] Leader __consumer_offsets-12 with topic id Some(Z-ljZKLXR-y1QhXAaAKdbg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) kafka | [2024-04-25 14:22:59,693] INFO [LogLoader partition=__consumer_offsets-21, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-04-25 14:22:59,694] INFO Created log for partition __consumer_offsets-21 in /var/lib/kafka/data/__consumer_offsets-21 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-04-25 14:22:59,694] INFO [Partition __consumer_offsets-21 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-21 (kafka.cluster.Partition) kafka | [2024-04-25 14:22:59,694] INFO [Partition __consumer_offsets-21 broker=1] Log loaded for partition __consumer_offsets-21 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-04-25 14:22:59,694] INFO [Broker id=1] Leader __consumer_offsets-21 with topic id Some(Z-ljZKLXR-y1QhXAaAKdbg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) kafka | [2024-04-25 14:22:59,703] INFO [LogLoader partition=__consumer_offsets-36, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-04-25 14:22:59,703] INFO Created log for partition __consumer_offsets-36 in /var/lib/kafka/data/__consumer_offsets-36 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-04-25 14:22:59,703] INFO [Partition __consumer_offsets-36 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-36 (kafka.cluster.Partition) kafka | [2024-04-25 14:22:59,703] INFO [Partition __consumer_offsets-36 broker=1] Log loaded for partition __consumer_offsets-36 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-04-25 14:22:59,704] INFO [Broker id=1] Leader __consumer_offsets-36 with topic id Some(Z-ljZKLXR-y1QhXAaAKdbg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) kafka | [2024-04-25 14:22:59,711] INFO [LogLoader partition=__consumer_offsets-6, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-04-25 14:22:59,711] INFO Created log for partition __consumer_offsets-6 in /var/lib/kafka/data/__consumer_offsets-6 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-04-25 14:22:59,712] INFO [Partition __consumer_offsets-6 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-6 (kafka.cluster.Partition) kafka | [2024-04-25 14:22:59,712] INFO [Partition __consumer_offsets-6 broker=1] Log loaded for partition __consumer_offsets-6 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-04-25 14:22:59,712] INFO [Broker id=1] Leader __consumer_offsets-6 with topic id Some(Z-ljZKLXR-y1QhXAaAKdbg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) kafka | [2024-04-25 14:22:59,722] INFO [LogLoader partition=__consumer_offsets-43, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-04-25 14:22:59,723] INFO Created log for partition __consumer_offsets-43 in /var/lib/kafka/data/__consumer_offsets-43 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-04-25 14:22:59,723] INFO [Partition __consumer_offsets-43 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-43 (kafka.cluster.Partition) kafka | [2024-04-25 14:22:59,723] INFO [Partition __consumer_offsets-43 broker=1] Log loaded for partition __consumer_offsets-43 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-04-25 14:22:59,723] INFO [Broker id=1] Leader __consumer_offsets-43 with topic id Some(Z-ljZKLXR-y1QhXAaAKdbg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) kafka | [2024-04-25 14:22:59,733] INFO [LogLoader partition=__consumer_offsets-13, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-04-25 14:22:59,735] INFO Created log for partition __consumer_offsets-13 in /var/lib/kafka/data/__consumer_offsets-13 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-04-25 14:22:59,735] INFO [Partition __consumer_offsets-13 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-13 (kafka.cluster.Partition) kafka | [2024-04-25 14:22:59,735] INFO [Partition __consumer_offsets-13 broker=1] Log loaded for partition __consumer_offsets-13 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-04-25 14:22:59,736] INFO [Broker id=1] Leader __consumer_offsets-13 with topic id Some(Z-ljZKLXR-y1QhXAaAKdbg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) kafka | [2024-04-25 14:22:59,747] INFO [LogLoader partition=__consumer_offsets-28, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-04-25 14:22:59,748] INFO Created log for partition __consumer_offsets-28 in /var/lib/kafka/data/__consumer_offsets-28 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-04-25 14:22:59,748] INFO [Partition __consumer_offsets-28 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-28 (kafka.cluster.Partition) kafka | [2024-04-25 14:22:59,748] INFO [Partition __consumer_offsets-28 broker=1] Log loaded for partition __consumer_offsets-28 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-04-25 14:22:59,748] INFO [Broker id=1] Leader __consumer_offsets-28 with topic id Some(Z-ljZKLXR-y1QhXAaAKdbg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger)
kafka | [2024-04-25 14:22:59,796] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-3 (state.change.logger)
kafka | [2024-04-25 14:22:59,796] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-18 (state.change.logger)
kafka | [2024-04-25 14:22:59,796] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-41 (state.change.logger)
kafka | [2024-04-25 14:22:59,796] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-10 (state.change.logger)
kafka | [2024-04-25 14:22:59,796] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-33 (state.change.logger)
kafka | [2024-04-25 14:22:59,796] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-48 (state.change.logger)
kafka | [2024-04-25 14:22:59,796] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-19 (state.change.logger)
kafka | [2024-04-25 14:22:59,796] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-34 (state.change.logger)
kafka | [2024-04-25 14:22:59,796] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-4 (state.change.logger)
kafka | [2024-04-25 14:22:59,796] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-11 (state.change.logger)
kafka | [2024-04-25 14:22:59,796] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-26 (state.change.logger)
kafka | [2024-04-25 14:22:59,796] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-49 (state.change.logger)
kafka | [2024-04-25 14:22:59,797] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-39 (state.change.logger)
kafka | [2024-04-25 14:22:59,797] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-9 (state.change.logger)
kafka | [2024-04-25 14:22:59,797] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-24 (state.change.logger)
kafka | [2024-04-25 14:22:59,797] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-31 (state.change.logger)
kafka | [2024-04-25 14:22:59,797] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-46 (state.change.logger)
kafka | [2024-04-25 14:22:59,797] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-1 (state.change.logger)
kafka | [2024-04-25 14:22:59,797] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-16 (state.change.logger)
kafka | [2024-04-25 14:22:59,797] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-2 (state.change.logger)
kafka | [2024-04-25 14:22:59,797] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-25 (state.change.logger)
kafka | [2024-04-25 14:22:59,797] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-40 (state.change.logger)
kafka | [2024-04-25 14:22:59,797] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-47 (state.change.logger)
kafka | [2024-04-25 14:22:59,797] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-17 (state.change.logger)
kafka | [2024-04-25 14:22:59,797] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-32 (state.change.logger)
kafka | [2024-04-25 14:22:59,797] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-37 (state.change.logger)
kafka | [2024-04-25 14:22:59,797] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-7 (state.change.logger)
kafka | [2024-04-25 14:22:59,797] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-22 (state.change.logger)
kafka | [2024-04-25 14:22:59,797] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-29 (state.change.logger)
kafka | [2024-04-25 14:22:59,797] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-44 (state.change.logger)
kafka | [2024-04-25 14:22:59,797] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-14 (state.change.logger)
kafka | [2024-04-25 14:22:59,797] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-23 (state.change.logger)
kafka | [2024-04-25 14:22:59,797] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-38 (state.change.logger)
kafka | [2024-04-25 14:22:59,797] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-8 (state.change.logger)
kafka | [2024-04-25 14:22:59,797] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-45 (state.change.logger)
kafka | [2024-04-25 14:22:59,797] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-15 (state.change.logger)
kafka | [2024-04-25 14:22:59,797] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-30 (state.change.logger)
kafka | [2024-04-25 14:22:59,797] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-0 (state.change.logger)
kafka | [2024-04-25 14:22:59,797] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-35 (state.change.logger)
kafka | [2024-04-25 14:22:59,797] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-5 (state.change.logger)
kafka | [2024-04-25 14:22:59,797] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-20 (state.change.logger)
kafka | [2024-04-25 14:22:59,797] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-27 (state.change.logger)
kafka | [2024-04-25 14:22:59,797] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-42 (state.change.logger)
kafka | [2024-04-25 14:22:59,797] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-12 (state.change.logger)
kafka | [2024-04-25 14:22:59,797] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-21 (state.change.logger)
kafka | [2024-04-25 14:22:59,797] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-36 (state.change.logger)
kafka | [2024-04-25 14:22:59,797] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-6 (state.change.logger)
kafka | [2024-04-25 14:22:59,797] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-43 (state.change.logger)
kafka | [2024-04-25 14:22:59,797] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-13 (state.change.logger)
kafka | [2024-04-25 14:22:59,797] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-28 (state.change.logger)
kafka | [2024-04-25 14:22:59,799] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 3 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-04-25 14:22:59,800] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-3 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-04-25 14:22:59,802] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 18 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-04-25 14:22:59,802] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-18 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-04-25 14:22:59,802] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 41 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-04-25 14:22:59,802] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-41 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-04-25 14:22:59,802] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 10 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-04-25 14:22:59,802] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-10 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-04-25 14:22:59,802] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 33 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-04-25 14:22:59,802] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-33 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-04-25 14:22:59,802] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 48 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-04-25 14:22:59,802] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-48 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-04-25 14:22:59,802] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 19 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-04-25 14:22:59,802] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-19 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-04-25 14:22:59,802] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 34 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-04-25 14:22:59,802] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-34 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-04-25 14:22:59,802] INFO [GroupCoordinator 1]: Elected as the 
group coordinator for partition 4 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-04-25 14:22:59,802] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-4 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-04-25 14:22:59,802] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 11 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-04-25 14:22:59,802] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-11 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-04-25 14:22:59,802] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 26 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-04-25 14:22:59,802] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-26 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-04-25 14:22:59,802] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 49 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-04-25 14:22:59,802] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-49 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-04-25 14:22:59,802] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 39 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-04-25 14:22:59,802] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-39 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-04-25 14:22:59,802] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 9 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-04-25 14:22:59,802] INFO 
[GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-9 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-04-25 14:22:59,802] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 24 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-04-25 14:22:59,802] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-24 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-04-25 14:22:59,802] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 31 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-04-25 14:22:59,802] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-31 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-04-25 14:22:59,802] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 46 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-04-25 14:22:59,802] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-46 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-04-25 14:22:59,802] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 1 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-04-25 14:22:59,802] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-1 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-04-25 14:22:59,802] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 16 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-04-25 14:22:59,802] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-16 for epoch 0 
(kafka.coordinator.group.GroupMetadataManager) kafka | [2024-04-25 14:22:59,802] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 2 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-04-25 14:22:59,802] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-2 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-04-25 14:22:59,802] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 25 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-04-25 14:22:59,802] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-25 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-04-25 14:22:59,802] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 40 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-04-25 14:22:59,802] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-40 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-04-25 14:22:59,802] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 47 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-04-25 14:22:59,802] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-47 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-04-25 14:22:59,802] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 17 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-04-25 14:22:59,802] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-17 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-04-25 14:22:59,802] INFO [GroupCoordinator 1]: Elected as the group 
coordinator for partition 32 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-04-25 14:22:59,802] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-32 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-04-25 14:22:59,802] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 37 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-04-25 14:22:59,802] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-37 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-04-25 14:22:59,802] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 7 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-04-25 14:22:59,802] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-7 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-04-25 14:22:59,802] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 22 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-04-25 14:22:59,802] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-22 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-04-25 14:22:59,803] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 29 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-04-25 14:22:59,803] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-29 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-04-25 14:22:59,803] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 44 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-04-25 14:22:59,803] INFO 
[GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-44 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-04-25 14:22:59,803] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 14 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-04-25 14:22:59,803] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-14 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-04-25 14:22:59,803] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 23 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-04-25 14:22:59,803] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-23 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-04-25 14:22:59,803] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 38 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-04-25 14:22:59,803] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-38 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-04-25 14:22:59,803] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 8 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-04-25 14:22:59,803] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-8 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-04-25 14:22:59,803] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 45 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-04-25 14:22:59,803] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-45 for epoch 0 
(kafka.coordinator.group.GroupMetadataManager) kafka | [2024-04-25 14:22:59,803] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 15 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-04-25 14:22:59,803] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-15 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-04-25 14:22:59,803] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 30 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-04-25 14:22:59,803] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-30 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-04-25 14:22:59,803] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 0 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-04-25 14:22:59,803] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-0 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-04-25 14:22:59,803] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 35 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-04-25 14:22:59,803] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-35 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-04-25 14:22:59,803] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 5 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-04-25 14:22:59,803] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-5 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-04-25 14:22:59,803] INFO [GroupCoordinator 1]: Elected as the group 
coordinator for partition 20 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-04-25 14:22:59,803] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-20 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-04-25 14:22:59,803] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 27 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-04-25 14:22:59,803] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-27 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-04-25 14:22:59,803] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 42 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-04-25 14:22:59,803] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-42 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-04-25 14:22:59,803] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 12 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-04-25 14:22:59,803] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-12 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-04-25 14:22:59,803] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 21 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-04-25 14:22:59,803] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-21 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-04-25 14:22:59,803] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 36 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-04-25 14:22:59,803] INFO 
[GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-36 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-04-25 14:22:59,803] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 6 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-04-25 14:22:59,803] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-6 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-04-25 14:22:59,803] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 43 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-04-25 14:22:59,803] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-43 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-04-25 14:22:59,803] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 13 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-04-25 14:22:59,803] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-13 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-04-25 14:22:59,803] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 28 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-04-25 14:22:59,803] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-28 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-04-25 14:22:59,803] INFO [Broker id=1] Finished LeaderAndIsr request in 775ms correlationId 3 from controller 1 for 50 partitions (state.change.logger) kafka | [2024-04-25 14:22:59,805] TRACE [Controller id=1 epoch=1] Received response LeaderAndIsrResponseData(errorCode=0, partitionErrors=[], 
topics=[LeaderAndIsrTopicError(topicId=Z-ljZKLXR-y1QhXAaAKdbg, partitionErrors=[LeaderAndIsrPartitionError(topicName='', partitionIndex=13, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=46, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=9, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=42, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=21, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=17, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=30, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=26, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=5, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=38, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=1, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=34, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=16, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=45, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=12, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=41, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=24, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=20, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=49, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=0, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=29, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=25, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=8, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=37, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=4, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=33, errorCode=0), 
LeaderAndIsrPartitionError(topicName='', partitionIndex=15, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=48, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=11, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=44, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=23, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=19, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=32, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=28, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=7, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=40, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=3, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=36, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=47, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=14, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=43, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=10, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=22, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=18, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=31, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=27, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=39, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=6, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=35, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=2, errorCode=0)])]) for request LEADER_AND_ISR with correlation id 3 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger) kafka | [2024-04-25 14:22:59,807] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group 
metadata from __consumer_offsets-3 in 6 milliseconds for epoch 0, of which 3 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-04-25 14:22:59,809] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=13, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-13 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2024-04-25 14:22:59,812] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=46, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-46 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2024-04-25 14:22:59,812] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=9, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-9 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2024-04-25 14:22:59,812] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=42, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-42 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2024-04-25 14:22:59,812] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=21, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], 
offlineReplicas=[]) for partition __consumer_offsets-21 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2024-04-25 14:22:59,812] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=17, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-17 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2024-04-25 14:22:59,812] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=30, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-30 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2024-04-25 14:22:59,812] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=26, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-26 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2024-04-25 14:22:59,812] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=5, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-5 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2024-04-25 14:22:59,809] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-18 in 7 milliseconds for epoch 0, of which 7 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) kafka | [2024-04-25 14:22:59,812] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=38, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-38 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2024-04-25 14:22:59,812] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=1, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-1 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2024-04-25 14:22:59,812] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=34, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-34 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2024-04-25 14:22:59,812] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=16, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-16 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2024-04-25 14:22:59,812] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=45, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-45 in response to UpdateMetadata request sent by controller 1 epoch 
1 with correlation id 4 (state.change.logger) kafka | [2024-04-25 14:22:59,812] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=12, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-12 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2024-04-25 14:22:59,812] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=41, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-41 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2024-04-25 14:22:59,812] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=24, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-24 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2024-04-25 14:22:59,812] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=20, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-20 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2024-04-25 14:22:59,812] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=49, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-49 in response to UpdateMetadata request sent by controller 1 epoch 
1 with correlation id 4 (state.change.logger)
kafka | [2024-04-25 14:22:59,812] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-0 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
kafka | [2024-04-25 14:22:59,812] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=29, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-29 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
kafka | [2024-04-25 14:22:59,812] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=25, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-25 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
kafka | [2024-04-25 14:22:59,812] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=8, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-8 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
kafka | [2024-04-25 14:22:59,812] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=37, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-37 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
kafka | [2024-04-25 14:22:59,812] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=4, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-4 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
kafka | [2024-04-25 14:22:59,812] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=33, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-33 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
kafka | [2024-04-25 14:22:59,812] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=15, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-15 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
kafka | [2024-04-25 14:22:59,812] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=48, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-48 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
kafka | [2024-04-25 14:22:59,812] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=11, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-11 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
kafka | [2024-04-25 14:22:59,812] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=44, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-44 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
kafka | [2024-04-25 14:22:59,812] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=23, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-23 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
kafka | [2024-04-25 14:22:59,812] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=19, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-19 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
kafka | [2024-04-25 14:22:59,812] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-41 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler.
(kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-04-25 14:22:59,812] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=32, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-32 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
kafka | [2024-04-25 14:22:59,813] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=28, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-28 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
kafka | [2024-04-25 14:22:59,813] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=7, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-7 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
kafka | [2024-04-25 14:22:59,813] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=40, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-40 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
kafka | [2024-04-25 14:22:59,813] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=3, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-3 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
kafka | [2024-04-25 14:22:59,813] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=36, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-36 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
kafka | [2024-04-25 14:22:59,813] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=47, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-47 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
kafka | [2024-04-25 14:22:59,813] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=14, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-14 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
kafka | [2024-04-25 14:22:59,813] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=43, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-43 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
kafka | [2024-04-25 14:22:59,813] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=10, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-10 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
kafka | [2024-04-25 14:22:59,813] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=22, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-22 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
kafka | [2024-04-25 14:22:59,813] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-10 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-04-25 14:22:59,813] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=18, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-18 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
kafka | [2024-04-25 14:22:59,813] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=31, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-31 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
kafka | [2024-04-25 14:22:59,813] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=27, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-27 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
kafka | [2024-04-25 14:22:59,813] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=39, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-39 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
kafka | [2024-04-25 14:22:59,813] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=6, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-6 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
kafka | [2024-04-25 14:22:59,813] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=35, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-35 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
kafka | [2024-04-25 14:22:59,813] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=2, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-2 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
kafka | [2024-04-25 14:22:59,813] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-33 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler.
(kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-04-25 14:22:59,813] INFO [Broker id=1] Add 50 partitions and deleted 0 partitions from metadata cache in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
kafka | [2024-04-25 14:22:59,813] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-48 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-04-25 14:22:59,813] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-19 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-04-25 14:22:59,813] TRACE [Controller id=1 epoch=1] Received response UpdateMetadataResponseData(errorCode=0) for request UPDATE_METADATA with correlation id 4 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger)
kafka | [2024-04-25 14:22:59,813] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-34 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-04-25 14:22:59,813] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-4 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-04-25 14:22:59,814] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-11 in 12 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-04-25 14:22:59,814] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-26 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-04-25 14:22:59,814] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-49 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-04-25 14:22:59,814] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-39 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-04-25 14:22:59,814] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-9 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-04-25 14:22:59,814] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-24 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-04-25 14:22:59,814] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-31 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-04-25 14:22:59,814] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-46 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-04-25 14:22:59,815] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-1 in 13 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-04-25 14:22:59,815] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-16 in 13 milliseconds for epoch 0, of which 13 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-04-25 14:22:59,815] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-2 in 13 milliseconds for epoch 0, of which 13 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-04-25 14:22:59,815] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-25 in 13 milliseconds for epoch 0, of which 13 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-04-25 14:22:59,815] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-40 in 13 milliseconds for epoch 0, of which 13 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-04-25 14:22:59,816] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-47 in 14 milliseconds for epoch 0, of which 13 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-04-25 14:22:59,816] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-17 in 14 milliseconds for epoch 0, of which 14 milliseconds was spent in the scheduler.
(kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-04-25 14:22:59,816] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-32 in 14 milliseconds for epoch 0, of which 14 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-04-25 14:22:59,816] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-37 in 14 milliseconds for epoch 0, of which 14 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-04-25 14:22:59,816] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-7 in 14 milliseconds for epoch 0, of which 14 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-04-25 14:22:59,816] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-22 in 13 milliseconds for epoch 0, of which 13 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-04-25 14:22:59,816] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-29 in 13 milliseconds for epoch 0, of which 13 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-04-25 14:22:59,816] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-44 in 13 milliseconds for epoch 0, of which 13 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-04-25 14:22:59,817] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-14 in 14 milliseconds for epoch 0, of which 13 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-04-25 14:22:59,817] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-23 in 14 milliseconds for epoch 0, of which 14 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-04-25 14:22:59,817] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-38 in 14 milliseconds for epoch 0, of which 14 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-04-25 14:22:59,817] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-8 in 14 milliseconds for epoch 0, of which 14 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-04-25 14:22:59,817] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-45 in 14 milliseconds for epoch 0, of which 14 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-04-25 14:22:59,817] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-15 in 14 milliseconds for epoch 0, of which 14 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-04-25 14:22:59,817] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-30 in 14 milliseconds for epoch 0, of which 14 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-04-25 14:22:59,817] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-0 in 14 milliseconds for epoch 0, of which 14 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-04-25 14:22:59,818] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-35 in 15 milliseconds for epoch 0, of which 14 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-04-25 14:22:59,818] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-5 in 15 milliseconds for epoch 0, of which 15 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-04-25 14:22:59,818] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-20 in 15 milliseconds for epoch 0, of which 15 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-04-25 14:22:59,818] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-27 in 15 milliseconds for epoch 0, of which 15 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-04-25 14:22:59,818] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-42 in 15 milliseconds for epoch 0, of which 15 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-04-25 14:22:59,818] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-12 in 15 milliseconds for epoch 0, of which 15 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-04-25 14:22:59,818] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-21 in 15 milliseconds for epoch 0, of which 15 milliseconds was spent in the scheduler.
(kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-04-25 14:22:59,818] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-36 in 15 milliseconds for epoch 0, of which 15 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-04-25 14:22:59,818] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-6 in 15 milliseconds for epoch 0, of which 15 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-04-25 14:22:59,819] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-43 in 16 milliseconds for epoch 0, of which 16 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-04-25 14:22:59,819] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-13 in 16 milliseconds for epoch 0, of which 16 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-04-25 14:22:59,819] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-28 in 16 milliseconds for epoch 0, of which 16 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-04-25 14:22:59,863] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group policy-pap in Empty state. Created a new member id consumer-policy-pap-4-7d6a46fb-ce9a-48ca-aab9-0c5f15e0232a and request the member to rejoin with this id. (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-04-25 14:22:59,878] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group 5f0ab5d6-63b3-4b5a-a200-3d330f0096ce in Empty state. Created a new member id consumer-5f0ab5d6-63b3-4b5a-a200-3d330f0096ce-2-35a6a127-a715-4075-8b68-3fa09af1055b and request the member to rejoin with this id. (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-04-25 14:22:59,881] INFO [GroupCoordinator 1]: Preparing to rebalance group policy-pap in state PreparingRebalance with old generation 0 (__consumer_offsets-24) (reason: Adding new member consumer-policy-pap-4-7d6a46fb-ce9a-48ca-aab9-0c5f15e0232a with group instance id None; client reason: rebalance failed due to MemberIdRequiredException) (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-04-25 14:22:59,882] INFO [GroupCoordinator 1]: Preparing to rebalance group 5f0ab5d6-63b3-4b5a-a200-3d330f0096ce in state PreparingRebalance with old generation 0 (__consumer_offsets-22) (reason: Adding new member consumer-5f0ab5d6-63b3-4b5a-a200-3d330f0096ce-2-35a6a127-a715-4075-8b68-3fa09af1055b with group instance id None; client reason: rebalance failed due to MemberIdRequiredException) (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-04-25 14:22:59,896] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group b957469a-2969-4bff-8555-1bfe3e4d4da0 in Empty state. Created a new member id consumer-b957469a-2969-4bff-8555-1bfe3e4d4da0-3-56fea4ad-7919-46ce-ba06-2844c28fe167 and request the member to rejoin with this id. (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-04-25 14:22:59,902] INFO [GroupCoordinator 1]: Preparing to rebalance group b957469a-2969-4bff-8555-1bfe3e4d4da0 in state PreparingRebalance with old generation 0 (__consumer_offsets-5) (reason: Adding new member consumer-b957469a-2969-4bff-8555-1bfe3e4d4da0-3-56fea4ad-7919-46ce-ba06-2844c28fe167 with group instance id None; client reason: rebalance failed due to MemberIdRequiredException) (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-04-25 14:23:02,892] INFO [GroupCoordinator 1]: Stabilized group policy-pap generation 1 (__consumer_offsets-24) with 1 members (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-04-25 14:23:02,896] INFO [GroupCoordinator 1]: Stabilized group 5f0ab5d6-63b3-4b5a-a200-3d330f0096ce generation 1 (__consumer_offsets-22) with 1 members (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-04-25 14:23:02,903] INFO [GroupCoordinator 1]: Stabilized group b957469a-2969-4bff-8555-1bfe3e4d4da0 generation 1 (__consumer_offsets-5) with 1 members (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-04-25 14:23:02,926] INFO [GroupCoordinator 1]: Assignment received from leader consumer-policy-pap-4-7d6a46fb-ce9a-48ca-aab9-0c5f15e0232a for group policy-pap for generation 1. The group has 1 members, 0 of which are static. (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-04-25 14:23:02,928] INFO [GroupCoordinator 1]: Assignment received from leader consumer-b957469a-2969-4bff-8555-1bfe3e4d4da0-3-56fea4ad-7919-46ce-ba06-2844c28fe167 for group b957469a-2969-4bff-8555-1bfe3e4d4da0 for generation 1. The group has 1 members, 0 of which are static. (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-04-25 14:23:02,928] INFO [GroupCoordinator 1]: Assignment received from leader consumer-5f0ab5d6-63b3-4b5a-a200-3d330f0096ce-2-35a6a127-a715-4075-8b68-3fa09af1055b for group 5f0ab5d6-63b3-4b5a-a200-3d330f0096ce for generation 1. The group has 1 members, 0 of which are static. (kafka.coordinator.group.GroupCoordinator)
++ echo 'Tearing down containers...'
Tearing down containers...
++ docker-compose down -v --remove-orphans
Stopping policy-apex-pdp ...
Stopping policy-pap ...
Stopping grafana ...
Stopping kafka ...
Stopping policy-api ...
Stopping mariadb ...
Stopping simulator ...
Stopping zookeeper ...
Stopping prometheus ...
Stopping grafana ... done
Stopping prometheus ... done
Stopping policy-apex-pdp ... done
Stopping simulator ... done
Stopping policy-pap ... done
Stopping mariadb ... done
Stopping kafka ... done
Stopping zookeeper ... done
Stopping policy-api ... done
Removing policy-apex-pdp ...
Removing policy-pap ...
Removing grafana ...
Removing kafka ...
Removing policy-api ...
Removing policy-db-migrator ...
Removing mariadb ...
Removing simulator ...
Removing zookeeper ...
Removing prometheus ...
Removing policy-pap ... done
Removing policy-api ... done
Removing policy-apex-pdp ... done
Removing policy-db-migrator ... done
Removing mariadb ... done
Removing prometheus ... done
Removing simulator ... done
Removing zookeeper ... done
Removing grafana ... done
Removing kafka ... done
Removing network compose_default
++ cd /w/workspace/policy-pap-master-project-csit-pap
+ load_set
+ _setopts=hxB
++ echo braceexpand:hashall:interactive-comments:xtrace
++ tr : ' '
+ for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
+ set +o braceexpand
+ for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
+ set +o hashall
+ for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
+ set +o interactive-comments
+ for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
+ set +o xtrace
++ echo hxB
++ sed 's/./& /g'
+ for i in $(echo "$_setopts" | sed 's/./& /g')
+ set +h
+ for i in $(echo "$_setopts" | sed 's/./& /g')
+ set +x
+ rsync /w/workspace/policy-pap-master-project-csit-pap/compose/docker_compose.log /w/workspace/policy-pap-master-project-csit-pap/csit/archives/pap
+ [[ -n /tmp/tmp.9uiB25C2Gx ]]
+ rsync -av /tmp/tmp.9uiB25C2Gx/ /w/workspace/policy-pap-master-project-csit-pap/csit/archives/pap
sending incremental file list
./
log.html
output.xml
report.html
testplan.txt
sent 918,987 bytes  received 95 bytes  1,838,164.00 bytes/sec
total size is 918,445  speedup is 1.00
+ rm -rf /w/workspace/policy-pap-master-project-csit-pap/models
+ exit 1
Build step 'Execute shell' marked build as failure
$ ssh-agent -k
unset SSH_AUTH_SOCK;
unset SSH_AGENT_PID;
echo Agent pid 2190 killed;
[ssh-agent] Stopped.
Robot results publisher started...
INFO: Checking test criticality is deprecated and will be dropped in a future release!
-Parsing output xml: Done!
WARNING! Could not find file: **/log.html
WARNING! Could not find file: **/report.html
-Copying log files to build dir: Done!
-Assigning results to build: Done!
-Checking thresholds: Done!
Done publishing Robot results.
[PostBuildScript] - [INFO] Executing post build scripts.
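The GroupCoordinator lines earlier in the log show each consumer group being pinned to one `__consumer_offsets` partition (e.g. group `policy-pap` on `__consumer_offsets-24`, with 50 partitions in the offsets topic per the "Add 50 partitions" line). Kafka derives that partition from the Java `String.hashCode` of the group id, modulo the offsets-topic partition count. A minimal sketch of that mapping, with function names of our own choosing:

```python
# Sketch of Kafka's group-id -> __consumer_offsets partition mapping.
# Function names are ours; the algorithm is abs(groupId.hashCode) % numPartitions.

def java_string_hashcode(s: str) -> int:
    """Java String.hashCode(): h = 31*h + c, in 32-bit signed arithmetic."""
    h = 0
    for ch in s:
        h = (31 * h + ord(ch)) & 0xFFFFFFFF
    return h - 0x100000000 if h >= 0x80000000 else h

def offsets_partition(group_id: str, num_partitions: int = 50) -> int:
    # Kafka masks the hash to a non-negative value before taking the modulus;
    # 50 is the default offsets.topic.num.partitions, matching this log.
    return (java_string_hashcode(group_id) & 0x7FFFFFFF) % num_partitions

print(offsets_partition("policy-pap"))  # → 24, matching __consumer_offsets-24 above
```

The broker that leads the resulting partition acts as the group's coordinator, which is why the rebalance and stabilization messages for `policy-pap` all reference `__consumer_offsets-24`.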
[policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins11193931219247660130.sh
---> sysstat.sh
[policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins5124674265485507533.sh
---> package-listing.sh
++ tr '[:upper:]' '[:lower:]'
++ facter osfamily
+ OS_FAMILY=debian
+ workspace=/w/workspace/policy-pap-master-project-csit-pap
+ START_PACKAGES=/tmp/packages_start.txt
+ END_PACKAGES=/tmp/packages_end.txt
+ DIFF_PACKAGES=/tmp/packages_diff.txt
+ PACKAGES=/tmp/packages_start.txt
+ '[' /w/workspace/policy-pap-master-project-csit-pap ']'
+ PACKAGES=/tmp/packages_end.txt
+ case "${OS_FAMILY}" in
+ dpkg -l
+ grep '^ii'
+ '[' -f /tmp/packages_start.txt ']'
+ '[' -f /tmp/packages_end.txt ']'
+ diff /tmp/packages_start.txt /tmp/packages_end.txt
+ '[' /w/workspace/policy-pap-master-project-csit-pap ']'
+ mkdir -p /w/workspace/policy-pap-master-project-csit-pap/archives/
+ cp -f /tmp/packages_diff.txt /tmp/packages_end.txt /tmp/packages_start.txt /w/workspace/policy-pap-master-project-csit-pap/archives/
[policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins2245818625615984182.sh
---> capture-instance-metadata.sh
Setup pyenv:
  system
  3.8.13
  3.9.13
* 3.10.6 (set by /w/workspace/policy-pap-master-project-csit-pap/.python-version)
lf-activate-venv(): INFO: Reuse venv:/tmp/venv-cpZN from file:/tmp/.os_lf_venv
lf-activate-venv(): INFO: Installing: lftools
lf-activate-venv(): INFO: Adding /tmp/venv-cpZN/bin to PATH
INFO: Running in OpenStack, capturing instance metadata
[policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins3900029720496613243.sh
provisioning config files...
copy managed file [jenkins-log-archives-settings] to file:/w/workspace/policy-pap-master-project-csit-pap@tmp/config7657330470754527008tmp
Regular expression run condition: Expression=[^.*logs-s3.*], Label=[]
Run condition [Regular expression match] preventing perform for step [Provide Configuration files]
[EnvInject] - Injecting environment variables from a build step.
[EnvInject] - Injecting as environment variables the properties content
SERVER_ID=logs
[EnvInject] - Variables injected successfully.
[policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins7986511254064810390.sh
---> create-netrc.sh
[policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins16931165663722923543.sh
---> python-tools-install.sh
Setup pyenv:
  system
  3.8.13
  3.9.13
* 3.10.6 (set by /w/workspace/policy-pap-master-project-csit-pap/.python-version)
lf-activate-venv(): INFO: Reuse venv:/tmp/venv-cpZN from file:/tmp/.os_lf_venv
lf-activate-venv(): INFO: Installing: lftools
lf-activate-venv(): INFO: Adding /tmp/venv-cpZN/bin to PATH
[policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins9846893542837962052.sh
---> sudo-logs.sh
Archiving 'sudo' log..
[policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins2816951711083229067.sh
---> job-cost.sh
Setup pyenv:
  system
  3.8.13
  3.9.13
* 3.10.6 (set by /w/workspace/policy-pap-master-project-csit-pap/.python-version)
lf-activate-venv(): INFO: Reuse venv:/tmp/venv-cpZN from file:/tmp/.os_lf_venv
lf-activate-venv(): INFO: Installing: zipp==1.1.0 python-openstackclient urllib3~=1.26.15
lf-activate-venv(): INFO: Adding /tmp/venv-cpZN/bin to PATH
INFO: No Stack...
INFO: Retrieving Pricing Info for: v3-standard-8
INFO: Archiving Costs
[policy-pap-master-project-csit-pap] $ /bin/bash -l /tmp/jenkins7023004629369347686.sh
---> logs-deploy.sh
Setup pyenv:
  system
  3.8.13
  3.9.13
* 3.10.6 (set by /w/workspace/policy-pap-master-project-csit-pap/.python-version)
lf-activate-venv(): INFO: Reuse venv:/tmp/venv-cpZN from file:/tmp/.os_lf_venv
lf-activate-venv(): INFO: Installing: lftools
lf-activate-venv(): INFO: Adding /tmp/venv-cpZN/bin to PATH
INFO: Nexus URL https://nexus.onap.org path production/vex-yul-ecomp-jenkins-1/policy-pap-master-project-csit-pap/1663
INFO: archiving workspace using pattern(s): -p **/target/surefire-reports/*-output.txt
Archives upload complete.
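The `package-listing.sh` trace above snapshots the installed package list (`dpkg -l | grep '^ii'`) before and after the build and archives the `diff` between the two snapshots. The same idea, reduced to a minimal in-memory sketch (the sample package entries are invented for illustration):

```python
# Minimal sketch of the before/after package diff that package-listing.sh
# archives. Sample package entries are invented, not taken from this build.

start = {"curl 7.58.0", "git 2.17.1"}                   # packages_start.txt
end = {"curl 7.58.0", "git 2.17.1", "rsync 3.1.2"}      # packages_end.txt

added = sorted(end - start)      # present only after the build
removed = sorted(start - end)    # present only before the build
print(added, removed)  # → ['rsync 3.1.2'] []
```

Set difference gives the same information as a textual `diff` here, since each snapshot is one package per line.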
INFO: archiving logs to Nexus

---> uname -a:
Linux prd-ubuntu1804-docker-8c-8g-27901 4.15.0-192-generic #203-Ubuntu SMP Wed Aug 10 17:40:03 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux

---> lscpu:
Architecture:        x86_64
CPU op-mode(s):      32-bit, 64-bit
Byte Order:          Little Endian
CPU(s):              8
On-line CPU(s) list: 0-7
Thread(s) per core:  1
Core(s) per socket:  1
Socket(s):           8
NUMA node(s):        1
Vendor ID:           AuthenticAMD
CPU family:          23
Model:               49
Model name:          AMD EPYC-Rome Processor
Stepping:            0
CPU MHz:             2799.998
BogoMIPS:            5599.99
Virtualization:      AMD-V
Hypervisor vendor:   KVM
Virtualization type: full
L1d cache:           32K
L1i cache:           32K
L2 cache:            512K
L3 cache:            16384K
NUMA node0 CPU(s):   0-7
Flags:               fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm rep_good nopl xtopology cpuid extd_apicid tsc_known_freq pni pclmulqdq ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm cmp_legacy svm cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw topoext perfctr_core ssbd ibrs ibpb stibp vmmcall fsgsbase tsc_adjust bmi1 avx2 smep bmi2 rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves clzero xsaveerptr arat npt nrip_save umip rdpid arch_capabilities

---> nproc:
8

---> df -h:
Filesystem      Size  Used Avail Use% Mounted on
udev             16G     0   16G   0% /dev
tmpfs           3.2G  708K  3.2G   1% /run
/dev/vda1       155G   14G  142G   9% /
tmpfs            16G     0   16G   0% /dev/shm
tmpfs           5.0M     0  5.0M   0% /run/lock
tmpfs            16G     0   16G   0% /sys/fs/cgroup
/dev/vda15      105M  4.4M  100M   5% /boot/efi
tmpfs           3.2G     0  3.2G   0% /run/user/1001

---> free -m:
              total        used        free      shared  buff/cache   available
Mem:          32167         846       25375           0        5944       30864
Swap:          1023           0        1023

---> ip addr:
1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: ens3: mtu 1458 qdisc mq state UP group default qlen 1000
    link/ether fa:16:3e:ff:97:78 brd ff:ff:ff:ff:ff:ff
    inet 10.30.106.248/23 brd 10.30.107.255 scope global dynamic ens3
       valid_lft 85801sec preferred_lft 85801sec
    inet6 fe80::f816:3eff:feff:9778/64 scope link
       valid_lft forever preferred_lft forever
3: docker0: mtu 1500 qdisc noqueue state DOWN group default
    link/ether 02:42:41:0d:b7:df brd ff:ff:ff:ff:ff:ff
    inet 10.250.0.254/24 brd 10.250.0.255 scope global docker0
       valid_lft forever preferred_lft forever

---> sar -b -r -n DEV:
Linux 4.15.0-192-generic (prd-ubuntu1804-docker-8c-8g-27901)  04/25/24  _x86_64_  (8 CPU)

14:16:18     LINUX RESTART  (8 CPU)

14:17:02          tps      rtps      wtps   bread/s   bwrtn/s
14:18:01       133.89     70.65     63.25   4604.34  39736.59
14:19:01        79.32      1.77     77.55     84.65  25312.85
14:20:01        83.10     23.18     59.92   2813.80  22807.80
14:21:01       115.58      0.43    115.15     55.19  63386.90
14:22:01       113.26      0.08    113.18      5.73  72517.11
14:23:01       337.23     11.95    325.28    764.54  40584.70
14:24:01        18.28      0.07     18.21      3.07  18722.86
14:25:01        22.55      0.05     22.50     10.53  19371.70
14:26:01        73.07      1.87     71.20    111.45  18701.10
Average:       108.43     12.12     96.31    932.66  35675.11

14:17:02    kbmemfree   kbavail kbmemused  %memused kbbuffers  kbcached  kbcommit   %commit  kbactive   kbinact   kbdirty
14:18:01     30188040  31652308   2751180      8.35     56528   1726700   1502836      4.42    919264   1557960     76556
14:19:01     30088552  31741596   2850668      8.65     75672   1882224   1376552      4.05    828984   1718592     79596
14:20:01     29740796  31715364   3198424      9.71     89688   2178036   1398044      4.11    905680   1962764    140644
14:21:01     27239376  31668108   5699844     17.30    128748   4485816   1428960      4.20   1009364   4223584   1007492
14:22:01     26056288  31668596   6882932     20.90    139396   5610104   1499360      4.41   1018812   5346424    374800
14:23:01     23856752  29632144   9082468     27.57    155700   5736540   8899232     26.18   3230800   5253548      1412
14:24:01     23874620  29651092   9064600     27.52    155912   5737104   8802172     25.90   3215064   5250800       224
14:25:01     23903936  29706548   9035284     27.43    156320   5765236   8051032     23.69   3174964   5265396       232
14:26:01     26049040  31668364   6890180     20.92    158124   5597468   1503704      4.42   1247004   5109548      1880
Average:     26777489  31011569   6161731     18.71    124010   4302136   3829099     11.27   1727771   3965402    186982

14:17:02        IFACE   rxpck/s   txpck/s    rxkB/s    txkB/s   rxcmp/s   txcmp/s  rxmcst/s   %ifutil
14:18:01         ens3    387.08    271.73   1464.46     62.31      0.00      0.00      0.00      0.00
14:18:01      docker0      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
14:18:01           lo      1.49      1.49      0.16      0.16      0.00      0.00      0.00      0.00
14:19:01         ens3     18.90     15.25    245.15      3.56      0.00      0.00      0.00      0.00
14:19:01      docker0      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
14:19:01           lo      0.93      0.93      0.10      0.10      0.00      0.00      0.00      0.00
14:20:01         ens3     57.52     46.68    684.05      7.42      0.00      0.00      0.00      0.00
14:20:01  br-3695e8c45fd8    0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
14:20:01      docker0      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
14:20:01           lo      4.07      4.07      0.40      0.40      0.00      0.00      0.00      0.00
14:21:01         ens3    767.91    356.84  16981.76     25.16      0.00      0.00      0.00      0.00
14:21:01  br-3695e8c45fd8    0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
14:21:01      docker0      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
14:21:01           lo      5.20      5.20      0.52      0.52      0.00      0.00      0.00      0.00
14:22:01         ens3    393.45    188.84  12395.49     13.72      0.00      0.00      0.00      0.00
14:22:01  br-3695e8c45fd8    0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
14:22:01      docker0      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
14:22:01           lo      4.07      4.07      0.39      0.39      0.00      0.00      0.00      0.00
14:23:01         ens3      4.87      3.40      1.27      1.17      0.00      0.00      0.00      0.00
14:23:01  br-3695e8c45fd8    0.87      0.75      0.07      0.31      0.00      0.00      0.00      0.00
14:23:01      docker0      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
14:23:01  vethe349ec6     24.43     22.61     10.54     16.10      0.00      0.00      0.00      0.00
14:24:01         ens3      4.15      3.15      0.84      0.78      0.00      0.00      0.00      0.00
14:24:01  br-3695e8c45fd8    1.98      2.35      1.81      1.76      0.00      0.00      0.00      0.00
14:24:01      docker0      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
14:24:01  vethe349ec6     21.91     17.70      6.81     23.82      0.00      0.00      0.00      0.00
14:25:01         ens3     13.90     14.01      5.68     16.57      0.00      0.00      0.00      0.00
14:25:01  br-3695e8c45fd8    1.47      1.68      0.11      0.15      0.00      0.00      0.00      0.00
14:25:01      docker0      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
14:25:01  vethe349ec6      0.43      0.50      0.59      0.03      0.00      0.00      0.00      0.00
14:26:01         ens3     64.51     40.04     70.78     17.16      0.00      0.00      0.00      0.00
14:26:01      docker0      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
14:26:01           lo     35.13     35.13      6.23      6.23      0.00      0.00      0.00      0.00
Average:         ens3    189.90    104.14   3542.56     16.35      0.00      0.00      0.00      0.00
Average:      docker0      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
Average:           lo      3.53      3.53      0.66      0.66      0.00      0.00      0.00      0.00

---> sar -P ALL:
Linux 4.15.0-192-generic (prd-ubuntu1804-docker-8c-8g-27901)  04/25/24  _x86_64_  (8 CPU)

14:16:18     LINUX RESTART  (8 CPU)

14:17:02        CPU     %user     %nice   %system   %iowait    %steal     %idle
14:18:01        all      8.53      0.00      0.99      6.50      0.04     83.94
14:18:01          0      5.10      0.00      0.73      0.49      0.02     93.66
14:18:01          1      4.39      0.00      0.56      0.58      0.05     94.42
14:18:01          2      4.97      0.00      0.73     31.20      0.03     63.06
14:18:01          3      7.82      0.00      1.00      1.80      0.03     89.34
14:18:01          4      3.04      0.00      0.85     11.82      0.05     84.24
14:18:01          5      2.51      0.00      0.96      0.48      0.03     96.03
14:18:01          6     18.74      0.00      1.53      4.26      0.05     75.42
14:18:01          7     21.71      0.00      1.57      1.40      0.05     75.27
14:19:01        all      6.66      0.00      0.43      6.70      0.03     86.18
14:19:01          0      5.98      0.00      0.33      0.62      0.00     93.07
14:19:01          1      7.28      0.00      0.40      3.77      0.02     88.54
14:19:01          2      8.79      0.00      0.79     25.97      0.08     64.37
14:19:01          3     10.29      0.00      0.37      2.54      0.02     86.79
14:19:01          4     14.32      0.00      1.02      1.82      0.02     82.82
14:19:01          5      0.17      0.00      0.02      0.00      0.05     99.77
14:19:01          6      1.00      0.00      0.23     15.77      0.02     82.97
14:19:01          7      5.44      0.00      0.32      3.29      0.02     90.93
14:20:01        all      6.38      0.00      0.65      7.06      0.03     85.89
14:20:01          0      9.72      0.00      1.04      0.77      0.03     88.43
14:20:01          1      0.87      0.00      0.25     10.23      0.05     88.60
14:20:01          2     23.77      0.00      1.29     12.64      0.05     62.25
14:20:01          3      7.66      0.00      0.83      1.27      0.02     90.22
14:20:01          4      2.82      0.00      0.45      0.37      0.02     96.35
14:20:01          5      2.08      0.00      0.47     25.03      0.02     72.41
14:20:01          6      1.99      0.00      0.50      4.98      0.03     92.50
14:20:01          7      2.43      0.00      0.37      1.24      0.02     95.95
14:21:01        all      8.17      0.00      3.61     11.90      0.05     76.27
14:21:01          0      7.79      0.00      4.04      0.03      0.05     88.08
14:21:01          1      8.34      0.00      4.10     12.82      0.05     74.69
14:21:01          2      8.16      0.00      3.27     32.82      0.05     55.69
14:21:01          3      9.48      0.00      3.93      2.52      0.03     84.04
14:21:01          4      7.30      0.00      2.63      1.05      0.03     88.99
14:21:01          5      7.89      0.00      3.08      1.48      0.07     87.48
14:21:01          6      8.91      0.00      3.09      0.19      0.05     87.76
14:21:01          7      7.49      0.00      4.72     44.26      0.08     43.44
14:22:01        all      4.51      0.00      1.99     11.36      0.05     82.10
14:22:01          0      4.16      0.00      2.49      1.29      0.12     91.95
14:22:01          1      4.91      0.00      1.74      7.66      0.03     85.66
14:22:01          2      4.44      0.00      2.13     17.29      0.03     76.11
14:22:01          3      4.77      0.00      2.18      1.04      0.05     91.96
14:22:01          4      4.75      0.00      1.68      0.08      0.05     93.44
14:22:01          5      2.74      0.00      1.81      0.44      0.02     94.99
14:22:01          6      5.16      0.00      1.74      0.55      0.03     92.51
14:22:01          7      5.16      0.00      2.14     62.77      0.05     29.88
14:23:01        all     24.95      0.00      3.56      8.67      0.09     62.74
14:23:01          0     28.64      0.00      3.79      8.13      0.10     59.34
14:23:01          1     25.78      0.00      3.54     23.95      0.08     46.64
14:23:01          2     30.17      0.00      4.14      7.21      0.08     58.40
14:23:01          3     24.05      0.00      3.63      6.05      0.07     66.20
14:23:01          4     27.73      0.00      3.61      4.96      0.08     63.61
14:23:01          5     16.98      0.00      2.42     11.90      0.10     68.61
14:23:01          6     22.77      0.00      3.65      2.18      0.08     71.32
14:23:01          7     23.58      0.00      3.67      4.95      0.10     67.69
14:24:01        all      6.64      0.00      0.61      1.20      0.06     91.49
14:24:01          0      5.73      0.00      0.60      0.00      0.03     93.64
14:24:01          1      6.79      0.00      0.52      0.00      0.03     92.66
14:24:01          2      6.50      0.00      0.65      0.05      0.05     92.75
14:24:01          3      7.87      0.00      0.63      0.00      0.07     91.43
14:24:01          4      6.37      0.00      0.60      9.42      0.07     83.54
14:24:01          5      5.89      0.00      0.43      0.08      0.07     93.54
14:24:01          6      6.93      0.00      0.68      0.02      0.05     92.32
14:24:01          7      7.05      0.00      0.79      0.02      0.08     92.07
14:25:01        all      1.51      0.00      0.30      1.52      0.06     96.61
14:25:01          0      1.85      0.00      0.30      0.00      0.05     97.80
14:25:01          1      0.82      0.00      0.32      0.60      0.05     98.22
14:25:01          2      0.52      0.00      0.22      0.02      0.03     99.22
14:25:01          3      1.44      0.00      0.32      0.12      0.07     98.06
14:25:01          4      1.10      0.00      0.30     11.20      0.05     87.34
14:25:01          5      3.12      0.00      0.21      0.10      0.05     96.51
14:25:01          6      1.09      0.00      0.35      0.00      0.05     98.51
14:25:01          7      2.14      0.00      0.40      0.13      0.10     97.22
14:26:01        all      5.98      0.00      0.55      1.95      0.03     91.49
14:26:01          0      3.42      0.00      0.45      0.17      0.02     95.94
14:26:01          1      2.17      0.00      0.47      0.47      0.02     96.88
14:26:01          2      3.69      0.00      0.62      0.27      0.03     95.39
14:26:01          3      7.94      0.00      0.55      0.23      0.03     91.24
14:26:01          4      0.65      0.00      0.45     12.58      0.07     86.25
14:26:01          5     15.72      0.00      0.77      0.60      0.05     82.86
14:26:01          6      1.08      0.00      0.45      0.17      0.02     98.28
14:26:01          7     13.21      0.00      0.65      1.12      0.03     84.98
Average:        all      8.14      0.00      1.41      6.30      0.05     84.10
Average:          0      8.04      0.00      1.53      1.28      0.05     89.11
Average:          1      6.80      0.00      1.32      6.66      0.04     85.19
Average:          2     10.09      0.00      1.54     14.10      0.05     74.23
Average:          3      9.02      0.00      1.49      1.72      0.04     87.73
Average:          4      7.56      0.00      1.29      5.92      0.05     85.19
Average:          5      6.36      0.00      1.13      4.46      0.05     88.00
Average:          6      7.48      0.00      1.35      3.13      0.04     87.99
Average:          7      9.77      0.00      1.62     13.22      0.06     75.33