Started by upstream project "policy-pap-master-merge-java" build number 350
originally caused by:
 Triggered by Gerrit: https://gerrit.onap.org/r/c/policy/pap/+/137752
Running as SYSTEM
[EnvInject] - Loading node environment variables.
Building remotely on prd-ubuntu1804-docker-8c-8g-26122 (ubuntu1804-docker-8c-8g) in workspace /w/workspace/policy-pap-master-project-csit-pap
[ssh-agent] Looking for ssh-agent implementation...
[ssh-agent] Exec ssh-agent (binary ssh-agent on a remote machine)
$ ssh-agent
SSH_AUTH_SOCK=/tmp/ssh-dhttlsfCYS4a/agent.2078
SSH_AGENT_PID=2080
[ssh-agent] Started.
Running ssh-add (command line suppressed)
Identity added: /w/workspace/policy-pap-master-project-csit-pap@tmp/private_key_9210417928353453994.key (/w/workspace/policy-pap-master-project-csit-pap@tmp/private_key_9210417928353453994.key)
[ssh-agent] Using credentials onap-jobbuiler (Gerrit user)
The recommended git tool is: NONE
using credential onap-jenkins-ssh
Wiping out workspace first.
Cloning the remote Git repository
Cloning repository git://cloud.onap.org/mirror/policy/docker.git
 > git init /w/workspace/policy-pap-master-project-csit-pap # timeout=10
Fetching upstream changes from git://cloud.onap.org/mirror/policy/docker.git
 > git --version # timeout=10
 > git --version # 'git version 2.17.1'
using GIT_SSH to set credentials Gerrit user
Verifying host key using manually-configured host key entries
 > git fetch --tags --progress -- git://cloud.onap.org/mirror/policy/docker.git +refs/heads/*:refs/remotes/origin/* # timeout=30
 > git config remote.origin.url git://cloud.onap.org/mirror/policy/docker.git # timeout=10
 > git config --add remote.origin.fetch +refs/heads/*:refs/remotes/origin/* # timeout=10
Avoid second fetch
 > git rev-parse refs/remotes/origin/master^{commit} # timeout=10
Checking out Revision 0d7c8284756c9a15d526c2d282cfc1dfd1595ffb (refs/remotes/origin/master)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f 0d7c8284756c9a15d526c2d282cfc1dfd1595ffb # timeout=30
Commit message: "Update snapshot and/or references of policy/docker to latest snapshots"
 > git rev-list --no-walk 0d7c8284756c9a15d526c2d282cfc1dfd1595ffb # timeout=10
provisioning config files...
copy managed file [npmrc] to file:/home/jenkins/.npmrc
copy managed file [pipconf] to file:/home/jenkins/.config/pip/pip.conf
[policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins1083621391599179200.sh
---> python-tools-install.sh
Setup pyenv:
* system (set by /opt/pyenv/version)
* 3.8.13 (set by /opt/pyenv/version)
* 3.9.13 (set by /opt/pyenv/version)
* 3.10.6 (set by /opt/pyenv/version)
lf-activate-venv(): INFO: Creating python3 venv at /tmp/venv-saub
lf-activate-venv(): INFO: Save venv in file: /tmp/.os_lf_venv
lf-activate-venv(): INFO: Installing: lftools
lf-activate-venv(): INFO: Adding /tmp/venv-saub/bin to PATH
Generating Requirements File
Python 3.10.6
pip 24.0 from /tmp/venv-saub/lib/python3.10/site-packages/pip (python 3.10)
appdirs==1.4.4 argcomplete==3.3.0 aspy.yaml==1.3.0 attrs==23.2.0 autopage==0.5.2 beautifulsoup4==4.12.3 boto3==1.34.91 botocore==1.34.91 bs4==0.0.2 cachetools==5.3.3 certifi==2024.2.2 cffi==1.16.0 cfgv==3.4.0 chardet==5.2.0 charset-normalizer==3.3.2 click==8.1.7 cliff==4.6.0 cmd2==2.4.3 cryptography==3.3.2 debtcollector==3.0.0 decorator==5.1.1 defusedxml==0.7.1 Deprecated==1.2.14 distlib==0.3.8 dnspython==2.6.1 docker==4.2.2 dogpile.cache==1.3.2 email_validator==2.1.1 filelock==3.13.4 future==1.0.0 gitdb==4.0.11 GitPython==3.1.43 google-auth==2.29.0 httplib2==0.22.0 identify==2.5.36 idna==3.7 importlib-resources==1.5.0 iso8601==2.1.0 Jinja2==3.1.3 jmespath==1.0.1 jsonpatch==1.33 jsonpointer==2.4 jsonschema==4.21.1 jsonschema-specifications==2023.12.1 keystoneauth1==5.6.0 kubernetes==29.0.0 lftools==0.37.10 lxml==5.2.1 MarkupSafe==2.1.5 msgpack==1.0.8 multi_key_dict==2.0.3 munch==4.0.0 netaddr==1.2.1 netifaces==0.11.0 niet==1.4.2 nodeenv==1.8.0 oauth2client==4.1.3 oauthlib==3.2.2 openstacksdk==3.1.0 os-client-config==2.1.0 
os-service-types==1.7.0 osc-lib==3.0.1 oslo.config==9.4.0 oslo.context==5.5.0 oslo.i18n==6.3.0 oslo.log==5.5.1 oslo.serialization==5.4.0 oslo.utils==7.1.0 packaging==24.0 pbr==6.0.0 platformdirs==4.2.1 prettytable==3.10.0 pyasn1==0.6.0 pyasn1_modules==0.4.0 pycparser==2.22 pygerrit2==2.0.15 PyGithub==2.3.0 pyinotify==0.9.6 PyJWT==2.8.0 PyNaCl==1.5.0 pyparsing==2.4.7 pyperclip==1.8.2 pyrsistent==0.20.0 python-cinderclient==9.5.0 python-dateutil==2.9.0.post0 python-heatclient==3.5.0 python-jenkins==1.8.2 python-keystoneclient==5.4.0 python-magnumclient==4.4.0 python-novaclient==18.6.0 python-openstackclient==6.6.0 python-swiftclient==4.5.0 PyYAML==6.0.1 referencing==0.35.0 requests==2.31.0 requests-oauthlib==2.0.0 requestsexceptions==1.4.0 rfc3986==2.0.0 rpds-py==0.18.0 rsa==4.9 ruamel.yaml==0.18.6 ruamel.yaml.clib==0.2.8 s3transfer==0.10.1 simplejson==3.19.2 six==1.16.0 smmap==5.0.1 soupsieve==2.5 stevedore==5.2.0 tabulate==0.9.0 toml==0.10.2 tomlkit==0.12.4 tqdm==4.66.2 typing_extensions==4.11.0 tzdata==2024.1 urllib3==1.26.18 virtualenv==20.26.0 wcwidth==0.2.13 websocket-client==1.8.0 wrapt==1.16.0 xdg==6.0.0 xmltodict==0.13.0 yq==3.4.1 [EnvInject] - Injecting environment variables from a build step. [EnvInject] - Injecting as environment variables the properties content SET_JDK_VERSION=openjdk17 GIT_URL="git://cloud.onap.org/mirror" [EnvInject] - Variables injected successfully. 
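The EnvInject step above turns a properties file (`SET_JDK_VERSION=openjdk17`, `GIT_URL="git://cloud.onap.org/mirror"`) into environment variables for later build steps. A minimal shell sketch of that behavior, not the plugin's actual implementation (the `build.props` file name is illustrative):

```shell
#!/bin/bash
# Sketch: read KEY=VALUE properties and export them into the environment,
# stripping optional surrounding quotes the way the injected values appear.
set -euo pipefail

cat > build.props <<'EOF'
SET_JDK_VERSION=openjdk17
GIT_URL="git://cloud.onap.org/mirror"
EOF

while IFS='=' read -r key value; do
  [ -z "$key" ] && continue            # skip blank lines
  value=${value%\"}; value=${value#\"} # drop surrounding double quotes
  export "$key=$value"
done < build.props

echo "$SET_JDK_VERSION $GIT_URL"
```

Splitting on the first `=` with `IFS='=' read -r key value` keeps any `=` characters inside the value intact, which matters for URL-style properties.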
[policy-pap-master-project-csit-pap] $ /bin/sh /tmp/jenkins8786289027505557310.sh ---> update-java-alternatives.sh ---> Updating Java version ---> Ubuntu/Debian system detected update-alternatives: using /usr/lib/jvm/java-17-openjdk-amd64/bin/java to provide /usr/bin/java (java) in manual mode update-alternatives: using /usr/lib/jvm/java-17-openjdk-amd64/bin/javac to provide /usr/bin/javac (javac) in manual mode update-alternatives: using /usr/lib/jvm/java-17-openjdk-amd64 to provide /usr/lib/jvm/java-openjdk (java_sdk_openjdk) in manual mode openjdk version "17.0.4" 2022-07-19 OpenJDK Runtime Environment (build 17.0.4+8-Ubuntu-118.04) OpenJDK 64-Bit Server VM (build 17.0.4+8-Ubuntu-118.04, mixed mode, sharing) JAVA_HOME=/usr/lib/jvm/java-17-openjdk-amd64 [EnvInject] - Injecting environment variables from a build step. [EnvInject] - Injecting as environment variables the properties file path '/tmp/java.env' [EnvInject] - Variables injected successfully. [policy-pap-master-project-csit-pap] $ /bin/sh -xe /tmp/jenkins7012746460804729830.sh + /w/workspace/policy-pap-master-project-csit-pap/csit/run-project-csit.sh pap + set +u + save_set + RUN_CSIT_SAVE_SET=ehxB + RUN_CSIT_SHELLOPTS=braceexpand:errexit:hashall:interactive-comments:pipefail:xtrace + '[' 1 -eq 0 ']' + '[' -z /w/workspace/policy-pap-master-project-csit-pap ']' + export PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin + export SCRIPTS=/w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts + SCRIPTS=/w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts + export ROBOT_VARIABLES= + 
ROBOT_VARIABLES= + export PROJECT=pap + PROJECT=pap + cd /w/workspace/policy-pap-master-project-csit-pap + rm -rf /w/workspace/policy-pap-master-project-csit-pap/csit/archives/pap + mkdir -p /w/workspace/policy-pap-master-project-csit-pap/csit/archives/pap + source_safely /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/prepare-robot-env.sh + '[' -z /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/prepare-robot-env.sh ']' + relax_set + set +e + set +o pipefail + . /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/prepare-robot-env.sh ++ '[' -z /w/workspace/policy-pap-master-project-csit-pap ']' +++ mktemp -d ++ ROBOT_VENV=/tmp/tmp.Pg6Bn3f7f8 ++ echo ROBOT_VENV=/tmp/tmp.Pg6Bn3f7f8 +++ python3 --version ++ echo 'Python version is: Python 3.6.9' Python version is: Python 3.6.9 ++ python3 -m venv --clear /tmp/tmp.Pg6Bn3f7f8 ++ source /tmp/tmp.Pg6Bn3f7f8/bin/activate +++ deactivate nondestructive +++ '[' -n '' ']' +++ '[' -n '' ']' +++ '[' -n /bin/bash -o -n '' ']' +++ hash -r +++ '[' -n '' ']' +++ unset VIRTUAL_ENV +++ '[' '!' 
nondestructive = nondestructive ']' +++ VIRTUAL_ENV=/tmp/tmp.Pg6Bn3f7f8 +++ export VIRTUAL_ENV +++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin +++ PATH=/tmp/tmp.Pg6Bn3f7f8/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin +++ export PATH +++ '[' -n '' ']' +++ '[' -z '' ']' +++ _OLD_VIRTUAL_PS1= +++ '[' 'x(tmp.Pg6Bn3f7f8) ' '!=' x ']' +++ PS1='(tmp.Pg6Bn3f7f8) ' +++ export PS1 +++ '[' -n /bin/bash -o -n '' ']' +++ hash -r ++ set -exu ++ python3 -m pip install -qq --upgrade 'pip<=23.0' 'setuptools<=66.1.1' ++ echo 'Installing Python Requirements' Installing Python Requirements ++ python3 -m pip install -qq -r /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/pylibs.txt ++ python3 -m pip -qq freeze bcrypt==4.0.1 beautifulsoup4==4.12.3 bitarray==2.9.2 certifi==2024.2.2 cffi==1.15.1 charset-normalizer==2.0.12 cryptography==40.0.2 decorator==5.1.1 elasticsearch==7.17.9 elasticsearch-dsl==7.4.1 enum34==1.1.10 idna==3.7 importlib-resources==5.4.0 ipaddr==2.2.0 isodate==0.6.1 jmespath==0.10.0 jsonpatch==1.32 jsonpath-rw==1.4.0 jsonpointer==2.3 lxml==5.2.1 netaddr==0.8.0 netifaces==0.11.0 odltools==0.1.28 paramiko==3.4.0 pkg_resources==0.0.0 ply==3.11 pyang==2.6.0 pyangbind==0.8.1 pycparser==2.21 pyhocon==0.3.60 PyNaCl==1.5.0 pyparsing==3.1.2 python-dateutil==2.9.0.post0 regex==2023.8.8 requests==2.27.1 robotframework==6.1.1 robotframework-httplibrary==0.4.2 robotframework-pythonlibcore==3.0.0 robotframework-requests==0.9.4 robotframework-selenium2library==3.0.0 robotframework-seleniumlibrary==5.1.3 robotframework-sshlibrary==3.8.0 scapy==2.5.0 scp==0.14.5 selenium==3.141.0 six==1.16.0 
soupsieve==2.3.2.post1 urllib3==1.26.18 waitress==2.0.0 WebOb==1.8.7 WebTest==3.0.0 zipp==3.6.0 ++ mkdir -p /tmp/tmp.Pg6Bn3f7f8/src/onap ++ rm -rf /tmp/tmp.Pg6Bn3f7f8/src/onap/testsuite ++ python3 -m pip install -qq --upgrade --extra-index-url=https://nexus3.onap.org/repository/PyPi.staging/simple 'robotframework-onap==0.6.0.*' --pre ++ echo 'Installing python confluent-kafka library' Installing python confluent-kafka library ++ python3 -m pip install -qq confluent-kafka ++ echo 'Uninstall docker-py and reinstall docker.' Uninstall docker-py and reinstall docker. ++ python3 -m pip uninstall -y -qq docker ++ python3 -m pip install -U -qq docker ++ python3 -m pip -qq freeze bcrypt==4.0.1 beautifulsoup4==4.12.3 bitarray==2.9.2 certifi==2024.2.2 cffi==1.15.1 charset-normalizer==2.0.12 confluent-kafka==2.3.0 cryptography==40.0.2 decorator==5.1.1 deepdiff==5.7.0 dnspython==2.2.1 docker==5.0.3 elasticsearch==7.17.9 elasticsearch-dsl==7.4.1 enum34==1.1.10 future==1.0.0 idna==3.7 importlib-resources==5.4.0 ipaddr==2.2.0 isodate==0.6.1 Jinja2==3.0.3 jmespath==0.10.0 jsonpatch==1.32 jsonpath-rw==1.4.0 jsonpointer==2.3 kafka-python==2.0.2 lxml==5.2.1 MarkupSafe==2.0.1 more-itertools==5.0.0 netaddr==0.8.0 netifaces==0.11.0 odltools==0.1.28 ordered-set==4.0.2 paramiko==3.4.0 pbr==6.0.0 pkg_resources==0.0.0 ply==3.11 protobuf==3.19.6 pyang==2.6.0 pyangbind==0.8.1 pycparser==2.21 pyhocon==0.3.60 PyNaCl==1.5.0 pyparsing==3.1.2 python-dateutil==2.9.0.post0 PyYAML==6.0.1 regex==2023.8.8 requests==2.27.1 robotframework==6.1.1 robotframework-httplibrary==0.4.2 robotframework-onap==0.6.0.dev105 robotframework-pythonlibcore==3.0.0 robotframework-requests==0.9.4 robotframework-selenium2library==3.0.0 robotframework-seleniumlibrary==5.1.3 robotframework-sshlibrary==3.8.0 robotlibcore-temp==1.0.2 scapy==2.5.0 scp==0.14.5 selenium==3.141.0 six==1.16.0 soupsieve==2.3.2.post1 urllib3==1.26.18 waitress==2.0.0 WebOb==1.8.7 websocket-client==1.3.1 WebTest==3.0.0 zipp==3.6.0 ++ uname ++ grep -q 
Linux ++ sudo apt-get -y -qq install libxml2-utils + load_set + _setopts=ehuxB ++ echo braceexpand:hashall:interactive-comments:nounset:xtrace ++ tr : ' ' + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') + set +o braceexpand + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') + set +o hashall + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') + set +o interactive-comments + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') + set +o nounset + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') + set +o xtrace ++ echo ehuxB ++ sed 's/./& /g' + for i in $(echo "$_setopts" | sed 's/./& /g') + set +e + for i in $(echo "$_setopts" | sed 's/./& /g') + set +h + for i in $(echo "$_setopts" | sed 's/./& /g') + set +u + for i in $(echo "$_setopts" | sed 's/./& /g') + set +x + source_safely /tmp/tmp.Pg6Bn3f7f8/bin/activate + '[' -z /tmp/tmp.Pg6Bn3f7f8/bin/activate ']' + relax_set + set +e + set +o pipefail + . /tmp/tmp.Pg6Bn3f7f8/bin/activate ++ deactivate nondestructive ++ '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin ']' ++ PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin ++ export PATH ++ unset _OLD_VIRTUAL_PATH ++ '[' -n '' ']' ++ '[' -n /bin/bash -o -n '' ']' ++ hash -r ++ '[' -n '' ']' ++ unset VIRTUAL_ENV ++ '[' '!' 
nondestructive = nondestructive ']' ++ VIRTUAL_ENV=/tmp/tmp.Pg6Bn3f7f8 ++ export VIRTUAL_ENV ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin ++ PATH=/tmp/tmp.Pg6Bn3f7f8/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin ++ export PATH ++ '[' -n '' ']' ++ '[' -z '' ']' ++ _OLD_VIRTUAL_PS1='(tmp.Pg6Bn3f7f8) ' ++ '[' 'x(tmp.Pg6Bn3f7f8) ' '!=' x ']' ++ PS1='(tmp.Pg6Bn3f7f8) (tmp.Pg6Bn3f7f8) ' ++ export PS1 ++ '[' -n /bin/bash -o -n '' ']' ++ hash -r + load_set + _setopts=hxB ++ echo braceexpand:hashall:interactive-comments:xtrace ++ tr : ' ' + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') + set +o braceexpand + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') + set +o hashall + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') + set +o interactive-comments + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') + set +o xtrace ++ echo hxB ++ sed 's/./& /g' + for i in $(echo "$_setopts" | sed 's/./& /g') + set +h + for i in $(echo "$_setopts" | sed 's/./& /g') + set +x + export TEST_PLAN_DIR=/w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests + TEST_PLAN_DIR=/w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests + export TEST_OPTIONS= + TEST_OPTIONS= ++ mktemp -d + WORKDIR=/tmp/tmp.IfKGrR3aFZ + cd /tmp/tmp.IfKGrR3aFZ + docker login -u docker -p docker nexus3.onap.org:10001 WARNING! Using --password via the CLI is insecure. Use --password-stdin. WARNING! Your password will be stored unencrypted in /home/jenkins/.docker/config.json. Configure a credential helper to remove this warning. 
See https://docs.docker.com/engine/reference/commandline/login/#credentials-store Login Succeeded + SETUP=/w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/setup-pap.sh + '[' -f /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/setup-pap.sh ']' + echo 'Running setup script /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/setup-pap.sh' Running setup script /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/setup-pap.sh + source_safely /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/setup-pap.sh + '[' -z /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/setup-pap.sh ']' + relax_set + set +e + set +o pipefail + . /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/setup-pap.sh ++ source /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/node-templates.sh +++ '[' -z /w/workspace/policy-pap-master-project-csit-pap ']' ++++ awk -F= '$1 == "defaultbranch" { print $2 }' /w/workspace/policy-pap-master-project-csit-pap/.gitreview +++ GERRIT_BRANCH=master +++ echo GERRIT_BRANCH=master GERRIT_BRANCH=master +++ rm -rf /w/workspace/policy-pap-master-project-csit-pap/models +++ mkdir /w/workspace/policy-pap-master-project-csit-pap/models +++ git clone -b master --single-branch https://github.com/onap/policy-models.git /w/workspace/policy-pap-master-project-csit-pap/models Cloning into '/w/workspace/policy-pap-master-project-csit-pap/models'... 
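The `docker login -u docker -p docker nexus3.onap.org:10001` step above is what triggers the "Using --password via the CLI is insecure" warning. The form Docker itself recommends pipes the secret on stdin so it never appears in the process argument list; a sketch using the registry from the log (`DOCKER_PASS` is an illustrative variable, not something the job defines):

```shell
# Recommended replacement for "docker login -u docker -p docker ...":
# feed the password on stdin instead of passing it as an argument.
printf '%s' "$DOCKER_PASS" | docker login -u docker --password-stdin nexus3.onap.org:10001
```

This only removes the password from `ps` output; the second warning (plaintext storage in `~/.docker/config.json`) still requires configuring a credential helper.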
+++ export DATA=/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies +++ DATA=/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies +++ export NODETEMPLATES=/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/nodetemplates +++ NODETEMPLATES=/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/nodetemplates +++ sed -e 's!Measurement_vGMUX!ADifferentValue!' /w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies/vCPE.policy.monitoring.input.tosca.json +++ sed -e 's!"version": "1.0.0"!"version": "2.0.0"!' -e 's!"policy-version": 1!"policy-version": 2!' /w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies/vCPE.policy.monitoring.input.tosca.json ++ source /w/workspace/policy-pap-master-project-csit-pap/compose/start-compose.sh apex-pdp --grafana +++ '[' -z /w/workspace/policy-pap-master-project-csit-pap ']' +++ COMPOSE_FOLDER=/w/workspace/policy-pap-master-project-csit-pap/compose +++ grafana=false +++ gui=false +++ [[ 2 -gt 0 ]] +++ key=apex-pdp +++ case $key in +++ echo apex-pdp apex-pdp +++ component=apex-pdp +++ shift +++ [[ 1 -gt 0 ]] +++ key=--grafana +++ case $key in +++ grafana=true +++ shift +++ [[ 0 -gt 0 ]] +++ cd /w/workspace/policy-pap-master-project-csit-pap/compose +++ echo 'Configuring docker compose...' Configuring docker compose... +++ source export-ports.sh +++ source get-versions.sh +++ '[' -z pap ']' +++ '[' -n apex-pdp ']' +++ '[' apex-pdp == logs ']' +++ '[' true = true ']' +++ echo 'Starting apex-pdp application with Grafana' Starting apex-pdp application with Grafana +++ docker-compose up -d apex-pdp grafana Creating network "compose_default" with the default driver Pulling prometheus (nexus3.onap.org:10001/prom/prometheus:latest)... 
latest: Pulling from prom/prometheus Digest: sha256:4f6c47e39a9064028766e8c95890ed15690c30f00c4ba14e7ce6ae1ded0295b1 Status: Downloaded newer image for nexus3.onap.org:10001/prom/prometheus:latest Pulling grafana (nexus3.onap.org:10001/grafana/grafana:latest)... latest: Pulling from grafana/grafana Digest: sha256:7d5faae481a4c6f436c99e98af11534f7fd5e8d3e35213552dd1dd02bc393d2e Status: Downloaded newer image for nexus3.onap.org:10001/grafana/grafana:latest Pulling mariadb (nexus3.onap.org:10001/mariadb:10.10.2)... 10.10.2: Pulling from mariadb Digest: sha256:bfc25a68e113de43d0d112f5a7126df8e278579c3224e3923359e1c1d8d5ce6e Status: Downloaded newer image for nexus3.onap.org:10001/mariadb:10.10.2 Pulling simulator (nexus3.onap.org:10001/onap/policy-models-simulator:3.1.2-SNAPSHOT)... 3.1.2-SNAPSHOT: Pulling from onap/policy-models-simulator Digest: sha256:8c393534de923b51cd2c2937210a65f4f06f457c0dff40569dd547e5429385c8 Status: Downloaded newer image for nexus3.onap.org:10001/onap/policy-models-simulator:3.1.2-SNAPSHOT Pulling zookeeper (confluentinc/cp-zookeeper:latest)... latest: Pulling from confluentinc/cp-zookeeper Digest: sha256:4dc780642bfc5ec3a2d4901e2ff1f9ddef7f7c5c0b793e1e2911cbfb4e3a3214 Status: Downloaded newer image for confluentinc/cp-zookeeper:latest Pulling kafka (confluentinc/cp-kafka:latest)... latest: Pulling from confluentinc/cp-kafka Digest: sha256:620734d9fc0bb1f9886932e5baf33806074469f40e3fe246a3fdbb59309535fa Status: Downloaded newer image for confluentinc/cp-kafka:latest Pulling policy-db-migrator (nexus3.onap.org:10001/onap/policy-db-migrator:3.1.2-SNAPSHOT)... 3.1.2-SNAPSHOT: Pulling from onap/policy-db-migrator Digest: sha256:6c43c624b12507ad4db9e9629273366fa843a4406dbb129d263c111145911791 Status: Downloaded newer image for nexus3.onap.org:10001/onap/policy-db-migrator:3.1.2-SNAPSHOT Pulling api (nexus3.onap.org:10001/onap/policy-api:3.1.2-SNAPSHOT)... 
3.1.2-SNAPSHOT: Pulling from onap/policy-api Digest: sha256:a3b0738a5c3612fb51928bf2c6d20b8feb39bdb05a9ed3daffb9977a144bacf6 Status: Downloaded newer image for nexus3.onap.org:10001/onap/policy-api:3.1.2-SNAPSHOT Pulling pap (nexus3.onap.org:10001/onap/policy-pap:3.1.2-SNAPSHOT)... 3.1.2-SNAPSHOT: Pulling from onap/policy-pap Digest: sha256:a268743829cd0409cbb5d4678d69b9f5d14d1499e307454e509124b67f361bc4 Status: Downloaded newer image for nexus3.onap.org:10001/onap/policy-pap:3.1.2-SNAPSHOT Pulling apex-pdp (nexus3.onap.org:10001/onap/policy-apex-pdp:3.1.2-SNAPSHOT)... 3.1.2-SNAPSHOT: Pulling from onap/policy-apex-pdp Digest: sha256:75a74a87b7345e553563fbe2ececcd2285ed9500fd91489d9968ae81123b9982 Status: Downloaded newer image for nexus3.onap.org:10001/onap/policy-apex-pdp:3.1.2-SNAPSHOT Creating zookeeper ... Creating prometheus ... Creating mariadb ... Creating simulator ... Creating mariadb ... done Creating policy-db-migrator ... Creating policy-db-migrator ... done Creating policy-api ... Creating zookeeper ... done Creating kafka ... Creating prometheus ... done Creating grafana ... Creating kafka ... done Creating policy-api ... done Creating policy-pap ... Creating policy-pap ... done Creating simulator ... done Creating policy-apex-pdp ... Creating policy-apex-pdp ... done Creating grafana ... done +++ echo 'Prometheus server: http://localhost:30259' Prometheus server: http://localhost:30259 +++ echo 'Grafana server: http://localhost:30269' Grafana server: http://localhost:30269 +++ cd /w/workspace/policy-pap-master-project-csit-pap ++ sleep 10 ++ unset http_proxy https_proxy ++ bash /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/wait_for_rest.sh localhost 30003 Waiting for REST to come up on localhost port 30003... 
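The `wait_for_rest.sh localhost 30003` call above polls until PAP's REST port accepts connections. A minimal sketch of such a readiness loop, assuming bash's `/dev/tcp` redirection is available (the function name and retry budget are illustrative, not the script's actual contents):

```shell
#!/bin/bash
# Sketch of a REST readiness poll: try a TCP connect once per second
# until it succeeds or the retry budget runs out.
wait_for_port() {
  local host=$1 port=$2 tries=${3:-60}
  echo "Waiting for REST to come up on ${host} port ${port}..."
  local i
  for ((i = 0; i < tries; i++)); do
    # open fd 3 to the port in a subshell; success means something is listening
    if (exec 3<>"/dev/tcp/${host}/${port}") 2>/dev/null; then
      return 0
    fi
    sleep 1
  done
  return 1
}
```

Opening the descriptor inside a subshell means it is closed automatically when the probe returns, so no cleanup is needed. In the harness the equivalent call would be `wait_for_port localhost 30003`, matching the repeated `docker ps` status snapshots that follow while the containers come up.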
NAMES STATUS policy-apex-pdp Up 11 seconds policy-pap Up 13 seconds grafana Up 10 seconds kafka Up 15 seconds policy-api Up 14 seconds policy-db-migrator Up 18 seconds simulator Up 12 seconds mariadb Up 19 seconds prometheus Up 16 seconds zookeeper Up 17 seconds NAMES STATUS policy-apex-pdp Up 16 seconds policy-pap Up 18 seconds grafana Up 15 seconds kafka Up 20 seconds policy-api Up 19 seconds simulator Up 17 seconds mariadb Up 24 seconds prometheus Up 21 seconds zookeeper Up 22 seconds NAMES STATUS policy-apex-pdp Up 21 seconds policy-pap Up 23 seconds grafana Up 20 seconds kafka Up 25 seconds policy-api Up 24 seconds simulator Up 22 seconds mariadb Up 29 seconds prometheus Up 26 seconds zookeeper Up 27 seconds NAMES STATUS policy-apex-pdp Up 26 seconds policy-pap Up 28 seconds grafana Up 25 seconds kafka Up 30 seconds policy-api Up 29 seconds simulator Up 27 seconds mariadb Up 34 seconds prometheus Up 31 seconds zookeeper Up 32 seconds NAMES STATUS policy-apex-pdp Up 31 seconds policy-pap Up 33 seconds grafana Up 30 seconds kafka Up 35 seconds policy-api Up 34 seconds simulator Up 32 seconds mariadb Up 39 seconds prometheus Up 36 seconds zookeeper Up 37 seconds NAMES STATUS policy-apex-pdp Up 36 seconds policy-pap Up 38 seconds grafana Up 35 seconds kafka Up 40 seconds policy-api Up 39 seconds simulator Up 37 seconds mariadb Up 44 seconds prometheus Up 41 seconds zookeeper Up 42 seconds NAMES STATUS policy-apex-pdp Up 41 seconds policy-pap Up 43 seconds grafana Up 40 seconds kafka Up 45 seconds policy-api Up 44 seconds simulator Up 42 seconds mariadb Up 49 seconds prometheus Up 46 seconds zookeeper Up 47 seconds ++ export 'SUITES=pap-test.robot pap-slas.robot' ++ SUITES='pap-test.robot pap-slas.robot' ++ ROBOT_VARIABLES='-v POLICY_PAP_IP:localhost:30003 -v POLICY_API_IP:localhost:30002 -v PROMETHEUS_IP:localhost:30259 -v DATA:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies -v 
NODETEMPLATES:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/nodetemplates' + load_set + _setopts=hxB ++ echo braceexpand:hashall:interactive-comments:xtrace ++ tr : ' ' + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') + set +o braceexpand + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') + set +o hashall + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') + set +o interactive-comments + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') + set +o xtrace ++ echo hxB ++ sed 's/./& /g' + for i in $(echo "$_setopts" | sed 's/./& /g') + set +h + for i in $(echo "$_setopts" | sed 's/./& /g') + set +x + docker_stats + tee /w/workspace/policy-pap-master-project-csit-pap/csit/archives/pap/_sysinfo-1-after-setup.txt ++ uname -s + '[' Linux == Darwin ']' + sh -c 'top -bn1 | head -3' top - 12:40:03 up 7 min, 0 users, load average: 4.06, 2.34, 1.04 Tasks: 204 total, 1 running, 131 sleeping, 0 stopped, 0 zombie %Cpu(s): 8.6 us, 1.7 sy, 0.0 ni, 81.1 id, 8.5 wa, 0.0 hi, 0.0 si, 0.0 st + echo + sh -c 'free -h' total used free shared buff/cache available Mem: 31G 2.8G 22G 1.3M 6.0G 28G Swap: 1.0G 0B 1.0G + echo + docker ps --format 'table {{ .Names }}\t{{ .Status }}' NAMES STATUS policy-apex-pdp Up 42 seconds policy-pap Up 44 seconds grafana Up 41 seconds kafka Up 46 seconds policy-api Up 45 seconds simulator Up 43 seconds mariadb Up 50 seconds prometheus Up 47 seconds zookeeper Up 48 seconds + echo + docker stats --no-stream CONTAINER ID NAME CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O PIDS 9f10aca361cb policy-apex-pdp 10.13% 185.8MiB / 31.41GiB 0.58% 19.8kB / 23.9kB 0B / 0B 47 43cc711f1c86 policy-pap 14.55% 572.5MiB / 31.41GiB 1.78% 41.6kB / 46.5kB 0B / 149MB 60 a99a722b04f1 grafana 0.03% 57.54MiB / 31.41GiB 0.18% 18.2kB / 3.38kB 0B / 25MB 16 7b59f0a67712 kafka 37.00% 363.8MiB / 31.41GiB 1.13% 115kB / 118kB 0B / 127kB 83 9e79fdfe83bd policy-api 0.20% 496.1MiB / 31.41GiB 1.54% 990kB / 647kB 0B / 0B 54 7e5cd61d6e18 simulator 0.08% 120MiB / 31.41GiB 
0.37% 1.23kB / 0B 0B / 0B 76 89c2db910687 mariadb 0.02% 102MiB / 31.41GiB 0.32% 935kB / 1.18MB 11.1MB / 68.1MB 37 9ef6b7b16ac4 prometheus 0.00% 18.91MiB / 31.41GiB 0.06% 1.6kB / 474B 0B / 0B 13 070cd7ecdd50 zookeeper 20.10% 101MiB / 31.41GiB 0.31% 81.2kB / 67.2kB 0B / 336kB 60 + echo + cd /tmp/tmp.IfKGrR3aFZ + echo 'Reading the testplan:' Reading the testplan: + echo 'pap-test.robot pap-slas.robot' + egrep -v '(^[[:space:]]*#|^[[:space:]]*$)' + sed 's|^|/w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/|' + cat testplan.txt /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-test.robot /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-slas.robot ++ xargs + SUITES='/w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-test.robot /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-slas.robot' + echo 'ROBOT_VARIABLES=-v POLICY_PAP_IP:localhost:30003 -v POLICY_API_IP:localhost:30002 -v PROMETHEUS_IP:localhost:30259 -v DATA:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies -v NODETEMPLATES:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/nodetemplates' ROBOT_VARIABLES=-v POLICY_PAP_IP:localhost:30003 -v POLICY_API_IP:localhost:30002 -v PROMETHEUS_IP:localhost:30259 -v DATA:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies -v NODETEMPLATES:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/nodetemplates + echo 'Starting Robot test suites /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-test.robot /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-slas.robot ...' 
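The testplan handling above (`egrep -v` to drop comments and blank lines, `sed` to prefix each suite with the tests directory, `xargs` to flatten the result into one `SUITES` string) can be sketched as a self-contained script; the testplan content written here is illustrative, the directory and suite names are taken from the log:

```shell
#!/bin/bash
# Sketch: expand a testplan file into a space-separated list of
# absolute Robot suite paths, mirroring the pipeline in the log.
set -euo pipefail

TEST_PLAN_DIR=/w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests

# A testplan may contain comments and blank lines that must be ignored.
cat > testplan.txt <<'EOF'
# suites executed in order
pap-test.robot

pap-slas.robot
EOF

# Drop comments/blanks, prefix each suite with the tests directory,
# then flatten newlines to spaces with xargs.
SUITES=$(grep -Ev '(^[[:space:]]*#|^[[:space:]]*$)' testplan.txt \
  | sed "s|^|${TEST_PLAN_DIR}/|" \
  | xargs)
echo "$SUITES"
```

The resulting string is passed unquoted to `robot.run` so the shell splits it back into one argument per suite file.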
Starting Robot test suites /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-test.robot /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-slas.robot ...
+ relax_set
+ set +e
+ set +o pipefail
+ python3 -m robot.run -N pap -v WORKSPACE:/tmp -v POLICY_PAP_IP:localhost:30003 -v POLICY_API_IP:localhost:30002 -v PROMETHEUS_IP:localhost:30259 -v DATA:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies -v NODETEMPLATES:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/nodetemplates /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-test.robot /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-slas.robot
==============================================================================
pap
==============================================================================
pap.Pap-Test
==============================================================================
LoadPolicy :: Create a policy named 'onap.restart.tca' and version... | PASS |
------------------------------------------------------------------------------
LoadPolicyWithMetadataSet :: Create a policy named 'operational.ap... | PASS |
------------------------------------------------------------------------------
LoadNodeTemplates :: Create node templates in database using speci... | PASS |
------------------------------------------------------------------------------
Healthcheck :: Verify policy pap health check | PASS |
------------------------------------------------------------------------------
Consolidated Healthcheck :: Verify policy consolidated health check | PASS |
------------------------------------------------------------------------------
Metrics :: Verify policy pap is exporting prometheus metrics | PASS |
------------------------------------------------------------------------------
AddPdpGroup :: Add a new PdpGroup named 'testGroup' in the policy ... | PASS |
------------------------------------------------------------------------------
QueryPdpGroupsBeforeActivation :: Verify PdpGroups before activation | PASS |
------------------------------------------------------------------------------
ActivatePdpGroup :: Change the state of PdpGroup named 'testGroup'... | PASS |
------------------------------------------------------------------------------
QueryPdpGroupsAfterActivation :: Verify PdpGroups after activation | PASS |
------------------------------------------------------------------------------
DeployPdpGroups :: Deploy policies in PdpGroups | PASS |
------------------------------------------------------------------------------
QueryPdpGroupsAfterDeploy :: Verify PdpGroups after undeploy | PASS |
------------------------------------------------------------------------------
QueryPolicyAuditAfterDeploy :: Verify policy audit record after de... | PASS |
------------------------------------------------------------------------------
QueryPolicyAuditWithMetadataSetAfterDeploy :: Verify policy audit ... | PASS |
------------------------------------------------------------------------------
UndeployPolicy :: Undeploy a policy named 'onap.restart.tca' from ... | PASS |
------------------------------------------------------------------------------
UndeployPolicyWithMetadataSet :: Undeploy a policy named 'operatio... | PASS |
------------------------------------------------------------------------------
QueryPdpGroupsAfterUndeploy :: Verify PdpGroups after undeploy | PASS |
------------------------------------------------------------------------------
QueryPolicyAuditAfterUnDeploy :: Verify policy audit record after ... | FAIL |
pdpTypeC != pdpTypeA
------------------------------------------------------------------------------
QueryPolicyAuditWithMetadataSetAfterUnDeploy :: Verify policy audi... | PASS |
------------------------------------------------------------------------------
DeactivatePdpGroup :: Change the state of PdpGroup named 'testGrou... | PASS |
------------------------------------------------------------------------------
DeletePdpGroups :: Delete the PdpGroup named 'testGroup' from poli... | PASS |
------------------------------------------------------------------------------
QueryPdpGroupsAfterDelete :: Verify PdpGroups after delete | PASS |
------------------------------------------------------------------------------
pap.Pap-Test | FAIL |
22 tests, 21 passed, 1 failed
==============================================================================
pap.Pap-Slas
==============================================================================
WaitForPrometheusServer :: Wait for Prometheus server to gather al... | PASS |
------------------------------------------------------------------------------
ValidateResponseTimeForHealthcheck :: Validate component healthche... | PASS |
------------------------------------------------------------------------------
ValidateResponseTimeForSystemHealthcheck :: Validate if system hea... | PASS |
------------------------------------------------------------------------------
ValidateResponseTimeQueryPolicyAudit :: Validate query audits resp... | PASS |
------------------------------------------------------------------------------
ValidateResponseTimeUpdateGroup :: Validate pdps/group response time | PASS |
------------------------------------------------------------------------------
ValidatePolicyDeploymentTime :: Check if deployment of policy is u... | PASS |
------------------------------------------------------------------------------
ValidateResponseTimeDeletePolicy :: Check if undeployment of polic... | PASS |
------------------------------------------------------------------------------
ValidateResponseTimeDeleteGroup :: Validate delete group response ... | PASS |
------------------------------------------------------------------------------
pap.Pap-Slas | PASS |
8 tests, 8 passed, 0 failed
==============================================================================
pap | FAIL |
30 tests, 29 passed, 1 failed
==============================================================================
Output: /tmp/tmp.IfKGrR3aFZ/output.xml
Log: /tmp/tmp.IfKGrR3aFZ/log.html
Report: /tmp/tmp.IfKGrR3aFZ/report.html
+ RESULT=1
+ load_set
+ _setopts=hxB
++ tr : ' '
++ echo braceexpand:hashall:interactive-comments:xtrace
+ for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
+ set +o braceexpand
+ for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
+ set +o hashall
+ for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
+ set +o interactive-comments
+ for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
+ set +o xtrace
++ echo hxB
++ sed 's/./& /g'
+ for i in $(echo "$_setopts" | sed 's/./& /g')
+ set +h
+ for i in $(echo "$_setopts" | sed 's/./& /g')
+ set +x
+ echo 'RESULT: 1'
RESULT: 1
+ exit 1
+ on_exit
+ rc=1
+ [[ -n /w/workspace/policy-pap-master-project-csit-pap ]]
+ docker ps --format 'table {{ .Names }}\t{{ .Status }}'
NAMES STATUS
policy-apex-pdp Up 2 minutes
policy-pap Up 2 minutes
grafana Up 2 minutes
kafka Up 2 minutes
policy-api Up 2 minutes
simulator Up 2 minutes
mariadb Up 2 minutes
prometheus Up 2 minutes
zookeeper Up 2 minutes
+
docker_stats ++ uname -s + '[' Linux == Darwin ']' + sh -c 'top -bn1 | head -3' top - 12:41:56 up 9 min, 0 users, load average: 1.30, 1.97, 1.07 Tasks: 202 total, 1 running, 129 sleeping, 0 stopped, 0 zombie %Cpu(s): 7.8 us, 1.5 sy, 0.0 ni, 83.0 id, 7.6 wa, 0.0 hi, 0.0 si, 0.0 st + echo + sh -c 'free -h' total used free shared buff/cache available Mem: 31G 2.7G 22G 1.3M 6.0G 28G Swap: 1.0G 0B 1.0G + echo + docker ps --format 'table {{ .Names }}\t{{ .Status }}' NAMES STATUS policy-apex-pdp Up 2 minutes policy-pap Up 2 minutes grafana Up 2 minutes kafka Up 2 minutes policy-api Up 2 minutes simulator Up 2 minutes mariadb Up 2 minutes prometheus Up 2 minutes zookeeper Up 2 minutes + echo + docker stats --no-stream CONTAINER ID NAME CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O PIDS 9f10aca361cb policy-apex-pdp 0.35% 189.9MiB / 31.41GiB 0.59% 124kB / 187kB 0B / 0B 52 43cc711f1c86 policy-pap 0.58% 505MiB / 31.41GiB 1.57% 2.55MB / 1.16MB 0B / 149MB 66 a99a722b04f1 grafana 0.04% 57.96MiB / 31.41GiB 0.18% 19.1kB / 4.45kB 0B / 25MB 16 7b59f0a67712 kafka 1.24% 394.1MiB / 31.41GiB 1.23% 683kB / 595kB 0B / 602kB 85 9e79fdfe83bd policy-api 0.14% 496.4MiB / 31.41GiB 1.54% 2.46MB / 1.1MB 0B / 0B 56 7e5cd61d6e18 simulator 0.07% 120.1MiB / 31.41GiB 0.37% 1.45kB / 0B 0B / 0B 78 89c2db910687 mariadb 0.02% 103.3MiB / 31.41GiB 0.32% 2.02MB / 4.88MB 11.1MB / 68.4MB 28 9ef6b7b16ac4 prometheus 0.00% 25.91MiB / 31.41GiB 0.08% 191kB / 10.9kB 0B / 0B 13 070cd7ecdd50 zookeeper 0.07% 103MiB / 31.41GiB 0.32% 297kB / 285kB 0B / 336kB 60 + echo + source_safely /w/workspace/policy-pap-master-project-csit-pap/compose/stop-compose.sh + '[' -z /w/workspace/policy-pap-master-project-csit-pap/compose/stop-compose.sh ']' + relax_set + set +e + set +o pipefail + . /w/workspace/policy-pap-master-project-csit-pap/compose/stop-compose.sh ++ echo 'Shut down started!' Shut down started! 
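The `docker_stats` call traced above runs a fixed snapshot sequence: a Linux `top` summary (guarded by an OS check), host memory via `free -h`, the container status table, and a one-shot `docker stats`. A minimal sketch of such a helper, reconstructed from the `+ ...` trace lines (the function body here is an assumption; the actual CI scripts are not included in this log):

```shell
# Hypothetical reconstruction of the docker_stats helper seen in the trace.
# The command sequence is taken from the xtrace output above; the exact
# structure of the real CI function is an assumption.
docker_stats() {
  # 'top -bn1' is Linux-only, hence the Darwin guard seen in the trace.
  if [ "$(uname -s)" != Darwin ]; then
    sh -c 'top -bn1 | head -3'        # load average and CPU summary
  fi
  echo
  sh -c 'free -h'                     # host memory and swap usage
  echo
  docker ps --format 'table {{ .Names }}\t{{ .Status }}'   # container states
  echo
  docker stats --no-stream            # one-shot per-container CPU/mem/I/O
}
```

In this run it is invoked once, after the Robot suites finish and while the compose stack is still up, so the per-container resource snapshot reflects the state right after the tests.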
++ '[' -z /w/workspace/policy-pap-master-project-csit-pap ']'
++ COMPOSE_FOLDER=/w/workspace/policy-pap-master-project-csit-pap/compose
++ cd /w/workspace/policy-pap-master-project-csit-pap/compose
++ source export-ports.sh
++ source get-versions.sh
++ echo 'Collecting logs from docker compose containers...'
Collecting logs from docker compose containers...
++ docker-compose logs
++ cat docker_compose.log
Attaching to policy-apex-pdp, policy-pap, grafana, kafka, policy-api, policy-db-migrator, simulator, mariadb, prometheus, zookeeper
grafana | logger=settings t=2024-04-25T12:39:22.988052892Z level=info msg="Starting Grafana" version=10.4.2 commit=701c851be7a930e04fbc6ebb1cd4254da80edd4c branch=v10.4.x compiled=2024-04-25T12:39:22Z
grafana | logger=settings t=2024-04-25T12:39:22.988350016Z level=info msg="Config loaded from" file=/usr/share/grafana/conf/defaults.ini
grafana | logger=settings t=2024-04-25T12:39:22.988364636Z level=info msg="Config loaded from" file=/etc/grafana/grafana.ini
grafana | logger=settings t=2024-04-25T12:39:22.988368426Z level=info msg="Config overridden from command line" arg="default.paths.data=/var/lib/grafana"
grafana | logger=settings t=2024-04-25T12:39:22.988371606Z level=info msg="Config overridden from command line" arg="default.paths.logs=/var/log/grafana"
grafana | logger=settings t=2024-04-25T12:39:22.988374686Z level=info msg="Config overridden from command line" arg="default.paths.plugins=/var/lib/grafana/plugins"
grafana | logger=settings t=2024-04-25T12:39:22.988379337Z level=info msg="Config overridden from command line" arg="default.paths.provisioning=/etc/grafana/provisioning"
grafana | logger=settings t=2024-04-25T12:39:22.988406227Z level=info msg="Config overridden from command line" arg="default.log.mode=console"
grafana | logger=settings t=2024-04-25T12:39:22.988413467Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_DATA=/var/lib/grafana"
grafana | logger=settings
t=2024-04-25T12:39:22.988416787Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_LOGS=/var/log/grafana" grafana | logger=settings t=2024-04-25T12:39:22.988424427Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_PLUGINS=/var/lib/grafana/plugins" grafana | logger=settings t=2024-04-25T12:39:22.988427957Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_PROVISIONING=/etc/grafana/provisioning" grafana | logger=settings t=2024-04-25T12:39:22.988431377Z level=info msg=Target target=[all] grafana | logger=settings t=2024-04-25T12:39:22.988439767Z level=info msg="Path Home" path=/usr/share/grafana grafana | logger=settings t=2024-04-25T12:39:22.988443047Z level=info msg="Path Data" path=/var/lib/grafana grafana | logger=settings t=2024-04-25T12:39:22.988446277Z level=info msg="Path Logs" path=/var/log/grafana grafana | logger=settings t=2024-04-25T12:39:22.988449657Z level=info msg="Path Plugins" path=/var/lib/grafana/plugins grafana | logger=settings t=2024-04-25T12:39:22.988452777Z level=info msg="Path Provisioning" path=/etc/grafana/provisioning grafana | logger=settings t=2024-04-25T12:39:22.988456058Z level=info msg="App mode production" grafana | logger=sqlstore t=2024-04-25T12:39:22.988844333Z level=info msg="Connecting to DB" dbtype=sqlite3 grafana | logger=sqlstore t=2024-04-25T12:39:22.988868013Z level=info msg="Creating SQLite database file" path=/var/lib/grafana/grafana.db grafana | logger=migrator t=2024-04-25T12:39:22.989770036Z level=info msg="Starting DB migrations" grafana | logger=migrator t=2024-04-25T12:39:22.991212625Z level=info msg="Executing migration" id="create migration_log table" grafana | logger=migrator t=2024-04-25T12:39:22.992318341Z level=info msg="Migration successfully executed" id="create migration_log table" duration=1.106036ms grafana | logger=migrator t=2024-04-25T12:39:22.99973868Z level=info msg="Executing migration" id="create user table" grafana | 
logger=migrator t=2024-04-25T12:39:23.000887547Z level=info msg="Migration successfully executed" id="create user table" duration=1.151237ms grafana | logger=migrator t=2024-04-25T12:39:23.008952723Z level=info msg="Executing migration" id="add unique index user.login" grafana | logger=migrator t=2024-04-25T12:39:23.010209782Z level=info msg="Migration successfully executed" id="add unique index user.login" duration=1.258019ms grafana | logger=migrator t=2024-04-25T12:39:23.018264238Z level=info msg="Executing migration" id="add unique index user.email" grafana | logger=migrator t=2024-04-25T12:39:23.019006799Z level=info msg="Migration successfully executed" id="add unique index user.email" duration=742.361µs grafana | logger=migrator t=2024-04-25T12:39:23.025802446Z level=info msg="Executing migration" id="drop index UQE_user_login - v1" grafana | logger=migrator t=2024-04-25T12:39:23.026953484Z level=info msg="Migration successfully executed" id="drop index UQE_user_login - v1" duration=1.150948ms grafana | logger=migrator t=2024-04-25T12:39:23.031186909Z level=info msg="Executing migration" id="drop index UQE_user_email - v1" grafana | logger=migrator t=2024-04-25T12:39:23.032292037Z level=info msg="Migration successfully executed" id="drop index UQE_user_email - v1" duration=1.105048ms grafana | logger=migrator t=2024-04-25T12:39:23.090860302Z level=info msg="Executing migration" id="Rename table user to user_v1 - v1" grafana | logger=migrator t=2024-04-25T12:39:23.09459299Z level=info msg="Migration successfully executed" id="Rename table user to user_v1 - v1" duration=3.721178ms grafana | logger=migrator t=2024-04-25T12:39:23.101584519Z level=info msg="Executing migration" id="create user table v2" grafana | logger=migrator t=2024-04-25T12:39:23.102452293Z level=info msg="Migration successfully executed" id="create user table v2" duration=867.284µs grafana | logger=migrator t=2024-04-25T12:39:23.108909534Z level=info msg="Executing migration" id="create 
index UQE_user_login - v2" grafana | logger=migrator t=2024-04-25T12:39:23.110068403Z level=info msg="Migration successfully executed" id="create index UQE_user_login - v2" duration=1.158529ms grafana | logger=migrator t=2024-04-25T12:39:23.115336714Z level=info msg="Executing migration" id="create index UQE_user_email - v2" grafana | logger=migrator t=2024-04-25T12:39:23.116669946Z level=info msg="Migration successfully executed" id="create index UQE_user_email - v2" duration=1.334462ms grafana | logger=migrator t=2024-04-25T12:39:23.12460363Z level=info msg="Executing migration" id="copy data_source v1 to v2" grafana | logger=migrator t=2024-04-25T12:39:23.125306411Z level=info msg="Migration successfully executed" id="copy data_source v1 to v2" duration=702.141µs grafana | logger=migrator t=2024-04-25T12:39:23.130382101Z level=info msg="Executing migration" id="Drop old table user_v1" grafana | logger=migrator t=2024-04-25T12:39:23.131220184Z level=info msg="Migration successfully executed" id="Drop old table user_v1" duration=837.683µs grafana | logger=migrator t=2024-04-25T12:39:23.137847498Z level=info msg="Executing migration" id="Add column help_flags1 to user table" grafana | logger=migrator t=2024-04-25T12:39:23.139643346Z level=info msg="Migration successfully executed" id="Add column help_flags1 to user table" duration=1.794988ms grafana | logger=migrator t=2024-04-25T12:39:23.143641509Z level=info msg="Executing migration" id="Update user table charset" grafana | logger=migrator t=2024-04-25T12:39:23.1436898Z level=info msg="Migration successfully executed" id="Update user table charset" duration=41.131µs grafana | logger=migrator t=2024-04-25T12:39:23.148636117Z level=info msg="Executing migration" id="Add last_seen_at column to user" grafana | logger=migrator t=2024-04-25T12:39:23.149717994Z level=info msg="Migration successfully executed" id="Add last_seen_at column to user" duration=1.086897ms grafana | logger=migrator 
t=2024-04-25T12:39:23.16090304Z level=info msg="Executing migration" id="Add missing user data" grafana | logger=migrator t=2024-04-25T12:39:23.161200584Z level=info msg="Migration successfully executed" id="Add missing user data" duration=297.754µs grafana | logger=migrator t=2024-04-25T12:39:23.166444837Z level=info msg="Executing migration" id="Add is_disabled column to user" grafana | logger=migrator t=2024-04-25T12:39:23.167656235Z level=info msg="Migration successfully executed" id="Add is_disabled column to user" duration=1.211048ms grafana | logger=migrator t=2024-04-25T12:39:23.174701086Z level=info msg="Executing migration" id="Add index user.login/user.email" grafana | logger=migrator t=2024-04-25T12:39:23.17561081Z level=info msg="Migration successfully executed" id="Add index user.login/user.email" duration=911.944µs grafana | logger=migrator t=2024-04-25T12:39:23.183309331Z level=info msg="Executing migration" id="Add is_service_account column to user" grafana | logger=migrator t=2024-04-25T12:39:23.184557Z level=info msg="Migration successfully executed" id="Add is_service_account column to user" duration=1.247519ms grafana | logger=migrator t=2024-04-25T12:39:23.189952405Z level=info msg="Executing migration" id="Update is_service_account column to nullable" grafana | logger=migrator t=2024-04-25T12:39:23.201461585Z level=info msg="Migration successfully executed" id="Update is_service_account column to nullable" duration=11.51089ms grafana | logger=migrator t=2024-04-25T12:39:23.204797598Z level=info msg="Executing migration" id="Add uid column to user" grafana | logger=migrator t=2024-04-25T12:39:23.205673272Z level=info msg="Migration successfully executed" id="Add uid column to user" duration=876.284µs grafana | logger=migrator t=2024-04-25T12:39:23.208802081Z level=info msg="Executing migration" id="Update uid column values for users" grafana | logger=migrator t=2024-04-25T12:39:23.209047505Z level=info msg="Migration successfully executed" 
id="Update uid column values for users" duration=245.044µs grafana | logger=migrator t=2024-04-25T12:39:23.216114875Z level=info msg="Executing migration" id="Add unique index user_uid" grafana | logger=migrator t=2024-04-25T12:39:23.217391706Z level=info msg="Migration successfully executed" id="Add unique index user_uid" duration=1.285121ms grafana | logger=migrator t=2024-04-25T12:39:23.224706791Z level=info msg="Executing migration" id="update login field with orgid to allow for multiple service accounts with same name across orgs" grafana | logger=migrator t=2024-04-25T12:39:23.225073486Z level=info msg="Migration successfully executed" id="update login field with orgid to allow for multiple service accounts with same name across orgs" duration=366.135µs grafana | logger=migrator t=2024-04-25T12:39:23.228428429Z level=info msg="Executing migration" id="create temp user table v1-7" grafana | logger=migrator t=2024-04-25T12:39:23.229289412Z level=info msg="Migration successfully executed" id="create temp user table v1-7" duration=860.303µs grafana | logger=migrator t=2024-04-25T12:39:23.234847769Z level=info msg="Executing migration" id="create index IDX_temp_user_email - v1-7" grafana | logger=migrator t=2024-04-25T12:39:23.236093049Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_email - v1-7" duration=1.24596ms grafana | logger=migrator t=2024-04-25T12:39:23.240935664Z level=info msg="Executing migration" id="create index IDX_temp_user_org_id - v1-7" grafana | logger=migrator t=2024-04-25T12:39:23.242079203Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_org_id - v1-7" duration=1.143729ms grafana | logger=migrator t=2024-04-25T12:39:23.246494122Z level=info msg="Executing migration" id="create index IDX_temp_user_code - v1-7" grafana | logger=migrator t=2024-04-25T12:39:23.248340101Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_code - v1-7" duration=1.845049ms 
grafana | logger=migrator t=2024-04-25T12:39:23.258960698Z level=info msg="Executing migration" id="create index IDX_temp_user_status - v1-7" grafana | logger=migrator t=2024-04-25T12:39:23.25978497Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_status - v1-7" duration=827.042µs grafana | logger=migrator t=2024-04-25T12:39:23.266257212Z level=info msg="Executing migration" id="Update temp_user table charset" grafana | logger=migrator t=2024-04-25T12:39:23.266311012Z level=info msg="Migration successfully executed" id="Update temp_user table charset" duration=53.7µs grafana | logger=migrator t=2024-04-25T12:39:23.272297527Z level=info msg="Executing migration" id="drop index IDX_temp_user_email - v1" grafana | logger=migrator t=2024-04-25T12:39:23.273172661Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_email - v1" duration=876.474µs grafana | logger=migrator t=2024-04-25T12:39:23.280455984Z level=info msg="Executing migration" id="drop index IDX_temp_user_org_id - v1" grafana | logger=migrator t=2024-04-25T12:39:23.281764195Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_org_id - v1" duration=1.305031ms grafana | logger=migrator t=2024-04-25T12:39:23.286190814Z level=info msg="Executing migration" id="drop index IDX_temp_user_code - v1" grafana | logger=migrator t=2024-04-25T12:39:23.286946707Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_code - v1" duration=755.693µs grafana | logger=migrator t=2024-04-25T12:39:23.293774343Z level=info msg="Executing migration" id="drop index IDX_temp_user_status - v1" grafana | logger=migrator t=2024-04-25T12:39:23.294943982Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_status - v1" duration=1.169519ms grafana | logger=migrator t=2024-04-25T12:39:23.300887726Z level=info msg="Executing migration" id="Rename table temp_user to temp_user_tmp_qwerty - v1" grafana | 
logger=migrator t=2024-04-25T12:39:23.30434058Z level=info msg="Migration successfully executed" id="Rename table temp_user to temp_user_tmp_qwerty - v1" duration=3.451835ms grafana | logger=migrator t=2024-04-25T12:39:23.310078959Z level=info msg="Executing migration" id="create temp_user v2" grafana | logger=migrator t=2024-04-25T12:39:23.310972643Z level=info msg="Migration successfully executed" id="create temp_user v2" duration=893.524µs grafana | logger=migrator t=2024-04-25T12:39:23.318145776Z level=info msg="Executing migration" id="create index IDX_temp_user_email - v2" grafana | logger=migrator t=2024-04-25T12:39:23.319607288Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_email - v2" duration=1.461522ms grafana | logger=migrator t=2024-04-25T12:39:23.34200067Z level=info msg="Executing migration" id="create index IDX_temp_user_org_id - v2" grafana | logger=migrator t=2024-04-25T12:39:23.343231629Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_org_id - v2" duration=1.232009ms grafana | logger=migrator t=2024-04-25T12:39:23.41338304Z level=info msg="Executing migration" id="create index IDX_temp_user_code - v2" grafana | logger=migrator t=2024-04-25T12:39:23.414620979Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_code - v2" duration=1.237499ms grafana | logger=migrator t=2024-04-25T12:39:23.432343807Z level=info msg="Executing migration" id="create index IDX_temp_user_status - v2" grafana | logger=migrator t=2024-04-25T12:39:23.433642927Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_status - v2" duration=1.2921ms grafana | logger=migrator t=2024-04-25T12:39:23.442244012Z level=info msg="Executing migration" id="copy temp_user v1 to v2" grafana | logger=migrator t=2024-04-25T12:39:23.442635568Z level=info msg="Migration successfully executed" id="copy temp_user v1 to v2" duration=391.906µs grafana | logger=migrator 
t=2024-04-25T12:39:23.452114527Z level=info msg="Executing migration" id="drop temp_user_tmp_qwerty" grafana | logger=migrator t=2024-04-25T12:39:23.452731787Z level=info msg="Migration successfully executed" id="drop temp_user_tmp_qwerty" duration=620.4µs grafana | logger=migrator t=2024-04-25T12:39:23.460310086Z level=info msg="Executing migration" id="Set created for temp users that will otherwise prematurely expire" grafana | logger=migrator t=2024-04-25T12:39:23.460676902Z level=info msg="Migration successfully executed" id="Set created for temp users that will otherwise prematurely expire" duration=372.976µs grafana | logger=migrator t=2024-04-25T12:39:23.4733358Z level=info msg="Executing migration" id="create star table" grafana | logger=migrator t=2024-04-25T12:39:23.474167904Z level=info msg="Migration successfully executed" id="create star table" duration=832.664µs grafana | logger=migrator t=2024-04-25T12:39:23.480851738Z level=info msg="Executing migration" id="add unique index star.user_id_dashboard_id" grafana | logger=migrator t=2024-04-25T12:39:23.482071257Z level=info msg="Migration successfully executed" id="add unique index star.user_id_dashboard_id" duration=1.219049ms grafana | logger=migrator t=2024-04-25T12:39:23.485817326Z level=info msg="Executing migration" id="create org table v1" grafana | logger=migrator t=2024-04-25T12:39:23.487010414Z level=info msg="Migration successfully executed" id="create org table v1" duration=1.192158ms grafana | logger=migrator t=2024-04-25T12:39:23.492077474Z level=info msg="Executing migration" id="create index UQE_org_name - v1" grafana | logger=migrator t=2024-04-25T12:39:23.493340363Z level=info msg="Migration successfully executed" id="create index UQE_org_name - v1" duration=1.262979ms grafana | logger=migrator t=2024-04-25T12:39:23.498784469Z level=info msg="Executing migration" id="create org_user table v1" grafana | logger=migrator t=2024-04-25T12:39:23.500172191Z level=info msg="Migration 
successfully executed" id="create org_user table v1" duration=1.386412ms grafana | logger=migrator t=2024-04-25T12:39:23.504198264Z level=info msg="Executing migration" id="create index IDX_org_user_org_id - v1" grafana | logger=migrator t=2024-04-25T12:39:23.504899696Z level=info msg="Migration successfully executed" id="create index IDX_org_user_org_id - v1" duration=701.202µs grafana | logger=migrator t=2024-04-25T12:39:23.510893189Z level=info msg="Executing migration" id="create index UQE_org_user_org_id_user_id - v1" grafana | logger=migrator t=2024-04-25T12:39:23.512031707Z level=info msg="Migration successfully executed" id="create index UQE_org_user_org_id_user_id - v1" duration=1.131098ms grafana | logger=migrator t=2024-04-25T12:39:23.517550434Z level=info msg="Executing migration" id="create index IDX_org_user_user_id - v1" grafana | logger=migrator t=2024-04-25T12:39:23.518344316Z level=info msg="Migration successfully executed" id="create index IDX_org_user_user_id - v1" duration=791.312µs grafana | logger=migrator t=2024-04-25T12:39:23.523314344Z level=info msg="Executing migration" id="Update org table charset" grafana | logger=migrator t=2024-04-25T12:39:23.523340724Z level=info msg="Migration successfully executed" id="Update org table charset" duration=27.14µs grafana | logger=migrator t=2024-04-25T12:39:23.526849709Z level=info msg="Executing migration" id="Update org_user table charset" grafana | logger=migrator t=2024-04-25T12:39:23.526879399Z level=info msg="Migration successfully executed" id="Update org_user table charset" duration=29.68µs grafana | logger=migrator t=2024-04-25T12:39:23.531066176Z level=info msg="Executing migration" id="Migrate all Read Only Viewers to Viewers" grafana | logger=migrator t=2024-04-25T12:39:23.53134299Z level=info msg="Migration successfully executed" id="Migrate all Read Only Viewers to Viewers" duration=277.214µs grafana | logger=migrator t=2024-04-25T12:39:23.539213003Z level=info msg="Executing 
migration" id="create dashboard table" grafana | logger=migrator t=2024-04-25T12:39:23.540549674Z level=info msg="Migration successfully executed" id="create dashboard table" duration=1.336481ms grafana | logger=migrator t=2024-04-25T12:39:23.544806991Z level=info msg="Executing migration" id="add index dashboard.account_id" grafana | logger=migrator t=2024-04-25T12:39:23.545574153Z level=info msg="Migration successfully executed" id="add index dashboard.account_id" duration=766.922µs grafana | logger=migrator t=2024-04-25T12:39:23.549401113Z level=info msg="Executing migration" id="add unique index dashboard_account_id_slug" grafana | logger=migrator t=2024-04-25T12:39:23.550313248Z level=info msg="Migration successfully executed" id="add unique index dashboard_account_id_slug" duration=912.555µs grafana | logger=migrator t=2024-04-25T12:39:23.555728593Z level=info msg="Executing migration" id="create dashboard_tag table" grafana | logger=migrator t=2024-04-25T12:39:23.556395213Z level=info msg="Migration successfully executed" id="create dashboard_tag table" duration=666.79µs grafana | logger=migrator t=2024-04-25T12:39:23.562003261Z level=info msg="Executing migration" id="add unique index dashboard_tag.dasboard_id_term" grafana | logger=migrator t=2024-04-25T12:39:23.562801483Z level=info msg="Migration successfully executed" id="add unique index dashboard_tag.dasboard_id_term" duration=795.652µs grafana | logger=migrator t=2024-04-25T12:39:23.567203533Z level=info msg="Executing migration" id="drop index UQE_dashboard_tag_dashboard_id_term - v1" grafana | logger=migrator t=2024-04-25T12:39:23.568422021Z level=info msg="Migration successfully executed" id="drop index UQE_dashboard_tag_dashboard_id_term - v1" duration=1.218628ms grafana | logger=migrator t=2024-04-25T12:39:23.573421Z level=info msg="Executing migration" id="Rename table dashboard to dashboard_v1 - v1" grafana | logger=migrator t=2024-04-25T12:39:23.578333897Z level=info msg="Migration 
successfully executed" id="Rename table dashboard to dashboard_v1 - v1" duration=4.912487ms grafana | logger=migrator t=2024-04-25T12:39:23.584714367Z level=info msg="Executing migration" id="create dashboard v2" grafana | logger=migrator t=2024-04-25T12:39:23.585484409Z level=info msg="Migration successfully executed" id="create dashboard v2" duration=767.092µs grafana | logger=migrator t=2024-04-25T12:39:23.599850764Z level=info msg="Executing migration" id="create index IDX_dashboard_org_id - v2" grafana | logger=migrator t=2024-04-25T12:39:23.601040823Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_org_id - v2" duration=1.26079ms grafana | logger=migrator t=2024-04-25T12:39:23.61039867Z level=info msg="Executing migration" id="create index UQE_dashboard_org_id_slug - v2" grafana | logger=migrator t=2024-04-25T12:39:23.611708691Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_org_id_slug - v2" duration=1.309571ms grafana | logger=migrator t=2024-04-25T12:39:23.646066779Z level=info msg="Executing migration" id="copy dashboard v1 to v2" grafana | logger=migrator t=2024-04-25T12:39:23.64673933Z level=info msg="Migration successfully executed" id="copy dashboard v1 to v2" duration=674.181µs grafana | logger=migrator t=2024-04-25T12:39:23.656829368Z level=info msg="Executing migration" id="drop table dashboard_v1" grafana | logger=migrator t=2024-04-25T12:39:23.657648741Z level=info msg="Migration successfully executed" id="drop table dashboard_v1" duration=823.693µs grafana | logger=migrator t=2024-04-25T12:39:23.666839895Z level=info msg="Executing migration" id="alter dashboard.data to mediumtext v1" grafana | logger=migrator t=2024-04-25T12:39:23.666954797Z level=info msg="Migration successfully executed" id="alter dashboard.data to mediumtext v1" duration=116.132µs grafana | logger=migrator t=2024-04-25T12:39:23.671456267Z level=info msg="Executing migration" id="Add column updated_by in 
dashboard - v2"
grafana | logger=migrator t=2024-04-25T12:39:23.673351928Z level=info msg="Migration successfully executed" id="Add column updated_by in dashboard - v2" duration=1.895331ms
grafana | logger=migrator t=2024-04-25T12:39:23.67738056Z level=info msg="Executing migration" id="Add column created_by in dashboard - v2"
grafana | logger=migrator t=2024-04-25T12:39:23.679080377Z level=info msg="Migration successfully executed" id="Add column created_by in dashboard - v2" duration=1.699567ms
grafana | logger=migrator t=2024-04-25T12:39:23.682658703Z level=info msg="Executing migration" id="Add column gnetId in dashboard"
grafana | logger=migrator t=2024-04-25T12:39:23.684560303Z level=info msg="Migration successfully executed" id="Add column gnetId in dashboard" duration=1.90133ms
grafana | logger=migrator t=2024-04-25T12:39:23.691303289Z level=info msg="Executing migration" id="Add index for gnetId in dashboard"
grafana | logger=migrator t=2024-04-25T12:39:23.692178262Z level=info msg="Migration successfully executed" id="Add index for gnetId in dashboard" duration=874.533µs
grafana | logger=migrator t=2024-04-25T12:39:23.696914277Z level=info msg="Executing migration" id="Add column plugin_id in dashboard"
grafana | logger=migrator t=2024-04-25T12:39:23.698842387Z level=info msg="Migration successfully executed" id="Add column plugin_id in dashboard" duration=1.92815ms
grafana | logger=migrator t=2024-04-25T12:39:23.705019354Z level=info msg="Executing migration" id="Add index for plugin_id in dashboard"
grafana | logger=migrator t=2024-04-25T12:39:23.705806376Z level=info msg="Migration successfully executed" id="Add index for plugin_id in dashboard" duration=786.792µs
grafana | logger=migrator t=2024-04-25T12:39:23.712856127Z level=info msg="Executing migration" id="Add index for dashboard_id in dashboard_tag"
grafana | logger=migrator t=2024-04-25T12:39:23.714453742Z level=info msg="Migration successfully executed" id="Add index for dashboard_id in dashboard_tag" duration=1.597285ms
grafana | logger=migrator t=2024-04-25T12:39:23.719231947Z level=info msg="Executing migration" id="Update dashboard table charset"
grafana | logger=migrator t=2024-04-25T12:39:23.719258197Z level=info msg="Migration successfully executed" id="Update dashboard table charset" duration=26.95µs
grafana | logger=migrator t=2024-04-25T12:39:23.723216859Z level=info msg="Executing migration" id="Update dashboard_tag table charset"
grafana | logger=migrator t=2024-04-25T12:39:23.72324727Z level=info msg="Migration successfully executed" id="Update dashboard_tag table charset" duration=28.041µs
grafana | logger=migrator t=2024-04-25T12:39:23.729208444Z level=info msg="Executing migration" id="Add column folder_id in dashboard"
grafana | logger=migrator t=2024-04-25T12:39:23.731471549Z level=info msg="Migration successfully executed" id="Add column folder_id in dashboard" duration=2.265924ms
grafana | logger=migrator t=2024-04-25T12:39:23.734979924Z level=info msg="Executing migration" id="Add column isFolder in dashboard"
grafana | logger=migrator t=2024-04-25T12:39:23.736876413Z level=info msg="Migration successfully executed" id="Add column isFolder in dashboard" duration=1.896489ms
grafana | logger=migrator t=2024-04-25T12:39:23.745377387Z level=info msg="Executing migration" id="Add column has_acl in dashboard"
grafana | logger=migrator t=2024-04-25T12:39:23.747615612Z level=info msg="Migration successfully executed" id="Add column has_acl in dashboard" duration=2.237765ms
grafana | logger=migrator t=2024-04-25T12:39:23.757201703Z level=info msg="Executing migration" id="Add column uid in dashboard"
grafana | logger=migrator t=2024-04-25T12:39:23.760430493Z level=info msg="Migration successfully executed" id="Add column uid in dashboard" duration=3.22793ms
grafana | logger=migrator t=2024-04-25T12:39:23.766023931Z level=info msg="Executing migration" id="Update uid column values in dashboard"
grafana | logger=migrator t=2024-04-25T12:39:23.766339016Z level=info msg="Migration successfully executed" id="Update uid column values in dashboard" duration=327.035µs
grafana | logger=migrator t=2024-04-25T12:39:24.04256325Z level=info msg="Executing migration" id="Add unique index dashboard_org_id_uid"
grafana | logger=migrator t=2024-04-25T12:39:24.043344581Z level=info msg="Migration successfully executed" id="Add unique index dashboard_org_id_uid" duration=783.752µs
grafana | logger=migrator t=2024-04-25T12:39:24.054748581Z level=info msg="Executing migration" id="Remove unique index org_id_slug"
grafana | logger=migrator t=2024-04-25T12:39:24.055333439Z level=info msg="Migration successfully executed" id="Remove unique index org_id_slug" duration=584.938µs
grafana | logger=migrator t=2024-04-25T12:39:24.065157789Z level=info msg="Executing migration" id="Update dashboard title length"
grafana | logger=migrator t=2024-04-25T12:39:24.06521006Z level=info msg="Migration successfully executed" id="Update dashboard title length" duration=52.971µs
grafana | logger=migrator t=2024-04-25T12:39:24.069904642Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_title_folder_id"
grafana | logger=migrator t=2024-04-25T12:39:24.070496419Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_title_folder_id" duration=591.517µs
grafana | logger=migrator t=2024-04-25T12:39:24.078502505Z level=info msg="Executing migration" id="create dashboard_provisioning"
grafana | logger=migrator t=2024-04-25T12:39:24.079094304Z level=info msg="Migration successfully executed" id="create dashboard_provisioning" duration=591.529µs
grafana | logger=migrator t=2024-04-25T12:39:24.088375196Z level=info msg="Executing migration" id="Rename table dashboard_provisioning to dashboard_provisioning_tmp_qwerty - v1"
grafana | logger=migrator t=2024-04-25T12:39:24.092302718Z level=info msg="Migration successfully executed" id="Rename table dashboard_provisioning to dashboard_provisioning_tmp_qwerty - v1" duration=3.927412ms
grafana | logger=migrator t=2024-04-25T12:39:24.104354098Z level=info msg="Executing migration" id="create dashboard_provisioning v2"
grafana | logger=migrator t=2024-04-25T12:39:24.104936226Z level=info msg="Migration successfully executed" id="create dashboard_provisioning v2" duration=579.647µs
grafana | logger=migrator t=2024-04-25T12:39:24.119452197Z level=info msg="Executing migration" id="create index IDX_dashboard_provisioning_dashboard_id - v2"
grafana | logger=migrator t=2024-04-25T12:39:24.120081956Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_provisioning_dashboard_id - v2" duration=629.929µs
grafana | logger=migrator t=2024-04-25T12:39:24.128129112Z level=info msg="Executing migration" id="create index IDX_dashboard_provisioning_dashboard_id_name - v2"
grafana | logger=migrator t=2024-04-25T12:39:24.128814101Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_provisioning_dashboard_id_name - v2" duration=684.789µs
grafana | logger=migrator t=2024-04-25T12:39:24.13326599Z level=info msg="Executing migration" id="copy dashboard_provisioning v1 to v2"
grafana | logger=migrator t=2024-04-25T12:39:24.133534144Z level=info msg="Migration successfully executed" id="copy dashboard_provisioning v1 to v2" duration=267.964µs
grafana | logger=migrator t=2024-04-25T12:39:24.139073747Z level=info msg="Executing migration" id="drop dashboard_provisioning_tmp_qwerty"
grafana | logger=migrator t=2024-04-25T12:39:24.139682635Z level=info msg="Migration successfully executed" id="drop dashboard_provisioning_tmp_qwerty" duration=608.528µs
grafana | logger=migrator t=2024-04-25T12:39:24.149313843Z level=info msg="Executing migration" id="Add check_sum column"
grafana | logger=migrator t=2024-04-25T12:39:24.15286129Z level=info msg="Migration successfully executed" id="Add check_sum column" duration=3.548577ms
grafana | logger=migrator t=2024-04-25T12:39:24.222129816Z level=info msg="Executing migration" id="Add index for dashboard_title"
grafana | logger=migrator t=2024-04-25T12:39:24.223587165Z level=info msg="Migration successfully executed" id="Add index for dashboard_title" duration=1.485589ms
kafka | ===> User
kafka | uid=1000(appuser) gid=1000(appuser) groups=1000(appuser)
kafka | ===> Configuring ...
kafka | Running in Zookeeper mode...
kafka | ===> Running preflight checks ...
kafka | ===> Check if /var/lib/kafka/data is writable ...
kafka | ===> Check if Zookeeper is healthy ...
kafka | [2024-04-25 12:39:21,869] INFO Client environment:zookeeper.version=3.8.3-6ad6d364c7c0bcf0de452d54ebefa3058098ab56, built on 2023-10-05 10:34 UTC (org.apache.zookeeper.ZooKeeper)
kafka | [2024-04-25 12:39:21,869] INFO Client environment:host.name=7b59f0a67712 (org.apache.zookeeper.ZooKeeper)
kafka | [2024-04-25 12:39:21,869] INFO Client environment:java.version=11.0.22 (org.apache.zookeeper.ZooKeeper)
kafka | [2024-04-25 12:39:21,870] INFO Client environment:java.vendor=Azul Systems, Inc.
(org.apache.zookeeper.ZooKeeper)
kafka | [2024-04-25 12:39:21,870] INFO Client environment:java.home=/usr/lib/jvm/java-11-zulu-openjdk-ca (org.apache.zookeeper.ZooKeeper)
kafka | [2024-04-25 12:39:21,870] INFO Client environment:java.class.path=/usr/share/java/cp-base-new/scala-logging_2.13-3.9.4.jar:/usr/share/java/cp-base-new/jackson-datatype-jdk8-2.14.2.jar:/usr/share/java/cp-base-new/kafka-storage-api-7.6.1-ccs.jar:/usr/share/java/cp-base-new/commons-validator-1.7.jar:/usr/share/java/cp-base-new/logredactor-1.0.12.jar:/usr/share/java/cp-base-new/jolokia-core-1.7.1.jar:/usr/share/java/cp-base-new/re2j-1.6.jar:/usr/share/java/cp-base-new/scala-library-2.13.10.jar:/usr/share/java/cp-base-new/commons-io-2.11.0.jar:/usr/share/java/cp-base-new/kafka-clients-7.6.1-ccs.jar:/usr/share/java/cp-base-new/utility-belt-7.6.1.jar:/usr/share/java/cp-base-new/commons-cli-1.4.jar:/usr/share/java/cp-base-new/slf4j-reload4j-1.7.36.jar:/usr/share/java/cp-base-new/kafka-server-common-7.6.1-ccs.jar:/usr/share/java/cp-base-new/kafka-group-coordinator-7.6.1-ccs.jar:/usr/share/java/cp-base-new/common-utils-7.6.1.jar:/usr/share/java/cp-base-new/jackson-annotations-2.14.2.jar:/usr/share/java/cp-base-new/json-simple-1.1.1.jar:/usr/share/java/cp-base-new/jackson-module-scala_2.13-2.14.2.jar:/usr/share/java/cp-base-new/scala-java8-compat_2.13-1.0.2.jar:/usr/share/java/cp-base-new/minimal-json-0.9.5.jar:/usr/share/java/cp-base-new/kafka-storage-7.6.1-ccs.jar:/usr/share/java/cp-base-new/zstd-jni-1.5.5-1.jar:/usr/share/java/cp-base-new/jackson-dataformat-yaml-2.14.2.jar:/usr/share/java/cp-base-new/slf4j-api-1.7.36.jar:/usr/share/java/cp-base-new/paranamer-2.8.jar:/usr/share/java/cp-base-new/commons-beanutils-1.9.4.jar:/usr/share/java/cp-base-new/jmx_prometheus_javaagent-0.18.0.jar:/usr/share/java/cp-base-new/reload4j-1.2.25.jar:/usr/share/java/cp-base-new/jackson-core-2.14.2.jar:/usr/share/java/cp-base-new/disk-usage-agent-7.6.1.jar:/usr/share/java/cp-base-new/commons-digester-2.1.jar:/usr/share/java/cp-base-new/argparse4j-0.7.0.jar:/usr/share/java/cp-base-new/audience-annotations-0.12.0.jar:/usr/share/java/cp-base-new/kafka-raft-7.6.1-ccs.jar:/usr/share/java/cp-base-new/gson-2.9.0.jar:/usr/share/java/cp-base-new/snakeyaml-2.0.jar:/usr/share/java/cp-base-new/jopt-simple-5.0.4.jar:/usr/share/java/cp-base-new/error_prone_annotations-2.10.0.jar:/usr/share/java/cp-base-new/lz4-java-1.8.0.jar:/usr/share/java/cp-base-new/logredactor-metrics-1.0.12.jar:/usr/share/java/cp-base-new/checker-qual-3.19.0.jar:/usr/share/java/cp-base-new/kafka-metadata-7.6.1-ccs.jar:/usr/share/java/cp-base-new/pcollections-4.0.1.jar:/usr/share/java/cp-base-new/commons-logging-1.2.jar:/usr/share/java/cp-base-new/scala-reflect-2.13.10.jar:/usr/share/java/cp-base-new/scala-collection-compat_2.13-2.10.0.jar:/usr/share/java/cp-base-new/metrics-core-2.2.0.jar:/usr/share/java/cp-base-new/zookeeper-jute-3.8.3.jar:/usr/share/java/cp-base-new/jackson-dataformat-csv-2.14.2.jar:/usr/share/java/cp-base-new/commons-collections-3.2.2.jar:/usr/share/java/cp-base-new/caffeine-2.9.3.jar:/usr/share/java/cp-base-new/jolokia-jvm-1.7.1.jar:/usr/share/java/cp-base-new/kafka-tools-api-7.6.1-ccs.jar:/usr/share/java/cp-base-new/zookeeper-3.8.3.jar:/usr/share/java/cp-base-new/metrics-core-4.1.12.1.jar:/usr/share/java/cp-base-new/kafka_2.13-7.6.1-ccs.jar:/usr/share/java/cp-base-new/jackson-databind-2.14.2.jar:/usr/share/java/cp-base-new/snappy-java-1.1.10.5.jar:/usr/share/java/cp-base-new/jose4j-0.9.5.jar (org.apache.zookeeper.ZooKeeper)
kafka | [2024-04-25 12:39:21,870] INFO Client environment:java.library.path=/usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.ZooKeeper)
kafka | [2024-04-25 12:39:21,870] INFO Client environment:java.io.tmpdir=/tmp (org.apache.zookeeper.ZooKeeper)
kafka | [2024-04-25 12:39:21,870] INFO Client environment:java.compiler= (org.apache.zookeeper.ZooKeeper)
kafka | [2024-04-25 12:39:21,870] INFO Client environment:os.name=Linux
(org.apache.zookeeper.ZooKeeper)
kafka | [2024-04-25 12:39:21,870] INFO Client environment:os.arch=amd64 (org.apache.zookeeper.ZooKeeper)
kafka | [2024-04-25 12:39:21,870] INFO Client environment:os.version=4.15.0-192-generic (org.apache.zookeeper.ZooKeeper)
kafka | [2024-04-25 12:39:21,870] INFO Client environment:user.name=appuser (org.apache.zookeeper.ZooKeeper)
kafka | [2024-04-25 12:39:21,870] INFO Client environment:user.home=/home/appuser (org.apache.zookeeper.ZooKeeper)
kafka | [2024-04-25 12:39:21,870] INFO Client environment:user.dir=/home/appuser (org.apache.zookeeper.ZooKeeper)
kafka | [2024-04-25 12:39:21,870] INFO Client environment:os.memory.free=493MB (org.apache.zookeeper.ZooKeeper)
kafka | [2024-04-25 12:39:21,870] INFO Client environment:os.memory.max=8042MB (org.apache.zookeeper.ZooKeeper)
kafka | [2024-04-25 12:39:21,870] INFO Client environment:os.memory.total=504MB (org.apache.zookeeper.ZooKeeper)
kafka | [2024-04-25 12:39:21,873] INFO Initiating client connection, connectString=zookeeper:2181 sessionTimeout=40000 watcher=io.confluent.admin.utils.ZookeeperConnectionWatcher@b7f23d9 (org.apache.zookeeper.ZooKeeper)
kafka | [2024-04-25 12:39:21,877] INFO Setting -D jdk.tls.rejectClientInitiatedRenegotiation=true to disable client-initiated TLS renegotiation (org.apache.zookeeper.common.X509Util)
kafka | [2024-04-25 12:39:21,882] INFO jute.maxbuffer value is 1048575 Bytes (org.apache.zookeeper.ClientCnxnSocket)
kafka | [2024-04-25 12:39:21,890] INFO zookeeper.request.timeout value is 0. feature enabled=false (org.apache.zookeeper.ClientCnxn)
kafka | [2024-04-25 12:39:21,908] INFO Opening socket connection to server zookeeper/172.17.0.5:2181. (org.apache.zookeeper.ClientCnxn)
kafka | [2024-04-25 12:39:21,909] INFO SASL config status: Will not attempt to authenticate using SASL (unknown error) (org.apache.zookeeper.ClientCnxn)
kafka | [2024-04-25 12:39:21,916] INFO Socket connection established, initiating session, client: /172.17.0.8:52748, server: zookeeper/172.17.0.5:2181 (org.apache.zookeeper.ClientCnxn)
kafka | [2024-04-25 12:39:21,988] INFO Session establishment complete on server zookeeper/172.17.0.5:2181, session id = 0x100000609db0000, negotiated timeout = 40000 (org.apache.zookeeper.ClientCnxn)
kafka | [2024-04-25 12:39:22,112] INFO Session: 0x100000609db0000 closed (org.apache.zookeeper.ZooKeeper)
kafka | [2024-04-25 12:39:22,112] INFO EventThread shut down for session: 0x100000609db0000 (org.apache.zookeeper.ClientCnxn)
kafka | Using log4j config /etc/kafka/log4j.properties
kafka | ===> Launching ...
kafka | ===> Launching kafka ...
kafka | [2024-04-25 12:39:22,884] INFO Registered kafka:type=kafka.Log4jController MBean (kafka.utils.Log4jControllerRegistration$)
kafka | [2024-04-25 12:39:23,182] INFO Setting -D jdk.tls.rejectClientInitiatedRenegotiation=true to disable client-initiated TLS renegotiation (org.apache.zookeeper.common.X509Util)
kafka | [2024-04-25 12:39:23,246] INFO Registered signal handlers for TERM, INT, HUP (org.apache.kafka.common.utils.LoggingSignalHandler)
kafka | [2024-04-25 12:39:23,247] INFO starting (kafka.server.KafkaServer)
kafka | [2024-04-25 12:39:23,247] INFO Connecting to zookeeper on zookeeper:2181 (kafka.server.KafkaServer)
kafka | [2024-04-25 12:39:23,259] INFO [ZooKeeperClient Kafka server] Initializing a new session to zookeeper:2181. (kafka.zookeeper.ZooKeeperClient)
kafka | [2024-04-25 12:39:23,262] INFO Client environment:zookeeper.version=3.8.4-9316c2a7a97e1666d8f4593f34dd6fc36ecc436c, built on 2024-02-12 22:16 UTC (org.apache.zookeeper.ZooKeeper)
kafka | [2024-04-25 12:39:23,262] INFO Client environment:host.name=7b59f0a67712 (org.apache.zookeeper.ZooKeeper)
kafka | [2024-04-25 12:39:23,262] INFO Client environment:java.version=11.0.22 (org.apache.zookeeper.ZooKeeper)
kafka | [2024-04-25 12:39:23,262] INFO Client environment:java.vendor=Azul Systems, Inc. (org.apache.zookeeper.ZooKeeper)
grafana | logger=migrator t=2024-04-25T12:39:24.233196823Z level=info msg="Executing migration" id="delete tags for deleted dashboards"
grafana | logger=migrator t=2024-04-25T12:39:24.233504307Z level=info msg="Migration successfully executed" id="delete tags for deleted dashboards" duration=307.304µs
grafana | logger=migrator t=2024-04-25T12:39:24.244820036Z level=info msg="Executing migration" id="delete stars for deleted dashboards"
grafana | logger=migrator t=2024-04-25T12:39:24.245327683Z level=info msg="Migration successfully executed" id="delete stars for deleted dashboards" duration=507.337µs
grafana | logger=migrator t=2024-04-25T12:39:24.25721235Z level=info msg="Executing migration" id="Add index for dashboard_is_folder"
grafana | logger=migrator t=2024-04-25T12:39:24.258360896Z level=info msg="Migration successfully executed" id="Add index for dashboard_is_folder" duration=1.147376ms
grafana | logger=migrator t=2024-04-25T12:39:24.268815093Z level=info msg="Executing migration" id="Add isPublic for dashboard"
grafana | logger=migrator t=2024-04-25T12:39:24.27078979Z level=info msg="Migration successfully executed" id="Add isPublic for dashboard" duration=1.979977ms
grafana | logger=migrator t=2024-04-25T12:39:24.277556389Z level=info msg="Executing migration" id="create data_source table"
grafana | logger=migrator t=2024-04-25T12:39:24.27831871Z level=info msg="Migration successfully executed" id="create data_source table" duration=763.511µs
grafana | logger=migrator t=2024-04-25T12:39:24.285764318Z level=info msg="Executing migration" id="add index data_source.account_id"
grafana | logger=migrator t=2024-04-25T12:39:24.287736344Z level=info msg="Migration successfully executed" id="add index data_source.account_id" duration=1.970626ms
grafana | logger=migrator t=2024-04-25T12:39:24.297738457Z level=info msg="Executing migration" id="add unique index data_source.account_id_name"
grafana | logger=migrator t=2024-04-25T12:39:24.298864021Z level=info msg="Migration successfully executed" id="add unique index data_source.account_id_name" duration=1.124924ms
grafana | logger=migrator t=2024-04-25T12:39:24.305783602Z level=info msg="Executing migration" id="drop index IDX_data_source_account_id - v1"
grafana | logger=migrator t=2024-04-25T12:39:24.306547483Z level=info msg="Migration successfully executed" id="drop index IDX_data_source_account_id - v1" duration=758.611µs
grafana | logger=migrator t=2024-04-25T12:39:24.312822646Z level=info msg="Executing migration" id="drop index UQE_data_source_account_id_name - v1"
grafana | logger=migrator t=2024-04-25T12:39:24.313546045Z level=info msg="Migration successfully executed" id="drop index UQE_data_source_account_id_name - v1" duration=723.659µs
grafana | logger=migrator t=2024-04-25T12:39:24.322332261Z level=info msg="Executing migration" id="Rename table data_source to data_source_v1 - v1"
grafana | logger=migrator t=2024-04-25T12:39:24.327138435Z level=info msg="Migration successfully executed" id="Rename table data_source to data_source_v1 - v1" duration=4.808124ms
grafana | logger=migrator t=2024-04-25T12:39:24.337586644Z level=info msg="Executing migration" id="create data_source table v2"
grafana | logger=migrator t=2024-04-25T12:39:24.338586197Z level=info msg="Migration successfully executed" id="create data_source table v2" duration=999.463µs
grafana | logger=migrator t=2024-04-25T12:39:24.346186027Z level=info msg="Executing migration" id="create index IDX_data_source_org_id - v2"
grafana | logger=migrator t=2024-04-25T12:39:24.34717347Z level=info msg="Migration successfully executed" id="create index IDX_data_source_org_id - v2" duration=987.683µs
grafana | logger=migrator t=2024-04-25T12:39:24.352743774Z level=info msg="Executing migration" id="create index UQE_data_source_org_id_name - v2"
grafana | logger=migrator t=2024-04-25T12:39:24.353594136Z level=info msg="Migration successfully executed" id="create index UQE_data_source_org_id_name - v2" duration=850.272µs
grafana | logger=migrator t=2024-04-25T12:39:24.359562805Z level=info msg="Executing migration" id="Drop old table data_source_v1 #2"
grafana | logger=migrator t=2024-04-25T12:39:24.360605678Z level=info msg="Migration successfully executed" id="Drop old table data_source_v1 #2" duration=1.042373ms
grafana | logger=migrator t=2024-04-25T12:39:24.366163082Z level=info msg="Executing migration" id="Add column with_credentials"
grafana | logger=migrator t=2024-04-25T12:39:24.368579854Z level=info msg="Migration successfully executed" id="Add column with_credentials" duration=2.416772ms
grafana | logger=migrator t=2024-04-25T12:39:24.383673903Z level=info msg="Executing migration" id="Add secure json data column"
grafana | logger=migrator t=2024-04-25T12:39:24.387668617Z level=info msg="Migration successfully executed" id="Add secure json data column" duration=4.010964ms
grafana | logger=migrator t=2024-04-25T12:39:24.394097561Z level=info msg="Executing migration" id="Update data_source table charset"
grafana | logger=migrator t=2024-04-25T12:39:24.394125242Z level=info msg="Migration successfully executed" id="Update data_source table charset" duration=25.081µs
grafana | logger=migrator t=2024-04-25T12:39:24.396711876Z level=info msg="Executing migration" id="Update initial version to 1"
grafana | logger=migrator t=2024-04-25T12:39:24.396914249Z level=info msg="Migration successfully executed" id="Update initial version to 1" duration=202.323µs
grafana | logger=migrator t=2024-04-25T12:39:24.402829197Z level=info msg="Executing migration" id="Add read_only data column"
grafana | logger=migrator t=2024-04-25T12:39:24.407174215Z level=info msg="Migration successfully executed" id="Add read_only data column" duration=4.344518ms
grafana | logger=migrator t=2024-04-25T12:39:24.41214922Z level=info msg="Executing migration" id="Migrate logging ds to loki ds"
grafana | logger=migrator t=2024-04-25T12:39:24.412375194Z level=info msg="Migration successfully executed" id="Migrate logging ds to loki ds" duration=226.474µs
grafana | logger=migrator t=2024-04-25T12:39:24.510300348Z level=info msg="Executing migration" id="Update json_data with nulls"
grafana | logger=migrator t=2024-04-25T12:39:24.510624014Z level=info msg="Migration successfully executed" id="Update json_data with nulls" duration=327.326µs
grafana | logger=migrator t=2024-04-25T12:39:24.519671123Z level=info msg="Executing migration" id="Add uid column"
grafana | logger=migrator t=2024-04-25T12:39:24.522153516Z level=info msg="Migration successfully executed" id="Add uid column" duration=2.487483ms
grafana | logger=migrator t=2024-04-25T12:39:24.529009606Z level=info msg="Executing migration" id="Update uid value"
grafana | logger=migrator t=2024-04-25T12:39:24.529251159Z level=info msg="Migration successfully executed" id="Update uid value" duration=243.753µs
grafana | logger=migrator t=2024-04-25T12:39:24.535027576Z level=info msg="Executing migration" id="Add unique index datasource_org_id_uid"
grafana | logger=migrator t=2024-04-25T12:39:24.535995399Z level=info msg="Migration successfully executed" id="Add unique index datasource_org_id_uid" duration=968.483µs
grafana | logger=migrator t=2024-04-25T12:39:24.539606657Z level=info msg="Executing migration" id="add unique index datasource_org_id_is_default"
grafana | logger=migrator t=2024-04-25T12:39:24.540438687Z level=info msg="Migration successfully executed" id="add unique index datasource_org_id_is_default" duration=831.91µs
grafana | logger=migrator t=2024-04-25T12:39:24.549349065Z level=info msg="Executing migration" id="create api_key table"
grafana | logger=migrator t=2024-04-25T12:39:24.55040587Z level=info msg="Migration successfully executed" id="create api_key table" duration=2.027257ms
grafana | logger=migrator t=2024-04-25T12:39:24.55650033Z level=info msg="Executing migration" id="add index api_key.account_id"
grafana | logger=migrator t=2024-04-25T12:39:24.557609874Z level=info msg="Migration successfully executed" id="add index api_key.account_id" duration=1.109654ms
kafka | [2024-04-25 12:39:23,262] INFO Client environment:java.home=/usr/lib/jvm/java-11-zulu-openjdk-ca (org.apache.zookeeper.ZooKeeper)
kafka | [2024-04-25 12:39:23,262] INFO Client environment:java.class.path=/usr/bin/../share/java/kafka/scala-logging_2.13-3.9.4.jar:/usr/bin/../share/java/kafka/jersey-common-2.39.1.jar:/usr/bin/../share/java/kafka/swagger-annotations-2.2.8.jar:/usr/bin/../share/java/kafka/connect-json-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/kafka-storage-api-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/commons-validator-1.7.jar:/usr/bin/../share/java/kafka/javax.servlet-api-3.1.0.jar:/usr/bin/../share/java/kafka/aopalliance-repackaged-2.6.1.jar:/usr/bin/../share/java/kafka/netty-transport-4.1.100.Final.jar:/usr/bin/../share/java/kafka/rocksdbjni-7.9.2.jar:/usr/bin/../share/java/kafka/jetty-http-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/kafka-log4j-appender-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jackson-annotations-2.13.5.jar:/usr/bin/../share/java/kafka/commons-io-2.11.0.jar:/usr/bin/../share/java/kafka/kafka-clients-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/javax.activation-api-1.2.0.jar:/usr/bin/../share/java/kafka/jetty-security-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/commons-cli-1.4.jar:/usr/bin/../share/java/kafka/jetty-server-9.4.54.v20240208.jar:/usr/bin/../share/java/ka
fka/slf4j-reload4j-1.7.36.jar:/usr/bin/../share/java/kafka/scala-reflect-2.13.11.jar:/usr/bin/../share/java/kafka/kafka-server-common-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/kafka-group-coordinator-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/netty-buffer-4.1.100.Final.jar:/usr/bin/../share/java/kafka/connect-runtime-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/trogdor-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/connect-api-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jakarta.ws.rs-api-2.1.6.jar:/usr/bin/../share/java/kafka/jakarta.annotation-api-1.3.5.jar:/usr/bin/../share/java/kafka/kafka-streams-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/scala-java8-compat_2.13-1.0.2.jar:/usr/bin/../share/java/kafka/javax.ws.rs-api-2.1.1.jar:/usr/bin/../share/java/kafka/zookeeper-jute-3.8.4.jar:/usr/bin/../share/java/kafka/netty-resolver-4.1.100.Final.jar:/usr/bin/../share/java/kafka/hk2-api-2.6.1.jar:/usr/bin/../share/java/kafka/kafka-storage-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/zstd-jni-1.5.5-1.jar:/usr/bin/../share/java/kafka/jackson-dataformat-csv-2.13.5.jar:/usr/bin/../share/java/kafka/kafka.jar:/usr/bin/../share/java/kafka/scala-library-2.13.11.jar:/usr/bin/../share/java/kafka/jakarta.inject-2.6.1.jar:/usr/bin/../share/java/kafka/jakarta.xml.bind-api-2.3.3.jar:/usr/bin/../share/java/kafka/jetty-continuation-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/connect-transforms-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jose4j-0.9.4.jar:/usr/bin/../share/java/kafka/hk2-locator-2.6.1.jar:/usr/bin/../share/java/kafka/reflections-0.10.2.jar:/usr/bin/../share/java/kafka/slf4j-api-1.7.36.jar:/usr/bin/../share/java/kafka/paranamer-2.8.jar:/usr/bin/../share/java/kafka/commons-beanutils-1.9.4.jar:/usr/bin/../share/java/kafka/jaxb-api-2.3.1.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-2.39.1.jar:/usr/bin/../share/java/kafka/hk2-utils-2.6.1.jar:/usr/bin/../share/java/kafka/jackson-module-scala_2.13-2.13.5.jar:/usr/bin/../share/java/kafka/reload4j-1.2.25.jar:/usr/bin/../sh
are/java/kafka/connect-mirror-client-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jetty-servlet-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/jackson-core-2.13.5.jar:/usr/bin/../share/java/kafka/jersey-hk2-2.39.1.jar:/usr/bin/../share/java/kafka/jackson-databind-2.13.5.jar:/usr/bin/../share/java/kafka/jetty-servlets-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/jersey-client-2.39.1.jar:/usr/bin/../share/java/kafka/osgi-resource-locator-1.0.3.jar:/usr/bin/../share/java/kafka/commons-digester-2.1.jar:/usr/bin/../share/java/kafka/netty-transport-native-epoll-4.1.100.Final.jar:/usr/bin/../share/java/kafka/argparse4j-0.7.0.jar:/usr/bin/../share/java/kafka/connect-mirror-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jackson-datatype-jdk8-2.13.5.jar:/usr/bin/../share/java/kafka/audience-annotations-0.12.0.jar:/usr/bin/../share/java/kafka/jackson-module-jaxb-annotations-2.13.5.jar:/usr/bin/../share/java/kafka/kafka-raft-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/javax.annotation-api-1.3.2.jar:/usr/bin/../share/java/kafka/maven-artifact-3.8.8.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-json-provider-2.13.5.jar:/usr/bin/../share/java/kafka/jakarta.validation-api-2.0.2.jar:/usr/bin/../share/java/kafka/zookeeper-3.8.4.jar:/usr/bin/../share/java/kafka/jersey-server-2.39.1.jar:/usr/bin/../share/java/kafka/commons-lang3-3.8.1.jar:/usr/bin/../share/java/kafka/jopt-simple-5.0.4.jar:/usr/bin/../share/java/kafka/error_prone_annotations-2.10.0.jar:/usr/bin/../share/java/kafka/lz4-java-1.8.0.jar:/usr/bin/../share/java/kafka/jetty-util-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/kafka-tools-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jakarta.activation-api-1.2.2.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-core-2.39.1.jar:/usr/bin/../share/java/kafka/checker-qual-3.19.0.jar:/usr/bin/../share/java/kafka/kafka-metadata-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/pcollections-4.0.1.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-base-2.13.5.jar:/usr/bin/../share/java/k
afka/commons-logging-1.2.jar:/usr/bin/../share/java/kafka/jsr305-3.0.2.jar:/usr/bin/../share/java/kafka/jetty-io-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/netty-codec-4.1.100.Final.jar:/usr/bin/../share/java/kafka/netty-transport-native-unix-common-4.1.100.Final.jar:/usr/bin/../share/java/kafka/scala-collection-compat_2.13-2.10.0.jar:/usr/bin/../share/java/kafka/metrics-core-2.2.0.jar:/usr/bin/../share/java/kafka/kafka-streams-test-utils-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/kafka-streams-examples-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/netty-handler-4.1.100.Final.jar:/usr/bin/../share/java/kafka/commons-collections-3.2.2.jar:/usr/bin/../share/java/kafka/javassist-3.29.2-GA.jar:/usr/bin/../share/java/kafka/caffeine-2.9.3.jar:/usr/bin/../share/java/kafka/plexus-utils-3.3.1.jar:/usr/bin/../share/java/kafka/kafka-tools-api-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/activation-1.1.1.jar:/usr/bin/../share/java/kafka/netty-common-4.1.100.Final.jar:/usr/bin/../share/java/kafka/kafka-streams-scala_2.13-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/metrics-core-4.1.12.1.jar:/usr/bin/../share/java/kafka/jline-3.25.1.jar:/usr/bin/../share/java/kafka/jetty-client-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/kafka_2.13-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/connect-basic-auth-extension-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jetty-util-ajax-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/snappy-java-1.1.10.5.jar:/usr/bin/../share/java/kafka/kafka-shell-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/netty-transport-classes-epoll-4.1.100.Final.jar:/usr/bin/../share/java/confluent-telemetry/* (org.apache.zookeeper.ZooKeeper)
kafka | [2024-04-25 12:39:23,263] INFO Client environment:java.library.path=/usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.ZooKeeper)
mariadb | 2024-04-25 12:39:13+00:00 [Note] [Entrypoint]: Entrypoint script for MariaDB Server 1:10.10.2+maria~ubu2204 started.
mariadb | 2024-04-25 12:39:13+00:00 [Note] [Entrypoint]: Switching to dedicated user 'mysql'
mariadb | 2024-04-25 12:39:13+00:00 [Note] [Entrypoint]: Entrypoint script for MariaDB Server 1:10.10.2+maria~ubu2204 started.
mariadb | 2024-04-25 12:39:13+00:00 [Note] [Entrypoint]: Initializing database files
mariadb | 2024-04-25 12:39:13 0 [Warning] mariadbd: io_uring_queue_init() failed with ENOSYS: check seccomp filters, and the kernel version (newer than 5.1 required)
mariadb | 2024-04-25 12:39:13 0 [Warning] InnoDB: liburing disabled: falling back to innodb_use_native_aio=OFF
mariadb | 2024-04-25 12:39:14 0 [Warning] 'default-authentication-plugin' is MySQL 5.6 / 5.7 compatible option. To be implemented in later versions.
mariadb |
mariadb |
mariadb | PLEASE REMEMBER TO SET A PASSWORD FOR THE MariaDB root USER !
mariadb | To do so, start the server, then issue the following command:
mariadb |
mariadb | '/usr/bin/mysql_secure_installation'
mariadb |
mariadb | which will also give you the option of removing the test
mariadb | databases and anonymous user created by default. This is
mariadb | strongly recommended for production servers.
mariadb |
mariadb | See the MariaDB Knowledgebase at https://mariadb.com/kb
mariadb |
mariadb | Please report any problems at https://mariadb.org/jira
mariadb |
mariadb | The latest information about MariaDB is available at https://mariadb.org/.
mariadb |
mariadb | Consider joining MariaDB's strong and vibrant community:
mariadb | https://mariadb.org/get-involved/
mariadb |
mariadb | 2024-04-25 12:39:17+00:00 [Note] [Entrypoint]: Database files initialized
mariadb | 2024-04-25 12:39:17+00:00 [Note] [Entrypoint]: Starting temporary server
mariadb | 2024-04-25 12:39:17+00:00 [Note] [Entrypoint]: Waiting for server startup
kafka | [2024-04-25 12:39:23,263] INFO Client environment:java.io.tmpdir=/tmp (org.apache.zookeeper.ZooKeeper)
policy-apex-pdp | Waiting for mariadb port 3306...
grafana | logger=migrator t=2024-04-25T12:39:24.561084901Z level=info msg="Executing migration" id="add index api_key.key"
mariadb | 2024-04-25 12:39:17 0 [Note] mariadbd (server 10.10.2-MariaDB-1:10.10.2+maria~ubu2204) starting as process 100 ...
kafka | [2024-04-25 12:39:23,263] INFO Client environment:java.compiler= (org.apache.zookeeper.ZooKeeper)
kafka | [2024-04-25 12:39:23,263] INFO Client environment:os.name=Linux (org.apache.zookeeper.ZooKeeper)
kafka | [2024-04-25 12:39:23,263] INFO Client environment:os.arch=amd64 (org.apache.zookeeper.ZooKeeper)
grafana | logger=migrator t=2024-04-25T12:39:24.561647898Z level=info msg="Migration successfully executed" id="add index api_key.key" duration=562.657µs
policy-db-migrator | Waiting for mariadb port 3306...
mariadb | 2024-04-25 12:39:17 0 [Note] InnoDB: Compressed tables use zlib 1.2.11
policy-api | Waiting for mariadb port 3306...
kafka | [2024-04-25 12:39:23,263] INFO Client environment:os.version=4.15.0-192-generic (org.apache.zookeeper.ZooKeeper)
policy-apex-pdp | mariadb (172.17.0.2:3306) open
grafana | logger=migrator t=2024-04-25T12:39:24.568358577Z level=info msg="Executing migration" id="add index api_key.account_id_name"
simulator | Policy simulator config file: /opt/app/policy/simulators/etc/mounted/simParameters.json
policy-db-migrator | nc: connect to mariadb (172.17.0.2) port 3306 (tcp) failed: Connection refused
mariadb | 2024-04-25 12:39:17 0 [Note] InnoDB: Number of transaction pools: 1
prometheus | ts=2024-04-25T12:39:16.719Z caller=main.go:573 level=info msg="No time or size retention was set so using the default time retention" duration=15d
zookeeper | ===> User
policy-pap | Waiting for mariadb port 3306...
policy-api | mariadb (172.17.0.2:3306) open
kafka | [2024-04-25 12:39:23,263] INFO Client environment:user.name=appuser (org.apache.zookeeper.ZooKeeper)
policy-apex-pdp | Waiting for kafka port 9092...
grafana | logger=migrator t=2024-04-25T12:39:24.569695305Z level=info msg="Migration successfully executed" id="add index api_key.account_id_name" duration=1.336158ms
simulator | overriding logback.xml
policy-db-migrator | nc: connect to mariadb (172.17.0.2) port 3306 (tcp) failed: Connection refused
mariadb | 2024-04-25 12:39:17 0 [Note] InnoDB: Using crc32 + pclmulqdq instructions
prometheus | ts=2024-04-25T12:39:16.719Z caller=main.go:617 level=info msg="Starting Prometheus Server" mode=server version="(version=2.51.2, branch=HEAD, revision=b4c0ab52c3e9b940ab803581ddae9b3d9a452337)"
zookeeper | uid=1000(appuser) gid=1000(appuser) groups=1000(appuser)
policy-pap | mariadb (172.17.0.2:3306) open
policy-api | Waiting for policy-db-migrator port 6824...
kafka | [2024-04-25 12:39:23,263] INFO Client environment:user.home=/home/appuser (org.apache.zookeeper.ZooKeeper)
policy-apex-pdp | kafka (172.17.0.8:9092) open
grafana | logger=migrator t=2024-04-25T12:39:24.573591566Z level=info msg="Executing migration" id="drop index IDX_api_key_account_id - v1"
simulator | 2024-04-25 12:39:21,243 INFO replacing 'HOST_NAME' with simulator in /opt/app/policy/simulators/etc/mounted/simParameters.json
policy-db-migrator | nc: connect to mariadb (172.17.0.2) port 3306 (tcp) failed: Connection refused
mariadb | 2024-04-25 12:39:17 0 [Note] mariadbd: O_TMPFILE is not supported on /tmp (disabling future attempts)
prometheus | ts=2024-04-25T12:39:16.720Z caller=main.go:622 level=info build_context="(go=go1.22.2, platform=linux/amd64, user=root@b63f02a423d9, date=20240410-14:05:54, tags=netgo,builtinassets,stringlabels)"
zookeeper | ===> Configuring ...
policy-pap | Waiting for kafka port 9092...
policy-api | policy-db-migrator (172.17.0.6:6824) open
kafka | [2024-04-25 12:39:23,263] INFO Client environment:user.dir=/home/appuser (org.apache.zookeeper.ZooKeeper)
policy-apex-pdp | Waiting for pap port 6969...
grafana | logger=migrator t=2024-04-25T12:39:24.574394327Z level=info msg="Migration successfully executed" id="drop index IDX_api_key_account_id - v1" duration=801.8µs
simulator | 2024-04-25 12:39:21,311 INFO org.onap.policy.models.simulators starting
policy-db-migrator | nc: connect to mariadb (172.17.0.2) port 3306 (tcp) failed: Connection refused
mariadb | 2024-04-25 12:39:17 0 [Warning] mariadbd: io_uring_queue_init() failed with ENOSYS: check seccomp filters, and the kernel version (newer than 5.1 required)
prometheus | ts=2024-04-25T12:39:16.720Z caller=main.go:623 level=info host_details="(Linux 4.15.0-192-generic #203-Ubuntu SMP Wed Aug 10 17:40:03 UTC 2022 x86_64 prometheus (none))"
zookeeper | ===> Running preflight checks ...
policy-pap | kafka (172.17.0.8:9092) open
policy-api | Policy api config file: /opt/app/policy/api/etc/apiParameters.yaml
kafka | [2024-04-25 12:39:23,263] INFO Client environment:os.memory.free=1008MB (org.apache.zookeeper.ZooKeeper)
policy-apex-pdp | pap (172.17.0.10:6969) open
grafana | logger=migrator t=2024-04-25T12:39:24.578179407Z level=info msg="Executing migration" id="drop index UQE_api_key_key - v1"
simulator | 2024-04-25 12:39:21,311 INFO org.onap.policy.models.simulators starting CDS gRPC Server Properties
policy-db-migrator | nc: connect to mariadb (172.17.0.2) port 3306 (tcp) failed: Connection refused
mariadb | 2024-04-25 12:39:17 0 [Warning] InnoDB: liburing disabled: falling back to innodb_use_native_aio=OFF
prometheus | ts=2024-04-25T12:39:16.720Z caller=main.go:624 level=info fd_limits="(soft=1048576, hard=1048576)"
zookeeper | ===> Check if /var/lib/zookeeper/data is writable ...
policy-pap | Waiting for api port 6969...
policy-api | kafka | [2024-04-25 12:39:23,263] INFO Client environment:os.memory.max=1024MB (org.apache.zookeeper.ZooKeeper) policy-apex-pdp | apexApps.sh: running application 'onappf' with command 'java -Dlogback.configurationFile=/opt/app/policy/apex-pdp/etc/logback.xml -cp /opt/app/policy/apex-pdp/etc:/opt/app/policy/apex-pdp/etc/hazelcast:/opt/app/policy/apex-pdp/etc/infinispan:/opt/app/policy/apex-pdp/lib/* -Djavax.net.ssl.keyStore=/opt/app/policy/apex-pdp/etc/ssl/policy-keystore -Djavax.net.ssl.keyStorePassword=Pol1cy_0nap -Djavax.net.ssl.trustStore=/opt/app/policy/apex-pdp/etc/ssl/policy-truststore -Djavax.net.ssl.trustStorePassword=Pol1cy_0nap -Dlogback.configurationFile=/opt/app/policy/apex-pdp/etc/logback.xml -Dhazelcast.config=/opt/app/policy/apex-pdp/etc/hazelcast.xml -Dhazelcast.mancenter.enabled=false org.onap.policy.apex.services.onappf.ApexStarterMain -c /opt/app/policy/apex-pdp/etc/onappf/config/OnapPfConfig.json' grafana | logger=migrator t=2024-04-25T12:39:24.578918517Z level=info msg="Migration successfully executed" id="drop index UQE_api_key_key - v1" duration=746.56µs simulator | 2024-04-25 12:39:21,486 INFO org.onap.policy.models.simulators starting org.onap.policy.simulators.AaiSimulatorJaxRs_RESOURCE_LOCATION policy-db-migrator | nc: connect to mariadb (172.17.0.2) port 3306 (tcp) failed: Connection refused mariadb | 2024-04-25 12:39:17 0 [Note] InnoDB: Initializing buffer pool, total size = 128.000MiB, chunk size = 2.000MiB prometheus | ts=2024-04-25T12:39:16.720Z caller=main.go:625 level=info vm_limits="(soft=unlimited, hard=unlimited)" zookeeper | ===> Check if /var/lib/zookeeper/log is writable ... policy-pap | api (172.17.0.7:6969) open policy-api | . 
____ _ __ _ _ kafka | [2024-04-25 12:39:23,263] INFO Client environment:os.memory.total=1024MB (org.apache.zookeeper.ZooKeeper) policy-apex-pdp | [2024-04-25T12:40:02.440+00:00|INFO|ApexStarterMain|main] In ApexStarter with parameters [-c, /opt/app/policy/apex-pdp/etc/onappf/config/OnapPfConfig.json] grafana | logger=migrator t=2024-04-25T12:39:24.58822048Z level=info msg="Executing migration" id="drop index UQE_api_key_account_id_name - v1" simulator | 2024-04-25 12:39:21,487 INFO org.onap.policy.models.simulators starting A&AI simulator policy-db-migrator | nc: connect to mariadb (172.17.0.2) port 3306 (tcp) failed: Connection refused mariadb | 2024-04-25 12:39:17 0 [Note] InnoDB: Completed initialization of buffer pool prometheus | ts=2024-04-25T12:39:16.828Z caller=web.go:568 level=info component=web msg="Start listening for connections" address=0.0.0.0:9090 zookeeper | ===> Launching ... policy-pap | Policy pap config file: /opt/app/policy/pap/etc/papParameters.yaml policy-api | /\\ / ___'_ __ _ _(_)_ __ __ _ \ \ \ \ kafka | [2024-04-25 12:39:23,265] INFO Initiating client connection, connectString=zookeeper:2181 sessionTimeout=18000 watcher=kafka.zookeeper.ZooKeeperClient$ZooKeeperClientWatcher$@447a020 (org.apache.zookeeper.ZooKeeper) policy-apex-pdp | [2024-04-25T12:40:02.600+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: grafana | logger=migrator t=2024-04-25T12:39:24.58893511Z level=info msg="Migration successfully executed" id="drop index UQE_api_key_account_id_name - v1" duration=720.811µs simulator | 2024-04-25 12:39:21,613 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-2e61d218==org.glassfish.jersey.servlet.ServletContainer@60d118f1{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=A&AI simulator, host=0.0.0.0, port=6666, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@64a8c844{STOPPED}[11.0.20,sto=0], 
context=o.e.j.s.ServletContextHandler@3f6db3fb{/,null,STOPPED}, connector=A&AI simulator@6f152006{HTTP/1.1, (http/1.1)}{0.0.0.0:6666}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-2e61d218==org.glassfish.jersey.servlet.ServletContainer@60d118f1{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: WAITED-START policy-db-migrator | nc: connect to mariadb (172.17.0.2) port 3306 (tcp) failed: Connection refused mariadb | 2024-04-25 12:39:17 0 [Note] InnoDB: File system buffers for log disabled (block size=512 bytes) prometheus | ts=2024-04-25T12:39:16.829Z caller=main.go:1129 level=info msg="Starting TSDB ..." zookeeper | ===> Launching zookeeper ... policy-pap | PDP group configuration file: /opt/app/policy/pap/etc/mounted/groups.json policy-api | ( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \ kafka | [2024-04-25 12:39:23,268] INFO jute.maxbuffer value is 4194304 Bytes (org.apache.zookeeper.ClientCnxnSocket) policy-apex-pdp | allow.auto.create.topics = true grafana | logger=migrator t=2024-04-25T12:39:24.598096551Z level=info msg="Executing migration" id="Rename table api_key to api_key_v1 - v1" simulator | 2024-04-25 12:39:21,625 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-2e61d218==org.glassfish.jersey.servlet.ServletContainer@60d118f1{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=A&AI simulator, host=0.0.0.0, port=6666, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@64a8c844{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@3f6db3fb{/,null,STOPPED}, connector=A&AI simulator@6f152006{HTTP/1.1, (http/1.1)}{0.0.0.0:6666}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-2e61d218==org.glassfish.jersey.servlet.ServletContainer@60d118f1{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING policy-db-migrator | 
Connection to mariadb (172.17.0.2) 3306 port [tcp/mysql] succeeded! mariadb | 2024-04-25 12:39:17 0 [Note] InnoDB: 128 rollback segments are active. prometheus | ts=2024-04-25T12:39:16.831Z caller=tls_config.go:313 level=info component=web msg="Listening on" address=[::]:9090 zookeeper | [2024-04-25 12:39:19,444] INFO Reading configuration from: /etc/kafka/zookeeper.properties (org.apache.zookeeper.server.quorum.QuorumPeerConfig) policy-pap | policy-api | \\/ ___)| |_)| | | | | || (_| | ) ) ) ) kafka | [2024-04-25 12:39:23,273] INFO zookeeper.request.timeout value is 0. feature enabled=false (org.apache.zookeeper.ClientCnxn) policy-apex-pdp | auto.commit.interval.ms = 5000 grafana | logger=migrator t=2024-04-25T12:39:24.606804156Z level=info msg="Migration successfully executed" id="Rename table api_key to api_key_v1 - v1" duration=8.708605ms simulator | 2024-04-25 12:39:21,628 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-2e61d218==org.glassfish.jersey.servlet.ServletContainer@60d118f1{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=A&AI simulator, host=0.0.0.0, port=6666, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@64a8c844{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@3f6db3fb{/,null,STOPPED}, connector=A&AI simulator@6f152006{HTTP/1.1, (http/1.1)}{0.0.0.0:6666}, jettyThread=Thread[A&AI simulator-6666,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-2e61d218==org.glassfish.jersey.servlet.ServletContainer@60d118f1{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING policy-db-migrator | 321 blocks mariadb | 2024-04-25 12:39:17 0 [Note] InnoDB: Setting file './ibtmp1' size to 12.000MiB. Physically writing the file full; Please wait ... prometheus | ts=2024-04-25T12:39:16.831Z caller=tls_config.go:316 level=info component=web msg="TLS is disabled." 
http2=false address=[::]:9090 zookeeper | [2024-04-25 12:39:19,453] INFO clientPortAddress is 0.0.0.0:2181 (org.apache.zookeeper.server.quorum.QuorumPeerConfig) policy-pap | . ____ _ __ _ _ policy-api | ' |____| .__|_| |_|_| |_\__, | / / / / kafka | [2024-04-25 12:39:23,274] INFO [ZooKeeperClient Kafka server] Waiting until connected. (kafka.zookeeper.ZooKeeperClient) policy-apex-pdp | auto.include.jmx.reporter = true grafana | logger=migrator t=2024-04-25T12:39:24.613353812Z level=info msg="Executing migration" id="create api_key table v2" simulator | 2024-04-25 12:39:21,636 INFO jetty-11.0.20; built: 2024-01-29T21:04:22.394Z; git: 922f8dc188f7011e60d0361de585fd4ac4d63064; jvm 17.0.11+9-alpine-r0 policy-db-migrator | Preparing upgrade release version: 0800 mariadb | 2024-04-25 12:39:17 0 [Note] InnoDB: File './ibtmp1' size is now 12.000MiB. prometheus | ts=2024-04-25T12:39:16.836Z caller=head.go:616 level=info component=tsdb msg="Replaying on-disk memory mappable chunks if any" policy-api | =========|_|==============|___/=/_/_/_/ policy-apex-pdp | auto.offset.reset = latest grafana | logger=migrator t=2024-04-25T12:39:24.61395152Z level=info msg="Migration successfully executed" id="create api_key table v2" duration=594.558µs simulator | 2024-04-25 12:39:21,710 INFO Session workerName=node0 policy-db-migrator | Preparing upgrade release version: 0900 mariadb | 2024-04-25 12:39:17 0 [Note] InnoDB: log sequence number 46590; transaction id 14 prometheus | ts=2024-04-25T12:39:16.836Z caller=head.go:698 level=info component=tsdb msg="On-disk memory mappable chunks replay completed" duration=4.29µs prometheus | ts=2024-04-25T12:39:16.836Z caller=head.go:706 level=info component=tsdb msg="Replaying WAL, this may take a while" kafka | [2024-04-25 12:39:23,277] INFO Opening socket connection to server zookeeper/172.17.0.5:2181. 
(org.apache.zookeeper.ClientCnxn) policy-api | :: Spring Boot :: (v3.1.10) grafana | logger=migrator t=2024-04-25T12:39:24.618178286Z level=info msg="Executing migration" id="create index IDX_api_key_org_id - v2" simulator | 2024-04-25 12:39:22,246 INFO Using GSON for REST calls policy-db-migrator | Preparing upgrade release version: 1000 mariadb | 2024-04-25 12:39:17 0 [Note] Plugin 'FEEDBACK' is disabled. zookeeper | [2024-04-25 12:39:19,453] INFO secureClientPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig) prometheus | ts=2024-04-25T12:39:16.838Z caller=head.go:778 level=info component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0 kafka | [2024-04-25 12:39:23,284] INFO Socket connection established, initiating session, client: /172.17.0.8:52750, server: zookeeper/172.17.0.5:2181 (org.apache.zookeeper.ClientCnxn) policy-apex-pdp | bootstrap.servers = [kafka:9092] policy-api | simulator | 2024-04-25 12:39:22,312 INFO Started o.e.j.s.ServletContextHandler@3f6db3fb{/,null,AVAILABLE} policy-db-migrator | Preparing upgrade release version: 1100 mariadb | 2024-04-25 12:39:17 0 [Warning] 'default-authentication-plugin' is MySQL 5.6 / 5.7 compatible option. To be implemented in later versions. 
zookeeper | [2024-04-25 12:39:19,453] INFO observerMasterPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig) prometheus | ts=2024-04-25T12:39:16.838Z caller=head.go:815 level=info component=tsdb msg="WAL replay completed" checkpoint_replay_duration=1.257651ms wal_replay_duration=1.061358ms wbl_replay_duration=370ns total_replay_duration=2.36672ms kafka | [2024-04-25 12:39:23,296] INFO Session establishment complete on server zookeeper/172.17.0.5:2181, session id = 0x100000609db0001, negotiated timeout = 18000 (org.apache.zookeeper.ClientCnxn) grafana | logger=migrator t=2024-04-25T12:39:24.619987671Z level=info msg="Migration successfully executed" id="create index IDX_api_key_org_id - v2" duration=1.809304ms policy-apex-pdp | check.crcs = true policy-api | [2024-04-25T12:39:35.837+00:00|INFO|Version|background-preinit] HV000001: Hibernate Validator 8.0.1.Final simulator | 2024-04-25 12:39:22,318 INFO Started A&AI simulator@6f152006{HTTP/1.1, (http/1.1)}{0.0.0.0:6666} policy-db-migrator | Preparing upgrade release version: 1200 mariadb | 2024-04-25 12:39:17 0 [Warning] 'user' entry 'root@mariadb' ignored in --skip-name-resolve mode. zookeeper | [2024-04-25 12:39:19,453] INFO metricsProvider.className is org.apache.zookeeper.metrics.impl.DefaultMetricsProvider (org.apache.zookeeper.server.quorum.QuorumPeerConfig) prometheus | ts=2024-04-25T12:39:16.842Z caller=main.go:1150 level=info fs_type=EXT4_SUPER_MAGIC kafka | [2024-04-25 12:39:23,302] INFO [ZooKeeperClient Kafka server] Connected. 
(kafka.zookeeper.ZooKeeperClient) grafana | logger=migrator t=2024-04-25T12:39:24.625798047Z level=info msg="Executing migration" id="create index UQE_api_key_key - v2" policy-pap | /\\ / ___'_ __ _ _(_)_ __ __ _ \ \ \ \ policy-apex-pdp | client.dns.lookup = use_all_dns_ips policy-api | [2024-04-25T12:39:35.901+00:00|INFO|PolicyApiApplication|main] Starting PolicyApiApplication using Java 17.0.11 with PID 26 (/app/api.jar started by policy in /opt/app/policy/api/bin) simulator | 2024-04-25 12:39:22,324 INFO Started Server@64a8c844{STARTING}[11.0.20,sto=0] @1585ms policy-db-migrator | Preparing upgrade release version: 1300 mariadb | 2024-04-25 12:39:17 0 [Warning] 'proxies_priv' entry '@% root@mariadb' ignored in --skip-name-resolve mode. zookeeper | [2024-04-25 12:39:19,455] INFO autopurge.snapRetainCount set to 3 (org.apache.zookeeper.server.DatadirCleanupManager) prometheus | ts=2024-04-25T12:39:16.842Z caller=main.go:1153 level=info msg="TSDB started" kafka | [2024-04-25 12:39:23,653] INFO Cluster ID = 6HLElDkITkKpDhaqvETosg (kafka.server.KafkaServer) grafana | logger=migrator t=2024-04-25T12:39:24.626914252Z level=info msg="Migration successfully executed" id="create index UQE_api_key_key - v2" duration=1.116305ms policy-pap | ( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \ policy-apex-pdp | client.id = consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-1 policy-api | [2024-04-25T12:39:35.902+00:00|INFO|PolicyApiApplication|main] No active profile set, falling back to 1 default profile: "default" simulator | 2024-04-25 12:39:22,325 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-2e61d218==org.glassfish.jersey.servlet.ServletContainer@60d118f1{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}}, swaggerId=null, toString()=JettyServletServer(name=A&AI simulator, host=0.0.0.0, port=6666, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@64a8c844{STARTED}[11.0.20,sto=0], 
context=o.e.j.s.ServletContextHandler@3f6db3fb{/,null,AVAILABLE}, connector=A&AI simulator@6f152006{HTTP/1.1, (http/1.1)}{0.0.0.0:6666}, jettyThread=Thread[A&AI simulator-6666,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-2e61d218==org.glassfish.jersey.servlet.ServletContainer@60d118f1{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}})]: pending time is 4304 ms. policy-db-migrator | Done mariadb | 2024-04-25 12:39:17 0 [Note] mariadbd: ready for connections. zookeeper | [2024-04-25 12:39:19,455] INFO autopurge.purgeInterval set to 0 (org.apache.zookeeper.server.DatadirCleanupManager) prometheus | ts=2024-04-25T12:39:16.842Z caller=main.go:1335 level=info msg="Loading configuration file" filename=/etc/prometheus/prometheus.yml kafka | [2024-04-25 12:39:23,658] WARN No meta.properties file under dir /var/lib/kafka/data/meta.properties (kafka.server.BrokerMetadataCheckpoint) grafana | logger=migrator t=2024-04-25T12:39:24.637892857Z level=info msg="Executing migration" id="create index UQE_api_key_org_id_name - v2" policy-apex-pdp | client.rack = policy-apex-pdp | connections.max.idle.ms = 540000 policy-api | [2024-04-25T12:39:37.851+00:00|INFO|RepositoryConfigurationDelegate|main] Bootstrapping Spring Data JPA repositories in DEFAULT mode. simulator | 2024-04-25 12:39:22,332 INFO org.onap.policy.models.simulators starting SDNC simulator policy-db-migrator | name version mariadb | Version: '10.10.2-MariaDB-1:10.10.2+maria~ubu2204' socket: '/run/mysqld/mysqld.sock' port: 0 mariadb.org binary distribution zookeeper | [2024-04-25 12:39:19,455] INFO Purge task is not scheduled. 
(org.apache.zookeeper.server.DatadirCleanupManager) prometheus | ts=2024-04-25T12:39:16.844Z caller=main.go:1372 level=info msg="Completed loading of configuration file" filename=/etc/prometheus/prometheus.yml totalDuration=1.514026ms db_storage=1.94µs remote_storage=2.73µs web_handler=920ns query_engine=1.5µs scrape=490.548µs scrape_sd=210.573µs notify=35.211µs notify_sd=23.31µs rules=3.1µs tracing=6.611µs kafka | [2024-04-25 12:39:23,709] INFO KafkaConfig values: grafana | logger=migrator t=2024-04-25T12:39:24.638743578Z level=info msg="Migration successfully executed" id="create index UQE_api_key_org_id_name - v2" duration=852.751µs policy-apex-pdp | default.api.timeout.ms = 60000 policy-apex-pdp | enable.auto.commit = true policy-api | [2024-04-25T12:39:37.936+00:00|INFO|RepositoryConfigurationDelegate|main] Finished Spring Data repository scanning in 76 ms. Found 6 JPA repository interfaces. simulator | 2024-04-25 12:39:22,335 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-bf1ec20==org.glassfish.jersey.servlet.ServletContainer@636d8afb{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SDNC simulator, host=0.0.0.0, port=6668, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@70efb718{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@b70da4c{/,null,STOPPED}, connector=SDNC simulator@c5ee75e{HTTP/1.1, (http/1.1)}{0.0.0.0:6668}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-bf1ec20==org.glassfish.jersey.servlet.ServletContainer@636d8afb{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: WAITED-START policy-db-migrator | policyadmin 0 mariadb | 2024-04-25 12:39:18+00:00 [Note] [Entrypoint]: Temporary server started. 
zookeeper | [2024-04-25 12:39:19,455] WARN Either no config or no quorum defined in config, running in standalone mode (org.apache.zookeeper.server.quorum.QuorumPeerMain) prometheus | ts=2024-04-25T12:39:16.844Z caller=main.go:1114 level=info msg="Server is ready to receive web requests." kafka | advertised.listeners = PLAINTEXT://kafka:9092,PLAINTEXT_HOST://localhost:29092 grafana | logger=migrator t=2024-04-25T12:39:24.644391143Z level=info msg="Executing migration" id="copy api_key v1 to v2" policy-apex-pdp | exclude.internal.topics = true policy-apex-pdp | fetch.max.bytes = 52428800 policy-api | [2024-04-25T12:39:38.356+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.api.main.exception.ServiceExceptionHandler simulator | 2024-04-25 12:39:22,336 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-bf1ec20==org.glassfish.jersey.servlet.ServletContainer@636d8afb{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SDNC simulator, host=0.0.0.0, port=6668, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@70efb718{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@b70da4c{/,null,STOPPED}, connector=SDNC simulator@c5ee75e{HTTP/1.1, (http/1.1)}{0.0.0.0:6668}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-bf1ec20==org.glassfish.jersey.servlet.ServletContainer@636d8afb{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING zookeeper | [2024-04-25 12:39:19,459] INFO Log4j 1.2 jmx support not found; jmx disabled. (org.apache.zookeeper.jmx.ManagedUtil) prometheus | ts=2024-04-25T12:39:16.844Z caller=manager.go:163 level=info component="rule manager" msg="Starting rule manager..." 
kafka | alter.config.policy.class.name = null grafana | logger=migrator t=2024-04-25T12:39:24.644729507Z level=info msg="Migration successfully executed" id="copy api_key v1 to v2" duration=339.414µs policy-apex-pdp | fetch.max.wait.ms = 500 policy-apex-pdp | fetch.min.bytes = 1 policy-api | [2024-04-25T12:39:38.359+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.api.main.exception.ServiceExceptionHandler policy-db-migrator | policyadmin: upgrade available: 0 -> 1300 mariadb | 2024-04-25 12:39:20+00:00 [Note] [Entrypoint]: Creating user policy_user simulator | 2024-04-25 12:39:22,336 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-bf1ec20==org.glassfish.jersey.servlet.ServletContainer@636d8afb{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SDNC simulator, host=0.0.0.0, port=6668, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@70efb718{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@b70da4c{/,null,STOPPED}, connector=SDNC simulator@c5ee75e{HTTP/1.1, (http/1.1)}{0.0.0.0:6668}, jettyThread=Thread[SDNC simulator-6668,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-bf1ec20==org.glassfish.jersey.servlet.ServletContainer@636d8afb{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING zookeeper | [2024-04-25 12:39:19,459] INFO Reading configuration from: /etc/kafka/zookeeper.properties (org.apache.zookeeper.server.quorum.QuorumPeerConfig) kafka | alter.log.dirs.replication.quota.window.num = 11 grafana | logger=migrator t=2024-04-25T12:39:24.649746674Z level=info msg="Executing migration" id="Drop old table api_key_v1" policy-apex-pdp | group.id = 4b79aeb3-604a-4e33-80d9-cdeedf19ce63 policy-apex-pdp | 
group.instance.id = null policy-api | [2024-04-25T12:39:38.981+00:00|INFO|TomcatWebServer|main] Tomcat initialized with port(s): 6969 (http) policy-db-migrator | upgrade: 0 -> 1300 mariadb | 2024-04-25 12:39:20+00:00 [Note] [Entrypoint]: Securing system users (equivalent to running mysql_secure_installation) simulator | 2024-04-25 12:39:22,337 INFO jetty-11.0.20; built: 2024-01-29T21:04:22.394Z; git: 922f8dc188f7011e60d0361de585fd4ac4d63064; jvm 17.0.11+9-alpine-r0 zookeeper | [2024-04-25 12:39:19,460] INFO clientPortAddress is 0.0.0.0:2181 (org.apache.zookeeper.server.quorum.QuorumPeerConfig) zookeeper | [2024-04-25 12:39:19,460] INFO secureClientPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig) grafana | logger=migrator t=2024-04-25T12:39:24.650295001Z level=info msg="Migration successfully executed" id="Drop old table api_key_v1" duration=547.127µs policy-apex-pdp | heartbeat.interval.ms = 3000 policy-apex-pdp | interceptor.classes = [] policy-api | [2024-04-25T12:39:38.991+00:00|INFO|Http11NioProtocol|main] Initializing ProtocolHandler ["http-nio-6969"] policy-db-migrator | mariadb | simulator | 2024-04-25 12:39:22,342 INFO Session workerName=node0 kafka | alter.log.dirs.replication.quota.window.size.seconds = 1 zookeeper | [2024-04-25 12:39:19,460] INFO observerMasterPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig) grafana | logger=migrator t=2024-04-25T12:39:24.656484473Z level=info msg="Executing migration" id="Update api_key table charset" policy-apex-pdp | internal.leave.group.on.close = true policy-apex-pdp | internal.throw.on.fetch.stable.offset.unsupported = false policy-api | [2024-04-25T12:39:38.993+00:00|INFO|StandardService|main] Starting service [Tomcat] policy-db-migrator | > upgrade 0100-jpapdpgroup_properties.sql mariadb | simulator | 2024-04-25 12:39:22,402 INFO Using GSON for REST calls kafka | authorizer.class.name = zookeeper | [2024-04-25 12:39:19,460] INFO metricsProvider.className is 
org.apache.zookeeper.metrics.impl.DefaultMetricsProvider (org.apache.zookeeper.server.quorum.QuorumPeerConfig) grafana | logger=migrator t=2024-04-25T12:39:24.656509644Z level=info msg="Migration successfully executed" id="Update api_key table charset" duration=25.741µs policy-apex-pdp | isolation.level = read_uncommitted policy-apex-pdp | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-api | [2024-04-25T12:39:38.993+00:00|INFO|StandardEngine|main] Starting Servlet engine: [Apache Tomcat/10.1.19] policy-db-migrator | -------------- mariadb | 2024-04-25 12:39:20+00:00 [Warn] [Entrypoint]: /usr/local/bin/docker-entrypoint.sh: ignoring /docker-entrypoint-initdb.d/db.conf simulator | 2024-04-25 12:39:22,412 INFO Started o.e.j.s.ServletContextHandler@b70da4c{/,null,AVAILABLE} kafka | auto.create.topics.enable = true zookeeper | [2024-04-25 12:39:19,460] INFO Starting server (org.apache.zookeeper.server.ZooKeeperServerMain) grafana | logger=migrator t=2024-04-25T12:39:24.660072941Z level=info msg="Executing migration" id="Add expires to api_key table" policy-apex-pdp | max.partition.fetch.bytes = 1048576 policy-apex-pdp | max.poll.interval.ms = 300000 policy-api | [2024-04-25T12:39:39.088+00:00|INFO|[/policy/api/v1]|main] Initializing Spring embedded WebApplicationContext policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpgroup_properties (name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, PROPERTIES VARCHAR(255) DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) DEFAULT NULL) mariadb | 2024-04-25 12:39:20+00:00 [Note] [Entrypoint]: /usr/local/bin/docker-entrypoint.sh: running /docker-entrypoint-initdb.d/db.sh simulator | 2024-04-25 12:39:22,414 INFO Started SDNC simulator@c5ee75e{HTTP/1.1, (http/1.1)}{0.0.0.0:6668} kafka | auto.include.jmx.reporter = true zookeeper | [2024-04-25 12:39:19,472] INFO ServerMetrics initialized with provider org.apache.zookeeper.metrics.impl.DefaultMetricsProvider@77eca502 
(org.apache.zookeeper.server.ServerMetrics) grafana | logger=migrator t=2024-04-25T12:39:24.662814797Z level=info msg="Migration successfully executed" id="Add expires to api_key table" duration=2.741036ms policy-apex-pdp | max.poll.records = 500 policy-apex-pdp | metadata.max.age.ms = 300000 policy-api | [2024-04-25T12:39:39.089+00:00|INFO|ServletWebServerApplicationContext|main] Root WebApplicationContext: initialization completed in 3117 ms policy-db-migrator | -------------- mariadb | #!/bin/bash -xv simulator | 2024-04-25 12:39:22,415 INFO Started Server@70efb718{STARTING}[11.0.20,sto=0] @1675ms kafka | auto.leader.rebalance.enable = true zookeeper | [2024-04-25 12:39:19,475] INFO ACL digest algorithm is: SHA1 (org.apache.zookeeper.server.auth.DigestAuthenticationProvider) grafana | logger=migrator t=2024-04-25T12:39:24.670990295Z level=info msg="Executing migration" id="Add service account foreign key" policy-pap | \\/ ___)| |_)| | | | | || (_| | ) ) ) ) policy-apex-pdp | metric.reporters = [] policy-api | [2024-04-25T12:39:39.509+00:00|INFO|LogHelper|main] HHH000204: Processing PersistenceUnitInfo [name: default] policy-db-migrator | mariadb | # Copyright 2019,2021 AT&T Intellectual Property. 
All rights reserved simulator | 2024-04-25 12:39:22,415 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-bf1ec20==org.glassfish.jersey.servlet.ServletContainer@636d8afb{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}}, swaggerId=null, toString()=JettyServletServer(name=SDNC simulator, host=0.0.0.0, port=6668, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@70efb718{STARTED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@b70da4c{/,null,AVAILABLE}, connector=SDNC simulator@c5ee75e{HTTP/1.1, (http/1.1)}{0.0.0.0:6668}, jettyThread=Thread[SDNC simulator-6668,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-bf1ec20==org.glassfish.jersey.servlet.ServletContainer@636d8afb{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}})]: pending time is 4921 ms. kafka | background.threads = 10 zookeeper | [2024-04-25 12:39:19,475] INFO zookeeper.DigestAuthenticationProvider.enabled = true (org.apache.zookeeper.server.auth.DigestAuthenticationProvider) grafana | logger=migrator t=2024-04-25T12:39:24.673059683Z level=info msg="Migration successfully executed" id="Add service account foreign key" duration=2.070638ms policy-pap | ' |____| .__|_| |_|_| |_\__, | / / / / policy-apex-pdp | metrics.num.samples = 2 policy-api | [2024-04-25T12:39:39.583+00:00|INFO|Version|main] HHH000412: Hibernate ORM core version 6.3.2.Final policy-db-migrator | mariadb | # Modifications Copyright (c) 2022 Nordix Foundation. 
simulator | 2024-04-25 12:39:22,416 INFO org.onap.policy.models.simulators starting SO simulator kafka | broker.heartbeat.interval.ms = 2000 zookeeper | [2024-04-25 12:39:19,477] INFO zookeeper.snapshot.trust.empty : false (org.apache.zookeeper.server.persistence.FileTxnSnapLog) grafana | logger=migrator t=2024-04-25T12:39:24.679272625Z level=info msg="Executing migration" id="set service account foreign key to nil if 0" policy-pap | =========|_|==============|___/=/_/_/_/ policy-apex-pdp | metrics.recording.level = INFO policy-api | [2024-04-25T12:39:39.644+00:00|INFO|RegionFactoryInitiator|main] HHH000026: Second-level cache disabled policy-db-migrator | > upgrade 0110-jpapdpstatistics_enginestats.sql mariadb | # simulator | 2024-04-25 12:39:22,418 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-4e70a728==org.glassfish.jersey.servlet.ServletContainer@3238e994{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SO simulator, host=0.0.0.0, port=6669, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@b7838a9{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@696f0212{/,null,STOPPED}, connector=SO simulator@4e858e0a{HTTP/1.1, (http/1.1)}{0.0.0.0:6669}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-4e70a728==org.glassfish.jersey.servlet.ServletContainer@3238e994{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: WAITED-START kafka | broker.id = 1 zookeeper | [2024-04-25 12:39:19,488] INFO (org.apache.zookeeper.server.ZooKeeperServer) grafana | logger=migrator t=2024-04-25T12:39:24.679423917Z level=info msg="Migration successfully executed" id="set service account foreign key to nil if 0" duration=151.632µs policy-pap | :: Spring Boot :: (v3.1.10) policy-apex-pdp | metrics.sample.window.ms = 30000 policy-api | 
[2024-04-25T12:39:39.933+00:00|INFO|SpringPersistenceUnitInfo|main] No LoadTimeWeaver setup: ignoring JPA class transformer policy-db-migrator | -------------- mariadb | # Licensed under the Apache License, Version 2.0 (the "License"); simulator | 2024-04-25 12:39:22,418 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-4e70a728==org.glassfish.jersey.servlet.ServletContainer@3238e994{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SO simulator, host=0.0.0.0, port=6669, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@b7838a9{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@696f0212{/,null,STOPPED}, connector=SO simulator@4e858e0a{HTTP/1.1, (http/1.1)}{0.0.0.0:6669}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-4e70a728==org.glassfish.jersey.servlet.ServletContainer@3238e994{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING kafka | broker.id.generation.enable = true zookeeper | [2024-04-25 12:39:19,488] INFO ______ _ (org.apache.zookeeper.server.ZooKeeperServer) grafana | logger=migrator t=2024-04-25T12:39:24.682443407Z level=info msg="Executing migration" id="Add last_used_at to api_key table" policy-pap | policy-apex-pdp | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] policy-api | [2024-04-25T12:39:39.965+00:00|INFO|HikariDataSource|main] HikariPool-1 - Starting... 
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpstatistics_enginestats (AVERAGEEXECUTIONTIME DOUBLE DEFAULT NULL, ENGINEID VARCHAR(255) DEFAULT NULL, ENGINETIMESTAMP BIGINT DEFAULT NULL, ENGINEWORKERSTATE INT DEFAULT NULL, EVENTCOUNT BIGINT DEFAULT NULL, LASTENTERTIME BIGINT DEFAULT NULL, LASTEXECUTIONTIME BIGINT DEFAULT NULL, LASTSTART BIGINT DEFAULT NULL, UPTIME BIGINT DEFAULT NULL, timeStamp datetime DEFAULT NULL, name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL) mariadb | # you may not use this file except in compliance with the License. simulator | 2024-04-25 12:39:22,419 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-4e70a728==org.glassfish.jersey.servlet.ServletContainer@3238e994{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SO simulator, host=0.0.0.0, port=6669, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@b7838a9{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@696f0212{/,null,STOPPED}, connector=SO simulator@4e858e0a{HTTP/1.1, (http/1.1)}{0.0.0.0:6669}, jettyThread=Thread[SO simulator-6669,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-4e70a728==org.glassfish.jersey.servlet.ServletContainer@3238e994{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING kafka | broker.rack = null zookeeper | [2024-04-25 12:39:19,488] INFO |___ / | | (org.apache.zookeeper.server.ZooKeeperServer) grafana | logger=migrator t=2024-04-25T12:39:24.686738523Z level=info msg="Migration successfully executed" id="Add last_used_at to api_key table" duration=4.295096ms policy-pap | [2024-04-25T12:39:52.207+00:00|INFO|Version|background-preinit] HV000001: Hibernate Validator 8.0.1.Final policy-apex-pdp | receive.buffer.bytes = 65536 policy-api | [2024-04-25T12:39:40.065+00:00|INFO|HikariPool|main] HikariPool-1 - Added connection 
org.mariadb.jdbc.Connection@26844abb policy-db-migrator | -------------- mariadb | # You may obtain a copy of the License at simulator | 2024-04-25 12:39:22,420 INFO jetty-11.0.20; built: 2024-01-29T21:04:22.394Z; git: 922f8dc188f7011e60d0361de585fd4ac4d63064; jvm 17.0.11+9-alpine-r0 kafka | broker.session.timeout.ms = 9000 zookeeper | [2024-04-25 12:39:19,488] INFO / / ___ ___ | | __ ___ ___ _ __ ___ _ __ (org.apache.zookeeper.server.ZooKeeperServer) grafana | logger=migrator t=2024-04-25T12:39:24.690592175Z level=info msg="Executing migration" id="Add is_revoked column to api_key table" policy-pap | [2024-04-25T12:39:52.275+00:00|INFO|PolicyPapApplication|main] Starting PolicyPapApplication using Java 17.0.11 with PID 40 (/app/pap.jar started by policy in /opt/app/policy/pap/bin) policy-apex-pdp | reconnect.backoff.max.ms = 1000 policy-api | [2024-04-25T12:39:40.066+00:00|INFO|HikariDataSource|main] HikariPool-1 - Start completed. policy-db-migrator | mariadb | # simulator | 2024-04-25 12:39:22,448 INFO Session workerName=node0 kafka | client.quota.callback.class = null zookeeper | [2024-04-25 12:39:19,488] INFO / / / _ \ / _ \ | |/ / / _ \ / _ \ | '_ \ / _ \ | '__| (org.apache.zookeeper.server.ZooKeeperServer) grafana | logger=migrator t=2024-04-25T12:39:24.693097168Z level=info msg="Migration successfully executed" id="Add is_revoked column to api_key table" duration=2.504933ms policy-pap | [2024-04-25T12:39:52.276+00:00|INFO|PolicyPapApplication|main] No active profile set, falling back to 1 default profile: "default" policy-apex-pdp | reconnect.backoff.ms = 50 policy-api | [2024-04-25T12:39:42.060+00:00|INFO|JtaPlatformInitiator|main] HHH000489: No JTA platform available (set 'hibernate.transaction.jta.platform' to enable JTA platform integration) policy-db-migrator | mariadb | # http://www.apache.org/licenses/LICENSE-2.0 simulator | 2024-04-25 12:39:22,509 INFO Using GSON for REST calls kafka | compression.type = producer zookeeper | [2024-04-25 
12:39:19,488] INFO / /__ | (_) | | (_) | | < | __/ | __/ | |_) | | __/ | | (org.apache.zookeeper.server.ZooKeeperServer) grafana | logger=migrator t=2024-04-25T12:39:24.697528876Z level=info msg="Executing migration" id="create dashboard_snapshot table v4" policy-pap | [2024-04-25T12:39:54.236+00:00|INFO|RepositoryConfigurationDelegate|main] Bootstrapping Spring Data JPA repositories in DEFAULT mode. policy-apex-pdp | request.timeout.ms = 30000 policy-api | [2024-04-25T12:39:42.063+00:00|INFO|LocalContainerEntityManagerFactoryBean|main] Initialized JPA EntityManagerFactory for persistence unit 'default' policy-db-migrator | > upgrade 0120-jpapdpsubgroup_policies.sql mariadb | # simulator | 2024-04-25 12:39:22,521 INFO Started o.e.j.s.ServletContextHandler@696f0212{/,null,AVAILABLE} kafka | connection.failed.authentication.delay.ms = 100 zookeeper | [2024-04-25 12:39:19,488] INFO /_____| \___/ \___/ |_|\_\ \___| \___| | .__/ \___| |_| (org.apache.zookeeper.server.ZooKeeperServer) grafana | logger=migrator t=2024-04-25T12:39:24.698246756Z level=info msg="Migration successfully executed" id="create dashboard_snapshot table v4" duration=717.77µs policy-pap | [2024-04-25T12:39:54.323+00:00|INFO|RepositoryConfigurationDelegate|main] Finished Spring Data repository scanning in 79 ms. Found 7 JPA repository interfaces. 
policy-apex-pdp | retry.backoff.ms = 100 policy-api | [2024-04-25T12:39:43.211+00:00|WARN|ApiDatabaseInitializer|main] Detected multi-versioned type: policytypes/onap.policies.monitoring.tcagen2.v2.yaml policy-db-migrator | -------------- mariadb | # Unless required by applicable law or agreed to in writing, software simulator | 2024-04-25 12:39:22,526 INFO Started SO simulator@4e858e0a{HTTP/1.1, (http/1.1)}{0.0.0.0:6669} kafka | connections.max.idle.ms = 600000 zookeeper | [2024-04-25 12:39:19,488] INFO | | (org.apache.zookeeper.server.ZooKeeperServer) grafana | logger=migrator t=2024-04-25T12:39:24.706751049Z level=info msg="Executing migration" id="drop table dashboard_snapshot_v4 #1" policy-pap | [2024-04-25T12:39:54.727+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.exception.ServiceExceptionHandler policy-apex-pdp | sasl.client.callback.handler.class = null policy-api | [2024-04-25T12:39:44.821+00:00|INFO|ApiDatabaseInitializer|main] Multi-versioned Service Template [onap.policies.Monitoring, onap.policies.monitoring.tcagen2] policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpsubgroup_policies (name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, parentLocalName VARCHAR(120) DEFAULT NULL, localName VARCHAR(120) DEFAULT NULL, parentKeyVersion VARCHAR(15) DEFAULT NULL, parentKeyName VARCHAR(120) DEFAULT NULL) mariadb | # distributed under the License is distributed on an "AS IS" BASIS, simulator | 2024-04-25 12:39:22,526 INFO Started Server@b7838a9{STARTING}[11.0.20,sto=0] @1787ms kafka | connections.max.reauth.ms = 0 zookeeper | [2024-04-25 12:39:19,488] INFO |_| (org.apache.zookeeper.server.ZooKeeperServer) grafana | logger=migrator t=2024-04-25T12:39:24.707728091Z level=info msg="Migration successfully executed" id="drop table dashboard_snapshot_v4 #1" 
duration=976.832µs policy-pap | [2024-04-25T12:39:54.728+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.exception.ServiceExceptionHandler policy-apex-pdp | sasl.jaas.config = null policy-api | [2024-04-25T12:39:46.004+00:00|WARN|JpaBaseConfiguration$JpaWebConfiguration|main] spring.jpa.open-in-view is enabled by default. Therefore, database queries may be performed during view rendering. Explicitly configure spring.jpa.open-in-view to disable this warning policy-db-migrator | -------------- mariadb | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. simulator | 2024-04-25 12:39:22,526 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-4e70a728==org.glassfish.jersey.servlet.ServletContainer@3238e994{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}}, swaggerId=null, toString()=JettyServletServer(name=SO simulator, host=0.0.0.0, port=6669, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@b7838a9{STARTED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@696f0212{/,null,AVAILABLE}, connector=SO simulator@4e858e0a{HTTP/1.1, (http/1.1)}{0.0.0.0:6669}, jettyThread=Thread[SO simulator-6669,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-4e70a728==org.glassfish.jersey.servlet.ServletContainer@3238e994{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}})]: pending time is 4893 ms. 
kafka | control.plane.listener.name = null zookeeper | [2024-04-25 12:39:19,488] INFO (org.apache.zookeeper.server.ZooKeeperServer) grafana | logger=migrator t=2024-04-25T12:39:24.716549738Z level=info msg="Executing migration" id="create dashboard_snapshot table v5 #2" policy-pap | [2024-04-25T12:39:55.322+00:00|INFO|TomcatWebServer|main] Tomcat initialized with port(s): 6969 (http) policy-apex-pdp | sasl.kerberos.kinit.cmd = /usr/bin/kinit policy-api | [2024-04-25T12:39:46.235+00:00|INFO|DefaultSecurityFilterChain|main] Will secure any request with [org.springframework.security.web.session.DisableEncodeUrlFilter@134c329a, org.springframework.security.web.context.request.async.WebAsyncManagerIntegrationFilter@1c277413, org.springframework.security.web.context.SecurityContextHolderFilter@3033e54c, org.springframework.security.web.header.HeaderWriterFilter@7908e69e, org.springframework.security.web.authentication.logout.LogoutFilter@635ad140, org.springframework.security.web.authentication.www.BasicAuthenticationFilter@463bdee9, org.springframework.security.web.savedrequest.RequestCacheAwareFilter@10e5c13c, org.springframework.security.web.servletapi.SecurityContextHolderAwareRequestFilter@796ed904, org.springframework.security.web.authentication.AnonymousAuthenticationFilter@1e9b1d9f, org.springframework.security.web.access.ExceptionTranslationFilter@6ef0a044, org.springframework.security.web.access.intercept.AuthorizationFilter@631c244c] policy-db-migrator | mariadb | # See the License for the specific language governing permissions and simulator | 2024-04-25 12:39:22,527 INFO org.onap.policy.models.simulators starting VFC simulator kafka | controlled.shutdown.enable = true zookeeper | [2024-04-25 12:39:19,490] INFO Server environment:zookeeper.version=3.8.4-9316c2a7a97e1666d8f4593f34dd6fc36ecc436c, built on 2024-02-12 22:16 UTC (org.apache.zookeeper.server.ZooKeeperServer) grafana | logger=migrator t=2024-04-25T12:39:24.717784795Z level=info msg="Migration 
successfully executed" id="create dashboard_snapshot table v5 #2" duration=1.234817ms policy-pap | [2024-04-25T12:39:55.332+00:00|INFO|Http11NioProtocol|main] Initializing ProtocolHandler ["http-nio-6969"] policy-apex-pdp | sasl.kerberos.min.time.before.relogin = 60000 policy-api | [2024-04-25T12:39:47.163+00:00|INFO|EndpointLinksResolver|main] Exposing 3 endpoint(s) beneath base path '' policy-db-migrator | mariadb | # limitations under the License. simulator | 2024-04-25 12:39:22,529 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-47a86fbb==org.glassfish.jersey.servlet.ServletContainer@f8e3b478{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=VFC simulator, host=0.0.0.0, port=6670, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@f478a81{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@19553973{/,null,STOPPED}, connector=VFC simulator@1c3146bc{HTTP/1.1, (http/1.1)}{0.0.0.0:6670}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-47a86fbb==org.glassfish.jersey.servlet.ServletContainer@f8e3b478{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: WAITED-START kafka | controlled.shutdown.max.retries = 3 zookeeper | [2024-04-25 12:39:19,490] INFO Server environment:host.name=070cd7ecdd50 (org.apache.zookeeper.server.ZooKeeperServer) grafana | logger=migrator t=2024-04-25T12:39:24.725493716Z level=info msg="Executing migration" id="create index UQE_dashboard_snapshot_key - v5" policy-pap | [2024-04-25T12:39:55.335+00:00|INFO|StandardService|main] Starting service [Tomcat] policy-apex-pdp | sasl.kerberos.service.name = null policy-api | [2024-04-25T12:39:47.275+00:00|INFO|Http11NioProtocol|main] Starting ProtocolHandler ["http-nio-6969"] policy-db-migrator | > upgrade 0130-jpapdpsubgroup_properties.sql mariadb | simulator | 2024-04-25 12:39:22,529 INFO JettyJerseyServer 
[Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-47a86fbb==org.glassfish.jersey.servlet.ServletContainer@f8e3b478{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=VFC simulator, host=0.0.0.0, port=6670, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@f478a81{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@19553973{/,null,STOPPED}, connector=VFC simulator@1c3146bc{HTTP/1.1, (http/1.1)}{0.0.0.0:6670}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-47a86fbb==org.glassfish.jersey.servlet.ServletContainer@f8e3b478{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING kafka | controlled.shutdown.retry.backoff.ms = 5000 zookeeper | [2024-04-25 12:39:19,490] INFO Server environment:java.version=11.0.22 (org.apache.zookeeper.server.ZooKeeperServer) grafana | logger=migrator t=2024-04-25T12:39:24.726755523Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_snapshot_key - v5" duration=1.261247ms policy-pap | [2024-04-25T12:39:55.335+00:00|INFO|StandardEngine|main] Starting Servlet engine: [Apache Tomcat/10.1.19] policy-apex-pdp | sasl.kerberos.ticket.renew.jitter = 0.05 policy-api | [2024-04-25T12:39:47.294+00:00|INFO|TomcatWebServer|main] Tomcat started on port(s): 6969 (http) with context path '/policy/api/v1' policy-db-migrator | -------------- mariadb | for db in migration pooling policyadmin operationshistory clampacm policyclamp simulator | 2024-04-25 12:39:22,531 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-47a86fbb==org.glassfish.jersey.servlet.ServletContainer@f8e3b478{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=VFC simulator, host=0.0.0.0, port=6670, sniHostCheck=false, user=null, password=null, contextPath=/, 
jettyServer=Server@f478a81{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@19553973{/,null,STOPPED}, connector=VFC simulator@1c3146bc{HTTP/1.1, (http/1.1)}{0.0.0.0:6670}, jettyThread=Thread[VFC simulator-6670,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-47a86fbb==org.glassfish.jersey.servlet.ServletContainer@f8e3b478{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING kafka | controller.listener.names = null zookeeper | [2024-04-25 12:39:19,490] INFO Server environment:java.vendor=Azul Systems, Inc. (org.apache.zookeeper.server.ZooKeeperServer) grafana | logger=migrator t=2024-04-25T12:39:24.734258012Z level=info msg="Executing migration" id="create index UQE_dashboard_snapshot_delete_key - v5" policy-pap | [2024-04-25T12:39:55.427+00:00|INFO|[/policy/pap/v1]|main] Initializing Spring embedded WebApplicationContext policy-apex-pdp | sasl.kerberos.ticket.renew.window.factor = 0.8 policy-api | [2024-04-25T12:39:47.312+00:00|INFO|PolicyApiApplication|main] Started PolicyApiApplication in 12.187 seconds (process running for 12.808) policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpsubgroup_properties (parentLocalName VARCHAR(120) DEFAULT NULL, localName VARCHAR(120) DEFAULT NULL, parentKeyVersion VARCHAR(15) DEFAULT NULL, parentKeyName VARCHAR(120) DEFAULT NULL, PROPERTIES VARCHAR(255) DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) DEFAULT NULL) mariadb | do simulator | 2024-04-25 12:39:22,532 INFO jetty-11.0.20; built: 2024-01-29T21:04:22.394Z; git: 922f8dc188f7011e60d0361de585fd4ac4d63064; jvm 17.0.11+9-alpine-r0 kafka | controller.quorum.append.linger.ms = 25 zookeeper | [2024-04-25 12:39:19,490] INFO Server environment:java.home=/usr/lib/jvm/java-11-zulu-openjdk-ca (org.apache.zookeeper.server.ZooKeeperServer) grafana | logger=migrator t=2024-04-25T12:39:24.735658551Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_snapshot_delete_key - v5" duration=1.395599ms policy-pap 
| [2024-04-25T12:39:55.427+00:00|INFO|ServletWebServerApplicationContext|main] Root WebApplicationContext: initialization completed in 3071 ms policy-apex-pdp | sasl.login.callback.handler.class = null policy-api | [2024-04-25T12:40:06.944+00:00|INFO|[/policy/api/v1]|http-nio-6969-exec-2] Initializing Spring DispatcherServlet 'dispatcherServlet' policy-db-migrator | -------------- mariadb | mysql -uroot -p"${MYSQL_ROOT_PASSWORD}" --execute "CREATE DATABASE IF NOT EXISTS ${db};" simulator | 2024-04-25 12:39:22,537 INFO Session workerName=node0 kafka | controller.quorum.election.backoff.max.ms = 1000 grafana | logger=migrator t=2024-04-25T12:39:24.827475686Z level=info msg="Executing migration" id="create index IDX_dashboard_snapshot_user_id - v5" policy-pap | [2024-04-25T12:39:55.844+00:00|INFO|LogHelper|main] HHH000204: Processing PersistenceUnitInfo [name: default] zookeeper | [2024-04-25 12:39:19,490] INFO Server environment:java.class.path=/usr/bin/../share/java/kafka/scala-logging_2.13-3.9.4.jar:/usr/bin/../share/java/kafka/jersey-common-2.39.1.jar:/usr/bin/../share/java/kafka/swagger-annotations-2.2.8.jar:/usr/bin/../share/java/kafka/connect-json-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/kafka-storage-api-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/commons-validator-1.7.jar:/usr/bin/../share/java/kafka/javax.servlet-api-3.1.0.jar:/usr/bin/../share/java/kafka/aopalliance-repackaged-2.6.1.jar:/usr/bin/../share/java/kafka/netty-transport-4.1.100.Final.jar:/usr/bin/../share/java/kafka/rocksdbjni-7.9.2.jar:/usr/bin/../share/java/kafka/jetty-http-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/kafka-log4j-appender-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jackson-annotations-2.13.5.jar:/usr/bin/../share/java/kafka/commons-io-2.11.0.jar:/usr/bin/../share/java/kafka/kafka-clients-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/javax.activation-api-1.2.0.jar:/usr/bin/../share/java/kafka/jetty-security-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/commons-cli-1.4.jar:/
usr/bin/../share/java/kafka/jetty-server-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/slf4j-reload4j-1.7.36.jar:/usr/bin/../share/java/kafka/scala-reflect-2.13.11.jar:/usr/bin/../share/java/kafka/kafka-server-common-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/kafka-group-coordinator-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/netty-buffer-4.1.100.Final.jar:/usr/bin/../share/java/kafka/connect-runtime-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/trogdor-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/connect-api-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jakarta.ws.rs-api-2.1.6.jar:/usr/bin/../share/java/kafka/jakarta.annotation-api-1.3.5.jar:/usr/bin/../share/java/kafka/kafka-streams-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/scala-java8-compat_2.13-1.0.2.jar:/usr/bin/../share/java/kafka/javax.ws.rs-api-2.1.1.jar:/usr/bin/../share/java/kafka/zookeeper-jute-3.8.4.jar:/usr/bin/../share/java/kafka/netty-resolver-4.1.100.Final.jar:/usr/bin/../share/java/kafka/hk2-api-2.6.1.jar:/usr/bin/../share/java/kafka/kafka-storage-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/zstd-jni-1.5.5-1.jar:/usr/bin/../share/java/kafka/jackson-dataformat-csv-2.13.5.jar:/usr/bin/../share/java/kafka/kafka.jar:/usr/bin/../share/java/kafka/scala-library-2.13.11.jar:/usr/bin/../share/java/kafka/jakarta.inject-2.6.1.jar:/usr/bin/../share/java/kafka/jakarta.xml.bind-api-2.3.3.jar:/usr/bin/../share/java/kafka/jetty-continuation-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/connect-transforms-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jose4j-0.9.4.jar:/usr/bin/../share/java/kafka/hk2-locator-2.6.1.jar:/usr/bin/../share/java/kafka/reflections-0.10.2.jar:/usr/bin/../share/java/kafka/slf4j-api-1.7.36.jar:/usr/bin/../share/java/kafka/paranamer-2.8.jar:/usr/bin/../share/java/kafka/commons-beanutils-1.9.4.jar:/usr/bin/../share/java/kafka/jaxb-api-2.3.1.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-2.39.1.jar:/usr/bin/../share/java/kafka/hk2-utils-2.6.1.jar:/usr/bin/../share/java/kafka/jackson-modul
e-scala_2.13-2.13.5.jar:/usr/bin/../share/java/kafka/reload4j-1.2.25.jar:/usr/bin/../share/java/kafka/connect-mirror-client-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jetty-servlet-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/jackson-core-2.13.5.jar:/usr/bin/../share/java/kafka/jersey-hk2-2.39.1.jar:/usr/bin/../share/java/kafka/jackson-databind-2.13.5.jar:/usr/bin/../share/java/kafka/jetty-servlets-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/jersey-client-2.39.1.jar:/usr/bin/../share/java/kafka/osgi-resource-locator-1.0.3.jar:/usr/bin/../share/java/kafka/commons-digester-2.1.jar:/usr/bin/../share/java/kafka/netty-transport-native-epoll-4.1.100.Final.jar:/usr/bin/../share/java/kafka/argparse4j-0.7.0.jar:/usr/bin/../share/java/kafka/connect-mirror-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jackson-datatype-jdk8-2.13.5.jar:/usr/bin/../share/java/kafka/audience-annotations-0.12.0.jar:/usr/bin/../share/java/kafka/jackson-module-jaxb-annotations-2.13.5.jar:/usr/bin/../share/java/kafka/kafka-raft-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/javax.annotation-api-1.3.2.jar:/usr/bin/../share/java/kafka/maven-artifact-3.8.8.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-json-provider-2.13.5.jar:/usr/bin/../share/java/kafka/jakarta.validation-api-2.0.2.jar:/usr/bin/../share/java/kafka/zookeeper-3.8.4.jar:/usr/bin/../share/java/kafka/jersey-server-2.39.1.jar:/usr/bin/../share/java/kafka/commons-lang3-3.8.1.jar:/usr/bin/../share/java/kafka/jopt-simple-5.0.4.jar:/usr/bin/../share/java/kafka/error_prone_annotations-2.10.0.jar:/usr/bin/../share/java/kafka/lz4-java-1.8.0.jar:/usr/bin/../share/java/kafka/jetty-util-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/kafka-tools-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jakarta.activation-api-1.2.2.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-core-2.39.1.jar:/usr/bin/../share/java/kafka/checker-qual-3.19.0.jar:/usr/bin/../share/java/kafka/kafka-metadata-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/pcollections-4.0.1.
jar:/usr/bin/../share/java/kafka/jackson-jaxrs-base-2.13.5.jar:/usr/bin/../share/java/kafka/commons-logging-1.2.jar:/usr/bin/../share/java/kafka/jsr305-3.0.2.jar:/usr/bin/../share/java/kafka/jetty-io-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/netty-codec-4.1.100.Final.jar:/usr/bin/../share/java/kafka/netty-transport-native-unix-common-4.1.100.Final.jar:/usr/bin/../share/java/kafka/scala-collection-compat_2.13-2.10.0.jar:/usr/bin/../share/java/kafka/metrics-core-2.2.0.jar:/usr/bin/../share/java/kafka/kafka-streams-test-utils-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/kafka-streams-examples-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/netty-handler-4.1.100.Final.jar:/usr/bin/../share/java/kafka/commons-collections-3.2.2.jar:/usr/bin/../share/java/kafka/javassist-3.29.2-GA.jar:/usr/bin/../share/java/kafka/caffeine-2.9.3.jar:/usr/bin/../share/java/kafka/plexus-utils-3.3.1.jar:/usr/bin/../share/java/kafka/kafka-tools-api-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/activation-1.1.1.jar:/usr/bin/../share/java/kafka/netty-common-4.1.100.Final.jar:/usr/bin/../share/java/kafka/kafka-streams-scala_2.13-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/metrics-core-4.1.12.1.jar:/usr/bin/../share/java/kafka/jline-3.25.1.jar:/usr/bin/../share/java/kafka/jetty-client-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/kafka_2.13-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/connect-basic-auth-extension-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jetty-util-ajax-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/snappy-java-1.1.10.5.jar:/usr/bin/../share/java/kafka/kafka-shell-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/netty-transport-classes-epoll-4.1.100.Final.jar:/usr/bin/../share/java/confluent-telemetry/* (org.apache.zookeeper.server.ZooKeeperServer) policy-apex-pdp | sasl.login.class = null policy-api | [2024-04-25T12:40:06.944+00:00|INFO|DispatcherServlet|http-nio-6969-exec-2] Initializing Servlet 'dispatcherServlet' policy-db-migrator | mariadb | mysql -uroot 
-p"${MYSQL_ROOT_PASSWORD}" --execute "GRANT ALL PRIVILEGES ON \`${db}\`.* TO '${MYSQL_USER}'@'%' ;" simulator | 2024-04-25 12:39:22,597 INFO Using GSON for REST calls kafka | controller.quorum.election.timeout.ms = 1000 grafana | logger=migrator t=2024-04-25T12:39:24.828433688Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_snapshot_user_id - v5" duration=959.842µs policy-pap | [2024-04-25T12:39:55.903+00:00|INFO|Version|main] HHH000412: Hibernate ORM core version 5.6.15.Final zookeeper | [2024-04-25 12:39:19,490] INFO Server environment:java.library.path=/usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.server.ZooKeeperServer) policy-apex-pdp | sasl.login.connect.timeout.ms = null policy-api | [2024-04-25T12:40:06.946+00:00|INFO|DispatcherServlet|http-nio-6969-exec-2] Completed initialization in 2 ms policy-db-migrator | mariadb | done simulator | 2024-04-25 12:39:22,608 INFO Started o.e.j.s.ServletContextHandler@19553973{/,null,AVAILABLE} kafka | controller.quorum.fetch.timeout.ms = 2000 grafana | logger=migrator t=2024-04-25T12:39:24.838505102Z level=info msg="Executing migration" id="alter dashboard_snapshot to mediumtext v2" policy-pap | [2024-04-25T12:39:56.235+00:00|INFO|HikariDataSource|main] HikariPool-1 - Starting... 
zookeeper | [2024-04-25 12:39:19,490] INFO Server environment:java.io.tmpdir=/tmp (org.apache.zookeeper.server.ZooKeeperServer) policy-apex-pdp | sasl.login.read.timeout.ms = null policy-api | [2024-04-25T12:40:07.290+00:00|INFO|OrderedServiceImpl|http-nio-6969-exec-2] ***** OrderedServiceImpl implementers: policy-db-migrator | > upgrade 0140-jpapdpsubgroup_supportedpolicytypes.sql mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp simulator | 2024-04-25 12:39:22,612 INFO Started VFC simulator@1c3146bc{HTTP/1.1, (http/1.1)}{0.0.0.0:6670} kafka | controller.quorum.request.timeout.ms = 2000 grafana | logger=migrator t=2024-04-25T12:39:24.838613643Z level=info msg="Migration successfully executed" id="alter dashboard_snapshot to mediumtext v2" duration=109.671µs policy-pap | [2024-04-25T12:39:56.329+00:00|INFO|HikariPool|main] HikariPool-1 - Added connection org.mariadb.jdbc.Connection@4ee5b2d9 zookeeper | [2024-04-25 12:39:19,490] INFO Server environment:java.compiler= (org.apache.zookeeper.server.ZooKeeperServer) policy-apex-pdp | sasl.login.refresh.buffer.seconds = 300 policy-api | [] policy-db-migrator | -------------- mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS migration;' simulator | 2024-04-25 12:39:22,613 INFO Started Server@f478a81{STARTING}[11.0.20,sto=0] @1873ms kafka | controller.quorum.retry.backoff.ms = 20 grafana | logger=migrator t=2024-04-25T12:39:24.843785802Z level=info msg="Executing migration" id="Update dashboard_snapshot table charset" policy-pap | [2024-04-25T12:39:56.331+00:00|INFO|HikariDataSource|main] HikariPool-1 - Start completed. 
zookeeper | [2024-04-25 12:39:19,490] INFO Server environment:os.name=Linux (org.apache.zookeeper.server.ZooKeeperServer) policy-apex-pdp | sasl.login.refresh.min.period.seconds = 60 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpsubgroup_supportedpolicytypes (name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, parentLocalName VARCHAR(120) DEFAULT NULL, localName VARCHAR(120) DEFAULT NULL, parentKeyVersion VARCHAR(15) DEFAULT NULL, parentKeyName VARCHAR(120) DEFAULT NULL) mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `migration`.* TO '\''policy_user'\''@'\''%'\'' ;' kafka | controller.quorum.voters = [] simulator | 2024-04-25 12:39:22,613 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-47a86fbb==org.glassfish.jersey.servlet.ServletContainer@f8e3b478{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}}, swaggerId=null, toString()=JettyServletServer(name=VFC simulator, host=0.0.0.0, port=6670, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@f478a81{STARTED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@19553973{/,null,AVAILABLE}, connector=VFC simulator@1c3146bc{HTTP/1.1, (http/1.1)}{0.0.0.0:6670}, jettyThread=Thread[VFC simulator-6670,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-47a86fbb==org.glassfish.jersey.servlet.ServletContainer@f8e3b478{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}})]: pending time is 4917 ms. 
grafana | logger=migrator t=2024-04-25T12:39:24.843822742Z level=info msg="Migration successfully executed" id="Update dashboard_snapshot table charset" duration=38.33µs policy-pap | [2024-04-25T12:39:56.362+00:00|INFO|Dialect|main] HHH000400: Using dialect: org.hibernate.dialect.MariaDB106Dialect zookeeper | [2024-04-25 12:39:19,490] INFO Server environment:os.arch=amd64 (org.apache.zookeeper.server.ZooKeeperServer) policy-apex-pdp | sasl.login.refresh.window.factor = 0.8 policy-db-migrator | -------------- mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp kafka | controller.quota.window.num = 11 simulator | 2024-04-25 12:39:22,613 INFO org.onap.policy.models.simulators started policy-pap | [2024-04-25T12:39:57.789+00:00|INFO|JtaPlatformInitiator|main] HHH000490: Using JtaPlatform implementation: [org.hibernate.engine.transaction.jta.platform.internal.NoJtaPlatform] zookeeper | [2024-04-25 12:39:19,490] INFO Server environment:os.version=4.15.0-192-generic (org.apache.zookeeper.server.ZooKeeperServer) grafana | logger=migrator t=2024-04-25T12:39:24.847805565Z level=info msg="Executing migration" id="Add column external_delete_url to dashboard_snapshots table" policy-apex-pdp | sasl.login.refresh.window.jitter = 0.05 policy-db-migrator | mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS pooling;' kafka | controller.quota.window.size.seconds = 1 policy-pap | [2024-04-25T12:39:57.798+00:00|INFO|LocalContainerEntityManagerFactoryBean|main] Initialized JPA EntityManagerFactory for persistence unit 'default' zookeeper | [2024-04-25 12:39:19,490] INFO Server environment:user.name=appuser (org.apache.zookeeper.server.ZooKeeperServer) grafana | logger=migrator t=2024-04-25T12:39:24.852290214Z level=info msg="Migration successfully executed" id="Add column external_delete_url to dashboard_snapshots table" duration=4.476259ms policy-apex-pdp | sasl.login.retry.backoff.max.ms = 10000 policy-db-migrator | 
mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `pooling`.* TO '\''policy_user'\''@'\''%'\'' ;' kafka | controller.socket.timeout.ms = 30000 kafka | create.topic.policy.class.name = null zookeeper | [2024-04-25 12:39:19,490] INFO Server environment:user.home=/home/appuser (org.apache.zookeeper.server.ZooKeeperServer) grafana | logger=migrator t=2024-04-25T12:39:24.855634538Z level=info msg="Executing migration" id="Add encrypted dashboard json column" policy-apex-pdp | sasl.login.retry.backoff.ms = 100 policy-db-migrator | > upgrade 0150-jpatoscacapabilityassignment_attributes.sql mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp policy-pap | [2024-04-25T12:39:58.245+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.repository.PdpGroupRepository kafka | default.replication.factor = 1 zookeeper | [2024-04-25 12:39:19,490] INFO Server environment:user.dir=/home/appuser (org.apache.zookeeper.server.ZooKeeperServer) grafana | logger=migrator t=2024-04-25T12:39:24.858427345Z level=info msg="Migration successfully executed" id="Add encrypted dashboard json column" duration=2.792097ms policy-apex-pdp | sasl.mechanism = GSSAPI policy-db-migrator | -------------- mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS policyadmin;' policy-pap | [2024-04-25T12:39:58.642+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. 
Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.repository.PolicyStatusRepository kafka | delegation.token.expiry.check.interval.ms = 3600000 zookeeper | [2024-04-25 12:39:19,490] INFO Server environment:os.memory.free=491MB (org.apache.zookeeper.server.ZooKeeperServer) grafana | logger=migrator t=2024-04-25T12:39:24.864932221Z level=info msg="Executing migration" id="Change dashboard_encrypted column to MEDIUMBLOB" policy-apex-pdp | sasl.oauthbearer.clock.skew.seconds = 30 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilityassignment_attributes (name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, ATTRIBUTES LONGTEXT DEFAULT NULL, ATTRIBUTES_KEY VARCHAR(255) DEFAULT NULL) mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `policyadmin`.* TO '\''policy_user'\''@'\''%'\'' ;' policy-pap | [2024-04-25T12:39:58.770+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. 
Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.repository.PolicyAuditRepository kafka | delegation.token.expiry.time.ms = 86400000 zookeeper | [2024-04-25 12:39:19,490] INFO Server environment:os.memory.max=512MB (org.apache.zookeeper.server.ZooKeeperServer) grafana | logger=migrator t=2024-04-25T12:39:24.864994022Z level=info msg="Migration successfully executed" id="Change dashboard_encrypted column to MEDIUMBLOB" duration=62.391µs policy-apex-pdp | sasl.oauthbearer.expected.audience = null policy-db-migrator | -------------- mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp policy-pap | [2024-04-25T12:39:59.037+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: kafka | delegation.token.master.key = null zookeeper | [2024-04-25 12:39:19,490] INFO Server environment:os.memory.total=512MB (org.apache.zookeeper.server.ZooKeeperServer) grafana | logger=migrator t=2024-04-25T12:39:24.8716972Z level=info msg="Executing migration" id="create quota table v1" policy-apex-pdp | sasl.oauthbearer.expected.issuer = null policy-db-migrator | mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS operationshistory;' policy-pap | allow.auto.create.topics = true kafka | delegation.token.max.lifetime.ms = 604800000 zookeeper | [2024-04-25 12:39:19,490] INFO zookeeper.enableEagerACLCheck = false (org.apache.zookeeper.server.ZooKeeperServer) grafana | logger=migrator t=2024-04-25T12:39:24.872804645Z level=info msg="Migration successfully executed" id="create quota table v1" duration=1.110195ms policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 policy-db-migrator | mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `operationshistory`.* TO '\''policy_user'\''@'\''%'\'' ;' policy-pap | auto.commit.interval.ms = 5000 kafka | delegation.token.secret.key = null zookeeper | [2024-04-25 12:39:19,490] INFO zookeeper.digest.enabled = 
true (org.apache.zookeeper.server.ZooKeeperServer) grafana | logger=migrator t=2024-04-25T12:39:24.879842228Z level=info msg="Executing migration" id="create index UQE_quota_org_id_user_id_target - v1" policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 policy-db-migrator | > upgrade 0160-jpatoscacapabilityassignment_metadata.sql mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp policy-pap | auto.include.jmx.reporter = true kafka | delete.records.purgatory.purge.interval.requests = 1 zookeeper | [2024-04-25 12:39:19,490] INFO zookeeper.closeSessionTxn.enabled = true (org.apache.zookeeper.server.ZooKeeperServer) grafana | logger=migrator t=2024-04-25T12:39:24.881128556Z level=info msg="Migration successfully executed" id="create index UQE_quota_org_id_user_id_target - v1" duration=1.294328ms policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 policy-db-migrator | -------------- mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS clampacm;' policy-pap | auto.offset.reset = latest kafka | delete.topic.enable = true zookeeper | [2024-04-25 12:39:19,490] INFO zookeeper.flushDelay = 0 ms (org.apache.zookeeper.server.ZooKeeperServer) grafana | logger=migrator t=2024-04-25T12:39:24.89202823Z level=info msg="Executing migration" id="Update quota table charset" policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.url = null policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilityassignment_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `clampacm`.* TO '\''policy_user'\''@'\''%'\'' ;' policy-pap | bootstrap.servers = [kafka:9092] kafka | early.start.listeners = null zookeeper | [2024-04-25 12:39:19,490] INFO zookeeper.maxWriteQueuePollTime = 0 ms (org.apache.zookeeper.server.ZooKeeperServer) grafana | logger=migrator 
t=2024-04-25T12:39:24.89206922Z level=info msg="Migration successfully executed" id="Update quota table charset" duration=42.59µs policy-apex-pdp | sasl.oauthbearer.scope.claim.name = scope policy-db-migrator | -------------- mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp policy-pap | check.crcs = true kafka | fetch.max.bytes = 57671680 zookeeper | [2024-04-25 12:39:19,490] INFO zookeeper.maxBatchSize=1000 (org.apache.zookeeper.server.ZooKeeperServer) grafana | logger=migrator t=2024-04-25T12:39:25.029255745Z level=info msg="Executing migration" id="create plugin_setting table" policy-apex-pdp | sasl.oauthbearer.sub.claim.name = sub policy-db-migrator | mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS policyclamp;' policy-pap | client.dns.lookup = use_all_dns_ips kafka | fetch.purgatory.purge.interval.requests = 1000 zookeeper | [2024-04-25 12:39:19,490] INFO zookeeper.intBufferStartingSizeBytes = 1024 (org.apache.zookeeper.server.ZooKeeperServer) grafana | logger=migrator t=2024-04-25T12:39:25.030622593Z level=info msg="Migration successfully executed" id="create plugin_setting table" duration=1.372628ms policy-apex-pdp | sasl.oauthbearer.token.endpoint.url = null policy-db-migrator | mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `policyclamp`.* TO '\''policy_user'\''@'\''%'\'' ;' policy-pap | client.id = consumer-53d3b957-3026-4843-bc4f-55d426241089-1 kafka | group.consumer.assignors = [org.apache.kafka.coordinator.group.assignor.RangeAssignor] zookeeper | [2024-04-25 12:39:19,492] INFO Weighed connection throttling is disabled (org.apache.zookeeper.server.BlueThrottle) grafana | logger=migrator t=2024-04-25T12:39:25.03575068Z level=info msg="Executing migration" id="create index UQE_plugin_setting_org_id_plugin_id - v1" policy-apex-pdp | security.protocol = PLAINTEXT policy-db-migrator | > upgrade 0170-jpatoscacapabilityassignment_occurrences.sql mariadb | policy-pap | 
client.rack = kafka | group.consumer.heartbeat.interval.ms = 5000 zookeeper | [2024-04-25 12:39:19,492] INFO minSessionTimeout set to 4000 ms (org.apache.zookeeper.server.ZooKeeperServer) grafana | logger=migrator t=2024-04-25T12:39:25.036610101Z level=info msg="Migration successfully executed" id="create index UQE_plugin_setting_org_id_plugin_id - v1" duration=859.321µs policy-apex-pdp | security.providers = null policy-db-migrator | -------------- policy-pap | connections.max.idle.ms = 540000 kafka | group.consumer.max.heartbeat.interval.ms = 15000 mariadb | mysql -uroot -p"${MYSQL_ROOT_PASSWORD}" --execute "FLUSH PRIVILEGES;" zookeeper | [2024-04-25 12:39:19,493] INFO maxSessionTimeout set to 40000 ms (org.apache.zookeeper.server.ZooKeeperServer) grafana | logger=migrator t=2024-04-25T12:39:25.041365355Z level=info msg="Executing migration" id="Add column plugin_version to plugin_settings" policy-apex-pdp | send.buffer.bytes = 131072 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilityassignment_occurrences (name VARCHAR(120) NULL, version VARCHAR(20) NULL, OCCURRENCES INT DEFAULT NULL) policy-pap | default.api.timeout.ms = 60000 kafka | group.consumer.max.session.timeout.ms = 60000 mariadb | + mysql -uroot -psecret --execute 'FLUSH PRIVILEGES;' zookeeper | [2024-04-25 12:39:19,493] INFO getData response cache size is initialized with value 400. (org.apache.zookeeper.server.ResponseCache) grafana | logger=migrator t=2024-04-25T12:39:25.046062357Z level=info msg="Migration successfully executed" id="Add column plugin_version to plugin_settings" duration=4.695532ms policy-apex-pdp | session.timeout.ms = 45000 policy-db-migrator | -------------- policy-pap | enable.auto.commit = true kafka | group.consumer.max.size = 2147483647 mariadb | mysql -u"${MYSQL_USER}" -p"${MYSQL_PASSWORD}" -f policyclamp < /tmp/policy-clamp-create-tables.sql zookeeper | [2024-04-25 12:39:19,493] INFO getChildren response cache size is initialized with value 400. 
(org.apache.zookeeper.server.ResponseCache) policy-apex-pdp | socket.connection.setup.timeout.max.ms = 30000 policy-db-migrator | policy-pap | exclude.internal.topics = true kafka | group.consumer.min.heartbeat.interval.ms = 5000 mariadb | + mysql -upolicy_user -ppolicy_user -f policyclamp grafana | logger=migrator t=2024-04-25T12:39:25.054004052Z level=info msg="Executing migration" id="Update plugin_setting table charset" zookeeper | [2024-04-25 12:39:19,494] INFO zookeeper.pathStats.slotCapacity = 60 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) policy-db-migrator | policy-pap | fetch.max.bytes = 52428800 kafka | group.consumer.min.session.timeout.ms = 45000 mariadb | grafana | logger=migrator t=2024-04-25T12:39:25.054025033Z level=info msg="Migration successfully executed" id="Update plugin_setting table charset" duration=22.541µs policy-db-migrator | > upgrade 0180-jpatoscacapabilityassignment_properties.sql kafka | group.consumer.session.timeout.ms = 45000 mariadb | 2024-04-25 12:39:21+00:00 [Note] [Entrypoint]: Stopping temporary server grafana | logger=migrator t=2024-04-25T12:39:25.059661597Z level=info msg="Executing migration" id="create session table" policy-apex-pdp | socket.connection.setup.timeout.ms = 10000 zookeeper | [2024-04-25 12:39:19,495] INFO zookeeper.pathStats.slotDuration = 15 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) policy-pap | fetch.max.wait.ms = 500 policy-db-migrator | -------------- kafka | group.coordinator.new.enable = false mariadb | 2024-04-25 12:39:21 0 [Note] mariadbd (initiated by: unknown): Normal shutdown grafana | logger=migrator t=2024-04-25T12:39:25.060969774Z level=info msg="Migration successfully executed" id="create session table" duration=1.308537ms policy-apex-pdp | ssl.cipher.suites = null zookeeper | [2024-04-25 12:39:19,495] INFO zookeeper.pathStats.maxDepth = 6 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) zookeeper | [2024-04-25 12:39:19,495] INFO 
zookeeper.pathStats.initialDelay = 5 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilityassignment_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGTEXT NULL, PROPERTIES_KEY VARCHAR(255) NULL) kafka | group.coordinator.threads = 1 mariadb | 2024-04-25 12:39:21 0 [Note] InnoDB: FTS optimize thread exiting. grafana | logger=migrator t=2024-04-25T12:39:25.067085794Z level=info msg="Executing migration" id="Drop old table playlist table" policy-apex-pdp | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] policy-pap | fetch.min.bytes = 1 policy-db-migrator | -------------- zookeeper | [2024-04-25 12:39:19,495] INFO zookeeper.pathStats.delay = 5 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) kafka | group.initial.rebalance.delay.ms = 3000 mariadb | 2024-04-25 12:39:21 0 [Note] InnoDB: Starting shutdown... grafana | logger=migrator t=2024-04-25T12:39:25.067209816Z level=info msg="Migration successfully executed" id="Drop old table playlist table" duration=124.752µs policy-apex-pdp | ssl.endpoint.identification.algorithm = https policy-pap | group.id = 53d3b957-3026-4843-bc4f-55d426241089 policy-db-migrator | zookeeper | [2024-04-25 12:39:19,495] INFO zookeeper.pathStats.enabled = false (org.apache.zookeeper.server.util.RequestPathMetricsCollector) kafka | group.max.session.timeout.ms = 1800000 mariadb | 2024-04-25 12:39:21 0 [Note] InnoDB: Dumping buffer pool(s) to /var/lib/mysql/ib_buffer_pool grafana | logger=migrator t=2024-04-25T12:39:25.078438604Z level=info msg="Executing migration" id="Drop old table playlist_item table" policy-apex-pdp | ssl.engine.factory.class = null policy-pap | group.instance.id = null policy-db-migrator | zookeeper | [2024-04-25 12:39:19,498] INFO The max bytes for all large requests are set to 104857600 (org.apache.zookeeper.server.ZooKeeperServer) kafka | group.max.size = 2147483647 mariadb | 2024-04-25 12:39:21 0 [Note] 
InnoDB: Buffer pool(s) dump completed at 240425 12:39:21 grafana | logger=migrator t=2024-04-25T12:39:25.078561026Z level=info msg="Migration successfully executed" id="Drop old table playlist_item table" duration=124.812µs policy-apex-pdp | ssl.key.password = null policy-pap | heartbeat.interval.ms = 3000 policy-db-migrator | > upgrade 0190-jpatoscacapabilitytype_metadata.sql zookeeper | [2024-04-25 12:39:19,498] INFO The large request threshold is set to -1 (org.apache.zookeeper.server.ZooKeeperServer) kafka | group.min.session.timeout.ms = 6000 mariadb | 2024-04-25 12:39:22 0 [Note] InnoDB: Removed temporary tablespace data file: "./ibtmp1" grafana | logger=migrator t=2024-04-25T12:39:25.084269681Z level=info msg="Executing migration" id="create playlist table v2" policy-apex-pdp | ssl.keymanager.algorithm = SunX509 policy-pap | interceptor.classes = [] policy-db-migrator | -------------- zookeeper | [2024-04-25 12:39:19,498] INFO zookeeper.enforce.auth.enabled = false (org.apache.zookeeper.server.AuthenticationHelper) kafka | initial.broker.registration.timeout.ms = 60000 mariadb | 2024-04-25 12:39:22 0 [Note] InnoDB: Shutdown completed; log sequence number 381915; transaction id 298 grafana | logger=migrator t=2024-04-25T12:39:25.085374077Z level=info msg="Migration successfully executed" id="create playlist table v2" duration=1.104156ms policy-apex-pdp | ssl.keystore.certificate.chain = null policy-pap | internal.leave.group.on.close = true policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilitytype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) zookeeper | [2024-04-25 12:39:19,499] INFO zookeeper.enforce.auth.schemes = [] (org.apache.zookeeper.server.AuthenticationHelper) kafka | inter.broker.listener.name = PLAINTEXT mariadb | 2024-04-25 12:39:22 0 [Note] mariadbd: Shutdown complete grafana | logger=migrator t=2024-04-25T12:39:25.089850755Z level=info msg="Executing 
migration" id="create playlist item table v2" policy-apex-pdp | ssl.keystore.key = null policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false policy-db-migrator | -------------- zookeeper | [2024-04-25 12:39:19,499] INFO Created server with tickTime 2000 ms minSessionTimeout 4000 ms maxSessionTimeout 40000 ms clientPortListenBacklog -1 datadir /var/lib/zookeeper/log/version-2 snapdir /var/lib/zookeeper/data/version-2 (org.apache.zookeeper.server.ZooKeeperServer) kafka | inter.broker.protocol.version = 3.6-IV2 mariadb | grafana | logger=migrator t=2024-04-25T12:39:25.09094708Z level=info msg="Migration successfully executed" id="create playlist item table v2" duration=1.096215ms policy-apex-pdp | ssl.keystore.location = null policy-pap | isolation.level = read_uncommitted policy-db-migrator | zookeeper | [2024-04-25 12:39:19,521] INFO Logging initialized @590ms to org.eclipse.jetty.util.log.Slf4jLog (org.eclipse.jetty.util.log) kafka | kafka.metrics.polling.interval.secs = 10 mariadb | 2024-04-25 12:39:22+00:00 [Note] [Entrypoint]: Temporary server stopped grafana | logger=migrator t=2024-04-25T12:39:25.106289032Z level=info msg="Executing migration" id="Update playlist table charset" policy-apex-pdp | ssl.keystore.password = null policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-db-migrator | zookeeper | [2024-04-25 12:39:19,621] WARN o.e.j.s.ServletContextHandler@6d5620ce{/,null,STOPPED} contextPath ends with /* (org.eclipse.jetty.server.handler.ContextHandler) kafka | kafka.metrics.reporters = [] mariadb | grafana | logger=migrator t=2024-04-25T12:39:25.106493965Z level=info msg="Migration successfully executed" id="Update playlist table charset" duration=207.933µs policy-apex-pdp | ssl.keystore.type = JKS policy-db-migrator | > upgrade 0200-jpatoscacapabilitytype_properties.sql kafka | leader.imbalance.check.interval.seconds = 300 mariadb | 2024-04-25 12:39:22+00:00 [Note] [Entrypoint]: MariaDB 
init process done. Ready for start up. grafana | logger=migrator t=2024-04-25T12:39:25.122891082Z level=info msg="Executing migration" id="Update playlist_item table charset" policy-apex-pdp | ssl.protocol = TLSv1.3 policy-pap | max.partition.fetch.bytes = 1048576 zookeeper | [2024-04-25 12:39:19,622] WARN Empty contextPath (org.eclipse.jetty.server.handler.ContextHandler) kafka | leader.imbalance.per.broker.percentage = 10 mariadb | grafana | logger=migrator t=2024-04-25T12:39:25.123013334Z level=info msg="Migration successfully executed" id="Update playlist_item table charset" duration=125.182µs policy-apex-pdp | ssl.provider = null policy-pap | max.poll.interval.ms = 300000 policy-db-migrator | -------------- zookeeper | [2024-04-25 12:39:19,645] INFO jetty-9.4.54.v20240208; built: 2024-02-08T19:42:39.027Z; git: cef3fbd6d736a21e7d541a5db490381d95a2047d; jvm 11.0.22+7-LTS (org.eclipse.jetty.server.Server) kafka | listener.security.protocol.map = PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT mariadb | 2024-04-25 12:39:22 0 [Note] mariadbd (server 10.10.2-MariaDB-1:10.10.2+maria~ubu2204) starting as process 1 ... 
grafana | logger=migrator t=2024-04-25T12:39:25.127354601Z level=info msg="Executing migration" id="Add playlist column created_at" policy-apex-pdp | ssl.secure.random.implementation = null policy-pap | max.poll.records = 500 policy-pap | metadata.max.age.ms = 300000 zookeeper | [2024-04-25 12:39:19,675] INFO DefaultSessionIdManager workerName=node0 (org.eclipse.jetty.server.session) kafka | listeners = PLAINTEXT://0.0.0.0:9092,PLAINTEXT_HOST://0.0.0.0:29092 mariadb | 2024-04-25 12:39:22 0 [Note] InnoDB: Compressed tables use zlib 1.2.11 grafana | logger=migrator t=2024-04-25T12:39:25.130680345Z level=info msg="Migration successfully executed" id="Add playlist column created_at" duration=3.326524ms policy-apex-pdp | ssl.trustmanager.algorithm = PKIX policy-pap | metric.reporters = [] policy-pap | metrics.num.samples = 2 zookeeper | [2024-04-25 12:39:19,675] INFO No SessionScavenger set, using defaults (org.eclipse.jetty.server.session) kafka | log.cleaner.backoff.ms = 15000 mariadb | 2024-04-25 12:39:22 0 [Note] InnoDB: Number of transaction pools: 1 grafana | logger=migrator t=2024-04-25T12:39:25.135096763Z level=info msg="Executing migration" id="Add playlist column updated_at" policy-apex-pdp | ssl.truststore.certificates = null policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilitytype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL) policy-pap | metrics.recording.level = INFO zookeeper | [2024-04-25 12:39:19,676] INFO node0 Scavenging every 600000ms (org.eclipse.jetty.server.session) kafka | log.cleaner.dedupe.buffer.size = 134217728 mariadb | 2024-04-25 12:39:22 0 [Note] InnoDB: Using crc32 + pclmulqdq instructions grafana | logger=migrator t=2024-04-25T12:39:25.138426428Z level=info msg="Migration successfully executed" id="Add playlist column updated_at" duration=3.329255ms policy-apex-pdp | ssl.truststore.location = null policy-db-migrator | -------------- 
policy-pap | metrics.sample.window.ms = 30000 zookeeper | [2024-04-25 12:39:19,680] WARN ServletContext@o.e.j.s.ServletContextHandler@6d5620ce{/,null,STARTING} has uncovered http methods for path: /* (org.eclipse.jetty.security.SecurityHandler) kafka | log.cleaner.delete.retention.ms = 86400000 mariadb | 2024-04-25 12:39:22 0 [Note] mariadbd: O_TMPFILE is not supported on /tmp (disabling future attempts) grafana | logger=migrator t=2024-04-25T12:39:25.144807692Z level=info msg="Executing migration" id="drop preferences table v2" policy-apex-pdp | ssl.truststore.password = null policy-db-migrator | policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] zookeeper | [2024-04-25 12:39:19,688] INFO Started o.e.j.s.ServletContextHandler@6d5620ce{/,null,AVAILABLE} (org.eclipse.jetty.server.handler.ContextHandler) kafka | log.cleaner.enable = true mariadb | 2024-04-25 12:39:22 0 [Warning] mariadbd: io_uring_queue_init() failed with ENOSYS: check seccomp filters, and the kernel version (newer than 5.1 required) grafana | logger=migrator t=2024-04-25T12:39:25.144977744Z level=info msg="Migration successfully executed" id="drop preferences table v2" duration=169.772µs policy-apex-pdp | ssl.truststore.type = JKS policy-db-migrator | policy-pap | receive.buffer.bytes = 65536 zookeeper | [2024-04-25 12:39:19,702] INFO Started ServerConnector@4d1bf319{HTTP/1.1, (http/1.1)}{0.0.0.0:8080} (org.eclipse.jetty.server.AbstractConnector) kafka | log.cleaner.io.buffer.load.factor = 0.9 mariadb | 2024-04-25 12:39:22 0 [Warning] InnoDB: liburing disabled: falling back to innodb_use_native_aio=OFF grafana | logger=migrator t=2024-04-25T12:39:25.15453194Z level=info msg="Executing migration" id="drop preferences table v3" policy-apex-pdp | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-db-migrator | > upgrade 
0210-jpatoscadatatype_constraints.sql policy-pap | reconnect.backoff.max.ms = 1000 zookeeper | [2024-04-25 12:39:19,703] INFO Started @772ms (org.eclipse.jetty.server.Server) kafka | log.cleaner.io.buffer.size = 524288 mariadb | 2024-04-25 12:39:22 0 [Note] InnoDB: Initializing buffer pool, total size = 128.000MiB, chunk size = 2.000MiB grafana | logger=migrator t=2024-04-25T12:39:25.154784044Z level=info msg="Migration successfully executed" id="drop preferences table v3" duration=252.834µs policy-apex-pdp | policy-db-migrator | -------------- policy-pap | reconnect.backoff.ms = 50 zookeeper | [2024-04-25 12:39:19,703] INFO Started AdminServer on address 0.0.0.0, port 8080 and command URL /commands (org.apache.zookeeper.server.admin.JettyAdminServer) kafka | log.cleaner.io.max.bytes.per.second = 1.7976931348623157E308 mariadb | 2024-04-25 12:39:22 0 [Note] InnoDB: Completed initialization of buffer pool grafana | logger=migrator t=2024-04-25T12:39:25.15904Z level=info msg="Executing migration" id="create preferences table v3" policy-apex-pdp | [2024-04-25T12:40:02.756+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscadatatype_constraints (name VARCHAR(120) NULL, version VARCHAR(20) NULL, CONSTRAINTS VARCHAR(255) NULL) policy-pap | request.timeout.ms = 30000 zookeeper | [2024-04-25 12:39:19,708] INFO Using org.apache.zookeeper.server.NIOServerCnxnFactory as server connection factory (org.apache.zookeeper.server.ServerCnxnFactory) kafka | log.cleaner.max.compaction.lag.ms = 9223372036854775807 mariadb | 2024-04-25 12:39:22 0 [Note] InnoDB: File system buffers for log disabled (block size=512 bytes) grafana | logger=migrator t=2024-04-25T12:39:25.159941081Z level=info msg="Migration successfully executed" id="create preferences table v3" duration=901.161µs policy-apex-pdp | [2024-04-25T12:40:02.756+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 policy-db-migrator | -------------- policy-pap | 
retry.backoff.ms = 100 zookeeper | [2024-04-25 12:39:19,709] WARN maxCnxns is not configured, using default value 0. (org.apache.zookeeper.server.ServerCnxnFactory) kafka | log.cleaner.min.cleanable.ratio = 0.5 mariadb | 2024-04-25 12:39:22 0 [Note] InnoDB: 128 rollback segments are active. grafana | logger=migrator t=2024-04-25T12:39:25.163486259Z level=info msg="Executing migration" id="Update preferences table charset" policy-apex-pdp | [2024-04-25T12:40:02.756+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1714048802755 policy-db-migrator | policy-pap | sasl.client.callback.handler.class = null zookeeper | [2024-04-25 12:39:19,711] INFO Configuring NIO connection handler with 10s sessionless connection timeout, 2 selector thread(s), 16 worker threads, and 64 kB direct buffers. (org.apache.zookeeper.server.NIOServerCnxnFactory) kafka | log.cleaner.min.compaction.lag.ms = 0 mariadb | 2024-04-25 12:39:22 0 [Note] InnoDB: Setting file './ibtmp1' size to 12.000MiB. Physically writing the file full; Please wait ... grafana | logger=migrator t=2024-04-25T12:39:25.16355201Z level=info msg="Migration successfully executed" id="Update preferences table charset" duration=66.051µs policy-apex-pdp | [2024-04-25T12:40:02.758+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-1, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Subscribed to topic(s): policy-pdp-pap policy-db-migrator | policy-pap | sasl.jaas.config = null zookeeper | [2024-04-25 12:39:19,713] INFO binding to port 0.0.0.0/0.0.0.0:2181 (org.apache.zookeeper.server.NIOServerCnxnFactory) kafka | log.cleaner.threads = 1 mariadb | 2024-04-25 12:39:22 0 [Note] InnoDB: File './ibtmp1' size is now 12.000MiB. 
grafana | logger=migrator t=2024-04-25T12:39:25.169811082Z level=info msg="Executing migration" id="Add column team_id in preferences" policy-apex-pdp | [2024-04-25T12:40:02.769+00:00|INFO|ServiceManager|main] service manager starting policy-db-migrator | > upgrade 0220-jpatoscadatatype_metadata.sql policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit zookeeper | [2024-04-25 12:39:19,727] INFO Using org.apache.zookeeper.server.watch.WatchManager as watch manager (org.apache.zookeeper.server.watch.WatchManagerFactory) kafka | log.cleanup.policy = [delete] mariadb | 2024-04-25 12:39:22 0 [Note] InnoDB: log sequence number 381915; transaction id 299 grafana | logger=migrator t=2024-04-25T12:39:25.175083812Z level=info msg="Migration successfully executed" id="Add column team_id in preferences" duration=5.27232ms policy-apex-pdp | [2024-04-25T12:40:02.769+00:00|INFO|ServiceManager|main] service manager starting topics policy-db-migrator | -------------- policy-pap | sasl.kerberos.min.time.before.relogin = 60000 zookeeper | [2024-04-25 12:39:19,727] INFO Using org.apache.zookeeper.server.watch.WatchManager as watch manager (org.apache.zookeeper.server.watch.WatchManagerFactory) kafka | log.dir = /tmp/kafka-logs mariadb | 2024-04-25 12:39:22 0 [Note] Plugin 'FEEDBACK' is disabled. 
grafana | logger=migrator t=2024-04-25T12:39:25.183838508Z level=info msg="Executing migration" id="Update team_id column values in preferences" policy-apex-pdp | [2024-04-25T12:40:02.770+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=4b79aeb3-604a-4e33-80d9-cdeedf19ce63, consumerInstance=policy-apex-pdp, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: starting policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscadatatype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) policy-pap | sasl.kerberos.service.name = null zookeeper | [2024-04-25 12:39:19,729] INFO zookeeper.snapshotSizeFactor = 0.33 (org.apache.zookeeper.server.ZKDatabase) kafka | log.dirs = /var/lib/kafka/data mariadb | 2024-04-25 12:39:22 0 [Note] InnoDB: Loading buffer pool(s) from /var/lib/mysql/ib_buffer_pool grafana | logger=migrator t=2024-04-25T12:39:25.184092541Z level=info msg="Migration successfully executed" id="Update team_id column values in preferences" duration=253.393µs policy-apex-pdp | [2024-04-25T12:40:02.789+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: policy-db-migrator | -------------- policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 zookeeper | [2024-04-25 12:39:19,729] INFO zookeeper.commitLogCount=500 (org.apache.zookeeper.server.ZKDatabase) kafka | log.flush.interval.messages = 9223372036854775807 mariadb | 2024-04-25 12:39:22 0 [Warning] 'default-authentication-plugin' is MySQL 5.6 / 5.7 compatible option. 
To be implemented in later versions. grafana | logger=migrator t=2024-04-25T12:39:25.194551769Z level=info msg="Executing migration" id="Add column week_start in preferences" policy-apex-pdp | allow.auto.create.topics = true policy-db-migrator | policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 zookeeper | [2024-04-25 12:39:19,733] INFO zookeeper.snapshot.compression.method = CHECKED (org.apache.zookeeper.server.persistence.SnapStream) kafka | log.flush.interval.ms = null mariadb | 2024-04-25 12:39:22 0 [Warning] You need to use --log-bin to make --expire-logs-days or --binlog-expire-logs-seconds work. grafana | logger=migrator t=2024-04-25T12:39:25.19986359Z level=info msg="Migration successfully executed" id="Add column week_start in preferences" duration=5.311061ms policy-apex-pdp | auto.commit.interval.ms = 5000 policy-db-migrator | policy-pap | sasl.login.callback.handler.class = null zookeeper | [2024-04-25 12:39:19,733] INFO Snapshotting: 0x0 to /var/lib/zookeeper/data/version-2/snapshot.0 (org.apache.zookeeper.server.persistence.FileTxnSnapLog) kafka | log.flush.offset.checkpoint.interval.ms = 60000 mariadb | 2024-04-25 12:39:22 0 [Note] Server socket created on IP: '0.0.0.0'. grafana | logger=migrator t=2024-04-25T12:39:25.203631089Z level=info msg="Executing migration" id="Add column preferences.json_data" policy-apex-pdp | auto.include.jmx.reporter = true policy-db-migrator | > upgrade 0230-jpatoscadatatype_properties.sql policy-db-migrator | -------------- policy-pap | sasl.login.class = null zookeeper | [2024-04-25 12:39:19,736] INFO Snapshot loaded in 7 ms, highest zxid is 0x0, digest is 1371985504 (org.apache.zookeeper.server.ZKDatabase) kafka | log.flush.scheduler.interval.ms = 9223372036854775807 mariadb | 2024-04-25 12:39:22 0 [Note] Server socket created on IP: '::'. 
policy-apex-pdp | auto.offset.reset = latest grafana | logger=migrator t=2024-04-25T12:39:25.206973343Z level=info msg="Migration successfully executed" id="Add column preferences.json_data" duration=3.341204ms policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscadatatype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL) policy-pap | sasl.login.connect.timeout.ms = null zookeeper | [2024-04-25 12:39:19,737] INFO Snapshotting: 0x0 to /var/lib/zookeeper/data/version-2/snapshot.0 (org.apache.zookeeper.server.persistence.FileTxnSnapLog) kafka | log.flush.start.offset.checkpoint.interval.ms = 60000 mariadb | 2024-04-25 12:39:22 0 [Note] mariadbd: ready for connections. policy-apex-pdp | bootstrap.servers = [kafka:9092] grafana | logger=migrator t=2024-04-25T12:39:25.216383938Z level=info msg="Executing migration" id="alter preferences.json_data to mediumtext v1" policy-db-migrator | -------------- policy-pap | sasl.login.read.timeout.ms = null kafka | log.index.interval.bytes = 4096 mariadb | Version: '10.10.2-MariaDB-1:10.10.2+maria~ubu2204' socket: '/run/mysqld/mysqld.sock' port: 3306 mariadb.org binary distribution policy-apex-pdp | check.crcs = true grafana | logger=migrator t=2024-04-25T12:39:25.216574571Z level=info msg="Migration successfully executed" id="alter preferences.json_data to mediumtext v1" duration=189.913µs policy-db-migrator | policy-db-migrator | policy-pap | sasl.login.refresh.buffer.seconds = 300 kafka | log.index.size.max.bytes = 10485760 mariadb | 2024-04-25 12:39:22 0 [Note] InnoDB: Buffer pool(s) load completed at 240425 12:39:22 policy-apex-pdp | client.dns.lookup = use_all_dns_ips grafana | logger=migrator t=2024-04-25T12:39:25.221472885Z level=info msg="Executing migration" id="Add preferences index org_id" zookeeper | [2024-04-25 12:39:19,737] INFO Snapshot taken in 0 ms (org.apache.zookeeper.server.ZooKeeperServer) policy-db-migrator | > upgrade 
0240-jpatoscanodetemplate_metadata.sql policy-pap | sasl.login.refresh.min.period.seconds = 60 kafka | log.local.retention.bytes = -2 mariadb | 2024-04-25 12:39:22 3 [Warning] Aborted connection 3 to db: 'unconnected' user: 'unauthenticated' host: '172.17.0.7' (This connection closed normally without authentication) policy-apex-pdp | client.id = consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2 grafana | logger=migrator t=2024-04-25T12:39:25.223332819Z level=info msg="Migration successfully executed" id="Add preferences index org_id" duration=1.858815ms zookeeper | [2024-04-25 12:39:19,745] INFO zookeeper.request_throttler.shutdownTimeout = 10000 ms (org.apache.zookeeper.server.RequestThrottler) policy-db-migrator | -------------- policy-pap | sasl.login.refresh.window.factor = 0.8 kafka | log.local.retention.ms = -2 mariadb | 2024-04-25 12:39:22 4 [Warning] Aborted connection 4 to db: 'unconnected' user: 'unauthenticated' host: '172.17.0.6' (This connection closed normally without authentication) policy-apex-pdp | client.rack = grafana | logger=migrator t=2024-04-25T12:39:25.229032705Z level=info msg="Executing migration" id="Add preferences index user_id" zookeeper | [2024-04-25 12:39:19,745] INFO PrepRequestProcessor (sid:0) started, reconfigEnabled=false (org.apache.zookeeper.server.PrepRequestProcessor) policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscanodetemplate_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) policy-pap | sasl.login.refresh.window.jitter = 0.05 policy-pap | sasl.login.retry.backoff.max.ms = 10000 kafka | log.message.downconversion.enable = true mariadb | 2024-04-25 12:39:22 8 [Warning] Aborted connection 8 to db: 'unconnected' user: 'unauthenticated' host: '172.17.0.10' (This connection closed normally without authentication) grafana | logger=migrator t=2024-04-25T12:39:25.230019528Z level=info msg="Migration successfully executed" id="Add preferences index user_id" 
duration=986.293µs zookeeper | [2024-04-25 12:39:19,758] INFO Using checkIntervalMs=60000 maxPerMinute=10000 maxNeverUsedIntervalMs=0 (org.apache.zookeeper.server.ContainerManager) policy-apex-pdp | connections.max.idle.ms = 540000 policy-db-migrator | -------------- policy-pap | sasl.login.retry.backoff.ms = 100 kafka | log.message.format.version = 3.0-IV1 mariadb | 2024-04-25 12:39:22 14 [Warning] Aborted connection 14 to db: 'unconnected' user: 'unauthenticated' host: '172.17.0.11' (This connection closed normally without authentication) grafana | logger=migrator t=2024-04-25T12:39:25.237086862Z level=info msg="Executing migration" id="create alert table v1" zookeeper | [2024-04-25 12:39:19,759] INFO ZooKeeper audit is disabled. (org.apache.zookeeper.audit.ZKAuditProvider) policy-apex-pdp | default.api.timeout.ms = 60000 policy-db-migrator | policy-pap | sasl.mechanism = GSSAPI kafka | log.message.timestamp.after.max.ms = 9223372036854775807 grafana | logger=migrator t=2024-04-25T12:39:25.238957117Z level=info msg="Migration successfully executed" id="create alert table v1" duration=1.869416ms zookeeper | [2024-04-25 12:39:21,930] INFO Creating new log file: log.1 (org.apache.zookeeper.server.persistence.FileTxnLog) policy-apex-pdp | enable.auto.commit = true policy-db-migrator | policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 kafka | log.message.timestamp.before.max.ms = 9223372036854775807 grafana | logger=migrator t=2024-04-25T12:39:25.24528033Z level=info msg="Executing migration" id="add index alert org_id & id " policy-apex-pdp | exclude.internal.topics = true policy-db-migrator | > upgrade 0250-jpatoscanodetemplate_properties.sql policy-pap | sasl.oauthbearer.expected.audience = null grafana | logger=migrator t=2024-04-25T12:39:25.246318263Z level=info msg="Migration successfully executed" id="add index alert org_id & id " duration=1.037523ms policy-apex-pdp | fetch.max.bytes = 52428800 policy-db-migrator | -------------- kafka | 
log.message.timestamp.difference.max.ms = 9223372036854775807 policy-pap | sasl.oauthbearer.expected.issuer = null grafana | logger=migrator t=2024-04-25T12:39:25.256421987Z level=info msg="Executing migration" id="add index alert state" policy-apex-pdp | fetch.max.wait.ms = 500 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscanodetemplate_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGTEXT NULL, PROPERTIES_KEY VARCHAR(255) NULL) kafka | log.message.timestamp.type = CreateTime policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 policy-apex-pdp | fetch.min.bytes = 1 policy-db-migrator | -------------- kafka | log.preallocate = false grafana | logger=migrator t=2024-04-25T12:39:25.257867546Z level=info msg="Migration successfully executed" id="add index alert state" duration=1.445279ms policy-apex-pdp | group.id = 4b79aeb3-604a-4e33-80d9-cdeedf19ce63 policy-db-migrator | kafka | log.retention.bytes = -1 grafana | logger=migrator t=2024-04-25T12:39:25.26197721Z level=info msg="Executing migration" id="add index alert dashboard_id" policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 policy-apex-pdp | group.instance.id = null policy-db-migrator | grafana | logger=migrator t=2024-04-25T12:39:25.263262217Z level=info msg="Migration successfully executed" id="add index alert dashboard_id" duration=1.291247ms policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 policy-apex-pdp | heartbeat.interval.ms = 3000 kafka | log.retention.check.interval.ms = 300000 policy-db-migrator | > upgrade 0260-jpatoscanodetype_metadata.sql policy-pap | sasl.oauthbearer.jwks.endpoint.url = null policy-apex-pdp | interceptor.classes = [] kafka | log.retention.hours = 168 grafana | logger=migrator t=2024-04-25T12:39:25.272167045Z level=info msg="Executing migration" id="Create alert_rule_tag table v1" policy-db-migrator | -------------- policy-pap | sasl.oauthbearer.scope.claim.name = scope policy-apex-pdp | 
internal.leave.group.on.close = true kafka | log.retention.minutes = null grafana | logger=migrator t=2024-04-25T12:39:25.273414442Z level=info msg="Migration successfully executed" id="Create alert_rule_tag table v1" duration=1.246517ms policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscanodetype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) policy-pap | sasl.oauthbearer.sub.claim.name = sub policy-apex-pdp | internal.throw.on.fetch.stable.offset.unsupported = false kafka | log.retention.ms = null grafana | logger=migrator t=2024-04-25T12:39:25.279298349Z level=info msg="Executing migration" id="Add unique index alert_rule_tag.alert_id_tag_id" policy-db-migrator | -------------- policy-pap | sasl.oauthbearer.token.endpoint.url = null kafka | log.roll.hours = 168 grafana | logger=migrator t=2024-04-25T12:39:25.280953232Z level=info msg="Migration successfully executed" id="Add unique index alert_rule_tag.alert_id_tag_id" duration=1.653983ms policy-db-migrator | policy-apex-pdp | isolation.level = read_uncommitted kafka | log.roll.jitter.hours = 0 grafana | logger=migrator t=2024-04-25T12:39:25.286534325Z level=info msg="Executing migration" id="drop index UQE_alert_rule_tag_alert_id_tag_id - v1" policy-db-migrator | policy-pap | security.protocol = PLAINTEXT kafka | log.roll.jitter.ms = null grafana | logger=migrator t=2024-04-25T12:39:25.288015754Z level=info msg="Migration successfully executed" id="drop index UQE_alert_rule_tag_alert_id_tag_id - v1" duration=1.481979ms policy-db-migrator | > upgrade 0270-jpatoscanodetype_properties.sql policy-apex-pdp | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-pap | security.providers = null grafana | logger=migrator t=2024-04-25T12:39:25.2929523Z level=info msg="Executing migration" id="Rename table alert_rule_tag to alert_rule_tag_v1 - v1" policy-db-migrator | -------------- policy-apex-pdp | 
max.partition.fetch.bytes = 1048576 kafka | log.roll.ms = null grafana | logger=migrator t=2024-04-25T12:39:25.303614361Z level=info msg="Migration successfully executed" id="Rename table alert_rule_tag to alert_rule_tag_v1 - v1" duration=10.661291ms kafka | log.segment.bytes = 1073741824 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscanodetype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL) policy-pap | send.buffer.bytes = 131072 policy-apex-pdp | max.poll.interval.ms = 300000 grafana | logger=migrator t=2024-04-25T12:39:25.307270099Z level=info msg="Executing migration" id="Create alert_rule_tag table v2" kafka | log.segment.delete.delay.ms = 60000 policy-db-migrator | -------------- policy-pap | session.timeout.ms = 45000 policy-apex-pdp | max.poll.records = 500 policy-apex-pdp | metadata.max.age.ms = 300000 kafka | max.connection.creation.rate = 2147483647 policy-db-migrator | policy-pap | socket.connection.setup.timeout.max.ms = 30000 policy-apex-pdp | metric.reporters = [] policy-apex-pdp | metrics.num.samples = 2 kafka | max.connections = 2147483647 policy-db-migrator | policy-pap | socket.connection.setup.timeout.ms = 10000 policy-apex-pdp | metrics.recording.level = INFO policy-apex-pdp | metrics.sample.window.ms = 30000 kafka | max.connections.per.ip = 2147483647 policy-db-migrator | > upgrade 0280-jpatoscapolicy_metadata.sql policy-pap | ssl.cipher.suites = null policy-apex-pdp | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] policy-apex-pdp | receive.buffer.bytes = 65536 kafka | max.connections.per.ip.overrides = policy-db-migrator | -------------- policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] grafana | logger=migrator t=2024-04-25T12:39:25.307842247Z level=info msg="Migration successfully executed" id="Create alert_rule_tag table v2" 
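Both policy-apex-pdp and policy-pap dump their Kafka ConsumerConfig values as `key = value` lines (auto.offset.reset, max.poll.records, session.timeout.ms, and so on, scattered through the output above). When sifting an interleaved log like this one, it can help to fold such a dump back into a dictionary. A minimal sketch — the sample lines are copied from the output above, but the parser itself is an assumption for illustration, not part of the CSIT tooling:

```python
# Fold "key = value" lines from a Kafka ConsumerConfig dump into a dict.
# The sample lines are taken from the policy-apex-pdp output in this log.
dump = """\
auto.offset.reset = latest
bootstrap.servers = [kafka:9092]
check.crcs = true
max.poll.records = 500
session.timeout.ms = 45000
"""

def parse_consumer_config(text: str) -> dict:
    """Split each line on the first ' = ' only, so values may contain '='."""
    config = {}
    for line in text.splitlines():
        if " = " in line:
            key, _, value = line.partition(" = ")
            config[key.strip()] = value.strip()
    return config

cfg = parse_consumer_config(dump)
print(cfg["auto.offset.reset"])  # latest
print(cfg["max.poll.records"])   # 500
```

Values stay as strings here; the real clients coerce them per-property, which a quick log-side helper like this does not need to replicate.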
duration=572.097µs policy-apex-pdp | reconnect.backoff.max.ms = 1000 kafka | max.incremental.fetch.session.cache.slots = 1000 policy-pap | ssl.endpoint.identification.algorithm = https grafana | logger=migrator t=2024-04-25T12:39:25.314458014Z level=info msg="Executing migration" id="create index UQE_alert_rule_tag_alert_id_tag_id - Add unique index alert_rule_tag.alert_id_tag_id V2" policy-apex-pdp | reconnect.backoff.ms = 50 kafka | message.max.bytes = 1048588 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicy_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) policy-apex-pdp | request.timeout.ms = 30000 policy-db-migrator | -------------- policy-pap | ssl.engine.factory.class = null grafana | logger=migrator t=2024-04-25T12:39:25.316005184Z level=info msg="Migration successfully executed" id="create index UQE_alert_rule_tag_alert_id_tag_id - Add unique index alert_rule_tag.alert_id_tag_id V2" duration=1.54653ms kafka | metadata.log.dir = null policy-apex-pdp | retry.backoff.ms = 100 policy-pap | ssl.key.password = null grafana | logger=migrator t=2024-04-25T12:39:25.320391163Z level=info msg="Executing migration" id="copy alert_rule_tag v1 to v2" kafka | metadata.log.max.record.bytes.between.snapshots = 20971520 policy-apex-pdp | sasl.client.callback.handler.class = null grafana | logger=migrator t=2024-04-25T12:39:25.321112142Z level=info msg="Migration successfully executed" id="copy alert_rule_tag v1 to v2" duration=720.059µs kafka | metadata.log.max.snapshot.interval.ms = 3600000 policy-db-migrator | policy-pap | ssl.keymanager.algorithm = SunX509 policy-apex-pdp | sasl.jaas.config = null grafana | logger=migrator t=2024-04-25T12:39:25.324949662Z level=info msg="Executing migration" id="drop table alert_rule_tag_v1" kafka | metadata.log.segment.bytes = 1073741824 policy-db-migrator | policy-pap | ssl.keystore.certificate.chain = null policy-apex-pdp | sasl.kerberos.kinit.cmd = 
/usr/bin/kinit grafana | logger=migrator t=2024-04-25T12:39:25.32557583Z level=info msg="Migration successfully executed" id="drop table alert_rule_tag_v1" duration=625.438µs kafka | metadata.log.segment.min.bytes = 8388608 policy-db-migrator | > upgrade 0290-jpatoscapolicy_properties.sql policy-pap | ssl.keystore.key = null policy-apex-pdp | sasl.kerberos.min.time.before.relogin = 60000 grafana | logger=migrator t=2024-04-25T12:39:25.332953328Z level=info msg="Executing migration" id="create alert_notification table v1" kafka | metadata.log.segment.ms = 604800000 policy-db-migrator | -------------- policy-pap | ssl.keystore.location = null policy-apex-pdp | sasl.kerberos.service.name = null grafana | logger=migrator t=2024-04-25T12:39:25.334374398Z level=info msg="Migration successfully executed" id="create alert_notification table v1" duration=1.42077ms kafka | metadata.max.idle.interval.ms = 500 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicy_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGTEXT NULL, PROPERTIES_KEY VARCHAR(255) NULL) policy-pap | ssl.keystore.password = null policy-apex-pdp | sasl.kerberos.ticket.renew.jitter = 0.05 grafana | logger=migrator t=2024-04-25T12:39:25.386527316Z level=info msg="Executing migration" id="Add column is_default" policy-db-migrator | -------------- policy-apex-pdp | sasl.kerberos.ticket.renew.window.factor = 0.8 grafana | logger=migrator t=2024-04-25T12:39:25.39288548Z level=info msg="Migration successfully executed" id="Add column is_default" duration=6.359514ms kafka | metadata.max.retention.bytes = 104857600 policy-pap | ssl.keystore.type = JKS policy-db-migrator | policy-apex-pdp | sasl.login.callback.handler.class = null grafana | logger=migrator t=2024-04-25T12:39:25.402957763Z level=info msg="Executing migration" id="Add column frequency" kafka | metadata.max.retention.ms = 604800000 policy-pap | ssl.protocol = TLSv1.3 policy-db-migrator | policy-apex-pdp | 
sasl.login.class = null grafana | logger=migrator t=2024-04-25T12:39:25.4095847Z level=info msg="Migration successfully executed" id="Add column frequency" duration=6.625827ms kafka | metric.reporters = [] policy-pap | ssl.provider = null policy-db-migrator | > upgrade 0300-jpatoscapolicy_targets.sql policy-apex-pdp | sasl.login.connect.timeout.ms = null grafana | logger=migrator t=2024-04-25T12:39:25.517540108Z level=info msg="Executing migration" id="Add column send_reminder" kafka | metrics.num.samples = 2 policy-pap | ssl.secure.random.implementation = null policy-db-migrator | -------------- policy-apex-pdp | sasl.login.read.timeout.ms = null grafana | logger=migrator t=2024-04-25T12:39:25.524852985Z level=info msg="Migration successfully executed" id="Add column send_reminder" duration=7.313617ms kafka | metrics.recording.level = INFO policy-pap | ssl.trustmanager.algorithm = PKIX policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicy_targets (name VARCHAR(120) NULL, version VARCHAR(20) NULL) policy-apex-pdp | sasl.login.refresh.buffer.seconds = 300 grafana | logger=migrator t=2024-04-25T12:39:25.53431821Z level=info msg="Executing migration" id="Add column disable_resolve_message" kafka | metrics.sample.window.ms = 30000 policy-pap | ssl.truststore.certificates = null policy-db-migrator | -------------- policy-apex-pdp | sasl.login.refresh.min.period.seconds = 60 grafana | logger=migrator t=2024-04-25T12:39:25.539939164Z level=info msg="Migration successfully executed" id="Add column disable_resolve_message" duration=5.620014ms kafka | min.insync.replicas = 1 policy-pap | ssl.truststore.location = null policy-db-migrator | policy-apex-pdp | sasl.login.refresh.window.factor = 0.8 grafana | logger=migrator t=2024-04-25T12:39:25.54948165Z level=info msg="Executing migration" id="add index alert_notification org_id & name" kafka | node.id = 1 policy-pap | ssl.truststore.password = null policy-db-migrator | policy-apex-pdp | 
sasl.login.refresh.window.jitter = 0.05 grafana | logger=migrator t=2024-04-25T12:39:25.551321984Z level=info msg="Migration successfully executed" id="add index alert_notification org_id & name" duration=1.838444ms kafka | num.io.threads = 8 policy-pap | ssl.truststore.type = JKS policy-db-migrator | > upgrade 0310-jpatoscapolicytype_metadata.sql policy-apex-pdp | sasl.login.retry.backoff.max.ms = 10000 grafana | logger=migrator t=2024-04-25T12:39:25.558390938Z level=info msg="Executing migration" id="Update alert table charset" kafka | num.network.threads = 3 policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-db-migrator | -------------- policy-apex-pdp | sasl.login.retry.backoff.ms = 100 grafana | logger=migrator t=2024-04-25T12:39:25.55858247Z level=info msg="Migration successfully executed" id="Update alert table charset" duration=192.772µs kafka | num.partitions = 1 policy-pap | policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicytype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) grafana | logger=migrator t=2024-04-25T12:39:25.568032505Z level=info msg="Executing migration" id="Update alert_notification table charset" kafka | num.recovery.threads.per.data.dir = 1 policy-pap | [2024-04-25T12:39:59.207+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 policy-apex-pdp | sasl.mechanism = GSSAPI policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-25T12:39:25.568085546Z level=info msg="Migration successfully executed" id="Update alert_notification table charset" duration=58.141µs kafka | num.replica.alter.log.dirs.threads = null policy-pap | [2024-04-25T12:39:59.207+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 policy-apex-pdp | sasl.oauthbearer.clock.skew.seconds = 30 policy-db-migrator | grafana | logger=migrator t=2024-04-25T12:39:25.571754374Z level=info msg="Executing migration" 
id="create notification_journal table v1" kafka | num.replica.fetchers = 1 policy-pap | [2024-04-25T12:39:59.207+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1714048799206 policy-apex-pdp | sasl.oauthbearer.expected.audience = null policy-db-migrator | grafana | logger=migrator t=2024-04-25T12:39:25.572738378Z level=info msg="Migration successfully executed" id="create notification_journal table v1" duration=980.714µs kafka | offset.metadata.max.bytes = 4096 policy-pap | [2024-04-25T12:39:59.210+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-1, groupId=53d3b957-3026-4843-bc4f-55d426241089] Subscribed to topic(s): policy-pdp-pap policy-apex-pdp | sasl.oauthbearer.expected.issuer = null policy-db-migrator | > upgrade 0320-jpatoscapolicytype_properties.sql grafana | logger=migrator t=2024-04-25T12:39:25.582972692Z level=info msg="Executing migration" id="add index notification_journal org_id & alert_id & notifier_id" kafka | offsets.commit.required.acks = -1 policy-pap | [2024-04-25T12:39:59.211+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-25T12:39:25.584451192Z level=info msg="Migration successfully executed" id="add index notification_journal org_id & alert_id & notifier_id" duration=1.47802ms kafka | offsets.commit.timeout.ms = 5000 policy-pap | allow.auto.create.topics = true policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicytype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL) kafka | offsets.load.buffer.size = 5242880 policy-pap | auto.commit.interval.ms = 5000 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 policy-db-migrator | -------------- grafana | logger=migrator 
t=2024-04-25T12:39:25.597286831Z level=info msg="Executing migration" id="drop alert_notification_journal" kafka | offsets.retention.check.interval.ms = 600000 policy-pap | auto.include.jmx.reporter = true policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.url = null policy-db-migrator | kafka | offsets.retention.minutes = 10080 grafana | logger=migrator t=2024-04-25T12:39:25.598471278Z level=info msg="Migration successfully executed" id="drop alert_notification_journal" duration=1.182136ms policy-pap | auto.offset.reset = latest policy-apex-pdp | sasl.oauthbearer.scope.claim.name = scope policy-db-migrator | kafka | offsets.topic.compression.codec = 0 grafana | logger=migrator t=2024-04-25T12:39:25.605777195Z level=info msg="Executing migration" id="create alert_notification_state table v1" policy-pap | bootstrap.servers = [kafka:9092] policy-apex-pdp | sasl.oauthbearer.sub.claim.name = sub policy-db-migrator | > upgrade 0330-jpatoscapolicytype_targets.sql kafka | offsets.topic.num.partitions = 50 grafana | logger=migrator t=2024-04-25T12:39:25.607344865Z level=info msg="Migration successfully executed" id="create alert_notification_state table v1" duration=1.56678ms policy-pap | check.crcs = true policy-apex-pdp | sasl.oauthbearer.token.endpoint.url = null policy-db-migrator | -------------- kafka | offsets.topic.replication.factor = 1 grafana | logger=migrator t=2024-04-25T12:39:25.615678385Z level=info msg="Executing migration" id="add index alert_notification_state org_id & alert_id & notifier_id" policy-pap | client.dns.lookup = use_all_dns_ips policy-apex-pdp | security.protocol = PLAINTEXT policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicytype_targets (name VARCHAR(120) NULL, version VARCHAR(20) NULL) kafka | offsets.topic.segment.bytes = 104857600 grafana | logger=migrator t=2024-04-25T12:39:25.617083224Z level=info msg="Migration successfully executed" id="add index alert_notification_state org_id & alert_id & notifier_id" duration=1.404309ms 
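The db-migrator output above walks numbered upgrade scripts (0220-jpatoscadatatype_metadata.sql through 0330-jpatoscapolicytype_targets.sql), each issuing a `CREATE TABLE IF NOT EXISTS` against the MariaDB instance. The `IF NOT EXISTS` guard is what makes re-running a migration safe. A small sketch of that idempotency, using Python's stdlib `sqlite3` purely as a stand-in for the MariaDB container (the DDL is one of the statements logged above):

```python
import sqlite3

# 0330-jpatoscapolicytype_targets.sql as logged by policy-db-migrator above;
# sqlite3 stands in for the MariaDB instance used in the real CSIT run.
DDL = ("CREATE TABLE IF NOT EXISTS jpatoscapolicytype_targets "
       "(name VARCHAR(120) NULL, version VARCHAR(20) NULL)")

conn = sqlite3.connect(":memory:")
conn.execute(DDL)
conn.execute(DDL)  # IF NOT EXISTS makes a second run a no-op, not an error

tables = [row[0] for row in conn.execute(
    "SELECT name FROM sqlite_master WHERE type='table'")]
print(tables)  # ['jpatoscapolicytype_targets']
```

Types such as LONGBLOB in the other logged statements are MariaDB-specific; this stand-in only demonstrates the re-run behaviour, not the exact storage semantics.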
policy-pap | client.id = consumer-policy-pap-2 policy-apex-pdp | security.providers = null policy-db-migrator | -------------- kafka | password.encoder.cipher.algorithm = AES/CBC/PKCS5Padding grafana | logger=migrator t=2024-04-25T12:39:25.621210988Z level=info msg="Executing migration" id="Add for to alert table" policy-pap | client.rack = policy-apex-pdp | send.buffer.bytes = 131072 kafka | password.encoder.iterations = 4096 grafana | logger=migrator t=2024-04-25T12:39:25.626068132Z level=info msg="Migration successfully executed" id="Add for to alert table" duration=4.857894ms policy-pap | connections.max.idle.ms = 540000 policy-apex-pdp | session.timeout.ms = 45000 policy-db-migrator | kafka | password.encoder.key.length = 128 grafana | logger=migrator t=2024-04-25T12:39:25.634257271Z level=info msg="Executing migration" id="Add column uid in alert_notification" policy-pap | default.api.timeout.ms = 60000 policy-apex-pdp | socket.connection.setup.timeout.max.ms = 30000 policy-db-migrator | kafka | password.encoder.keyfactory.algorithm = null grafana | logger=migrator t=2024-04-25T12:39:25.63798223Z level=info msg="Migration successfully executed" id="Add column uid in alert_notification" duration=3.727429ms policy-pap | enable.auto.commit = true policy-apex-pdp | socket.connection.setup.timeout.ms = 10000 policy-db-migrator | > upgrade 0340-jpatoscapolicytype_triggers.sql kafka | password.encoder.old.secret = null grafana | logger=migrator t=2024-04-25T12:39:25.64101988Z level=info msg="Executing migration" id="Update uid column values in alert_notification" policy-pap | exclude.internal.topics = true policy-apex-pdp | ssl.cipher.suites = null policy-db-migrator | -------------- kafka | password.encoder.secret = null grafana | logger=migrator t=2024-04-25T12:39:25.641201302Z level=info msg="Migration successfully executed" id="Update uid column values in alert_notification" duration=181.322µs policy-pap | fetch.max.bytes = 52428800 policy-apex-pdp | 
ssl.enabled.protocols = [TLSv1.2, TLSv1.3] policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicytype_triggers (name VARCHAR(120) NULL, version VARCHAR(20) NULL, TRIGGERS VARCHAR(255) NULL) kafka | principal.builder.class = class org.apache.kafka.common.security.authenticator.DefaultKafkaPrincipalBuilder grafana | logger=migrator t=2024-04-25T12:39:25.644726748Z level=info msg="Executing migration" id="Add unique index alert_notification_org_id_uid" policy-pap | fetch.max.wait.ms = 500 policy-apex-pdp | ssl.endpoint.identification.algorithm = https policy-db-migrator | -------------- kafka | process.roles = [] grafana | logger=migrator t=2024-04-25T12:39:25.645802053Z level=info msg="Migration successfully executed" id="Add unique index alert_notification_org_id_uid" duration=1.073795ms policy-pap | fetch.min.bytes = 1 policy-apex-pdp | ssl.engine.factory.class = null policy-db-migrator | kafka | producer.id.expiration.check.interval.ms = 600000 grafana | logger=migrator t=2024-04-25T12:39:25.649490671Z level=info msg="Executing migration" id="Remove unique index org_id_name" policy-pap | group.id = policy-pap policy-apex-pdp | ssl.key.password = null policy-db-migrator | kafka | producer.id.expiration.ms = 86400000 grafana | logger=migrator t=2024-04-25T12:39:25.650584507Z level=info msg="Migration successfully executed" id="Remove unique index org_id_name" duration=1.091955ms policy-pap | group.instance.id = null policy-apex-pdp | ssl.keymanager.algorithm = SunX509 policy-db-migrator | > upgrade 0350-jpatoscaproperty_constraints.sql kafka | producer.purgatory.purge.interval.requests = 1000 grafana | logger=migrator t=2024-04-25T12:39:25.661045434Z level=info msg="Executing migration" id="Add column secure_settings in alert_notification" policy-pap | heartbeat.interval.ms = 3000 policy-apex-pdp | ssl.keystore.certificate.chain = null policy-db-migrator | -------------- kafka | queued.max.request.bytes = -1 grafana | logger=migrator 
t=2024-04-25T12:39:25.667095924Z level=info msg="Migration successfully executed" id="Add column secure_settings in alert_notification" duration=6.05077ms policy-pap | interceptor.classes = [] policy-apex-pdp | ssl.keystore.key = null policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscaproperty_constraints (parentLocalName VARCHAR(120) NULL, localName VARCHAR(120) NULL, parentKeyVersion VARCHAR(15) NULL, parentKeyName VARCHAR(120) NULL, CONSTRAINTS VARCHAR(255) NULL) kafka | queued.max.requests = 500 grafana | logger=migrator t=2024-04-25T12:39:25.670998966Z level=info msg="Executing migration" id="alter alert.settings to mediumtext" policy-pap | internal.leave.group.on.close = true policy-apex-pdp | ssl.keystore.location = null policy-db-migrator | -------------- kafka | quota.window.num = 11 grafana | logger=migrator t=2024-04-25T12:39:25.671063737Z level=info msg="Migration successfully executed" id="alter alert.settings to mediumtext" duration=65.211µs policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false policy-apex-pdp | ssl.keystore.password = null policy-db-migrator | kafka | quota.window.size.seconds = 1 grafana | logger=migrator t=2024-04-25T12:39:25.675059659Z level=info msg="Executing migration" id="Add non-unique index alert_notification_state_alert_id" policy-apex-pdp | ssl.keystore.type = JKS policy-db-migrator | kafka | remote.log.index.file.cache.total.size.bytes = 1073741824 grafana | logger=migrator t=2024-04-25T12:39:25.676134134Z level=info msg="Migration successfully executed" id="Add non-unique index alert_notification_state_alert_id" duration=1.070775ms policy-pap | isolation.level = read_uncommitted policy-apex-pdp | ssl.protocol = TLSv1.3 kafka | remote.log.manager.task.interval.ms = 30000 grafana | logger=migrator t=2024-04-25T12:39:25.684444414Z level=info msg="Executing migration" id="Add non-unique index alert_rule_tag_alert_id" policy-pap | key.deserializer = class 
org.apache.kafka.common.serialization.StringDeserializer policy-apex-pdp | ssl.provider = null policy-db-migrator | > upgrade 0360-jpatoscaproperty_metadata.sql kafka | remote.log.manager.task.retry.backoff.max.ms = 30000 grafana | logger=migrator t=2024-04-25T12:39:25.685927163Z level=info msg="Migration successfully executed" id="Add non-unique index alert_rule_tag_alert_id" duration=1.482669ms policy-pap | max.partition.fetch.bytes = 1048576 policy-apex-pdp | ssl.secure.random.implementation = null policy-db-migrator | -------------- kafka | remote.log.manager.task.retry.backoff.ms = 500 grafana | logger=migrator t=2024-04-25T12:39:25.6909449Z level=info msg="Executing migration" id="Drop old annotation table v4" policy-pap | max.poll.interval.ms = 300000 policy-apex-pdp | ssl.trustmanager.algorithm = PKIX policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscaproperty_metadata (parentLocalName VARCHAR(120) NULL, localName VARCHAR(120) NULL, parentKeyVersion VARCHAR(15) NULL, parentKeyName VARCHAR(120) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) kafka | remote.log.manager.task.retry.jitter = 0.2 grafana | logger=migrator t=2024-04-25T12:39:25.691082171Z level=info msg="Migration successfully executed" id="Drop old annotation table v4" duration=138.281µs policy-pap | max.poll.records = 500 policy-apex-pdp | ssl.truststore.certificates = null policy-db-migrator | -------------- kafka | remote.log.manager.thread.pool.size = 10 grafana | logger=migrator t=2024-04-25T12:39:25.696538194Z level=info msg="Executing migration" id="create annotation table v5" policy-pap | metadata.max.age.ms = 300000 policy-apex-pdp | ssl.truststore.location = null policy-db-migrator | kafka | remote.log.metadata.custom.metadata.max.bytes = 128 grafana | logger=migrator t=2024-04-25T12:39:25.697427795Z level=info msg="Migration successfully executed" id="create annotation table v5" duration=889.501µs policy-pap | metric.reporters = [] policy-apex-pdp | 
ssl.truststore.password = null policy-db-migrator | kafka | remote.log.metadata.manager.class.name = org.apache.kafka.server.log.remote.metadata.storage.TopicBasedRemoteLogMetadataManager grafana | logger=migrator t=2024-04-25T12:39:25.703521976Z level=info msg="Executing migration" id="add index annotation 0 v3" policy-pap | metrics.num.samples = 2 policy-db-migrator | > upgrade 0370-jpatoscarelationshiptype_metadata.sql kafka | remote.log.metadata.manager.class.path = null grafana | logger=migrator t=2024-04-25T12:39:25.704958605Z level=info msg="Migration successfully executed" id="add index annotation 0 v3" duration=1.439359ms policy-apex-pdp | ssl.truststore.type = JKS policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-25T12:39:25.708941337Z level=info msg="Executing migration" id="add index annotation 1 v3" policy-apex-pdp | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-pap | metrics.recording.level = INFO kafka | remote.log.metadata.manager.impl.prefix = rlmm.config. 
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarelationshiptype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) grafana | logger=migrator t=2024-04-25T12:39:25.710406477Z level=info msg="Migration successfully executed" id="add index annotation 1 v3" duration=1.45756ms policy-apex-pdp | policy-pap | metrics.sample.window.ms = 30000 kafka | remote.log.metadata.manager.listener.name = null policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-25T12:39:25.718382802Z level=info msg="Executing migration" id="add index annotation 2 v3" policy-apex-pdp | [2024-04-25T12:40:02.798+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] kafka | remote.log.reader.max.pending.tasks = 100 policy-apex-pdp | [2024-04-25T12:40:02.798+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 policy-pap | receive.buffer.bytes = 65536 kafka | remote.log.reader.threads = 10 policy-db-migrator | grafana | logger=migrator t=2024-04-25T12:39:25.719195893Z level=info msg="Migration successfully executed" id="add index annotation 2 v3" duration=813.051µs policy-apex-pdp | [2024-04-25T12:40:02.798+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1714048802798 policy-pap | reconnect.backoff.max.ms = 1000 kafka | remote.log.storage.manager.class.name = null policy-db-migrator | grafana | logger=migrator t=2024-04-25T12:39:25.729372438Z level=info msg="Executing migration" id="add index annotation 3 v3" policy-apex-pdp | [2024-04-25T12:40:02.798+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Subscribed to topic(s): policy-pdp-pap policy-pap | reconnect.backoff.ms = 50 kafka | remote.log.storage.manager.class.path = null 
policy-db-migrator | > upgrade 0380-jpatoscarelationshiptype_properties.sql grafana | logger=migrator t=2024-04-25T12:39:25.730802536Z level=info msg="Migration successfully executed" id="add index annotation 3 v3" duration=1.429408ms policy-apex-pdp | [2024-04-25T12:40:02.799+00:00|INFO|InlineBusTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=130d2ddf-3838-4a13-ace3-2e823e62f537, alive=false, publisher=null]]: starting policy-pap | request.timeout.ms = 30000 kafka | remote.log.storage.manager.impl.prefix = rsm.config. policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-25T12:39:25.735004612Z level=info msg="Executing migration" id="add index annotation 4 v3" policy-apex-pdp | [2024-04-25T12:40:02.811+00:00|INFO|ProducerConfig|main] ProducerConfig values: policy-pap | retry.backoff.ms = 100 kafka | remote.log.storage.system.enable = false policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarelationshiptype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL) grafana | logger=migrator t=2024-04-25T12:39:25.736287929Z level=info msg="Migration successfully executed" id="add index annotation 4 v3" duration=1.284107ms policy-apex-pdp | acks = -1 policy-pap | sasl.client.callback.handler.class = null kafka | replica.fetch.backoff.ms = 1000 policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-25T12:39:25.741856242Z level=info msg="Executing migration" id="Update annotation table charset" policy-apex-pdp | auto.include.jmx.reporter = true policy-pap | sasl.jaas.config = null kafka | replica.fetch.max.bytes = 1048576 policy-db-migrator | grafana | logger=migrator t=2024-04-25T12:39:25.741885433Z level=info msg="Migration successfully executed" id="Update annotation table charset" duration=30.621µs policy-apex-pdp | batch.size = 16384 policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit 
kafka | replica.fetch.min.bytes = 1 policy-db-migrator | grafana | logger=migrator t=2024-04-25T12:39:25.746891679Z level=info msg="Executing migration" id="Add column region_id to annotation table" policy-apex-pdp | bootstrap.servers = [kafka:9092] policy-pap | sasl.kerberos.min.time.before.relogin = 60000 kafka | replica.fetch.response.max.bytes = 10485760 policy-db-migrator | > upgrade 0390-jpatoscarequirement_metadata.sql grafana | logger=migrator t=2024-04-25T12:39:25.751025693Z level=info msg="Migration successfully executed" id="Add column region_id to annotation table" duration=4.133604ms policy-apex-pdp | buffer.memory = 33554432 policy-pap | sasl.kerberos.service.name = null kafka | replica.fetch.wait.max.ms = 500 policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-25T12:39:25.754588631Z level=info msg="Executing migration" id="Drop category_id index" policy-apex-pdp | client.dns.lookup = use_all_dns_ips policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 kafka | replica.high.watermark.checkpoint.interval.ms = 5000 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarequirement_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) grafana | logger=migrator t=2024-04-25T12:39:25.755411212Z level=info msg="Migration successfully executed" id="Drop category_id index" duration=822.541µs policy-apex-pdp | client.id = producer-1 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-25T12:39:25.761024186Z level=info msg="Executing migration" id="Add column tags to annotation table" kafka | replica.lag.time.max.ms = 30000 policy-apex-pdp | compression.type = none policy-pap | sasl.login.callback.handler.class = null policy-db-migrator | grafana | logger=migrator t=2024-04-25T12:39:25.767224027Z level=info msg="Migration successfully executed" id="Add column tags to annotation table" 
duration=6.198341ms kafka | replica.selector.class = null policy-apex-pdp | connections.max.idle.ms = 540000 policy-pap | sasl.login.class = null policy-db-migrator | grafana | logger=migrator t=2024-04-25T12:39:25.774658406Z level=info msg="Executing migration" id="Create annotation_tag table v2" kafka | replica.socket.receive.buffer.bytes = 65536 policy-apex-pdp | delivery.timeout.ms = 120000 policy-pap | sasl.login.connect.timeout.ms = null policy-db-migrator | > upgrade 0400-jpatoscarequirement_occurrences.sql grafana | logger=migrator t=2024-04-25T12:39:25.775345715Z level=info msg="Migration successfully executed" id="Create annotation_tag table v2" duration=687.819µs kafka | replica.socket.timeout.ms = 30000 policy-apex-pdp | enable.idempotence = true policy-pap | sasl.login.read.timeout.ms = null policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-25T12:39:25.778629578Z level=info msg="Executing migration" id="Add unique index annotation_tag.annotation_id_tag_id" kafka | replication.quota.window.num = 11 policy-apex-pdp | interceptor.classes = [] policy-pap | sasl.login.refresh.buffer.seconds = 300 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarequirement_occurrences (name VARCHAR(120) NULL, version VARCHAR(20) NULL, OCCURRENCES INT DEFAULT NULL) grafana | logger=migrator t=2024-04-25T12:39:25.779529941Z level=info msg="Migration successfully executed" id="Add unique index annotation_tag.annotation_id_tag_id" duration=898.273µs kafka | replication.quota.window.size.seconds = 1 policy-apex-pdp | key.serializer = class org.apache.kafka.common.serialization.StringSerializer policy-pap | sasl.login.refresh.min.period.seconds = 60 policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-25T12:39:25.784467285Z level=info msg="Executing migration" id="drop index UQE_annotation_tag_annotation_id_tag_id - v2" kafka | request.timeout.ms = 30000 policy-apex-pdp | linger.ms = 0 policy-db-migrator | kafka | 
reserved.broker.max.id = 1000 policy-apex-pdp | max.block.ms = 60000 policy-pap | sasl.login.refresh.window.factor = 0.8 grafana | logger=migrator t=2024-04-25T12:39:25.785266927Z level=info msg="Migration successfully executed" id="drop index UQE_annotation_tag_annotation_id_tag_id - v2" duration=796.572µs policy-db-migrator | kafka | sasl.client.callback.handler.class = null policy-apex-pdp | max.in.flight.requests.per.connection = 5 policy-pap | sasl.login.refresh.window.jitter = 0.05 policy-db-migrator | > upgrade 0410-jpatoscarequirement_properties.sql grafana | logger=migrator t=2024-04-25T12:39:25.7916093Z level=info msg="Executing migration" id="Rename table annotation_tag to annotation_tag_v2 - v2" kafka | sasl.enabled.mechanisms = [GSSAPI] policy-apex-pdp | max.request.size = 1048576 policy-pap | sasl.login.retry.backoff.max.ms = 10000 policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-25T12:39:25.806821801Z level=info msg="Migration successfully executed" id="Rename table annotation_tag to annotation_tag_v2 - v2" duration=15.213731ms policy-apex-pdp | metadata.max.age.ms = 300000 policy-pap | sasl.login.retry.backoff.ms = 100 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarequirement_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGTEXT NULL, PROPERTIES_KEY VARCHAR(255) NULL) kafka | sasl.jaas.config = null grafana | logger=migrator t=2024-04-25T12:39:25.810598491Z level=info msg="Executing migration" id="Create annotation_tag table v3" policy-apex-pdp | metadata.max.idle.ms = 300000 policy-pap | sasl.mechanism = GSSAPI policy-db-migrator | -------------- kafka | sasl.kerberos.kinit.cmd = /usr/bin/kinit grafana | logger=migrator t=2024-04-25T12:39:25.811119238Z level=info msg="Migration successfully executed" id="Create annotation_tag table v3" duration=520.717µs policy-apex-pdp | metric.reporters = [] policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 policy-db-migrator | kafka | 
sasl.kerberos.min.time.before.relogin = 60000 grafana | logger=migrator t=2024-04-25T12:39:25.817718045Z level=info msg="Executing migration" id="create index UQE_annotation_tag_annotation_id_tag_id - Add unique index annotation_tag.annotation_id_tag_id V3" policy-apex-pdp | metrics.num.samples = 2 policy-pap | sasl.oauthbearer.expected.audience = null policy-db-migrator | kafka | sasl.kerberos.principal.to.local.rules = [DEFAULT] grafana | logger=migrator t=2024-04-25T12:39:25.819548579Z level=info msg="Migration successfully executed" id="create index UQE_annotation_tag_annotation_id_tag_id - Add unique index annotation_tag.annotation_id_tag_id V3" duration=1.832524ms policy-apex-pdp | metrics.recording.level = INFO policy-pap | sasl.oauthbearer.expected.issuer = null policy-db-migrator | > upgrade 0420-jpatoscaservicetemplate_metadata.sql kafka | sasl.kerberos.service.name = null grafana | logger=migrator t=2024-04-25T12:39:25.825266255Z level=info msg="Executing migration" id="copy annotation_tag v2 to v3" policy-apex-pdp | metrics.sample.window.ms = 30000 policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 policy-db-migrator | -------------- kafka | sasl.kerberos.ticket.renew.jitter = 0.05 grafana | logger=migrator t=2024-04-25T12:39:25.825724371Z level=info msg="Migration successfully executed" id="copy annotation_tag v2 to v3" duration=459.786µs policy-apex-pdp | partitioner.adaptive.partitioning.enable = true policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscaservicetemplate_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) kafka | sasl.kerberos.ticket.renew.window.factor = 0.8 grafana | logger=migrator t=2024-04-25T12:39:25.829230227Z level=info msg="Executing migration" id="drop table annotation_tag_v2" policy-apex-pdp | partitioner.availability.timeout.ms = 0 policy-pap | 
sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 policy-db-migrator | -------------- kafka | sasl.login.callback.handler.class = null grafana | logger=migrator t=2024-04-25T12:39:25.829748454Z level=info msg="Migration successfully executed" id="drop table annotation_tag_v2" duration=520.467µs policy-apex-pdp | partitioner.class = null policy-pap | sasl.oauthbearer.jwks.endpoint.url = null policy-db-migrator | kafka | sasl.login.class = null grafana | logger=migrator t=2024-04-25T12:39:25.835432878Z level=info msg="Executing migration" id="Update alert annotations and set TEXT to empty" policy-apex-pdp | partitioner.ignore.keys = false policy-pap | sasl.oauthbearer.scope.claim.name = scope policy-db-migrator | kafka | sasl.login.connect.timeout.ms = null grafana | logger=migrator t=2024-04-25T12:39:25.835741493Z level=info msg="Migration successfully executed" id="Update alert annotations and set TEXT to empty" duration=311.555µs policy-apex-pdp | receive.buffer.bytes = 32768 policy-pap | sasl.oauthbearer.sub.claim.name = sub policy-db-migrator | > upgrade 0430-jpatoscatopologytemplate_inputs.sql kafka | sasl.login.read.timeout.ms = null grafana | logger=migrator t=2024-04-25T12:39:25.841003352Z level=info msg="Executing migration" id="Add created time to annotation table" policy-apex-pdp | reconnect.backoff.max.ms = 1000 policy-pap | sasl.oauthbearer.token.endpoint.url = null policy-db-migrator | -------------- kafka | sasl.login.refresh.buffer.seconds = 300 grafana | logger=migrator t=2024-04-25T12:39:25.845597683Z level=info msg="Migration successfully executed" id="Add created time to annotation table" duration=4.593041ms policy-apex-pdp | reconnect.backoff.ms = 50 policy-pap | security.protocol = PLAINTEXT policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscatopologytemplate_inputs (parentLocalName VARCHAR(120) NULL, localName VARCHAR(120) NULL, parentKeyVersion VARCHAR(15) NULL, parentKeyName VARCHAR(120) NULL, INPUTS LONGBLOB DEFAULT NULL, INPUTS_KEY 
VARCHAR(255) NULL) kafka | sasl.login.refresh.min.period.seconds = 60 grafana | logger=migrator t=2024-04-25T12:39:26.067271752Z level=info msg="Executing migration" id="Add updated time to annotation table" policy-apex-pdp | request.timeout.ms = 30000 policy-pap | security.providers = null policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-25T12:39:26.074875613Z level=info msg="Migration successfully executed" id="Add updated time to annotation table" duration=7.606021ms policy-apex-pdp | retries = 2147483647 policy-pap | send.buffer.bytes = 131072 policy-db-migrator | kafka | sasl.login.refresh.window.factor = 0.8 grafana | logger=migrator t=2024-04-25T12:39:26.080688249Z level=info msg="Executing migration" id="Add index for created in annotation table" policy-apex-pdp | retry.backoff.ms = 100 policy-pap | session.timeout.ms = 45000 policy-db-migrator | kafka | sasl.login.refresh.window.jitter = 0.05 grafana | logger=migrator t=2024-04-25T12:39:26.081417449Z level=info msg="Migration successfully executed" id="Add index for created in annotation table" duration=729.15µs policy-apex-pdp | sasl.client.callback.handler.class = null policy-pap | socket.connection.setup.timeout.max.ms = 30000 policy-db-migrator | > upgrade 0440-pdpgroup_pdpsubgroup.sql kafka | sasl.login.retry.backoff.max.ms = 10000 grafana | logger=migrator t=2024-04-25T12:39:26.090588691Z level=info msg="Executing migration" id="Add index for updated in annotation table" policy-apex-pdp | sasl.jaas.config = null policy-pap | socket.connection.setup.timeout.ms = 10000 policy-db-migrator | -------------- kafka | sasl.login.retry.backoff.ms = 100 grafana | logger=migrator t=2024-04-25T12:39:26.092508236Z level=info msg="Migration successfully executed" id="Add index for updated in annotation table" duration=1.917994ms policy-apex-pdp | sasl.kerberos.kinit.cmd = /usr/bin/kinit policy-pap | ssl.cipher.suites = null policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpgroup_pdpsubgroup 
(name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_PDPGROUP_PDPSUBGROUP (name, version, parentLocalName, localName, parentKeyVersion, parentKeyName)) kafka | sasl.mechanism.controller.protocol = GSSAPI grafana | logger=migrator t=2024-04-25T12:39:26.096548499Z level=info msg="Executing migration" id="Convert existing annotations from seconds to milliseconds" policy-apex-pdp | sasl.kerberos.min.time.before.relogin = 60000 policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] policy-db-migrator | -------------- kafka | sasl.mechanism.inter.broker.protocol = GSSAPI grafana | logger=migrator t=2024-04-25T12:39:26.097015575Z level=info msg="Migration successfully executed" id="Convert existing annotations from seconds to milliseconds" duration=466.246µs policy-apex-pdp | sasl.kerberos.service.name = null policy-pap | ssl.endpoint.identification.algorithm = https policy-db-migrator | kafka | sasl.oauthbearer.clock.skew.seconds = 30 grafana | logger=migrator t=2024-04-25T12:39:26.102528198Z level=info msg="Executing migration" id="Add epoch_end column" policy-apex-pdp | sasl.kerberos.ticket.renew.jitter = 0.05 policy-pap | ssl.engine.factory.class = null policy-db-migrator | kafka | sasl.oauthbearer.expected.audience = null grafana | logger=migrator t=2024-04-25T12:39:26.110116059Z level=info msg="Migration successfully executed" id="Add epoch_end column" duration=7.587041ms policy-apex-pdp | sasl.kerberos.ticket.renew.window.factor = 0.8 kafka | sasl.oauthbearer.expected.issuer = null policy-apex-pdp | sasl.login.callback.handler.class = null policy-pap | ssl.key.password = null policy-db-migrator | > upgrade 0450-pdpgroup.sql grafana | logger=migrator t=2024-04-25T12:39:26.115871774Z level=info msg="Executing migration" id="Add index for epoch_end" policy-apex-pdp | sasl.login.class = null policy-pap 
| ssl.keymanager.algorithm = SunX509 policy-db-migrator | -------------- kafka | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 grafana | logger=migrator t=2024-04-25T12:39:26.116894538Z level=info msg="Migration successfully executed" id="Add index for epoch_end" duration=1.024154ms policy-apex-pdp | sasl.login.connect.timeout.ms = null policy-pap | ssl.keystore.certificate.chain = null policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpgroup (`DESCRIPTION` VARCHAR(255) NULL, PDPGROUPSTATE INT DEFAULT NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_PDPGROUP (name, version)) kafka | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 policy-apex-pdp | sasl.login.read.timeout.ms = null policy-pap | ssl.keystore.key = null policy-db-migrator | -------------- kafka | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 grafana | logger=migrator t=2024-04-25T12:39:26.122724475Z level=info msg="Executing migration" id="Make epoch_end the same as epoch" policy-apex-pdp | sasl.login.refresh.buffer.seconds = 300 policy-pap | ssl.keystore.location = null policy-db-migrator | kafka | sasl.oauthbearer.jwks.endpoint.url = null grafana | logger=migrator t=2024-04-25T12:39:26.123020278Z level=info msg="Migration successfully executed" id="Make epoch_end the same as epoch" duration=295.423µs policy-apex-pdp | sasl.login.refresh.min.period.seconds = 60 policy-pap | ssl.keystore.password = null policy-db-migrator | kafka | sasl.oauthbearer.scope.claim.name = scope grafana | logger=migrator t=2024-04-25T12:39:26.127171404Z level=info msg="Executing migration" id="Move region to single row" policy-apex-pdp | sasl.login.refresh.window.factor = 0.8 policy-pap | ssl.keystore.type = JKS policy-db-migrator | > upgrade 0460-pdppolicystatus.sql kafka | sasl.oauthbearer.sub.claim.name = sub grafana | logger=migrator t=2024-04-25T12:39:26.127749421Z level=info msg="Migration successfully executed" id="Move region to single row" duration=577.117µs 
policy-apex-pdp | sasl.login.refresh.window.jitter = 0.05 policy-pap | ssl.protocol = TLSv1.3 policy-db-migrator | -------------- kafka | sasl.oauthbearer.token.endpoint.url = null grafana | logger=migrator t=2024-04-25T12:39:26.132975Z level=info msg="Executing migration" id="Remove index org_id_epoch from annotation table" policy-apex-pdp | sasl.login.retry.backoff.max.ms = 10000 policy-pap | ssl.provider = null policy-db-migrator | CREATE TABLE IF NOT EXISTS pdppolicystatus (DEPLOY BOOLEAN DEFAULT 0, PDPGROUP VARCHAR(255) DEFAULT NULL, PDPTYPE VARCHAR(255) DEFAULT NULL, STATE INT DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, PRIMARY KEY PK_PDPPOLICYSTATUS (parentLocalName, localName, parentKeyVersion, parentKeyName)) kafka | sasl.server.callback.handler.class = null policy-apex-pdp | sasl.login.retry.backoff.ms = 100 policy-pap | ssl.secure.random.implementation = null policy-db-migrator | -------------- kafka | sasl.server.max.receive.size = 524288 grafana | logger=migrator t=2024-04-25T12:39:26.134369528Z level=info msg="Migration successfully executed" id="Remove index org_id_epoch from annotation table" duration=1.394108ms policy-apex-pdp | sasl.mechanism = GSSAPI policy-pap | ssl.trustmanager.algorithm = PKIX policy-db-migrator | kafka | security.inter.broker.protocol = PLAINTEXT grafana | logger=migrator t=2024-04-25T12:39:26.140101294Z level=info msg="Executing migration" id="Remove index org_id_dashboard_id_panel_id_epoch from annotation table" policy-apex-pdp | sasl.oauthbearer.clock.skew.seconds = 30 policy-pap | ssl.truststore.certificates = null policy-db-migrator | kafka | security.providers = null grafana | logger=migrator t=2024-04-25T12:39:26.141019527Z level=info msg="Migration successfully executed" id="Remove index org_id_dashboard_id_panel_id_epoch from annotation 
table" duration=916.083µs policy-apex-pdp | sasl.oauthbearer.expected.audience = null policy-pap | ssl.truststore.location = null policy-db-migrator | > upgrade 0470-pdp.sql kafka | server.max.startup.time.ms = 9223372036854775807 grafana | logger=migrator t=2024-04-25T12:39:26.146837284Z level=info msg="Executing migration" id="Add index for org_id_dashboard_id_epoch_end_epoch on annotation table" policy-apex-pdp | sasl.oauthbearer.expected.issuer = null policy-pap | ssl.truststore.password = null policy-db-migrator | -------------- kafka | socket.connection.setup.timeout.max.ms = 30000 grafana | logger=migrator t=2024-04-25T12:39:26.148169701Z level=info msg="Migration successfully executed" id="Add index for org_id_dashboard_id_epoch_end_epoch on annotation table" duration=1.330027ms policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 policy-pap | ssl.truststore.type = JKS policy-db-migrator | CREATE TABLE IF NOT EXISTS pdp (HEALTHY INT DEFAULT NULL, MESSAGE VARCHAR(255) DEFAULT NULL, PDPSTATE INT DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_PDP (parentLocalName, localName, parentKeyVersion, parentKeyName)) kafka | socket.connection.setup.timeout.ms = 10000 grafana | logger=migrator t=2024-04-25T12:39:26.155284405Z level=info msg="Executing migration" id="Add index for org_id_epoch_end_epoch on annotation table" policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-db-migrator | -------------- kafka | socket.listen.backlog.size = 50 grafana | logger=migrator t=2024-04-25T12:39:26.156583972Z level=info msg="Migration successfully executed" id="Add index for org_id_epoch_end_epoch on annotation table" duration=1.302147ms policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 
policy-pap | policy-db-migrator | kafka | socket.receive.buffer.bytes = 102400 grafana | logger=migrator t=2024-04-25T12:39:26.161328555Z level=info msg="Executing migration" id="Remove index org_id_epoch_epoch_end from annotation table" policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.url = null policy-pap | [2024-04-25T12:39:59.216+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 policy-db-migrator | kafka | socket.request.max.bytes = 104857600 grafana | logger=migrator t=2024-04-25T12:39:26.162007073Z level=info msg="Migration successfully executed" id="Remove index org_id_epoch_epoch_end from annotation table" duration=679.768µs policy-apex-pdp | sasl.oauthbearer.scope.claim.name = scope policy-pap | [2024-04-25T12:39:59.216+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 policy-db-migrator | > upgrade 0480-pdpstatistics.sql kafka | socket.send.buffer.bytes = 102400 grafana | logger=migrator t=2024-04-25T12:39:26.167315143Z level=info msg="Executing migration" id="Add index for alert_id on annotation table" policy-apex-pdp | sasl.oauthbearer.sub.claim.name = sub policy-pap | [2024-04-25T12:39:59.216+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1714048799216 policy-db-migrator | -------------- kafka | ssl.cipher.suites = [] grafana | logger=migrator t=2024-04-25T12:39:26.168360547Z level=info msg="Migration successfully executed" id="Add index for alert_id on annotation table" duration=994.343µs policy-apex-pdp | sasl.oauthbearer.token.endpoint.url = null policy-pap | [2024-04-25T12:39:59.217+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-policy-pap-2, groupId=policy-pap] Subscribed to topic(s): policy-pdp-pap policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpstatistics (PDPGROUPNAME VARCHAR(120) NULL, PDPSUBGROUPNAME VARCHAR(120) NULL, POLICYDEPLOYCOUNT BIGINT DEFAULT NULL, POLICYDEPLOYFAILCOUNT BIGINT DEFAULT NULL, POLICYDEPLOYSUCCESSCOUNT BIGINT DEFAULT NULL, POLICYEXECUTEDCOUNT BIGINT DEFAULT NULL, POLICYEXECUTEDFAILCOUNT 
BIGINT DEFAULT NULL, POLICYEXECUTEDSUCCESSCOUNT BIGINT DEFAULT NULL, timeStamp datetime NOT NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_PDPSTATISTICS (timeStamp, name, version)) kafka | ssl.client.auth = none grafana | logger=migrator t=2024-04-25T12:39:26.175903537Z level=info msg="Executing migration" id="Increase tags column to length 4096" policy-pap | [2024-04-25T12:39:59.710+00:00|INFO|PapDatabaseInitializer|main] Created initial pdpGroup in DB - PdpGroups(groups=[PdpGroup(name=defaultGroup, description=The default group that registers all supported policy types and pdps., pdpGroupState=ACTIVE, properties=null, pdpSubgroups=[PdpSubGroup(pdpType=apex, supportedPolicyTypes=[onap.policies.controlloop.operational.common.Apex 1.0.0, onap.policies.native.Apex 1.0.0], policies=[], currentInstanceCount=0, desiredInstanceCount=1, properties=null, pdpInstances=null)])]) from /opt/app/policy/pap/etc/mounted/groups.json policy-apex-pdp | security.protocol = PLAINTEXT policy-db-migrator | -------------- kafka | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] grafana | logger=migrator t=2024-04-25T12:39:26.176033179Z level=info msg="Migration successfully executed" id="Increase tags column to length 4096" duration=131.392µs policy-apex-pdp | security.providers = null policy-db-migrator | policy-pap | [2024-04-25T12:39:59.856+00:00|WARN|JpaBaseConfiguration$JpaWebConfiguration|main] spring.jpa.open-in-view is enabled by default. Therefore, database queries may be performed during view rendering. 
Explicitly configure spring.jpa.open-in-view to disable this warning kafka | ssl.endpoint.identification.algorithm = https grafana | logger=migrator t=2024-04-25T12:39:26.181939316Z level=info msg="Executing migration" id="create test_data table" policy-apex-pdp | send.buffer.bytes = 131072 policy-db-migrator | policy-pap | [2024-04-25T12:40:00.107+00:00|INFO|DefaultSecurityFilterChain|main] Will secure any request with [org.springframework.security.web.session.DisableEncodeUrlFilter@40db6136, org.springframework.security.web.context.request.async.WebAsyncManagerIntegrationFilter@5ced0537, org.springframework.security.web.context.SecurityContextHolderFilter@50e24ea4, org.springframework.security.web.header.HeaderWriterFilter@3605ab16, org.springframework.security.web.authentication.logout.LogoutFilter@2befb16f, org.springframework.security.web.authentication.www.BasicAuthenticationFilter@78ea700f, org.springframework.security.web.savedrequest.RequestCacheAwareFilter@22172b00, org.springframework.security.web.servletapi.SecurityContextHolderAwareRequestFilter@4205d5d0, org.springframework.security.web.authentication.AnonymousAuthenticationFilter@6ee1ddcf, org.springframework.security.web.access.ExceptionTranslationFilter@2e7517aa, org.springframework.security.web.access.intercept.AuthorizationFilter@23d23d98] kafka | ssl.engine.factory.class = null grafana | logger=migrator t=2024-04-25T12:39:26.183255764Z level=info msg="Migration successfully executed" id="create test_data table" duration=1.315418ms policy-apex-pdp | socket.connection.setup.timeout.max.ms = 30000 policy-db-migrator | > upgrade 0490-pdpsubgroup_pdp.sql policy-pap | [2024-04-25T12:40:00.908+00:00|INFO|EndpointLinksResolver|main] Exposing 3 endpoint(s) beneath base path '' kafka | ssl.key.password = null grafana | logger=migrator t=2024-04-25T12:39:26.194614774Z level=info msg="Executing migration" id="create dashboard_version table v1" policy-apex-pdp | socket.connection.setup.timeout.ms = 10000 
policy-db-migrator | -------------- policy-pap | [2024-04-25T12:40:01.000+00:00|INFO|Http11NioProtocol|main] Starting ProtocolHandler ["http-nio-6969"] kafka | ssl.keymanager.algorithm = SunX509 grafana | logger=migrator t=2024-04-25T12:39:26.195480265Z level=info msg="Migration successfully executed" id="create dashboard_version table v1" duration=864.791µs policy-apex-pdp | ssl.cipher.suites = null policy-pap | [2024-04-25T12:40:01.013+00:00|INFO|TomcatWebServer|main] Tomcat started on port(s): 6969 (http) with context path '/policy/pap/v1' policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpsubgroup_pdp (pdpParentKeyName VARCHAR(120) NOT NULL, pdpParentKeyVersion VARCHAR(15) NOT NULL, pdpParentLocalName VARCHAR(120) NOT NULL, pdpLocalName VARCHAR(120) NOT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_PDPSUBGROUP_PDP (pdpParentKeyName, pdpParentKeyVersion, pdpParentLocalName, pdpLocalName, parentLocalName, localName, parentKeyVersion, parentKeyName)) kafka | ssl.keystore.certificate.chain = null grafana | logger=migrator t=2024-04-25T12:39:26.208839202Z level=info msg="Executing migration" id="add index dashboard_version.dashboard_id" policy-apex-pdp | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] policy-pap | [2024-04-25T12:40:01.029+00:00|INFO|ServiceManager|main] Policy PAP starting policy-db-migrator | -------------- kafka | ssl.keystore.key = null grafana | logger=migrator t=2024-04-25T12:39:26.210326851Z level=info msg="Migration successfully executed" id="add index dashboard_version.dashboard_id" duration=1.495739ms policy-apex-pdp | ssl.endpoint.identification.algorithm = https policy-pap | [2024-04-25T12:40:01.029+00:00|INFO|ServiceManager|main] Policy PAP starting Meter Registry policy-db-migrator | kafka | ssl.keystore.location = null policy-apex-pdp | ssl.engine.factory.class = null grafana | logger=migrator 
t=2024-04-25T12:39:26.214418106Z level=info msg="Executing migration" id="add unique index dashboard_version.dashboard_id and dashboard_version.version" policy-pap | [2024-04-25T12:40:01.030+00:00|INFO|ServiceManager|main] Policy PAP starting PAP parameters policy-db-migrator | kafka | ssl.keystore.password = null policy-apex-pdp | ssl.key.password = null grafana | logger=migrator t=2024-04-25T12:39:26.21556163Z level=info msg="Migration successfully executed" id="add unique index dashboard_version.dashboard_id and dashboard_version.version" duration=1.143064ms policy-pap | [2024-04-25T12:40:01.031+00:00|INFO|ServiceManager|main] Policy PAP starting Pdp Heartbeat Listener policy-db-migrator | > upgrade 0500-pdpsubgroup.sql kafka | ssl.keystore.type = JKS policy-apex-pdp | ssl.keymanager.algorithm = SunX509 grafana | logger=migrator t=2024-04-25T12:39:26.222060777Z level=info msg="Executing migration" id="Set dashboard version to 1 where 0" policy-pap | [2024-04-25T12:40:01.031+00:00|INFO|ServiceManager|main] Policy PAP starting Response Request ID Dispatcher policy-db-migrator | -------------- kafka | ssl.principal.mapping.rules = DEFAULT policy-apex-pdp | ssl.keystore.certificate.chain = null grafana | logger=migrator t=2024-04-25T12:39:26.222252889Z level=info msg="Migration successfully executed" id="Set dashboard version to 1 where 0" duration=190.482µs policy-pap | [2024-04-25T12:40:01.031+00:00|INFO|ServiceManager|main] Policy PAP starting Heartbeat Request ID Dispatcher policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpsubgroup (CURRENTINSTANCECOUNT INT DEFAULT NULL, DESIREDINSTANCECOUNT INT DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_PDPSUBGROUP (parentLocalName, localName, parentKeyVersion, parentKeyName)) kafka | ssl.protocol = TLSv1.3 policy-apex-pdp | ssl.keystore.key = null grafana | logger=migrator 
t=2024-04-25T12:39:26.230079912Z level=info msg="Executing migration" id="save existing dashboard data in dashboard_version table v1" policy-pap | [2024-04-25T12:40:01.031+00:00|INFO|ServiceManager|main] Policy PAP starting Response Message Dispatcher policy-db-migrator | -------------- kafka | ssl.provider = null policy-apex-pdp | ssl.keystore.location = null grafana | logger=migrator t=2024-04-25T12:39:26.230675221Z level=info msg="Migration successfully executed" id="save existing dashboard data in dashboard_version table v1" duration=595.289µs policy-pap | [2024-04-25T12:40:01.033+00:00|INFO|TopicBase|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=53d3b957-3026-4843-bc4f-55d426241089, consumerInstance=policy-pap, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: registering org.onap.policy.common.endpoints.listeners.MessageTypeDispatcher@15a8bbe5 policy-db-migrator | kafka | ssl.secure.random.implementation = null policy-apex-pdp | ssl.keystore.password = null grafana | logger=migrator t=2024-04-25T12:39:26.237668502Z level=info msg="Executing migration" id="alter dashboard_version.data to mediumtext v1" policy-pap | [2024-04-25T12:40:01.046+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=53d3b957-3026-4843-bc4f-55d426241089, consumerInstance=policy-pap, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=1, 
toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting policy-db-migrator | kafka | ssl.trustmanager.algorithm = PKIX policy-apex-pdp | ssl.keystore.type = JKS grafana | logger=migrator t=2024-04-25T12:39:26.237772944Z level=info msg="Migration successfully executed" id="alter dashboard_version.data to mediumtext v1" duration=105.692µs policy-pap | [2024-04-25T12:40:01.047+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: policy-db-migrator | > upgrade 0510-toscacapabilityassignment.sql kafka | ssl.truststore.certificates = null policy-apex-pdp | ssl.protocol = TLSv1.3 grafana | logger=migrator t=2024-04-25T12:39:26.244454552Z level=info msg="Executing migration" id="create team table" policy-pap | allow.auto.create.topics = true policy-db-migrator | -------------- kafka | ssl.truststore.location = null policy-apex-pdp | ssl.provider = null grafana | logger=migrator t=2024-04-25T12:39:26.245773189Z level=info msg="Migration successfully executed" id="create team table" duration=1.316477ms policy-pap | auto.commit.interval.ms = 5000 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilityassignment (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, type_name VARCHAR(255) NULL, type_version VARCHAR(255) NULL, PRIMARY KEY PK_TOSCACAPABILITYASSIGNMENT(name, version)) kafka | ssl.truststore.password = null policy-apex-pdp | ssl.secure.random.implementation = null grafana | logger=migrator t=2024-04-25T12:39:26.249710251Z level=info msg="Executing migration" id="add index team.org_id" policy-pap | auto.include.jmx.reporter = true policy-db-migrator | -------------- kafka | ssl.truststore.type = JKS policy-apex-pdp | ssl.trustmanager.algorithm = 
PKIX grafana | logger=migrator t=2024-04-25T12:39:26.251308873Z level=info msg="Migration successfully executed" id="add index team.org_id" duration=1.605142ms policy-pap | auto.offset.reset = latest policy-db-migrator | kafka | transaction.abort.timed.out.transaction.cleanup.interval.ms = 10000 policy-apex-pdp | ssl.truststore.certificates = null grafana | logger=migrator t=2024-04-25T12:39:26.257571915Z level=info msg="Executing migration" id="add unique index team_org_id_name" policy-pap | bootstrap.servers = [kafka:9092] policy-db-migrator | kafka | transaction.max.timeout.ms = 900000 policy-apex-pdp | ssl.truststore.location = null grafana | logger=migrator t=2024-04-25T12:39:26.258534288Z level=info msg="Migration successfully executed" id="add unique index team_org_id_name" duration=960.003µs policy-pap | check.crcs = true policy-db-migrator | > upgrade 0520-toscacapabilityassignments.sql kafka | transaction.partition.verification.enable = true policy-apex-pdp | ssl.truststore.password = null grafana | logger=migrator t=2024-04-25T12:39:26.269486402Z level=info msg="Executing migration" id="Add column uid in team" policy-pap | client.dns.lookup = use_all_dns_ips policy-db-migrator | -------------- kafka | transaction.remove.expired.transaction.cleanup.interval.ms = 3600000 policy-apex-pdp | ssl.truststore.type = JKS grafana | logger=migrator t=2024-04-25T12:39:26.278565223Z level=info msg="Migration successfully executed" id="Add column uid in team" duration=9.076811ms policy-pap | client.id = consumer-53d3b957-3026-4843-bc4f-55d426241089-3 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilityassignments (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCACAPABILITYASSIGNMENTS (name, version)) kafka | transaction.state.log.load.buffer.size = 5242880 policy-apex-pdp | transaction.timeout.ms = 60000 grafana | logger=migrator t=2024-04-25T12:39:26.282406043Z level=info msg="Executing migration" id="Update uid column values 
in team" policy-pap | client.rack = policy-db-migrator | -------------- kafka | transaction.state.log.min.isr = 2 policy-apex-pdp | transactional.id = null grafana | logger=migrator t=2024-04-25T12:39:26.282648066Z level=info msg="Migration successfully executed" id="Update uid column values in team" duration=241.673µs policy-pap | connections.max.idle.ms = 540000 policy-db-migrator | kafka | transaction.state.log.num.partitions = 50 policy-apex-pdp | value.serializer = class org.apache.kafka.common.serialization.StringSerializer grafana | logger=migrator t=2024-04-25T12:39:26.288836238Z level=info msg="Executing migration" id="Add unique index team_org_id_uid" policy-pap | default.api.timeout.ms = 60000 policy-db-migrator | kafka | transaction.state.log.replication.factor = 3 policy-apex-pdp | grafana | logger=migrator t=2024-04-25T12:39:26.289895812Z level=info msg="Migration successfully executed" id="Add unique index team_org_id_uid" duration=1.059004ms policy-pap | enable.auto.commit = true policy-db-migrator | > upgrade 0530-toscacapabilityassignments_toscacapabilityassignment.sql kafka | transaction.state.log.segment.bytes = 104857600 policy-apex-pdp | [2024-04-25T12:40:02.822+00:00|INFO|KafkaProducer|main] [Producer clientId=producer-1] Instantiated an idempotent producer. 
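The policy-db-migrator statements interleaved above (scripts 0490 through 0560) all use `CREATE TABLE IF NOT EXISTS`, so re-running a migration against an already-migrated schema is a no-op rather than a hard failure. A small sketch of that idempotency using Python's built-in sqlite3 as a stand-in for the real MariaDB target (the named `PRIMARY KEY PK_...` constraint syntax from the scripts is MySQL-specific and is dropped here):

```python
import sqlite3

# In-memory database standing in for the policy DB.
conn = sqlite3.connect(":memory:")

# Same DDL shape as the migrator scripts: IF NOT EXISTS makes it idempotent.
# (sqlite does not accept MySQL's named-constraint form, so the plain
# composite primary key is used instead.)
ddl = """CREATE TABLE IF NOT EXISTS toscacapabilityassignments (
    name VARCHAR(120) NOT NULL,
    version VARCHAR(20) NOT NULL,
    PRIMARY KEY (name, version)
)"""

conn.execute(ddl)
conn.execute(ddl)  # second run is a no-op, not a "table already exists" error

tables = [r[0] for r in conn.execute(
    "SELECT name FROM sqlite_master WHERE type='table'")]
print(tables)  # ['toscacapabilityassignments']
```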
grafana | logger=migrator t=2024-04-25T12:39:26.293877964Z level=info msg="Executing migration" id="create team member table" policy-pap | exclude.internal.topics = true policy-db-migrator | -------------- kafka | transactional.id.expiration.ms = 604800000 policy-apex-pdp | [2024-04-25T12:40:02.838+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 grafana | logger=migrator t=2024-04-25T12:39:26.295327754Z level=info msg="Migration successfully executed" id="create team member table" duration=1.44868ms policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilityassignments_toscacapabilityassignment (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCACAPABILITYASSIGNMENTS_TOSCACAPABILITYASSIGNMENT (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) kafka | unclean.leader.election.enable = false policy-apex-pdp | [2024-04-25T12:40:02.838+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 grafana | logger=migrator t=2024-04-25T12:39:26.302117053Z level=info msg="Executing migration" id="add index team_member.org_id" policy-pap | fetch.max.bytes = 52428800 policy-db-migrator | -------------- kafka | unstable.api.versions.enable = false policy-apex-pdp | [2024-04-25T12:40:02.838+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1714048802838 grafana | logger=migrator t=2024-04-25T12:39:26.303133937Z level=info msg="Migration successfully executed" id="add index team_member.org_id" duration=1.015983ms policy-pap | fetch.max.wait.ms = 500 policy-db-migrator | kafka | zookeeper.clientCnxnSocket = null policy-apex-pdp | [2024-04-25T12:40:02.838+00:00|INFO|InlineKafkaTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink 
[partitionId=130d2ddf-3838-4a13-ace3-2e823e62f537, alive=false, publisher=KafkaPublisherWrapper []]]: KAFKA SINK created grafana | logger=migrator t=2024-04-25T12:39:26.350560603Z level=info msg="Executing migration" id="add unique index team_member_org_id_team_id_user_id" policy-pap | fetch.min.bytes = 1 policy-db-migrator | kafka | zookeeper.connect = zookeeper:2181 policy-apex-pdp | [2024-04-25T12:40:02.839+00:00|INFO|ServiceManager|main] service manager starting set alive grafana | logger=migrator t=2024-04-25T12:39:26.352740662Z level=info msg="Migration successfully executed" id="add unique index team_member_org_id_team_id_user_id" duration=2.180729ms policy-pap | group.id = 53d3b957-3026-4843-bc4f-55d426241089 policy-db-migrator | > upgrade 0540-toscacapabilitytype.sql kafka | zookeeper.connection.timeout.ms = null policy-apex-pdp | [2024-04-25T12:40:02.839+00:00|INFO|ServiceManager|main] service manager starting register pdp status context object grafana | logger=migrator t=2024-04-25T12:39:26.36323211Z level=info msg="Executing migration" id="add index team_member.team_id" policy-pap | group.instance.id = null policy-db-migrator | -------------- kafka | zookeeper.max.in.flight.requests = 10 policy-apex-pdp | [2024-04-25T12:40:02.840+00:00|INFO|ServiceManager|main] service manager starting topic sinks grafana | logger=migrator t=2024-04-25T12:39:26.364251713Z level=info msg="Migration successfully executed" id="add index team_member.team_id" duration=1.019053ms policy-pap | heartbeat.interval.ms = 3000 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilitytype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCACAPABILITYTYPE (name, version)) kafka | zookeeper.metadata.migration.enable = false policy-apex-pdp | [2024-04-25T12:40:02.840+00:00|INFO|ServiceManager|main] service manager starting Pdp Status publisher 
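The migrator's `> upgrade NNNN-*.sql` lines step through scripts in ascending numeric order (0490-pdpsubgroup_pdp.sql, 0500-pdpsubgroup.sql, 0510-toscacapabilityassignment.sql, ...). The runner itself is not shown in this log, but the zero-padded four-digit prefix implies the ordering convention sketched below; the sorting code is an assumption, not the actual migrator implementation:

```python
# Assumed convention: apply migration scripts in ascending numeric-prefix
# order. The zero-padded 4-digit prefix makes numeric and lexicographic
# order coincide, but sorting on the parsed integer is explicit about intent.
scripts = [
    "0530-toscacapabilityassignments_toscacapabilityassignment.sql",
    "0490-pdpsubgroup_pdp.sql",
    "0510-toscacapabilityassignment.sql",
    "0500-pdpsubgroup.sql",
]

ordered = sorted(scripts, key=lambda s: int(s.split("-", 1)[0]))

print([s.split("-", 1)[0] for s in ordered])
# ['0490', '0500', '0510', '0530']
```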
grafana | logger=migrator t=2024-04-25T12:39:26.368766274Z level=info msg="Executing migration" id="Add column email to team table" policy-pap | interceptor.classes = [] policy-db-migrator | -------------- kafka | zookeeper.metadata.migration.min.batch.size = 200 policy-apex-pdp | [2024-04-25T12:40:02.842+00:00|INFO|ServiceManager|main] service manager starting Register pdp update listener grafana | logger=migrator t=2024-04-25T12:39:26.377296166Z level=info msg="Migration successfully executed" id="Add column email to team table" duration=8.530462ms policy-pap | internal.leave.group.on.close = true policy-db-migrator | kafka | zookeeper.session.timeout.ms = 18000 policy-apex-pdp | [2024-04-25T12:40:02.842+00:00|INFO|ServiceManager|main] service manager starting Register pdp state change request dispatcher grafana | logger=migrator t=2024-04-25T12:39:26.383598709Z level=info msg="Executing migration" id="Add column external to team_member table" policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false policy-db-migrator | kafka | zookeeper.set.acl = false policy-apex-pdp | [2024-04-25T12:40:02.842+00:00|INFO|ServiceManager|main] service manager starting Message Dispatcher grafana | logger=migrator t=2024-04-25T12:39:26.388283061Z level=info msg="Migration successfully executed" id="Add column external to team_member table" duration=4.680882ms policy-pap | isolation.level = read_uncommitted policy-db-migrator | > upgrade 0550-toscacapabilitytypes.sql kafka | zookeeper.ssl.cipher.suites = null policy-apex-pdp | [2024-04-25T12:40:02.842+00:00|INFO|TopicBase|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=4b79aeb3-604a-4e33-80d9-cdeedf19ce63, consumerInstance=policy-apex-pdp, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-pdp-pap,5,main], topicListeners=0, 
toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: registering org.onap.policy.common.endpoints.listeners.MessageTypeDispatcher@607fbe09 grafana | logger=migrator t=2024-04-25T12:39:26.393019434Z level=info msg="Executing migration" id="Add column permission to team_member table" policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-db-migrator | -------------- kafka | zookeeper.ssl.client.enable = false policy-apex-pdp | [2024-04-25T12:40:02.843+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=4b79aeb3-604a-4e33-80d9-cdeedf19ce63, consumerInstance=policy-apex-pdp, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-pdp-pap,5,main], topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: register: start not attempted grafana | logger=migrator t=2024-04-25T12:39:26.398312923Z level=info msg="Migration successfully executed" id="Add column permission to team_member table" duration=5.292879ms policy-pap | max.partition.fetch.bytes = 1048576 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilitytypes (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCACAPABILITYTYPES (name, version)) kafka | zookeeper.ssl.crl.enable = false policy-apex-pdp | [2024-04-25T12:40:02.843+00:00|INFO|ServiceManager|main] service manager starting Create REST server grafana | logger=migrator 
t=2024-04-25T12:39:26.405891944Z level=info msg="Executing migration" id="create dashboard acl table" policy-pap | max.poll.interval.ms = 300000 policy-db-migrator | -------------- kafka | zookeeper.ssl.enabled.protocols = null policy-apex-pdp | [2024-04-25T12:40:02.855+00:00|INFO|OrderedServiceImpl|Timer-0] ***** OrderedServiceImpl implementers: grafana | logger=migrator t=2024-04-25T12:39:26.406885186Z level=info msg="Migration successfully executed" id="create dashboard acl table" duration=992.622µs policy-pap | max.poll.records = 500 policy-db-migrator | kafka | zookeeper.ssl.endpoint.identification.algorithm = HTTPS policy-apex-pdp | [] grafana | logger=migrator t=2024-04-25T12:39:26.412307538Z level=info msg="Executing migration" id="add index dashboard_acl_dashboard_id" policy-pap | metadata.max.age.ms = 300000 policy-db-migrator | kafka | zookeeper.ssl.keystore.location = null policy-apex-pdp | [2024-04-25T12:40:02.860+00:00|INFO|network|Timer-0] [OUT|KAFKA|policy-pdp-pap] grafana | logger=migrator t=2024-04-25T12:39:26.414074822Z level=info msg="Migration successfully executed" id="add index dashboard_acl_dashboard_id" duration=1.765894ms policy-pap | metric.reporters = [] policy-db-migrator | > upgrade 0560-toscacapabilitytypes_toscacapabilitytype.sql kafka | zookeeper.ssl.keystore.password = null policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"a0278ad7-a33f-4693-8b54-fde3c5ffe2e1","timestampMs":1714048802842,"name":"apex-c1762bbf-462b-4754-b2e2-2796b5f05a40","pdpGroup":"defaultGroup"} grafana | logger=migrator t=2024-04-25T12:39:26.419596305Z level=info msg="Executing migration" id="add unique index dashboard_acl_dashboard_id_user_id" policy-pap | metrics.num.samples = 2 policy-db-migrator | -------------- kafka | zookeeper.ssl.keystore.type = null policy-apex-pdp | [2024-04-25T12:40:03.002+00:00|INFO|ServiceManager|main] service manager starting Rest Server 
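The first message the new apex PDP publishes on the policy-pdp-pap topic is the PDP_STATUS heartbeat shown above. A consumer-side sketch of unpacking that payload with Python's json module (field values are copied verbatim from the logged message; routing on `messageName` mirrors what a message dispatcher would do, and is an assumption about the real listener):

```python
import json

# Heartbeat payload as logged on the policy-pdp-pap topic (values from the log).
raw = ('{"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY",'
       '"description":"Pdp Heartbeat","messageName":"PDP_STATUS",'
       '"requestId":"a0278ad7-a33f-4693-8b54-fde3c5ffe2e1",'
       '"timestampMs":1714048802842,'
       '"name":"apex-c1762bbf-462b-4754-b2e2-2796b5f05a40",'
       '"pdpGroup":"defaultGroup"}')

msg = json.loads(raw)

# A dispatcher would key on messageName; PAP tracks state/health per pdpGroup.
print(msg["messageName"], msg["state"], msg["pdpGroup"])
# PDP_STATUS PASSIVE defaultGroup
```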
grafana | logger=migrator t=2024-04-25T12:39:26.42146644Z level=info msg="Migration successfully executed" id="add unique index dashboard_acl_dashboard_id_user_id" duration=1.869745ms policy-pap | metrics.recording.level = INFO policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilitytypes_toscacapabilitytype (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCACAPABILITYTYPES_TOSCACAPABILITYTYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) policy-db-migrator | -------------- policy-apex-pdp | [2024-04-25T12:40:03.002+00:00|INFO|ServiceManager|main] service manager starting policy-pap | metrics.sample.window.ms = 30000 kafka | zookeeper.ssl.ocsp.enable = false kafka | zookeeper.ssl.protocol = TLSv1.2 kafka | zookeeper.ssl.truststore.location = null grafana | logger=migrator t=2024-04-25T12:39:26.429897681Z level=info msg="Executing migration" id="add unique index dashboard_acl_dashboard_id_team_id" policy-db-migrator | policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] grafana | logger=migrator t=2024-04-25T12:39:26.431188688Z level=info msg="Migration successfully executed" id="add unique index dashboard_acl_dashboard_id_team_id" duration=1.292317ms policy-apex-pdp | [2024-04-25T12:40:03.002+00:00|INFO|ServiceManager|main] service manager starting REST RestServerParameters policy-pap | receive.buffer.bytes = 65536 kafka | zookeeper.ssl.truststore.password = null grafana | logger=migrator t=2024-04-25T12:39:26.440578412Z level=info msg="Executing migration" id="add index dashboard_acl_user_id" policy-db-migrator | policy-apex-pdp | 
[2024-04-25T12:40:03.002+00:00|INFO|JettyServletServer|main] JettyJerseyServer [Jerseyservlets={/metrics=io.prometheus.client.servlet.jakarta.exporter.MetricsServlet-21694e53==io.prometheus.client.servlet.jakarta.exporter.MetricsServlet@2326051b{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:null,STOPPED}, /*=org.glassfish.jersey.servlet.ServletContainer-46074492==org.glassfish.jersey.servlet.ServletContainer@705041b0{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=swagger-6969, toString()=JettyServletServer(name=RestServerParameters, host=0.0.0.0, port=6969, sniHostCheck=false, user=policyadmin, password=zb!XztG34, contextPath=/, jettyServer=Server@5aabbb29{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@72c927f1{/,null,STOPPED}, connector=RestServerParameters@53ab0286{HTTP/1.1, (http/1.1)}{0.0.0.0:6969}, jettyThread=null, servlets={/metrics=io.prometheus.client.servlet.jakarta.exporter.MetricsServlet-21694e53==io.prometheus.client.servlet.jakarta.exporter.MetricsServlet@2326051b{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:null,STOPPED}, /*=org.glassfish.jersey.servlet.ServletContainer-46074492==org.glassfish.jersey.servlet.ServletContainer@705041b0{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING policy-pap | reconnect.backoff.max.ms = 1000 kafka | zookeeper.ssl.truststore.type = null grafana | logger=migrator t=2024-04-25T12:39:26.442096192Z level=info msg="Migration successfully executed" id="add index dashboard_acl_user_id" duration=1.51605ms policy-db-migrator | > upgrade 0570-toscadatatype.sql policy-apex-pdp | [2024-04-25T12:40:03.011+00:00|INFO|ServiceManager|main] service manager started policy-pap | reconnect.backoff.ms = 50 kafka | (kafka.server.KafkaConfig) grafana | logger=migrator t=2024-04-25T12:39:26.450827787Z level=info msg="Executing migration" id="add index dashboard_acl_team_id" policy-db-migrator | -------------- policy-apex-pdp | 
[2024-04-25T12:40:03.011+00:00|INFO|ServiceManager|main] service manager started policy-pap | request.timeout.ms = 30000 kafka | [2024-04-25 12:39:23,738] INFO [ThrottledChannelReaper-Fetch]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper) grafana | logger=migrator t=2024-04-25T12:39:26.452483949Z level=info msg="Migration successfully executed" id="add index dashboard_acl_team_id" duration=1.654292ms policy-db-migrator | CREATE TABLE IF NOT EXISTS toscadatatype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCADATATYPE (name, version)) policy-apex-pdp | [2024-04-25T12:40:03.011+00:00|INFO|ApexStarterMain|main] Started policy-apex-pdp service successfully. policy-pap | retry.backoff.ms = 100 kafka | [2024-04-25 12:39:23,738] INFO [ThrottledChannelReaper-Produce]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper) grafana | logger=migrator t=2024-04-25T12:39:26.456659494Z level=info msg="Executing migration" id="add index dashboard_acl_org_id_role" policy-db-migrator | -------------- policy-pap | sasl.client.callback.handler.class = null policy-apex-pdp | [2024-04-25T12:40:03.011+00:00|INFO|JettyServletServer|RestServerParameters-6969] JettyJerseyServer [Jerseyservlets={/metrics=io.prometheus.client.servlet.jakarta.exporter.MetricsServlet-21694e53==io.prometheus.client.servlet.jakarta.exporter.MetricsServlet@2326051b{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:null,STOPPED}, /*=org.glassfish.jersey.servlet.ServletContainer-46074492==org.glassfish.jersey.servlet.ServletContainer@705041b0{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=swagger-6969, toString()=JettyServletServer(name=RestServerParameters, host=0.0.0.0, port=6969, sniHostCheck=false, user=policyadmin, password=zb!XztG34, contextPath=/, jettyServer=Server@5aabbb29{STOPPED}[11.0.20,sto=0], 
context=o.e.j.s.ServletContextHandler@72c927f1{/,null,STOPPED}, connector=RestServerParameters@53ab0286{HTTP/1.1, (http/1.1)}{0.0.0.0:6969}, jettyThread=Thread[RestServerParameters-6969,5,main], servlets={/metrics=io.prometheus.client.servlet.jakarta.exporter.MetricsServlet-21694e53==io.prometheus.client.servlet.jakarta.exporter.MetricsServlet@2326051b{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:null,STOPPED}, /*=org.glassfish.jersey.servlet.ServletContainer-46074492==org.glassfish.jersey.servlet.ServletContainer@705041b0{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING policy-apex-pdp | [2024-04-25T12:40:03.138+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Error while fetching metadata with correlation id 2 : {policy-pdp-pap=UNKNOWN_TOPIC_OR_PARTITION} grafana | logger=migrator t=2024-04-25T12:39:26.458158234Z level=info msg="Migration successfully executed" id="add index dashboard_acl_org_id_role" duration=1.49823ms policy-db-migrator | policy-pap | sasl.jaas.config = null policy-apex-pdp | [2024-04-25T12:40:03.138+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 1 : {policy-pdp-pap=UNKNOWN_TOPIC_OR_PARTITION} policy-apex-pdp | [2024-04-25T12:40:03.139+00:00|INFO|Metadata|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Cluster ID: 6HLElDkITkKpDhaqvETosg grafana | logger=migrator t=2024-04-25T12:39:26.462629193Z level=info msg="Executing migration" id="add index dashboard_permission" policy-db-migrator | policy-apex-pdp | [2024-04-25T12:40:03.139+00:00|INFO|Metadata|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Cluster ID: 6HLElDkITkKpDhaqvETosg policy-apex-pdp | 
[2024-04-25T12:40:03.140+00:00|INFO|TransactionManager|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] ProducerId set to 2 with epoch 0 grafana | logger=migrator t=2024-04-25T12:39:26.463641717Z level=info msg="Migration successfully executed" id="add index dashboard_permission" duration=1.012014ms policy-db-migrator | > upgrade 0580-toscadatatypes.sql policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit policy-apex-pdp | [2024-04-25T12:40:03.242+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Error while fetching metadata with correlation id 4 : {policy-pdp-pap=UNKNOWN_TOPIC_OR_PARTITION} policy-apex-pdp | [2024-04-25T12:40:03.257+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 4 : {policy-pdp-pap=UNKNOWN_TOPIC_OR_PARTITION} grafana | logger=migrator t=2024-04-25T12:39:26.470448746Z level=info msg="Executing migration" id="save default acl rules in dashboard_acl table" policy-db-migrator | -------------- policy-pap | sasl.kerberos.min.time.before.relogin = 60000 policy-apex-pdp | [2024-04-25T12:40:03.347+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Error while fetching metadata with correlation id 6 : {policy-pdp-pap=UNKNOWN_TOPIC_OR_PARTITION} policy-apex-pdp | [2024-04-25T12:40:03.360+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 5 : {policy-pdp-pap=UNKNOWN_TOPIC_OR_PARTITION} grafana | logger=migrator t=2024-04-25T12:39:26.471272177Z level=info msg="Migration successfully executed" id="save default acl rules in dashboard_acl table" duration=819.26µs policy-db-migrator | CREATE TABLE IF NOT 
EXISTS toscadatatypes (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCADATATYPES (name, version))
policy-pap | sasl.kerberos.service.name = null
policy-apex-pdp | [2024-04-25T12:40:03.449+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Error while fetching metadata with correlation id 8 : {policy-pdp-pap=UNKNOWN_TOPIC_OR_PARTITION}
kafka | [2024-04-25 12:39:23,739] INFO [ThrottledChannelReaper-Request]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
policy-db-migrator | --------------
grafana | logger=migrator t=2024-04-25T12:39:26.475516683Z level=info msg="Executing migration" id="delete acl rules for deleted dashboards and folders"
policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05
policy-apex-pdp | [2024-04-25T12:40:03.462+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 6 : {policy-pdp-pap=UNKNOWN_TOPIC_OR_PARTITION}
kafka | [2024-04-25 12:39:23,744] INFO [ThrottledChannelReaper-ControllerMutation]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
policy-db-migrator |
grafana | logger=migrator t=2024-04-25T12:39:26.475995229Z level=info msg="Migration successfully executed" id="delete acl rules for deleted dashboards and folders" duration=478.026µs
policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8
policy-apex-pdp | [2024-04-25T12:40:03.552+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Error while fetching metadata with correlation id 10 : {policy-pdp-pap=UNKNOWN_TOPIC_OR_PARTITION}
kafka | [2024-04-25 12:39:23,770] INFO Loading logs from log dirs ArraySeq(/var/lib/kafka/data) (kafka.log.LogManager)
policy-db-migrator |
grafana | logger=migrator t=2024-04-25T12:39:26.481721065Z level=info msg="Executing migration" id="create tag table"
policy-pap | sasl.login.callback.handler.class = null
policy-apex-pdp | [2024-04-25T12:40:03.571+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 7 : {policy-pdp-pap=UNKNOWN_TOPIC_OR_PARTITION}
kafka | [2024-04-25 12:39:23,774] INFO No logs found to be loaded in /var/lib/kafka/data (kafka.log.LogManager)
policy-db-migrator | > upgrade 0590-toscadatatypes_toscadatatype.sql
grafana | logger=migrator t=2024-04-25T12:39:26.482595217Z level=info msg="Migration successfully executed" id="create tag table" duration=873.612µs
policy-pap | sasl.login.class = null
policy-apex-pdp | [2024-04-25T12:40:03.653+00:00|INFO|YamlMessageBodyHandler|RestServerParameters-6969] Accepting YAML for REST calls
kafka | [2024-04-25 12:39:23,782] INFO Loaded 0 logs in 12ms (kafka.log.LogManager)
policy-db-migrator | --------------
grafana | logger=migrator t=2024-04-25T12:39:26.493890606Z level=info msg="Executing migration" id="add index tag.key_value"
policy-pap | sasl.login.connect.timeout.ms = null
policy-apex-pdp | [2024-04-25T12:40:03.653+00:00|INFO|GsonMessageBodyHandler|RestServerParameters-6969] Using GSON for REST calls
kafka | [2024-04-25 12:39:23,784] INFO Starting log cleanup with a period of 300000 ms. (kafka.log.LogManager)
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscadatatypes_toscadatatype (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCADATATYPES_TOSCADATATYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion))
grafana | logger=migrator t=2024-04-25T12:39:26.495636959Z level=info msg="Migration successfully executed" id="add index tag.key_value" duration=1.747743ms
grafana | logger=migrator t=2024-04-25T12:39:26.671743614Z level=info msg="Executing migration" id="create login attempt table"
policy-apex-pdp | [2024-04-25T12:40:03.654+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Error while fetching metadata with correlation id 12 : {policy-pdp-pap=UNKNOWN_TOPIC_OR_PARTITION}
kafka | [2024-04-25 12:39:23,785] INFO Starting log flusher with a default period of 9223372036854775807 ms. (kafka.log.LogManager)
policy-db-migrator | --------------
policy-pap | sasl.login.read.timeout.ms = null
grafana | logger=migrator t=2024-04-25T12:39:26.673214884Z level=info msg="Migration successfully executed" id="create login attempt table" duration=1.473ms
policy-apex-pdp | [2024-04-25T12:40:03.675+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 8 : {policy-pdp-pap=UNKNOWN_TOPIC_OR_PARTITION}
kafka | [2024-04-25 12:39:23,794] INFO Starting the log cleaner (kafka.log.LogCleaner)
policy-db-migrator |
policy-pap | sasl.login.refresh.buffer.seconds = 300
grafana | logger=migrator t=2024-04-25T12:39:26.683031333Z level=info msg="Executing migration" id="add index login_attempt.username"
policy-apex-pdp | [2024-04-25T12:40:03.756+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Error while fetching metadata with correlation id 14 : {policy-pdp-pap=UNKNOWN_TOPIC_OR_PARTITION}
kafka | [2024-04-25 12:39:23,836] INFO [kafka-log-cleaner-thread-0]: Starting (kafka.log.LogCleaner$CleanerThread)
policy-db-migrator |
policy-pap | sasl.login.refresh.min.period.seconds = 60
grafana | logger=migrator t=2024-04-25T12:39:26.683999676Z level=info msg="Migration successfully executed" id="add index login_attempt.username" duration=966.883µs
policy-apex-pdp | [2024-04-25T12:40:03.778+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 9 : {policy-pdp-pap=UNKNOWN_TOPIC_OR_PARTITION}
kafka | [2024-04-25 12:39:23,865] INFO [feature-zk-node-event-process-thread]: Starting (kafka.server.FinalizedFeatureChangeListener$ChangeNotificationProcessorThread)
policy-db-migrator | > upgrade 0600-toscanodetemplate.sql
policy-pap | sasl.login.refresh.window.factor = 0.8
grafana | logger=migrator t=2024-04-25T12:39:26.690600143Z level=info msg="Executing migration" id="drop index IDX_login_attempt_username - v1"
policy-apex-pdp | [2024-04-25T12:40:03.859+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Error while fetching metadata with correlation id 16 : {policy-pdp-pap=UNKNOWN_TOPIC_OR_PARTITION}
kafka | [2024-04-25 12:39:23,879] INFO Feature ZK node at path: /feature does not exist (kafka.server.FinalizedFeatureChangeListener)
policy-db-migrator | --------------
policy-pap | sasl.login.refresh.window.jitter = 0.05
grafana | logger=migrator t=2024-04-25T12:39:26.692238314Z level=info msg="Migration successfully executed" id="drop index IDX_login_attempt_username - v1" duration=1.640511ms
policy-apex-pdp | [2024-04-25T12:40:03.882+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 10 : {policy-pdp-pap=UNKNOWN_TOPIC_OR_PARTITION}
kafka | [2024-04-25 12:39:23,907] INFO [zk-broker-1-to-controller-forwarding-channel-manager]: Starting (kafka.server.BrokerToControllerRequestThread)
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetemplate (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, type_name VARCHAR(255) NULL, type_version VARCHAR(255) NULL, capabilitiesName VARCHAR(120) NULL, capabilitiesVersion VARCHAR(20) NULL, requirementsVersion VARCHAR(20) NULL, requirementsName VARCHAR(120) NULL, PRIMARY KEY PK_TOSCANODETEMPLATE (name, version))
policy-pap | sasl.login.retry.backoff.max.ms = 10000
grafana | logger=migrator t=2024-04-25T12:39:26.698749781Z level=info msg="Executing migration" id="Rename table login_attempt to login_attempt_tmp_qwerty - v1"
policy-apex-pdp | [2024-04-25T12:40:03.965+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Error while fetching metadata with correlation id 18 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
kafka | [2024-04-25 12:39:24,228] INFO Updated connection-accept-rate max connection creation rate to 2147483647 (kafka.network.ConnectionQuotas)
policy-db-migrator | --------------
policy-pap | sasl.login.retry.backoff.ms = 100
grafana | logger=migrator t=2024-04-25T12:39:26.715786606Z level=info msg="Migration successfully executed" id="Rename table login_attempt to login_attempt_tmp_qwerty - v1" duration=17.038085ms
policy-apex-pdp | [2024-04-25T12:40:03.986+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 11 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
kafka | [2024-04-25 12:39:24,249] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1] Created data-plane acceptor and processors for endpoint : ListenerName(PLAINTEXT) (kafka.network.SocketServer)
policy-db-migrator |
policy-pap | sasl.mechanism = GSSAPI
grafana | logger=migrator t=2024-04-25T12:39:26.764210345Z level=info msg="Executing migration" id="create login_attempt v2"
policy-apex-pdp | [2024-04-25T12:40:04.071+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Error while fetching metadata with correlation id 20 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
kafka | [2024-04-25 12:39:24,249] INFO Updated connection-accept-rate max connection creation rate to 2147483647 (kafka.network.ConnectionQuotas)
policy-db-migrator |
policy-pap | sasl.oauthbearer.clock.skew.seconds = 30
grafana | logger=migrator t=2024-04-25T12:39:26.765436971Z level=info msg="Migration successfully executed" id="create login_attempt v2" duration=1.227536ms
policy-apex-pdp | [2024-04-25T12:40:04.091+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 12 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
kafka | [2024-04-25 12:39:24,255] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1] Created data-plane acceptor and processors for endpoint : ListenerName(PLAINTEXT_HOST) (kafka.network.SocketServer)
policy-db-migrator | > upgrade 0610-toscanodetemplates.sql
policy-pap | sasl.oauthbearer.expected.audience = null
grafana | logger=migrator t=2024-04-25T12:39:26.76912496Z level=info msg="Executing migration" id="create index IDX_login_attempt_username - v2"
policy-apex-pdp | [2024-04-25T12:40:04.176+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Error while fetching metadata with correlation id 22 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
policy-db-migrator | --------------
policy-pap | sasl.oauthbearer.expected.issuer = null
kafka | [2024-04-25 12:39:24,259] INFO [zk-broker-1-to-controller-alter-partition-channel-manager]: Starting (kafka.server.BrokerToControllerRequestThread)
grafana | logger=migrator t=2024-04-25T12:39:26.770601059Z level=info msg="Migration successfully executed" id="create index IDX_login_attempt_username - v2" duration=1.475449ms
policy-apex-pdp | [2024-04-25T12:40:04.195+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 13 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetemplates (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCANODETEMPLATES (name, version))
policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
grafana | logger=migrator t=2024-04-25T12:39:26.776154782Z level=info msg="Executing migration" id="copy login_attempt v1 to v2"
policy-apex-pdp | [2024-04-25T12:40:04.280+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Error while fetching metadata with correlation id 24 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
policy-db-migrator | --------------
kafka | [2024-04-25 12:39:24,286] INFO [ExpirationReaper-1-Produce]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
grafana | logger=migrator t=2024-04-25T12:39:26.776450556Z level=info msg="Migration successfully executed" id="copy login_attempt v1 to v2" duration=297.354µs
policy-apex-pdp | [2024-04-25T12:40:04.299+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 14 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
policy-db-migrator |
kafka | [2024-04-25 12:39:24,289] INFO [ExpirationReaper-1-Fetch]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
policy-pap | sasl.oauthbearer.jwks.endpoint.url = null
grafana | logger=migrator t=2024-04-25T12:39:26.779760361Z level=info msg="Executing migration" id="drop login_attempt_tmp_qwerty"
policy-apex-pdp | [2024-04-25T12:40:04.386+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Error while fetching metadata with correlation id 26 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
policy-db-migrator |
kafka | [2024-04-25 12:39:24,291] INFO [ExpirationReaper-1-ElectLeader]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
policy-pap | sasl.oauthbearer.scope.claim.name = scope
grafana | logger=migrator t=2024-04-25T12:39:26.780654562Z level=info msg="Migration successfully executed" id="drop login_attempt_tmp_qwerty" duration=890.961µs
policy-db-migrator | > upgrade 0620-toscanodetemplates_toscanodetemplate.sql
kafka | [2024-04-25 12:39:24,291] INFO [ExpirationReaper-1-DeleteRecords]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
policy-pap | sasl.oauthbearer.sub.claim.name = sub
policy-apex-pdp | [2024-04-25T12:40:04.405+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 15 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
grafana | logger=migrator t=2024-04-25T12:39:26.79341839Z level=info msg="Executing migration" id="create user auth table"
policy-db-migrator | --------------
kafka | [2024-04-25 12:39:24,292] INFO [ExpirationReaper-1-RemoteFetch]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
policy-pap | sasl.oauthbearer.token.endpoint.url = null
policy-apex-pdp | [2024-04-25T12:40:04.491+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Error while fetching metadata with correlation id 28 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
grafana | logger=migrator t=2024-04-25T12:39:26.795245095Z level=info msg="Migration successfully executed" id="create user auth table" duration=1.830465ms
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetemplates_toscanodetemplate (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCANODETEMPLATES_TOSCANODETEMPLATE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion))
kafka | [2024-04-25 12:39:24,307] INFO [LogDirFailureHandler]: Starting (kafka.server.ReplicaManager$LogDirFailureHandler)
policy-pap | security.protocol = PLAINTEXT
policy-apex-pdp | [2024-04-25T12:40:04.511+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 16 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
grafana | logger=migrator t=2024-04-25T12:39:26.805271338Z level=info msg="Executing migration" id="create index IDX_user_auth_auth_module_auth_id - v1"
policy-db-migrator | --------------
kafka | [2024-04-25 12:39:24,308] INFO [AddPartitionsToTxnSenderThread-1]: Starting (kafka.server.AddPartitionsToTxnManager)
policy-pap | security.providers = null
policy-apex-pdp | [2024-04-25T12:40:04.597+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Error while fetching metadata with correlation id 30 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
grafana | logger=migrator t=2024-04-25T12:39:26.806834078Z level=info msg="Migration successfully executed" id="create index IDX_user_auth_auth_module_auth_id - v1" duration=1.56641ms
policy-db-migrator |
kafka | [2024-04-25 12:39:24,347] INFO Creating /brokers/ids/1 (is it secure? false) (kafka.zk.KafkaZkClient)
policy-pap | send.buffer.bytes = 131072
policy-apex-pdp | [2024-04-25T12:40:04.616+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 17 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
grafana | logger=migrator t=2024-04-25T12:39:26.811215306Z level=info msg="Executing migration" id="alter user_auth.auth_id to length 190"
policy-db-migrator |
kafka | [2024-04-25 12:39:24,375] INFO Stat of the created znode at /brokers/ids/1 is: 27,27,1714048764365,1714048764365,1,0,0,72057619973079041,258,0,27
policy-pap | session.timeout.ms = 45000
policy-apex-pdp | [2024-04-25T12:40:04.702+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Error while fetching metadata with correlation id 32 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
grafana | logger=migrator t=2024-04-25T12:39:26.811304657Z level=info msg="Migration successfully executed" id="alter user_auth.auth_id to length 190" duration=90.041µs
policy-db-migrator | > upgrade 0630-toscanodetype.sql
kafka | (kafka.zk.KafkaZkClient)
policy-pap | socket.connection.setup.timeout.max.ms = 30000
policy-apex-pdp | [2024-04-25T12:40:04.720+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 18 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
grafana | logger=migrator t=2024-04-25T12:39:26.815738105Z level=info msg="Executing migration" id="Add OAuth access token to user_auth"
policy-db-migrator | --------------
kafka | [2024-04-25 12:39:24,376] INFO Registered broker 1 at path /brokers/ids/1 with addresses: PLAINTEXT://kafka:9092,PLAINTEXT_HOST://localhost:29092, czxid (broker epoch): 27 (kafka.zk.KafkaZkClient)
policy-pap | socket.connection.setup.timeout.ms = 10000
policy-apex-pdp | [2024-04-25T12:40:04.806+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Error while fetching metadata with correlation id 34 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
grafana | logger=migrator t=2024-04-25T12:39:26.821025495Z level=info msg="Migration successfully executed" id="Add OAuth access token to user_auth" duration=5.28669ms
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, requirementsVersion VARCHAR(20) NULL, requirementsName VARCHAR(120) NULL, PRIMARY KEY PK_TOSCANODETYPE (name, version))
kafka | [2024-04-25 12:39:24,428] INFO [ControllerEventThread controllerId=1] Starting (kafka.controller.ControllerEventManager$ControllerEventThread)
policy-apex-pdp | [2024-04-25T12:40:04.825+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 19 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
grafana | logger=migrator t=2024-04-25T12:39:26.832922493Z level=info msg="Executing migration" id="Add OAuth refresh token to user_auth"
policy-db-migrator | --------------
policy-pap | ssl.cipher.suites = null
kafka | [2024-04-25 12:39:24,435] INFO [ExpirationReaper-1-topic]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
policy-apex-pdp | [2024-04-25T12:40:04.910+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Error while fetching metadata with correlation id 36 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
grafana | logger=migrator t=2024-04-25T12:39:26.841014879Z level=info msg="Migration successfully executed" id="Add OAuth refresh token to user_auth" duration=8.096286ms
policy-db-migrator |
policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
kafka | [2024-04-25 12:39:24,442] INFO [ExpirationReaper-1-Heartbeat]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
policy-apex-pdp | [2024-04-25T12:40:04.929+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 20 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
grafana | logger=migrator t=2024-04-25T12:39:26.850552635Z level=info msg="Executing migration" id="Add OAuth token type to user_auth"
policy-db-migrator |
policy-pap | ssl.endpoint.identification.algorithm = https
kafka | [2024-04-25 12:39:24,443] INFO [ExpirationReaper-1-Rebalance]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
policy-apex-pdp | [2024-04-25T12:40:05.015+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Error while fetching metadata with correlation id 38 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
grafana | logger=migrator t=2024-04-25T12:39:26.858171696Z level=info msg="Migration successfully executed" id="Add OAuth token type to user_auth" duration=7.619991ms
policy-db-migrator | > upgrade 0640-toscanodetypes.sql
policy-pap | ssl.engine.factory.class = null
kafka | [2024-04-25 12:39:24,456] INFO [GroupCoordinator 1]: Starting up. (kafka.coordinator.group.GroupCoordinator)
policy-apex-pdp | [2024-04-25T12:40:05.034+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 21 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
grafana | logger=migrator t=2024-04-25T12:39:26.865362771Z level=info msg="Executing migration" id="Add OAuth expiry to user_auth"
policy-db-migrator | --------------
policy-pap | ssl.key.password = null
kafka | [2024-04-25 12:39:24,509] INFO Successfully created /controller_epoch with initial epoch 0 (kafka.zk.KafkaZkClient)
policy-apex-pdp | [2024-04-25T12:40:05.119+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Error while fetching metadata with correlation id 40 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
grafana | logger=migrator t=2024-04-25T12:39:26.873302856Z level=info msg="Migration successfully executed" id="Add OAuth expiry to user_auth" duration=7.918174ms
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetypes (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCANODETYPES (name, version))
policy-pap | ssl.keymanager.algorithm = SunX509
kafka | [2024-04-25 12:39:24,513] INFO [GroupCoordinator 1]: Startup complete. (kafka.coordinator.group.GroupCoordinator)
policy-apex-pdp | [2024-04-25T12:40:05.137+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 22 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
grafana | logger=migrator t=2024-04-25T12:39:26.88117951Z level=info msg="Executing migration" id="Add index to user_id column in user_auth"
policy-db-migrator | --------------
policy-pap | ssl.keystore.certificate.chain = null
policy-apex-pdp | [2024-04-25T12:40:05.223+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Error while fetching metadata with correlation id 42 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
grafana | logger=migrator t=2024-04-25T12:39:26.882037461Z level=info msg="Migration successfully executed" id="Add index to user_id column in user_auth" duration=853.751µs
policy-db-migrator |
kafka | [2024-04-25 12:39:24,523] INFO [Controller id=1] 1 successfully elected as the controller. Epoch incremented to 1 and epoch zk version is now 1 (kafka.controller.KafkaController)
policy-pap | ssl.keystore.key = null
policy-apex-pdp | [2024-04-25T12:40:05.241+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 23 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
grafana | logger=migrator t=2024-04-25T12:39:26.888572537Z level=info msg="Executing migration" id="Add OAuth ID token to user_auth"
policy-db-migrator |
kafka | [2024-04-25 12:39:24,526] INFO [Controller id=1] Creating FeatureZNode at path: /feature with contents: FeatureZNode(2,Enabled,Map()) (kafka.controller.KafkaController)
policy-pap | ssl.keystore.location = null
policy-apex-pdp | [2024-04-25T12:40:05.327+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Error while fetching metadata with correlation id 44 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
grafana | logger=migrator t=2024-04-25T12:39:26.895490829Z level=info msg="Migration successfully executed" id="Add OAuth ID token to user_auth" duration=6.919042ms
policy-db-migrator | > upgrade 0650-toscanodetypes_toscanodetype.sql
kafka | [2024-04-25 12:39:24,530] INFO Feature ZK node created at path: /feature (kafka.server.FinalizedFeatureChangeListener)
policy-pap | ssl.keystore.password = null
policy-apex-pdp | [2024-04-25T12:40:05.344+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 24 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
grafana | logger=migrator t=2024-04-25T12:39:26.902357449Z level=info msg="Executing migration" id="create server_lock table"
policy-db-migrator | --------------
kafka | [2024-04-25 12:39:24,541] INFO [TransactionCoordinator id=1] Starting up. (kafka.coordinator.transaction.TransactionCoordinator)
policy-pap | ssl.keystore.type = JKS
policy-apex-pdp | [2024-04-25T12:40:05.432+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Error while fetching metadata with correlation id 46 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
grafana | logger=migrator t=2024-04-25T12:39:26.903540195Z level=info msg="Migration successfully executed" id="create server_lock table" duration=1.181056ms
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetypes_toscanodetype (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCANODETYPES_TOSCANODETYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion))
kafka | [2024-04-25 12:39:24,546] INFO [TxnMarkerSenderThread-1]: Starting (kafka.coordinator.transaction.TransactionMarkerChannelManager)
policy-apex-pdp | [2024-04-25T12:40:05.448+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 25 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
grafana | logger=migrator t=2024-04-25T12:39:26.909842848Z level=info msg="Executing migration" id="add index server_lock.operation_uid"
policy-pap | ssl.protocol = TLSv1.3
policy-db-migrator | --------------
kafka | [2024-04-25 12:39:24,547] INFO [TransactionCoordinator id=1] Startup complete. (kafka.coordinator.transaction.TransactionCoordinator)
policy-apex-pdp | [2024-04-25T12:40:05.536+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Error while fetching metadata with correlation id 48 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
policy-pap | ssl.provider = null
policy-db-migrator |
kafka | [2024-04-25 12:39:24,557] INFO [MetadataCache brokerId=1] Updated cache from existing None to latest Features(version=3.6-IV2, finalizedFeatures={}, finalizedFeaturesEpoch=0). (kafka.server.metadata.ZkMetadataCache)
policy-apex-pdp | [2024-04-25T12:40:05.551+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 26 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
grafana | logger=migrator t=2024-04-25T12:39:26.912002307Z level=info msg="Migration successfully executed" id="add index server_lock.operation_uid" duration=2.159049ms
policy-pap | ssl.secure.random.implementation = null
policy-db-migrator |
kafka | [2024-04-25 12:39:24,558] INFO [Controller id=1] Registering handlers (kafka.controller.KafkaController)
policy-apex-pdp | [2024-04-25T12:40:05.641+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Error while fetching metadata with correlation id 50 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
grafana | logger=migrator t=2024-04-25T12:39:26.916141511Z level=info msg="Executing migration" id="create user auth token table"
policy-pap | ssl.trustmanager.algorithm = PKIX
kafka | [2024-04-25 12:39:24,563] INFO [Controller id=1] Deleting log dir event notifications (kafka.controller.KafkaController)
policy-apex-pdp | [2024-04-25T12:40:05.654+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 27 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
policy-db-migrator | > upgrade 0660-toscaparameter.sql
grafana | logger=migrator t=2024-04-25T12:39:26.917386728Z level=info msg="Migration successfully executed" id="create user auth token table" duration=1.247617ms
policy-pap | ssl.truststore.certificates = null
kafka | [2024-04-25 12:39:24,566] INFO [Controller id=1] Deleting isr change notifications (kafka.controller.KafkaController)
policy-apex-pdp | [2024-04-25T12:40:05.745+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Error while fetching metadata with correlation id 52 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
policy-db-migrator | --------------
grafana | logger=migrator t=2024-04-25T12:39:26.930601512Z level=info msg="Executing migration" id="add unique index user_auth_token.auth_token"
policy-pap | ssl.truststore.location = null
kafka | [2024-04-25 12:39:24,569] INFO [Controller id=1] Initializing controller context (kafka.controller.KafkaController)
policy-apex-pdp | [2024-04-25T12:40:05.756+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 28 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscaparameter (VALUE VARCHAR(255) NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCAPARAMETER (parentLocalName, localName, parentKeyVersion, parentKeyName))
grafana | logger=migrator t=2024-04-25T12:39:26.931859209Z level=info msg="Migration successfully executed" id="add unique index user_auth_token.auth_token" duration=1.253467ms
policy-pap | ssl.truststore.password = null
kafka | [2024-04-25 12:39:24,586] INFO [ExpirationReaper-1-AlterAcls]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
policy-apex-pdp | [2024-04-25T12:40:05.850+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Error while fetching metadata with correlation id 54 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
policy-db-migrator | --------------
grafana | logger=migrator t=2024-04-25T12:39:26.94185015Z level=info msg="Executing migration" id="add unique index user_auth_token.prev_auth_token"
policy-pap | ssl.truststore.type = JKS
kafka | [2024-04-25 12:39:24,593] INFO [Controller id=1] Initialized broker epochs cache: HashMap(1 -> 27) (kafka.controller.KafkaController)
policy-apex-pdp | [2024-04-25T12:40:05.859+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 29 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
policy-db-migrator |
grafana | logger=migrator t=2024-04-25T12:39:26.943600064Z level=info msg="Migration successfully executed" id="add unique index user_auth_token.prev_auth_token" duration=1.750594ms
policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
kafka | [2024-04-25 12:39:24,600] DEBUG [Controller id=1] Register BrokerModifications handler for Set(1) (kafka.controller.KafkaController)
policy-apex-pdp | [2024-04-25T12:40:05.959+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Error while fetching metadata with correlation id 56 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
policy-db-migrator |
grafana | logger=migrator t=2024-04-25T12:39:26.95079882Z level=info msg="Executing migration" id="add index user_auth_token.user_id"
policy-pap |
kafka | [2024-04-25 12:39:24,606] DEBUG [Channel manager on controller 1]: Controller 1 trying to connect to broker 1 (kafka.controller.ControllerChannelManager)
policy-apex-pdp | [2024-04-25T12:40:05.965+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 30 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
policy-db-migrator | > upgrade 0670-toscapolicies.sql
grafana | logger=migrator t=2024-04-25T12:39:26.952678724Z level=info msg="Migration successfully executed" id="add index user_auth_token.user_id" duration=1.879424ms
policy-pap | [2024-04-25T12:40:01.053+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1
kafka | [2024-04-25 12:39:24,616] INFO [RequestSendThread controllerId=1] Starting (kafka.controller.RequestSendThread)
policy-apex-pdp | [2024-04-25T12:40:06.063+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Error while fetching metadata with correlation id 58 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
policy-db-migrator | --------------
policy-pap | [2024-04-25T12:40:01.053+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5
grafana | logger=migrator t=2024-04-25T12:39:26.962761787Z level=info msg="Executing migration" id="Add revoked_at to the user auth token"
kafka | [2024-04-25 12:39:24,616] INFO [/config/changes-event-process-thread]: Starting (kafka.common.ZkNodeChangeNotificationListener$ChangeEventProcessThread)
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicies (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCAPOLICIES (name, version))
policy-apex-pdp | [2024-04-25T12:40:06.073+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 31 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
policy-pap | [2024-04-25T12:40:01.053+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1714048801053
grafana | logger=migrator t=2024-04-25T12:39:26.971933828Z level=info msg="Migration successfully executed" id="Add revoked_at to the user auth token" duration=9.172431ms
kafka | [2024-04-25 12:39:24,616] INFO [Controller id=1] Currently active brokers in the cluster: Set(1) (kafka.controller.KafkaController)
policy-db-migrator | --------------
policy-apex-pdp | [2024-04-25T12:40:06.166+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Error while fetching metadata with correlation id 60 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
policy-pap | [2024-04-25T12:40:01.053+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Subscribed to topic(s): policy-pdp-pap
grafana | logger=migrator t=2024-04-25T12:39:26.977959857Z level=info msg="Executing migration" id="add index user_auth_token.revoked_at"
kafka | [2024-04-25 12:39:24,617] INFO [Controller id=1] Currently shutting brokers in the cluster: HashSet() (kafka.controller.KafkaController)
policy-db-migrator |
policy-apex-pdp | [2024-04-25T12:40:06.176+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 32 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
policy-pap | [2024-04-25T12:40:01.054+00:00|INFO|ServiceManager|main] Policy PAP starting Heartbeat Message Dispatcher
grafana | logger=migrator t=2024-04-25T12:39:26.980357639Z level=info msg="Migration successfully executed" id="add index user_auth_token.revoked_at" duration=2.402112ms
kafka | [2024-04-25 12:39:24,617] INFO [Controller id=1] Current list of topics in the cluster: HashSet() (kafka.controller.KafkaController)
policy-db-migrator |
policy-apex-pdp | [2024-04-25T12:40:06.270+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Error while fetching metadata with correlation id 62 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
policy-pap | [2024-04-25T12:40:01.054+00:00|INFO|TopicBase|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=policy-pap, consumerInstance=adf16b33-6825-4228-b603-1e51991b0aaa, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-heartbeat, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: registering org.onap.policy.common.endpoints.listeners.MessageTypeDispatcher@1cc81ea1
grafana | logger=migrator t=2024-04-25T12:39:26.989426118Z level=info msg="Executing migration" id="create cache_data table"
kafka | [2024-04-25 12:39:24,618] INFO [Controller id=1] Fetching topic deletions in progress (kafka.controller.KafkaController)
policy-db-migrator | > upgrade 0680-toscapolicies_toscapolicy.sql
policy-apex-pdp | [2024-04-25T12:40:06.280+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 33 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
policy-pap | [2024-04-25T12:40:01.054+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=policy-pap, consumerInstance=adf16b33-6825-4228-b603-1e51991b0aaa, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false,
allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-heartbeat, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting grafana | logger=migrator t=2024-04-25T12:39:26.990324901Z level=info msg="Migration successfully executed" id="create cache_data table" duration=898.033µs kafka | [2024-04-25 12:39:24,620] INFO [Controller id=1] List of topics to be deleted: (kafka.controller.KafkaController) policy-db-migrator | -------------- policy-apex-pdp | [2024-04-25T12:40:06.374+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Error while fetching metadata with correlation id 64 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-pap | [2024-04-25T12:40:01.054+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: grafana | logger=migrator t=2024-04-25T12:39:26.998790083Z level=info msg="Executing migration" id="add unique index cache_data.cache_key" kafka | [2024-04-25 12:39:24,621] INFO [Controller id=1] List of topics ineligible for deletion: (kafka.controller.KafkaController) policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicies_toscapolicy (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCAPOLICIES_TOSCAPOLICY (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) policy-apex-pdp | [2024-04-25T12:40:06.383+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 34 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-pap | allow.auto.create.topics = true grafana | logger=migrator t=2024-04-25T12:39:27.000639357Z level=info msg="Migration 
successfully executed" id="add unique index cache_data.cache_key" duration=1.848304ms kafka | [2024-04-25 12:39:24,621] INFO [Controller id=1] Initializing topic deletion manager (kafka.controller.KafkaController) policy-db-migrator | -------------- policy-apex-pdp | [2024-04-25T12:40:06.479+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Error while fetching metadata with correlation id 66 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-pap | auto.commit.interval.ms = 5000 grafana | logger=migrator t=2024-04-25T12:39:27.008647642Z level=info msg="Executing migration" id="create short_url table v1" kafka | [2024-04-25 12:39:24,622] INFO [Topic Deletion Manager 1] Initializing manager with initial deletions: Set(), initial ineligible deletions: HashSet() (kafka.controller.TopicDeletionManager) policy-db-migrator | policy-apex-pdp | [2024-04-25T12:40:06.510+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 35 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-pap | auto.include.jmx.reporter = true grafana | logger=migrator t=2024-04-25T12:39:27.010298005Z level=info msg="Migration successfully executed" id="create short_url table v1" duration=1.652643ms kafka | [2024-04-25 12:39:24,622] INFO [Controller id=1] Sending update metadata request (kafka.controller.KafkaController) policy-db-migrator | policy-apex-pdp | [2024-04-25T12:40:06.582+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Error while fetching metadata with correlation id 68 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-pap | auto.offset.reset = latest grafana | logger=migrator t=2024-04-25T12:39:27.021748116Z level=info msg="Executing migration" id="add index 
short_url.org_id-uid" kafka | [2024-04-25 12:39:24,626] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet(1) for 0 partitions (state.change.logger) policy-apex-pdp | [2024-04-25T12:40:06.614+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 36 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-pap | bootstrap.servers = [kafka:9092] grafana | logger=migrator t=2024-04-25T12:39:27.023887854Z level=info msg="Migration successfully executed" id="add index short_url.org_id-uid" duration=2.138848ms kafka | [2024-04-25 12:39:24,630] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1] Enabling request processing. (kafka.network.SocketServer) policy-db-migrator | > upgrade 0690-toscapolicy.sql policy-apex-pdp | [2024-04-25T12:40:06.685+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Error while fetching metadata with correlation id 70 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-pap | check.crcs = true grafana | logger=migrator t=2024-04-25T12:39:27.39278261Z level=info msg="Executing migration" id="alter table short_url alter column created_by type to bigint" kafka | [2024-04-25 12:39:24,633] INFO [ReplicaStateMachine controllerId=1] Initializing replica state (kafka.controller.ZkReplicaStateMachine) policy-db-migrator | -------------- policy-apex-pdp | [2024-04-25T12:40:06.718+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 37 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-pap | client.dns.lookup = use_all_dns_ips grafana | logger=migrator t=2024-04-25T12:39:27.392983363Z level=info msg="Migration successfully executed" id="alter table short_url alter column created_by type to bigint" duration=203.823µs kafka | [2024-04-25 
12:39:24,634] INFO [ReplicaStateMachine controllerId=1] Triggering online replica state changes (kafka.controller.ZkReplicaStateMachine) policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicy (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, type_name VARCHAR(255) NULL, type_version VARCHAR(255) NULL, PRIMARY KEY PK_TOSCAPOLICY (name, version)) policy-apex-pdp | [2024-04-25T12:40:06.792+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Error while fetching metadata with correlation id 72 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-pap | client.id = consumer-policy-pap-4 grafana | logger=migrator t=2024-04-25T12:39:27.476399474Z level=info msg="Executing migration" id="delete alert_definition table" kafka | [2024-04-25 12:39:24,640] INFO Awaiting socket connections on 0.0.0.0:9092. 
(kafka.network.DataPlaneAcceptor) policy-db-migrator | -------------- policy-apex-pdp | [2024-04-25T12:40:06.822+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 38 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-pap | client.rack = grafana | logger=migrator t=2024-04-25T12:39:27.476565206Z level=info msg="Migration successfully executed" id="delete alert_definition table" duration=169.392µs kafka | [2024-04-25 12:39:24,640] INFO [ReplicaStateMachine controllerId=1] Triggering offline replica state changes (kafka.controller.ZkReplicaStateMachine) policy-db-migrator | policy-apex-pdp | [2024-04-25T12:40:06.896+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Error while fetching metadata with correlation id 74 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-pap | connections.max.idle.ms = 540000 grafana | logger=migrator t=2024-04-25T12:39:27.728396557Z level=info msg="Executing migration" id="recreate alert_definition table" kafka | [2024-04-25 12:39:24,640] DEBUG [ReplicaStateMachine controllerId=1] Started replica state machine with initial state -> HashMap() (kafka.controller.ZkReplicaStateMachine) policy-db-migrator | policy-apex-pdp | [2024-04-25T12:40:06.927+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 39 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-pap | default.api.timeout.ms = 60000 grafana | logger=migrator t=2024-04-25T12:39:27.729992229Z level=info msg="Migration successfully executed" id="recreate alert_definition table" duration=1.598942ms kafka | [2024-04-25 12:39:24,641] INFO [PartitionStateMachine controllerId=1] Initializing partition state (kafka.controller.ZkPartitionStateMachine) policy-db-migrator | > upgrade 
0700-toscapolicytype.sql policy-apex-pdp | [2024-04-25T12:40:07.003+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Error while fetching metadata with correlation id 76 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-pap | enable.auto.commit = true grafana | logger=migrator t=2024-04-25T12:39:27.886959581Z level=info msg="Executing migration" id="add index in alert_definition on org_id and title columns" kafka | [2024-04-25 12:39:24,642] INFO [PartitionStateMachine controllerId=1] Triggering online partition state changes (kafka.controller.ZkPartitionStateMachine) policy-db-migrator | -------------- policy-apex-pdp | [2024-04-25T12:40:07.031+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 40 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-pap | exclude.internal.topics = true grafana | logger=migrator t=2024-04-25T12:39:27.888988408Z level=info msg="Migration successfully executed" id="add index in alert_definition on org_id and title columns" duration=2.031377ms kafka | [2024-04-25 12:39:24,644] INFO [Controller id=1, targetBrokerId=1] Node 1 disconnected. 
(org.apache.kafka.clients.NetworkClient) policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicytype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCAPOLICYTYPE (name, version)) policy-apex-pdp | [2024-04-25T12:40:07.108+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Error while fetching metadata with correlation id 78 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-pap | fetch.max.bytes = 52428800 grafana | logger=migrator t=2024-04-25T12:39:28.037435976Z level=info msg="Executing migration" id="add index in alert_definition on org_id and uid columns" policy-db-migrator | -------------- kafka | [2024-04-25 12:39:24,645] INFO Awaiting socket connections on 0.0.0.0:29092. (kafka.network.DataPlaneAcceptor) policy-apex-pdp | [2024-04-25T12:40:07.134+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 41 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-pap | fetch.max.wait.ms = 500 grafana | logger=migrator t=2024-04-25T12:39:28.039123208Z level=info msg="Migration successfully executed" id="add index in alert_definition on org_id and uid columns" duration=1.689762ms policy-db-migrator | kafka | [2024-04-25 12:39:24,646] DEBUG [PartitionStateMachine controllerId=1] Started partition state machine with initial state -> HashMap() (kafka.controller.ZkPartitionStateMachine) policy-apex-pdp | [2024-04-25T12:40:07.210+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Error while fetching metadata with correlation id 80 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-pap | fetch.min.bytes = 1 grafana | 
logger=migrator t=2024-04-25T12:39:28.054732744Z level=info msg="Executing migration" id="alter alert_definition table data column to mediumtext in mysql" policy-db-migrator | kafka | [2024-04-25 12:39:24,646] INFO [Controller id=1] Ready to serve as the new controller with epoch 1 (kafka.controller.KafkaController) policy-apex-pdp | [2024-04-25T12:40:07.238+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 42 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-pap | group.id = policy-pap grafana | logger=migrator t=2024-04-25T12:39:28.054861746Z level=info msg="Migration successfully executed" id="alter alert_definition table data column to mediumtext in mysql" duration=126.751µs policy-db-migrator | > upgrade 0710-toscapolicytypes.sql kafka | [2024-04-25 12:39:24,646] WARN [Controller id=1, targetBrokerId=1] Connection to node 1 (kafka/172.17.0.8:9092) could not be established. Broker may not be available. 
(org.apache.kafka.clients.NetworkClient) policy-apex-pdp | [2024-04-25T12:40:07.315+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Error while fetching metadata with correlation id 82 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-pap | group.instance.id = null grafana | logger=migrator t=2024-04-25T12:39:28.063481199Z level=info msg="Executing migration" id="drop index in alert_definition on org_id and title columns" policy-db-migrator | -------------- kafka | [2024-04-25 12:39:24,648] WARN [RequestSendThread controllerId=1] Controller 1's connection to broker kafka:9092 (id: 1 rack: null) was unsuccessful (kafka.controller.RequestSendThread) policy-apex-pdp | [2024-04-25T12:40:07.343+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 43 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-pap | heartbeat.interval.ms = 3000 grafana | logger=migrator t=2024-04-25T12:39:28.064899387Z level=info msg="Migration successfully executed" id="drop index in alert_definition on org_id and title columns" duration=1.418448ms policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicytypes (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCAPOLICYTYPES (name, version)) kafka | java.io.IOException: Connection to kafka:9092 (id: 1 rack: null) failed. 
policy-apex-pdp | [2024-04-25T12:40:07.419+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Error while fetching metadata with correlation id 84 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-pap | interceptor.classes = [] grafana | logger=migrator t=2024-04-25T12:39:28.068773929Z level=info msg="Executing migration" id="drop index in alert_definition on org_id and uid columns" policy-db-migrator | -------------- kafka | at org.apache.kafka.clients.NetworkClientUtils.awaitReady(NetworkClientUtils.java:70) policy-apex-pdp | [2024-04-25T12:40:07.447+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 44 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-pap | internal.leave.group.on.close = true grafana | logger=migrator t=2024-04-25T12:39:28.069715241Z level=info msg="Migration successfully executed" id="drop index in alert_definition on org_id and uid columns" duration=942.442µs policy-db-migrator | kafka | at kafka.controller.RequestSendThread.brokerReady(ControllerChannelManager.scala:298) policy-apex-pdp | [2024-04-25T12:40:07.521+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Error while fetching metadata with correlation id 86 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false grafana | logger=migrator t=2024-04-25T12:39:28.073771685Z level=info msg="Executing migration" id="add unique index in alert_definition on org_id and title columns" policy-db-migrator | kafka | at kafka.controller.RequestSendThread.doWork(ControllerChannelManager.scala:251) policy-apex-pdp | [2024-04-25T12:40:07.551+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer 
clientId=producer-1] Error while fetching metadata with correlation id 45 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-pap | isolation.level = read_uncommitted grafana | logger=migrator t=2024-04-25T12:39:28.074799198Z level=info msg="Migration successfully executed" id="add unique index in alert_definition on org_id and title columns" duration=1.027043ms policy-db-migrator | > upgrade 0720-toscapolicytypes_toscapolicytype.sql kafka | at org.apache.kafka.server.util.ShutdownableThread.run(ShutdownableThread.java:130) policy-apex-pdp | [2024-04-25T12:40:07.625+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Error while fetching metadata with correlation id 88 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer grafana | logger=migrator t=2024-04-25T12:39:28.080170219Z level=info msg="Executing migration" id="add unique index in alert_definition on org_id and uid columns" policy-db-migrator | -------------- kafka | [2024-04-25 12:39:24,658] INFO Kafka version: 7.6.1-ccs (org.apache.kafka.common.utils.AppInfoParser) policy-apex-pdp | [2024-04-25T12:40:07.655+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 46 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-pap | max.partition.fetch.bytes = 1048576 grafana | logger=migrator t=2024-04-25T12:39:28.082060673Z level=info msg="Migration successfully executed" id="add unique index in alert_definition on org_id and uid columns" duration=1.893554ms policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicytypes_toscapolicytype (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name 
VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCAPOLICYTYPES_TOSCAPOLICYTYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) kafka | [2024-04-25 12:39:24,658] INFO [Controller id=1] Partitions undergoing preferred replica election: (kafka.controller.KafkaController) policy-apex-pdp | [2024-04-25T12:40:07.727+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Error while fetching metadata with correlation id 90 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-pap | max.poll.interval.ms = 300000 grafana | logger=migrator t=2024-04-25T12:39:28.088792983Z level=info msg="Executing migration" id="Add column paused in alert_definition" policy-db-migrator | -------------- kafka | [2024-04-25 12:39:24,660] INFO [Controller id=1, targetBrokerId=1] Client requested connection close from node 1 (org.apache.kafka.clients.NetworkClient) policy-apex-pdp | [2024-04-25T12:40:07.760+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 47 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-pap | max.poll.records = 500 grafana | logger=migrator t=2024-04-25T12:39:28.095652413Z level=info msg="Migration successfully executed" id="Add column paused in alert_definition" duration=6.85773ms policy-db-migrator | kafka | [2024-04-25 12:39:24,660] INFO Kafka commitId: 11e81ad2a49db00b1d2b8c731409cd09e563de67 (org.apache.kafka.common.utils.AppInfoParser) policy-apex-pdp | [2024-04-25T12:40:07.830+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Error while fetching metadata with correlation id 92 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-pap | metadata.max.age.ms = 300000 grafana | logger=migrator 
t=2024-04-25T12:39:28.105964439Z level=info msg="Executing migration" id="drop alert_definition table" policy-db-migrator | kafka | [2024-04-25 12:39:24,660] INFO Kafka startTimeMs: 1714048764652 (org.apache.kafka.common.utils.AppInfoParser) policy-apex-pdp | [2024-04-25T12:40:07.863+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 48 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-pap | metric.reporters = [] grafana | logger=migrator t=2024-04-25T12:39:28.107290556Z level=info msg="Migration successfully executed" id="drop alert_definition table" duration=1.329817ms policy-db-migrator | > upgrade 0730-toscaproperty.sql kafka | [2024-04-25 12:39:24,661] INFO [Controller id=1] Partitions that completed preferred replica election: (kafka.controller.KafkaController) policy-apex-pdp | [2024-04-25T12:40:07.933+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Error while fetching metadata with correlation id 94 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-pap | metrics.num.samples = 2 grafana | logger=migrator t=2024-04-25T12:39:28.11291126Z level=info msg="Executing migration" id="delete alert_definition_version table" policy-db-migrator | -------------- kafka | [2024-04-25 12:39:24,661] INFO [Controller id=1] Skipping preferred replica election for partitions due to topic deletion: (kafka.controller.KafkaController) policy-apex-pdp | [2024-04-25T12:40:07.966+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 49 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-pap | metrics.recording.level = INFO grafana | logger=migrator t=2024-04-25T12:39:28.113201975Z level=info msg="Migration successfully executed" id="delete alert_definition_version table" 
duration=290.045µs policy-db-migrator | CREATE TABLE IF NOT EXISTS toscaproperty (DEFAULTVALUE VARCHAR(255) DEFAULT NULL, `DESCRIPTION` VARCHAR(255) DEFAULT NULL, ENTRYSCHEMA LONGBLOB DEFAULT NULL, REQUIRED BOOLEAN DEFAULT 0, STATUS INT DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, PRIMARY KEY PK_TOSCAPROPERTY (parentLocalName, localName, parentKeyVersion, parentKeyName)) kafka | [2024-04-25 12:39:24,661] INFO [Controller id=1] Resuming preferred replica election for partitions: (kafka.controller.KafkaController) policy-apex-pdp | [2024-04-25T12:40:08.037+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Error while fetching metadata with correlation id 96 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-pap | metrics.sample.window.ms = 30000 grafana | logger=migrator t=2024-04-25T12:39:28.117941597Z level=info msg="Executing migration" id="recreate alert_definition_version table" policy-db-migrator | -------------- kafka | [2024-04-25 12:39:24,662] INFO [KafkaServer id=1] started (kafka.server.KafkaServer) policy-apex-pdp | [2024-04-25T12:40:08.070+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 50 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] grafana | logger=migrator t=2024-04-25T12:39:28.119356396Z level=info msg="Migration successfully executed" id="recreate alert_definition_version table" duration=1.41673ms policy-db-migrator | kafka | [2024-04-25 12:39:24,663] INFO [Controller id=1] Starting replica leader 
election (PREFERRED) for partitions triggered by ZkTriggered (kafka.controller.KafkaController)
policy-apex-pdp | [2024-04-25T12:40:08.141+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Error while fetching metadata with correlation id 98 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
policy-pap | receive.buffer.bytes = 65536
grafana | logger=migrator t=2024-04-25T12:39:28.169971623Z level=info msg="Executing migration" id="add index in alert_definition_version table on alert_definition_id and version columns"
policy-db-migrator |
kafka | [2024-04-25 12:39:24,677] INFO [Controller id=1] Starting the controller scheduler (kafka.controller.KafkaController)
policy-apex-pdp | [2024-04-25T12:40:08.182+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 51 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
policy-pap | reconnect.backoff.max.ms = 1000
grafana | logger=migrator t=2024-04-25T12:39:28.172671929Z level=info msg="Migration successfully executed" id="add index in alert_definition_version table on alert_definition_id and version columns" duration=2.707685ms
policy-db-migrator | > upgrade 0740-toscarelationshiptype.sql
kafka | [2024-04-25 12:39:24,762] INFO [RequestSendThread controllerId=1] Controller 1 connected to kafka:9092 (id: 1 rack: null) for sending state change requests (kafka.controller.RequestSendThread)
policy-apex-pdp | [2024-04-25T12:40:08.244+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Error while fetching metadata with correlation id 100 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
policy-pap | reconnect.backoff.ms = 50
grafana | logger=migrator t=2024-04-25T12:39:28.18190061Z level=info msg="Executing migration" id="add index in alert_definition_version table on alert_definition_uid and version columns"
policy-db-migrator | --------------
kafka | [2024-04-25 12:39:24,820] INFO [zk-broker-1-to-controller-forwarding-channel-manager]: Recorded new controller, from now on will use node kafka:9092 (id: 1 rack: null) (kafka.server.BrokerToControllerRequestThread)
policy-apex-pdp | [2024-04-25T12:40:08.285+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 52 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
policy-pap | request.timeout.ms = 30000
grafana | logger=migrator t=2024-04-25T12:39:28.18341042Z level=info msg="Migration successfully executed" id="add index in alert_definition_version table on alert_definition_uid and version columns" duration=1.50946ms
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarelationshiptype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCARELATIONSHIPTYPE (name, version))
kafka | [2024-04-25 12:39:24,825] TRACE [Controller id=1 epoch=1] Received response UpdateMetadataResponseData(errorCode=0) for request UPDATE_METADATA with correlation id 0 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger)
policy-apex-pdp | [2024-04-25T12:40:08.349+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Error while fetching metadata with correlation id 102 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
policy-pap | retry.backoff.ms = 100
grafana | logger=migrator t=2024-04-25T12:39:28.213911792Z level=info msg="Executing migration" id="alter alert_definition_version table data column to mediumtext in mysql"
policy-db-migrator | --------------
kafka | [2024-04-25 12:39:24,863] INFO [zk-broker-1-to-controller-alter-partition-channel-manager]: Recorded new controller, from now on will use node kafka:9092 (id: 1 rack: null) (kafka.server.BrokerToControllerRequestThread)
policy-apex-pdp | [2024-04-25T12:40:08.389+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 53 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
policy-pap | sasl.client.callback.handler.class = null
grafana | logger=migrator t=2024-04-25T12:39:28.214251267Z level=info msg="Migration successfully executed" id="alter alert_definition_version table data column to mediumtext in mysql" duration=342.345µs
policy-db-migrator |
kafka | [2024-04-25 12:39:29,679] INFO [Controller id=1] Processing automatic preferred replica leader election (kafka.controller.KafkaController)
policy-apex-pdp | [2024-04-25T12:40:08.460+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Error while fetching metadata with correlation id 104 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
policy-pap | sasl.jaas.config = null
grafana | logger=migrator t=2024-04-25T12:39:28.221668284Z level=info msg="Executing migration" id="drop alert_definition_version table"
policy-db-migrator |
kafka | [2024-04-25 12:39:29,680] TRACE [Controller id=1] Checking need to trigger auto leader balancing (kafka.controller.KafkaController)
policy-apex-pdp | [2024-04-25T12:40:08.493+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 54 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit
grafana | logger=migrator t=2024-04-25T12:39:28.223196925Z level=info msg="Migration successfully executed" id="drop alert_definition_version table" duration=1.531301ms
policy-db-migrator | > upgrade 0750-toscarelationshiptypes.sql
kafka | [2024-04-25 12:40:01,537] INFO Creating topic policy-pdp-pap with configuration {} and initial partition assignment HashMap(0 -> ArrayBuffer(1)) (kafka.zk.AdminZkClient)
policy-apex-pdp | [2024-04-25T12:40:08.563+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Error while fetching metadata with correlation id 106 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
policy-pap | sasl.kerberos.min.time.before.relogin = 60000
grafana | logger=migrator t=2024-04-25T12:39:28.227153136Z level=info msg="Executing migration" id="create alert_instance table"
policy-db-migrator | --------------
kafka | [2024-04-25 12:40:01,538] INFO Creating topic __consumer_offsets with configuration {compression.type=producer, cleanup.policy=compact, segment.bytes=104857600} and initial partition assignment HashMap(0 -> ArrayBuffer(1), 1 -> ArrayBuffer(1), 2 -> ArrayBuffer(1), 3 -> ArrayBuffer(1), 4 -> ArrayBuffer(1), 5 -> ArrayBuffer(1), 6 -> ArrayBuffer(1), 7 -> ArrayBuffer(1), 8 -> ArrayBuffer(1), 9 -> ArrayBuffer(1), 10 -> ArrayBuffer(1), 11 -> ArrayBuffer(1), 12 -> ArrayBuffer(1), 13 -> ArrayBuffer(1), 14 -> ArrayBuffer(1), 15 -> ArrayBuffer(1), 16 -> ArrayBuffer(1), 17 -> ArrayBuffer(1), 18 -> ArrayBuffer(1), 19 -> ArrayBuffer(1), 20 -> ArrayBuffer(1), 21 -> ArrayBuffer(1), 22 -> ArrayBuffer(1), 23 -> ArrayBuffer(1), 24 -> ArrayBuffer(1), 25 -> ArrayBuffer(1), 26 -> ArrayBuffer(1), 27 -> ArrayBuffer(1), 28 -> ArrayBuffer(1), 29 -> ArrayBuffer(1), 30 -> ArrayBuffer(1), 31 -> ArrayBuffer(1), 32 -> ArrayBuffer(1), 33 -> ArrayBuffer(1), 34 -> ArrayBuffer(1), 35 -> ArrayBuffer(1), 36 -> ArrayBuffer(1), 37 -> ArrayBuffer(1), 38 -> ArrayBuffer(1), 39 -> ArrayBuffer(1), 40 -> ArrayBuffer(1), 41 -> ArrayBuffer(1), 42 -> ArrayBuffer(1), 43 -> ArrayBuffer(1), 44 -> ArrayBuffer(1), 45 -> ArrayBuffer(1), 46 -> ArrayBuffer(1), 47 -> ArrayBuffer(1), 48 -> ArrayBuffer(1), 49 -> ArrayBuffer(1)) (kafka.zk.AdminZkClient)
policy-apex-pdp | [2024-04-25T12:40:08.598+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 55 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
policy-pap | sasl.kerberos.service.name = null
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarelationshiptypes (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCARELATIONSHIPTYPES (name, version))
grafana | logger=migrator t=2024-04-25T12:39:28.228538915Z level=info msg="Migration successfully executed" id="create alert_instance table" duration=1.385669ms
kafka | [2024-04-25 12:40:01,810] DEBUG [Controller id=1] There is no producerId block yet (Zk path version 0), creating the first block (kafka.controller.KafkaController)
policy-apex-pdp | [2024-04-25T12:40:08.667+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Error while fetching metadata with correlation id 108 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05
policy-db-migrator | --------------
grafana | logger=migrator t=2024-04-25T12:39:28.235865271Z level=info msg="Executing migration" id="add index in alert_instance table on def_org_id, def_uid and current_state columns"
kafka | [2024-04-25 12:40:02,019] INFO [Controller id=1] Acquired new producerId block ProducerIdsBlock(assignedBrokerId=1, firstProducerId=0, size=1000) by writing to Zk with path version 1 (kafka.controller.KafkaController)
policy-apex-pdp | [2024-04-25T12:40:08.701+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 56 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8
policy-db-migrator |
grafana | logger=migrator t=2024-04-25T12:39:28.237535943Z level=info msg="Migration successfully executed" id="add index in alert_instance table on def_org_id, def_uid and current_state columns" duration=1.672312ms
policy-apex-pdp | [2024-04-25T12:40:08.770+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Error while fetching metadata with correlation id 110 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
kafka | [2024-04-25 12:40:02,085] INFO [Controller id=1] New topics: [Set(policy-pdp-pap, __consumer_offsets)], deleted topics: [HashSet()], new partition replica assignment [Set(TopicIdReplicaAssignment(policy-pdp-pap,Some(HOyl9LomSW2VRWzaH4p5QQ),Map(policy-pdp-pap-0 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=))), TopicIdReplicaAssignment(__consumer_offsets,Some(hlyPC_3zQpGmePqsd4AOeA),HashMap(__consumer_offsets-22 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-30 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-25 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-35 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-37 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-38 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-13 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-8 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-21 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-4 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-27 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-7 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-9 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-46 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-41 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-33 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-23 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-49 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-47 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-16 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-28 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-31 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-36 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-42 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-3 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-18 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-15 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-24 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-17 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-48 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-19 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-11 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-2 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-43 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-6 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-14 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-20 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-0 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-44 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-39 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-12 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-45 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-1 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-5 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-26 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-29 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-34 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-10 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-32 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-40 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=))))] (kafka.controller.KafkaController)
policy-pap | sasl.login.callback.handler.class = null
policy-db-migrator |
grafana | logger=migrator t=2024-04-25T12:39:28.245298706Z level=info msg="Executing migration" id="add index in alert_instance table on def_org_id, current_state columns"
policy-apex-pdp | [2024-04-25T12:40:08.804+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 57 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
kafka | [2024-04-25 12:40:02,086] INFO [Controller id=1] New partition creation callback for __consumer_offsets-22,__consumer_offsets-30,__consumer_offsets-25,__consumer_offsets-35,__consumer_offsets-38,__consumer_offsets-13,__consumer_offsets-8,__consumer_offsets-21,__consumer_offsets-4,__consumer_offsets-27,__consumer_offsets-7,__consumer_offsets-9,__consumer_offsets-46,__consumer_offsets-41,__consumer_offsets-33,__consumer_offsets-23,__consumer_offsets-49,__consumer_offsets-47,__consumer_offsets-16,__consumer_offsets-28,__consumer_offsets-31,__consumer_offsets-36,__consumer_offsets-42,__consumer_offsets-3,__consumer_offsets-18,__consumer_offsets-37,policy-pdp-pap-0,__consumer_offsets-15,__consumer_offsets-24,__consumer_offsets-17,__consumer_offsets-48,__consumer_offsets-19,__consumer_offsets-11,__consumer_offsets-2,__consumer_offsets-43,__consumer_offsets-6,__consumer_offsets-14,__consumer_offsets-20,__consumer_offsets-0,__consumer_offsets-44,__consumer_offsets-39,__consumer_offsets-12,__consumer_offsets-45,__consumer_offsets-1,__consumer_offsets-5,__consumer_offsets-26,__consumer_offsets-29,__consumer_offsets-34,__consumer_offsets-10,__consumer_offsets-32,__consumer_offsets-40 (kafka.controller.KafkaController)
policy-pap | sasl.login.class = null
policy-db-migrator | > upgrade 0760-toscarelationshiptypes_toscarelationshiptype.sql
grafana | logger=migrator t=2024-04-25T12:39:28.246667314Z level=info msg="Migration successfully executed" id="add index in alert_instance table on def_org_id, current_state columns" duration=1.371457ms
policy-apex-pdp | [2024-04-25T12:40:08.874+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Error while fetching metadata with correlation id 112 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
kafka | [2024-04-25 12:40:02,089] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-22 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
policy-pap | sasl.login.connect.timeout.ms = null
policy-db-migrator | --------------
grafana | logger=migrator t=2024-04-25T12:39:28.250851729Z level=info msg="Executing migration" id="add column current_state_end to alert_instance"
policy-apex-pdp | [2024-04-25T12:40:08.908+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 58 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
kafka | [2024-04-25 12:40:02,090] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-30 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
policy-pap | sasl.login.read.timeout.ms = null
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarelationshiptypes_toscarelationshiptype (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCARELATIONSHIPTYPES_TOSCARELATIONSHIPTYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion))
grafana | logger=migrator t=2024-04-25T12:39:28.260775059Z level=info msg="Migration successfully executed" id="add column current_state_end to alert_instance" duration=9.9183ms
policy-apex-pdp | [2024-04-25T12:40:08.979+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Error while fetching metadata with correlation id 114 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
kafka | [2024-04-25 12:40:02,090] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-25 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
policy-pap | sasl.login.refresh.buffer.seconds = 300
policy-db-migrator | --------------
grafana | logger=migrator t=2024-04-25T12:39:28.267904774Z level=info msg="Executing migration" id="remove index def_org_id, def_uid, current_state on alert_instance"
policy-apex-pdp | [2024-04-25T12:40:09.012+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 59 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
kafka | [2024-04-25 12:40:02,090] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-35 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
policy-pap | sasl.login.refresh.min.period.seconds = 60
policy-db-migrator |
grafana | logger=migrator t=2024-04-25T12:39:28.269688837Z level=info msg="Migration successfully executed" id="remove index def_org_id, def_uid, current_state on alert_instance" duration=1.787073ms
policy-apex-pdp | [2024-04-25T12:40:09.082+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Error while fetching metadata with correlation id 116 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
kafka | [2024-04-25 12:40:02,090] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-38 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
policy-pap | sasl.login.refresh.window.factor = 0.8
policy-db-migrator |
grafana | logger=migrator t=2024-04-25T12:39:28.277062234Z level=info msg="Executing migration" id="remove index def_org_id, current_state on alert_instance"
policy-apex-pdp | [2024-04-25T12:40:09.117+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 60 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
kafka | [2024-04-25 12:40:02,090] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-13 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
policy-pap | sasl.login.refresh.window.jitter = 0.05
policy-db-migrator | > upgrade 0770-toscarequirement.sql
grafana | logger=migrator t=2024-04-25T12:39:28.278513114Z level=info msg="Migration successfully executed" id="remove index def_org_id, current_state on alert_instance" duration=1.45431ms
policy-apex-pdp | [2024-04-25T12:40:09.186+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Error while fetching metadata with correlation id 118 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
kafka | [2024-04-25 12:40:02,090] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-8 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
policy-pap | sasl.login.retry.backoff.max.ms = 10000
policy-db-migrator | --------------
grafana | logger=migrator t=2024-04-25T12:39:28.288662737Z level=info msg="Executing migration" id="rename def_org_id to rule_org_id in alert_instance"
policy-apex-pdp | [2024-04-25T12:40:09.220+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 61 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
kafka | [2024-04-25 12:40:02,090] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-21 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
policy-pap | sasl.login.retry.backoff.ms = 100
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarequirement (CAPABILITY VARCHAR(255) NULL, `DESCRIPTION` VARCHAR(255) NULL, NODE VARCHAR(255) NULL, RELATIONSHIP VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, type_name VARCHAR(255) NULL, type_version VARCHAR(255) NULL, PRIMARY KEY PK_TOSCAREQUIREMENT (name, version))
grafana | logger=migrator t=2024-04-25T12:39:28.315003755Z level=info msg="Migration successfully executed" id="rename def_org_id to rule_org_id in alert_instance" duration=26.340058ms
policy-apex-pdp | [2024-04-25T12:40:09.290+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Error while fetching metadata with correlation id 120 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
kafka | [2024-04-25 12:40:02,090] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-4 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
policy-pap | sasl.mechanism = GSSAPI
policy-db-migrator | --------------
grafana | logger=migrator t=2024-04-25T12:39:28.324893595Z level=info msg="Executing migration" id="rename def_uid to rule_uid in alert_instance"
policy-apex-pdp | [2024-04-25T12:40:09.323+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 62 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
kafka | [2024-04-25 12:40:02,090] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-27 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
policy-pap | sasl.oauthbearer.clock.skew.seconds = 30
policy-db-migrator |
grafana | logger=migrator t=2024-04-25T12:39:28.349902024Z level=info msg="Migration successfully executed" id="rename def_uid to rule_uid in alert_instance" duration=25.013089ms
policy-apex-pdp | [2024-04-25T12:40:09.393+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Error while fetching metadata with correlation id 122 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
kafka | [2024-04-25 12:40:02,091] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-7 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
policy-pap | sasl.oauthbearer.expected.audience = null
policy-db-migrator |
grafana | logger=migrator t=2024-04-25T12:39:28.353914228Z level=info msg="Executing migration" id="add index rule_org_id, rule_uid, current_state on alert_instance"
policy-apex-pdp | [2024-04-25T12:40:09.426+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 63 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
kafka | [2024-04-25 12:40:02,091] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-9 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
policy-pap | sasl.oauthbearer.expected.issuer = null
policy-db-migrator | > upgrade 0780-toscarequirements.sql
grafana | logger=migrator t=2024-04-25T12:39:28.354669998Z level=info msg="Migration successfully executed" id="add index rule_org_id, rule_uid, current_state on alert_instance" duration=757.68µs
policy-apex-pdp | [2024-04-25T12:40:09.497+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Error while fetching metadata with correlation id 124 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
kafka | [2024-04-25 12:40:02,091] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-46 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
policy-db-migrator | --------------
grafana | logger=migrator t=2024-04-25T12:39:28.358234884Z level=info msg="Executing migration" id="add index rule_org_id, current_state on alert_instance"
policy-apex-pdp | [2024-04-25T12:40:09.532+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 64 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
kafka | [2024-04-25 12:40:02,091] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-41 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarequirements (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCAREQUIREMENTS (name, version))
grafana | logger=migrator t=2024-04-25T12:39:28.358976714Z level=info msg="Migration successfully executed" id="add index rule_org_id, current_state on alert_instance" duration=741.86µs
policy-apex-pdp | [2024-04-25T12:40:09.602+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Error while fetching metadata with correlation id 126 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
kafka | [2024-04-25 12:40:02,091] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-33 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
policy-db-migrator | --------------
grafana | logger=migrator t=2024-04-25T12:39:28.365328398Z level=info msg="Executing migration" id="add current_reason column related to current_state"
policy-apex-pdp | [2024-04-25T12:40:09.635+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 65 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
kafka | [2024-04-25 12:40:02,091] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-23 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
policy-pap | sasl.oauthbearer.jwks.endpoint.url = null
policy-db-migrator |
grafana | logger=migrator t=2024-04-25T12:39:28.371374868Z level=info msg="Migration successfully executed" id="add current_reason column related to current_state" duration=6.04591ms
policy-apex-pdp | [2024-04-25T12:40:09.706+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Error while fetching metadata with correlation id 128 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
kafka | [2024-04-25 12:40:02,091] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-49 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
policy-pap | sasl.oauthbearer.scope.claim.name = scope
policy-db-migrator |
grafana | logger=migrator t=2024-04-25T12:39:28.376700378Z level=info msg="Executing migration" id="add result_fingerprint column to alert_instance"
policy-apex-pdp | [2024-04-25T12:40:09.742+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 66 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
kafka | [2024-04-25 12:40:02,091] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-47 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
policy-pap | sasl.oauthbearer.sub.claim.name = sub
policy-db-migrator | > upgrade 0790-toscarequirements_toscarequirement.sql
grafana | logger=migrator t=2024-04-25T12:39:28.381243387Z level=info msg="Migration successfully executed" id="add result_fingerprint column to alert_instance" duration=4.537889ms
policy-apex-pdp | [2024-04-25T12:40:09.810+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Error while fetching metadata with correlation id 130 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
kafka | [2024-04-25 12:40:02,091] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-16 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
policy-pap | sasl.oauthbearer.token.endpoint.url = null
policy-db-migrator | --------------
grafana | logger=migrator t=2024-04-25T12:39:28.38594989Z level=info msg="Executing migration" id="create alert_rule table"
policy-apex-pdp | [2024-04-25T12:40:09.844+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 67 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
policy-pap | security.protocol = PLAINTEXT
kafka | [2024-04-25 12:40:02,092] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-28 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarequirements_toscarequirement (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCAREQUIREMENTS_TOSCAREQUIREMENT (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion))
grafana | logger=migrator t=2024-04-25T12:39:28.387299617Z level=info msg="Migration successfully executed" id="create alert_rule table" duration=1.350027ms
policy-apex-pdp | [2024-04-25T12:40:09.913+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Error while fetching metadata with correlation id 132 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
policy-pap | security.providers = null
kafka | [2024-04-25 12:40:02,092] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-31 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
policy-db-migrator | --------------
grafana | logger=migrator t=2024-04-25T12:39:28.393343398Z level=info msg="Executing migration" id="add index in alert_rule on org_id and title columns"
policy-apex-pdp | [2024-04-25T12:40:09.947+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 68 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
policy-pap | send.buffer.bytes = 131072
kafka | [2024-04-25 12:40:02,092] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-36 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
policy-db-migrator |
grafana | logger=migrator t=2024-04-25T12:39:28.394491462Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id and title columns" duration=1.150294ms
policy-apex-pdp | [2024-04-25T12:40:10.015+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Error while fetching metadata with correlation id 134 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
policy-pap | session.timeout.ms = 45000
kafka | [2024-04-25 12:40:02,092] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-42 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
policy-db-migrator |
grafana | logger=migrator t=2024-04-25T12:39:28.398793719Z level=info msg="Executing migration" id="add index in alert_rule on org_id and uid columns"
policy-apex-pdp | [2024-04-25T12:40:10.050+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 69 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
policy-pap | socket.connection.setup.timeout.max.ms = 30000
kafka | [2024-04-25 12:40:02,092] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-3 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
policy-db-migrator | > upgrade 0800-toscaservicetemplate.sql
grafana | logger=migrator t=2024-04-25T12:39:28.400733484Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id and uid columns" duration=1.940975ms
policy-apex-pdp | [2024-04-25T12:40:10.118+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Error while fetching metadata with correlation id 136 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
policy-pap | socket.connection.setup.timeout.ms = 10000
kafka | [2024-04-25 12:40:02,092] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-18 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
policy-db-migrator | --------------
grafana | logger=migrator t=2024-04-25T12:39:28.405326905Z level=info msg="Executing migration" id="add index in alert_rule on org_id, namespace_uid, group_uid columns"
policy-apex-pdp | [2024-04-25T12:40:10.156+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 70 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
policy-pap | ssl.cipher.suites = null
kafka | [2024-04-25 12:40:02,092] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-37 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscaservicetemplate (`DESCRIPTION` VARCHAR(255) NULL, TOSCADEFINITIONSVERSION VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, capabilityTypesVersion VARCHAR(20) NULL, capabilityTypesName VARCHAR(120) NULL, dataTypesName VARCHAR(120) NULL, dataTypesVersion VARCHAR(20) NULL, nodeTypesVersion VARCHAR(20) NULL, nodeTypesName VARCHAR(120) NULL, policyTypesName VARCHAR(120) NULL, policyTypesVersion VARCHAR(20) NULL, relationshipTypesVersion VARCHAR(20) NULL, relationshipTypesName VARCHAR(120) NULL, topologyTemplateLocalName VARCHAR(120) NULL, topologyTemplateParentKeyName VARCHAR(120) NULL, topologyTemplateParentKeyVersion VARCHAR(15) NULL, topologyTemplateParentLocalName VARCHAR(120) NULL, PRIMARY KEY PK_TOSCASERVICETEMPLATE (name, version))
grafana | logger=migrator t=2024-04-25T12:39:28.406851095Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, namespace_uid, group_uid columns" duration=1.52688ms
policy-apex-pdp | [2024-04-25T12:40:10.222+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Error while fetching metadata with correlation id 138 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
kafka | [2024-04-25 12:40:02,092] INFO [Controller id=1 epoch=1] Changed partition policy-pdp-pap-0 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
policy-db-migrator | --------------
grafana | logger=migrator t=2024-04-25T12:39:28.419670014Z level=info msg="Executing migration" id="alter alert_rule table data column to mediumtext in mysql"
policy-apex-pdp | [2024-04-25T12:40:10.260+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 71 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
policy-pap | ssl.endpoint.identification.algorithm = https
kafka | [2024-04-25 12:40:02,092]
INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-15 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-db-migrator | grafana | logger=migrator t=2024-04-25T12:39:28.420043919Z level=info msg="Migration successfully executed" id="alter alert_rule table data column to mediumtext in mysql" duration=378.915µs policy-apex-pdp | [2024-04-25T12:40:10.326+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Error while fetching metadata with correlation id 140 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-pap | ssl.engine.factory.class = null kafka | [2024-04-25 12:40:02,092] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-24 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-db-migrator | grafana | logger=migrator t=2024-04-25T12:39:28.428416899Z level=info msg="Executing migration" id="add column for to alert_rule" policy-apex-pdp | [2024-04-25T12:40:10.364+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 72 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-pap | ssl.key.password = null kafka | [2024-04-25 12:40:02,092] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-17 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-db-migrator | > upgrade 0810-toscatopologytemplate.sql grafana | logger=migrator t=2024-04-25T12:39:28.437415178Z level=info msg="Migration successfully executed" id="add column for to alert_rule" duration=8.993509ms policy-apex-pdp | [2024-04-25T12:40:10.429+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Error while 
fetching metadata with correlation id 142 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-pap | ssl.keymanager.algorithm = SunX509 kafka | [2024-04-25 12:40:02,092] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-48 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-db-migrator | -------------- policy-pap | ssl.keystore.certificate.chain = null policy-apex-pdp | [2024-04-25T12:40:10.467+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 73 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} kafka | [2024-04-25 12:40:02,092] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-19 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-db-migrator | CREATE TABLE IF NOT EXISTS toscatopologytemplate (`description` VARCHAR(255) NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, nodeTemplatessVersion VARCHAR(20) NULL, nodeTemplatesName VARCHAR(120) NULL, policyVersion VARCHAR(20) NULL, policyName VARCHAR(120) NULL, PRIMARY KEY PK_TOSCATOPOLOGYTEMPLATE (parentLocalName, localName, parentKeyVersion, parentKeyName)) grafana | logger=migrator t=2024-04-25T12:39:28.443272375Z level=info msg="Executing migration" id="add column annotations to alert_rule" policy-pap | ssl.keystore.key = null policy-apex-pdp | [2024-04-25T12:40:10.532+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Error while fetching metadata with correlation id 144 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} kafka | [2024-04-25 12:40:02,092] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-11 state from NonExistentPartition to NewPartition with assigned replicas 
1 (state.change.logger) policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-25T12:39:28.45122808Z level=info msg="Migration successfully executed" id="add column annotations to alert_rule" duration=7.940905ms policy-pap | ssl.keystore.location = null policy-apex-pdp | [2024-04-25T12:40:10.572+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 74 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} kafka | [2024-04-25 12:40:02,092] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-2 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-db-migrator | grafana | logger=migrator t=2024-04-25T12:39:28.455444296Z level=info msg="Executing migration" id="add column labels to alert_rule" policy-pap | ssl.keystore.password = null policy-apex-pdp | [2024-04-25T12:40:10.635+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Error while fetching metadata with correlation id 146 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} kafka | [2024-04-25 12:40:02,092] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-43 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-db-migrator | grafana | logger=migrator t=2024-04-25T12:39:28.46109361Z level=info msg="Migration successfully executed" id="add column labels to alert_rule" duration=5.646344ms policy-pap | ssl.keystore.type = JKS policy-apex-pdp | [2024-04-25T12:40:10.674+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 75 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} kafka | [2024-04-25 12:40:02,092] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-6 state from 
NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-db-migrator | > upgrade 0820-toscatrigger.sql grafana | logger=migrator t=2024-04-25T12:39:28.470554275Z level=info msg="Executing migration" id="remove unique index from alert_rule on org_id, title columns" policy-pap | ssl.protocol = TLSv1.3 policy-apex-pdp | [2024-04-25T12:40:10.738+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Error while fetching metadata with correlation id 148 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} kafka | [2024-04-25 12:40:02,093] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-14 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-25T12:39:28.472370669Z level=info msg="Migration successfully executed" id="remove unique index from alert_rule on org_id, title columns" duration=1.816304ms policy-pap | ssl.provider = null policy-apex-pdp | [2024-04-25T12:40:10.777+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 76 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} kafka | [2024-04-25 12:40:02,093] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-20 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-db-migrator | CREATE TABLE IF NOT EXISTS toscatrigger (ACTION VARCHAR(255) NULL, toscaCondition LONGBLOB DEFAULT NULL, toscaConstraint LONGBLOB DEFAULT NULL, `DESCRIPTION` VARCHAR(255) NULL, EVALUATIONS INT DEFAULT NULL, EVENTTYPE VARCHAR(255) NULL, METHOD VARCHAR(255) NULL, `PERIOD` LONGBLOB DEFAULT NULL, SCHEDULE LONGBLOB DEFAULT NULL, TARGETFILTER LONGBLOB DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, 
parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_TOSCATRIGGER (parentLocalName, localName, parentKeyVersion, parentKeyName)) grafana | logger=migrator t=2024-04-25T12:39:28.479717045Z level=info msg="Executing migration" id="add index in alert_rule on org_id, namespase_uid and title columns" policy-pap | ssl.secure.random.implementation = null policy-apex-pdp | [2024-04-25T12:40:10.841+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Error while fetching metadata with correlation id 150 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} kafka | [2024-04-25 12:40:02,093] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-0 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-25T12:39:28.481028423Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, namespase_uid and title columns" duration=1.312998ms policy-pap | ssl.trustmanager.algorithm = PKIX kafka | [2024-04-25 12:40:02,093] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-44 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-apex-pdp | [2024-04-25T12:40:10.880+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 77 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-db-migrator | grafana | logger=migrator t=2024-04-25T12:39:28.558378573Z level=info msg="Executing migration" id="add dashboard_uid column to alert_rule" policy-pap | ssl.truststore.certificates = null kafka | [2024-04-25 12:40:02,093] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-39 state from NonExistentPartition to NewPartition with assigned replicas 1 
(state.change.logger) policy-apex-pdp | [2024-04-25T12:40:10.945+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Error while fetching metadata with correlation id 152 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-db-migrator | grafana | logger=migrator t=2024-04-25T12:39:28.564365172Z level=info msg="Migration successfully executed" id="add dashboard_uid column to alert_rule" duration=5.983599ms policy-pap | ssl.truststore.location = null kafka | [2024-04-25 12:40:02,093] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-12 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-apex-pdp | [2024-04-25T12:40:10.983+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 78 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-db-migrator | > upgrade 0830-FK_ToscaNodeTemplate_capabilitiesName.sql grafana | logger=migrator t=2024-04-25T12:39:28.663680601Z level=info msg="Executing migration" id="add panel_id column to alert_rule" policy-pap | ssl.truststore.password = null kafka | [2024-04-25 12:40:02,093] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-45 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-apex-pdp | [2024-04-25T12:40:11.048+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Error while fetching metadata with correlation id 154 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-25T12:39:28.671880099Z level=info msg="Migration successfully executed" id="add panel_id column to alert_rule" duration=8.197848ms policy-pap | ssl.truststore.type = 
JKS kafka | [2024-04-25 12:40:02,093] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-1 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-apex-pdp | [2024-04-25T12:40:11.086+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 79 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-db-migrator | CREATE INDEX FK_ToscaNodeTemplate_capabilitiesName ON toscanodetemplate(capabilitiesName, capabilitiesVersion) policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer kafka | [2024-04-25 12:40:02,093] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-5 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) grafana | logger=migrator t=2024-04-25T12:39:28.678952152Z level=info msg="Executing migration" id="add index in alert_rule on org_id, dashboard_uid and panel_id columns" policy-apex-pdp | [2024-04-25T12:40:11.152+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Error while fetching metadata with correlation id 156 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-db-migrator | -------------- policy-pap | kafka | [2024-04-25 12:40:02,093] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-26 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) grafana | logger=migrator t=2024-04-25T12:39:28.680493752Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, dashboard_uid and panel_id columns" duration=1.54504ms policy-apex-pdp | [2024-04-25T12:40:11.189+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 80 : 
{policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-db-migrator | policy-pap | [2024-04-25T12:40:01.059+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 kafka | [2024-04-25 12:40:02,093] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-29 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) grafana | logger=migrator t=2024-04-25T12:39:28.684741889Z level=info msg="Executing migration" id="add rule_group_idx column to alert_rule" policy-apex-pdp | [2024-04-25T12:40:11.254+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Error while fetching metadata with correlation id 158 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-db-migrator | policy-pap | [2024-04-25T12:40:01.060+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 kafka | [2024-04-25 12:40:02,094] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-34 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) grafana | logger=migrator t=2024-04-25T12:39:28.694943143Z level=info msg="Migration successfully executed" id="add rule_group_idx column to alert_rule" duration=10.193924ms policy-apex-pdp | [2024-04-25T12:40:11.292+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 81 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-db-migrator | > upgrade 0840-FK_ToscaNodeTemplate_requirementsName.sql policy-pap | [2024-04-25T12:40:01.060+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1714048801059 kafka | [2024-04-25 12:40:02,094] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-10 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) grafana | logger=migrator t=2024-04-25T12:39:28.703189741Z level=info msg="Executing 
migration" id="add is_paused column to alert_rule table" policy-apex-pdp | [2024-04-25T12:40:11.357+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Error while fetching metadata with correlation id 160 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-db-migrator | -------------- policy-pap | [2024-04-25T12:40:01.060+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Subscribed to topic(s): policy-pdp-pap kafka | [2024-04-25 12:40:02,094] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-32 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) grafana | logger=migrator t=2024-04-25T12:39:28.708049846Z level=info msg="Migration successfully executed" id="add is_paused column to alert_rule table" duration=4.857894ms policy-apex-pdp | [2024-04-25T12:40:11.395+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 82 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-db-migrator | CREATE INDEX FK_ToscaNodeTemplate_requirementsName ON toscanodetemplate(requirementsName, requirementsVersion) policy-pap | [2024-04-25T12:40:01.060+00:00|INFO|ServiceManager|main] Policy PAP starting topics kafka | [2024-04-25 12:40:02,094] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-40 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) grafana | logger=migrator t=2024-04-25T12:39:28.716342445Z level=info msg="Executing migration" id="fix is_paused column for alert_rule table" policy-apex-pdp | [2024-04-25T12:40:11.460+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Error while fetching metadata with 
correlation id 162 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-db-migrator | -------------- kafka | [2024-04-25 12:40:02,094] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger) grafana | logger=migrator t=2024-04-25T12:39:28.716465036Z level=info msg="Migration successfully executed" id="fix is_paused column for alert_rule table" duration=117.561µs policy-pap | [2024-04-25T12:40:01.060+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=policy-pap, consumerInstance=adf16b33-6825-4228-b603-1e51991b0aaa, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-heartbeat,5,main], topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-heartbeat, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting policy-apex-pdp | [2024-04-25T12:40:11.498+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 83 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-db-migrator | kafka | [2024-04-25 12:40:02,100] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-32 from NonExistentReplica to NewReplica (state.change.logger) grafana | logger=migrator t=2024-04-25T12:39:28.725491165Z level=info msg="Executing migration" id="create alert_rule_version table" policy-pap | [2024-04-25T12:40:01.060+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=53d3b957-3026-4843-bc4f-55d426241089, consumerInstance=policy-pap, 
fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-pdp-pap,5,main], topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting policy-apex-pdp | [2024-04-25T12:40:11.563+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Error while fetching metadata with correlation id 164 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-db-migrator | kafka | [2024-04-25 12:40:02,100] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-5 from NonExistentReplica to NewReplica (state.change.logger) grafana | logger=migrator t=2024-04-25T12:39:28.726942795Z level=info msg="Migration successfully executed" id="create alert_rule_version table" duration=1.45341ms policy-pap | [2024-04-25T12:40:01.060+00:00|INFO|InlineBusTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=4a68cc94-8fc9-4290-b3af-12928780cd05, alive=false, publisher=null]]: starting policy-apex-pdp | [2024-04-25T12:40:11.601+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 84 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-db-migrator | > upgrade 0850-FK_ToscaNodeType_requirementsName.sql kafka | [2024-04-25 12:40:02,100] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-44 from NonExistentReplica to NewReplica (state.change.logger) grafana | logger=migrator t=2024-04-25T12:39:28.735107373Z level=info msg="Executing migration" id="add index in 
alert_rule_version table on rule_org_id, rule_uid and version columns" policy-pap | [2024-04-25T12:40:01.076+00:00|INFO|ProducerConfig|main] ProducerConfig values: policy-apex-pdp | [2024-04-25T12:40:11.668+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Error while fetching metadata with correlation id 166 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-db-migrator | -------------- kafka | [2024-04-25 12:40:02,100] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-48 from NonExistentReplica to NewReplica (state.change.logger) grafana | logger=migrator t=2024-04-25T12:39:28.73648946Z level=info msg="Migration successfully executed" id="add index in alert_rule_version table on rule_org_id, rule_uid and version columns" duration=1.383247ms policy-pap | acks = -1 policy-apex-pdp | [2024-04-25T12:40:11.704+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 85 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-db-migrator | CREATE INDEX FK_ToscaNodeType_requirementsName ON toscanodetype(requirementsName, requirementsVersion) kafka | [2024-04-25 12:40:02,100] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-46 from NonExistentReplica to NewReplica (state.change.logger) grafana | logger=migrator t=2024-04-25T12:39:28.741936202Z level=info msg="Executing migration" id="add index in alert_rule_version table on rule_org_id, rule_namespace_uid and rule_group columns" policy-pap | auto.include.jmx.reporter = true policy-apex-pdp | [2024-04-25T12:40:11.771+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Error while fetching metadata with correlation id 168 : 
{policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-db-migrator | -------------- kafka | [2024-04-25 12:40:02,100] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-20 from NonExistentReplica to NewReplica (state.change.logger) grafana | logger=migrator t=2024-04-25T12:39:28.743201229Z level=info msg="Migration successfully executed" id="add index in alert_rule_version table on rule_org_id, rule_namespace_uid and rule_group columns" duration=1.262997ms policy-pap | batch.size = 16384 policy-apex-pdp | [2024-04-25T12:40:11.809+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 86 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-db-migrator | kafka | [2024-04-25 12:40:02,100] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition policy-pdp-pap-0 from NonExistentReplica to NewReplica (state.change.logger) grafana | logger=migrator t=2024-04-25T12:39:28.748521209Z level=info msg="Executing migration" id="alter alert_rule_version table data column to mediumtext in mysql" policy-pap | bootstrap.servers = [kafka:9092] policy-apex-pdp | [2024-04-25T12:40:11.876+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Error while fetching metadata with correlation id 170 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-db-migrator | kafka | [2024-04-25 12:40:02,100] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-43 from NonExistentReplica to NewReplica (state.change.logger) grafana | logger=migrator t=2024-04-25T12:39:28.74862304Z level=info msg="Migration successfully executed" id="alter alert_rule_version table data column to mediumtext in mysql" duration=104.451µs policy-pap | buffer.memory = 33554432 policy-apex-pdp | 
[2024-04-25T12:40:11.912+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 87 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-db-migrator | > upgrade 0860-FK_ToscaServiceTemplate_capabilityTypesName.sql grafana | logger=migrator t=2024-04-25T12:39:28.752757095Z level=info msg="Executing migration" id="add column for to alert_rule_version" kafka | [2024-04-25 12:40:02,100] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-24 from NonExistentReplica to NewReplica (state.change.logger) policy-pap | client.dns.lookup = use_all_dns_ips policy-apex-pdp | [2024-04-25T12:40:11.981+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Error while fetching metadata with correlation id 172 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-25T12:39:28.758817955Z level=info msg="Migration successfully executed" id="add column for to alert_rule_version" duration=6.05758ms kafka | [2024-04-25 12:40:02,100] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-6 from NonExistentReplica to NewReplica (state.change.logger) policy-pap | client.id = producer-1 policy-apex-pdp | [2024-04-25T12:40:12.017+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 88 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-db-migrator | CREATE INDEX FK_ToscaServiceTemplate_capabilityTypesName ON toscaservicetemplate(capabilityTypesName, capabilityTypesVersion) grafana | logger=migrator t=2024-04-25T12:39:28.776289545Z level=info msg="Executing migration" id="add column annotations to alert_rule_version" kafka | [2024-04-25 12:40:02,100] TRACE [Controller id=1 
epoch=1] Changed state of replica 1 for partition __consumer_offsets-18 from NonExistentReplica to NewReplica (state.change.logger) policy-pap | compression.type = none policy-apex-pdp | [2024-04-25T12:40:12.087+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Error while fetching metadata with correlation id 174 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-25T12:39:28.782428796Z level=info msg="Migration successfully executed" id="add column annotations to alert_rule_version" duration=6.13622ms kafka | [2024-04-25 12:40:02,100] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-21 from NonExistentReplica to NewReplica (state.change.logger) policy-pap | connections.max.idle.ms = 540000 policy-apex-pdp | [2024-04-25T12:40:12.120+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 89 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-db-migrator | grafana | logger=migrator t=2024-04-25T12:39:28.790038456Z level=info msg="Executing migration" id="add column labels to alert_rule_version" kafka | [2024-04-25 12:40:02,100] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-1 from NonExistentReplica to NewReplica (state.change.logger) policy-pap | delivery.timeout.ms = 120000 policy-apex-pdp | [2024-04-25T12:40:12.191+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Error while fetching metadata with correlation id 176 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-db-migrator | grafana | logger=migrator t=2024-04-25T12:39:28.796034185Z level=info msg="Migration successfully executed" id="add column 
labels to alert_rule_version" duration=5.992439ms kafka | [2024-04-25 12:40:02,101] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-14 from NonExistentReplica to NewReplica (state.change.logger) policy-pap | enable.idempotence = true policy-apex-pdp | [2024-04-25T12:40:12.224+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 90 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-db-migrator | > upgrade 0870-FK_ToscaServiceTemplate_dataTypesName.sql grafana | logger=migrator t=2024-04-25T12:39:28.801404196Z level=info msg="Executing migration" id="add rule_group_idx column to alert_rule_version" kafka | [2024-04-25 12:40:02,101] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-34 from NonExistentReplica to NewReplica (state.change.logger) policy-pap | interceptor.classes = [] policy-apex-pdp | [2024-04-25T12:40:12.295+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Error while fetching metadata with correlation id 178 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-25T12:39:28.807535837Z level=info msg="Migration successfully executed" id="add rule_group_idx column to alert_rule_version" duration=6.12094ms kafka | [2024-04-25 12:40:02,101] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-16 from NonExistentReplica to NewReplica (state.change.logger) policy-pap | key.serializer = class org.apache.kafka.common.serialization.StringSerializer policy-apex-pdp | [2024-04-25T12:40:12.328+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 91 : 
{policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-db-migrator | CREATE INDEX FK_ToscaServiceTemplate_dataTypesName ON toscaservicetemplate(dataTypesName, dataTypesVersion) grafana | logger=migrator t=2024-04-25T12:39:28.8115519Z level=info msg="Executing migration" id="add is_paused column to alert_rule_versions table" kafka | [2024-04-25 12:40:02,101] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-29 from NonExistentReplica to NewReplica (state.change.logger) policy-pap | linger.ms = 0 policy-apex-pdp | [2024-04-25T12:40:12.400+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Error while fetching metadata with correlation id 180 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-25T12:39:28.818868586Z level=info msg="Migration successfully executed" id="add is_paused column to alert_rule_versions table" duration=7.309966ms kafka | [2024-04-25 12:40:02,101] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-11 from NonExistentReplica to NewReplica (state.change.logger) policy-pap | max.block.ms = 60000 policy-apex-pdp | [2024-04-25T12:40:12.431+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 92 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-db-migrator | grafana | logger=migrator t=2024-04-25T12:39:28.867590559Z level=info msg="Executing migration" id="fix is_paused column for alert_rule_version table" kafka | [2024-04-25 12:40:02,101] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-0 from NonExistentReplica to NewReplica (state.change.logger) policy-pap | max.in.flight.requests.per.connection = 5 policy-db-migrator | policy-apex-pdp | 
[2024-04-25T12:40:12.502+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Error while fetching metadata with correlation id 182 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} grafana | logger=migrator t=2024-04-25T12:39:28.86770924Z level=info msg="Migration successfully executed" id="fix is_paused column for alert_rule_version table" duration=122.401µs kafka | [2024-04-25 12:40:02,101] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-22 from NonExistentReplica to NewReplica (state.change.logger) policy-pap | max.request.size = 1048576 policy-db-migrator | > upgrade 0880-FK_ToscaServiceTemplate_nodeTypesName.sql policy-apex-pdp | [2024-04-25T12:40:12.534+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 93 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} grafana | logger=migrator t=2024-04-25T12:39:28.874225296Z level=info msg="Executing migration" id=create_alert_configuration_table kafka | [2024-04-25 12:40:02,101] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-47 from NonExistentReplica to NewReplica (state.change.logger) policy-pap | metadata.max.age.ms = 300000 policy-db-migrator | -------------- policy-apex-pdp | [2024-04-25T12:40:12.604+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Error while fetching metadata with correlation id 184 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} grafana | logger=migrator t=2024-04-25T12:39:28.87525471Z level=info msg="Migration successfully executed" id=create_alert_configuration_table duration=1.031604ms kafka | [2024-04-25 12:40:02,101] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition 
__consumer_offsets-36 from NonExistentReplica to NewReplica (state.change.logger) policy-pap | metadata.max.idle.ms = 300000 policy-db-migrator | CREATE INDEX FK_ToscaServiceTemplate_nodeTypesName ON toscaservicetemplate(nodeTypesName, nodeTypesVersion) policy-apex-pdp | [2024-04-25T12:40:12.649+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 94 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} grafana | logger=migrator t=2024-04-25T12:39:28.882634157Z level=info msg="Executing migration" id="Add column default in alert_configuration" kafka | [2024-04-25 12:40:02,101] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-28 from NonExistentReplica to NewReplica (state.change.logger) policy-pap | metric.reporters = [] policy-db-migrator | -------------- policy-apex-pdp | [2024-04-25T12:40:12.707+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Error while fetching metadata with correlation id 186 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} grafana | logger=migrator t=2024-04-25T12:39:28.889799402Z level=info msg="Migration successfully executed" id="Add column default in alert_configuration" duration=7.162635ms kafka | [2024-04-25 12:40:02,101] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-42 from NonExistentReplica to NewReplica (state.change.logger) policy-pap | metrics.num.samples = 2 policy-db-migrator | policy-apex-pdp | [2024-04-25T12:40:12.752+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 95 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} grafana | logger=migrator t=2024-04-25T12:39:28.893825204Z level=info msg="Executing migration" id="alert alert_configuration 
alertmanager_configuration column from TEXT to MEDIUMTEXT if mysql" kafka | [2024-04-25 12:40:02,101] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-9 from NonExistentReplica to NewReplica (state.change.logger) policy-pap | metrics.recording.level = INFO policy-db-migrator | policy-apex-pdp | [2024-04-25T12:40:12.809+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Error while fetching metadata with correlation id 188 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} grafana | logger=migrator t=2024-04-25T12:39:28.893919245Z level=info msg="Migration successfully executed" id="alert alert_configuration alertmanager_configuration column from TEXT to MEDIUMTEXT if mysql" duration=96.691µs kafka | [2024-04-25 12:40:02,101] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-37 from NonExistentReplica to NewReplica (state.change.logger) policy-pap | metrics.sample.window.ms = 30000 policy-db-migrator | > upgrade 0890-FK_ToscaServiceTemplate_policyTypesName.sql policy-apex-pdp | [2024-04-25T12:40:12.856+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 96 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} grafana | logger=migrator t=2024-04-25T12:39:28.898067651Z level=info msg="Executing migration" id="add column org_id in alert_configuration" kafka | [2024-04-25 12:40:02,101] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-13 from NonExistentReplica to NewReplica (state.change.logger) policy-pap | partitioner.adaptive.partitioning.enable = true policy-db-migrator | -------------- policy-apex-pdp | [2024-04-25T12:40:12.912+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, 
groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Error while fetching metadata with correlation id 190 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} grafana | logger=migrator t=2024-04-25T12:39:28.909466311Z level=info msg="Migration successfully executed" id="add column org_id in alert_configuration" duration=11.39232ms kafka | [2024-04-25 12:40:02,101] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-30 from NonExistentReplica to NewReplica (state.change.logger) policy-pap | partitioner.availability.timeout.ms = 0 policy-db-migrator | CREATE INDEX FK_ToscaServiceTemplate_policyTypesName ON toscaservicetemplate(policyTypesName, policyTypesVersion) policy-apex-pdp | [2024-04-25T12:40:12.959+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 97 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} grafana | logger=migrator t=2024-04-25T12:39:28.917032871Z level=info msg="Executing migration" id="add index in alert_configuration table on org_id column" kafka | [2024-04-25 12:40:02,101] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-35 from NonExistentReplica to NewReplica (state.change.logger) policy-pap | partitioner.class = null policy-db-migrator | -------------- policy-apex-pdp | [2024-04-25T12:40:13.016+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Error while fetching metadata with correlation id 192 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} grafana | logger=migrator t=2024-04-25T12:39:28.918264747Z level=info msg="Migration successfully executed" id="add index in alert_configuration table on org_id column" duration=1.234116ms kafka | [2024-04-25 12:40:02,101] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-39 from NonExistentReplica to 
NewReplica (state.change.logger) policy-pap | partitioner.ignore.keys = false policy-db-migrator | policy-apex-pdp | [2024-04-25T12:40:13.061+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 98 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} grafana | logger=migrator t=2024-04-25T12:39:28.924522779Z level=info msg="Executing migration" id="add configuration_hash column to alert_configuration" kafka | [2024-04-25 12:40:02,101] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-12 from NonExistentReplica to NewReplica (state.change.logger) policy-pap | receive.buffer.bytes = 32768 policy-db-migrator | policy-apex-pdp | [2024-04-25T12:40:13.119+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Error while fetching metadata with correlation id 194 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} grafana | logger=migrator t=2024-04-25T12:39:28.93297899Z level=info msg="Migration successfully executed" id="add configuration_hash column to alert_configuration" duration=8.457291ms kafka | [2024-04-25 12:40:02,101] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-27 from NonExistentReplica to NewReplica (state.change.logger) policy-pap | reconnect.backoff.max.ms = 1000 policy-db-migrator | > upgrade 0900-FK_ToscaServiceTemplate_relationshipTypesName.sql policy-apex-pdp | [2024-04-25T12:40:13.164+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 99 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} grafana | logger=migrator t=2024-04-25T12:39:28.938582474Z level=info msg="Executing migration" id=create_ngalert_configuration_table kafka | [2024-04-25 12:40:02,101] TRACE [Controller id=1 epoch=1] Changed 
state of replica 1 for partition __consumer_offsets-45 from NonExistentReplica to NewReplica (state.change.logger) policy-pap | reconnect.backoff.ms = 50 policy-db-migrator | -------------- policy-apex-pdp | [2024-04-25T12:40:13.220+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Error while fetching metadata with correlation id 196 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} grafana | logger=migrator t=2024-04-25T12:39:28.939647769Z level=info msg="Migration successfully executed" id=create_ngalert_configuration_table duration=1.067555ms kafka | [2024-04-25 12:40:02,101] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-19 from NonExistentReplica to NewReplica (state.change.logger) policy-pap | request.timeout.ms = 30000 policy-db-migrator | CREATE INDEX FK_ToscaServiceTemplate_relationshipTypesName ON toscaservicetemplate(relationshipTypesName, relationshipTypesVersion) policy-apex-pdp | [2024-04-25T12:40:13.268+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 100 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} grafana | logger=migrator t=2024-04-25T12:39:28.944703915Z level=info msg="Executing migration" id="add index in ngalert_configuration on org_id column" kafka | [2024-04-25 12:40:02,101] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-49 from NonExistentReplica to NewReplica (state.change.logger) policy-pap | retries = 2147483647 policy-db-migrator | -------------- policy-apex-pdp | [2024-04-25T12:40:13.323+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Error while fetching metadata with correlation id 198 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 
grafana | logger=migrator t=2024-04-25T12:39:28.946549449Z level=info msg="Migration successfully executed" id="add index in ngalert_configuration on org_id column" duration=1.842714ms kafka | [2024-04-25 12:40:02,101] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-40 from NonExistentReplica to NewReplica (state.change.logger) policy-pap | retry.backoff.ms = 100 policy-db-migrator | policy-apex-pdp | [2024-04-25T12:40:13.371+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 101 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} grafana | logger=migrator t=2024-04-25T12:39:28.951647877Z level=info msg="Executing migration" id="add column send_alerts_to in ngalert_configuration" kafka | [2024-04-25 12:40:02,101] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-41 from NonExistentReplica to NewReplica (state.change.logger) policy-pap | sasl.client.callback.handler.class = null policy-db-migrator | policy-apex-pdp | [2024-04-25T12:40:13.426+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Error while fetching metadata with correlation id 200 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} grafana | logger=migrator t=2024-04-25T12:39:28.961389075Z level=info msg="Migration successfully executed" id="add column send_alerts_to in ngalert_configuration" duration=9.737058ms kafka | [2024-04-25 12:40:02,101] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-38 from NonExistentReplica to NewReplica (state.change.logger) policy-pap | sasl.jaas.config = null policy-db-migrator | > upgrade 0910-FK_ToscaTopologyTemplate_nodeTemplatesName.sql policy-apex-pdp | [2024-04-25T12:40:13.474+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] 
[Producer clientId=producer-1] Error while fetching metadata with correlation id 102 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} grafana | logger=migrator t=2024-04-25T12:39:28.966795516Z level=info msg="Executing migration" id="create provenance_type table" kafka | [2024-04-25 12:40:02,101] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-8 from NonExistentReplica to NewReplica (state.change.logger) policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit policy-db-migrator | -------------- policy-apex-pdp | [2024-04-25T12:40:13.528+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Error while fetching metadata with correlation id 202 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} grafana | logger=migrator t=2024-04-25T12:39:28.967661888Z level=info msg="Migration successfully executed" id="create provenance_type table" duration=868.232µs kafka | [2024-04-25 12:40:02,102] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-7 from NonExistentReplica to NewReplica (state.change.logger) policy-pap | sasl.kerberos.min.time.before.relogin = 60000 policy-db-migrator | CREATE INDEX FK_ToscaTopologyTemplate_nodeTemplatesName ON toscatopologytemplate(nodeTemplatesName, nodeTemplatessVersion) policy-apex-pdp | [2024-04-25T12:40:13.578+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 103 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} kafka | [2024-04-25 12:40:02,102] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-33 from NonExistentReplica to NewReplica (state.change.logger) policy-pap | sasl.kerberos.service.name = null grafana | logger=migrator t=2024-04-25T12:39:28.976461634Z level=info msg="Executing migration" id="add index to uniquify (record_key, 
record_type, org_id) columns" policy-db-migrator | -------------- policy-apex-pdp | [2024-04-25T12:40:13.631+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Error while fetching metadata with correlation id 204 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} kafka | [2024-04-25 12:40:02,102] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-25 from NonExistentReplica to NewReplica (state.change.logger) policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 grafana | logger=migrator t=2024-04-25T12:39:28.978718704Z level=info msg="Migration successfully executed" id="add index to uniquify (record_key, record_type, org_id) columns" duration=2.25974ms policy-db-migrator | policy-apex-pdp | [2024-04-25T12:40:13.682+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 104 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} kafka | [2024-04-25 12:40:02,102] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-31 from NonExistentReplica to NewReplica (state.change.logger) policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 grafana | logger=migrator t=2024-04-25T12:39:28.987053084Z level=info msg="Executing migration" id="create alert_image table" policy-db-migrator | policy-apex-pdp | [2024-04-25T12:40:13.734+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Error while fetching metadata with correlation id 206 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} kafka | [2024-04-25 12:40:02,102] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-23 from NonExistentReplica to NewReplica (state.change.logger) policy-pap | 
sasl.login.callback.handler.class = null grafana | logger=migrator t=2024-04-25T12:39:28.988243859Z level=info msg="Migration successfully executed" id="create alert_image table" duration=1.193285ms policy-apex-pdp | [2024-04-25T12:40:13.785+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 105 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} kafka | [2024-04-25 12:40:02,102] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-10 from NonExistentReplica to NewReplica (state.change.logger) policy-pap | sasl.login.class = null policy-db-migrator | > upgrade 0920-FK_ToscaTopologyTemplate_policyName.sql grafana | logger=migrator t=2024-04-25T12:39:29.132678132Z level=info msg="Executing migration" id="add unique index on token to alert_image table" policy-apex-pdp | [2024-04-25T12:40:13.838+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Error while fetching metadata with correlation id 208 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} kafka | [2024-04-25 12:40:02,102] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-2 from NonExistentReplica to NewReplica (state.change.logger) policy-pap | sasl.login.connect.timeout.ms = null policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-25T12:39:29.134022429Z level=info msg="Migration successfully executed" id="add unique index on token to alert_image table" duration=1.348607ms policy-apex-pdp | [2024-04-25T12:40:13.888+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 106 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} kafka | [2024-04-25 12:40:02,102] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition 
__consumer_offsets-17 from NonExistentReplica to NewReplica (state.change.logger) policy-pap | sasl.login.read.timeout.ms = null policy-db-migrator | CREATE INDEX FK_ToscaTopologyTemplate_policyName ON toscatopologytemplate(policyName, policyVersion) grafana | logger=migrator t=2024-04-25T12:39:29.146612115Z level=info msg="Executing migration" id="support longer URLs in alert_image table" policy-apex-pdp | [2024-04-25T12:40:13.940+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Error while fetching metadata with correlation id 210 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} kafka | [2024-04-25 12:40:02,102] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-4 from NonExistentReplica to NewReplica (state.change.logger) policy-pap | sasl.login.refresh.buffer.seconds = 300 policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-25T12:39:29.14691112Z level=info msg="Migration successfully executed" id="support longer URLs in alert_image table" duration=298.235µs policy-apex-pdp | [2024-04-25T12:40:13.991+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 107 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} kafka | [2024-04-25 12:40:02,102] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-15 from NonExistentReplica to NewReplica (state.change.logger) policy-pap | sasl.login.refresh.min.period.seconds = 60 policy-db-migrator | grafana | logger=migrator t=2024-04-25T12:39:29.152515273Z level=info msg="Executing migration" id=create_alert_configuration_history_table policy-apex-pdp | [2024-04-25T12:40:14.042+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, 
groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Error while fetching metadata with correlation id 212 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} kafka | [2024-04-25 12:40:02,102] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-26 from NonExistentReplica to NewReplica (state.change.logger) policy-pap | sasl.login.refresh.window.factor = 0.8 policy-pap | sasl.login.refresh.window.jitter = 0.05 grafana | logger=migrator t=2024-04-25T12:39:29.154205245Z level=info msg="Migration successfully executed" id=create_alert_configuration_history_table duration=1.685372ms policy-apex-pdp | [2024-04-25T12:40:14.099+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 108 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-pap | sasl.login.retry.backoff.max.ms = 10000 kafka | [2024-04-25 12:40:02,102] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-3 from NonExistentReplica to NewReplica (state.change.logger) policy-db-migrator | grafana | logger=migrator t=2024-04-25T12:39:29.161896637Z level=info msg="Executing migration" id="drop non-unique orgID index on alert_configuration" policy-apex-pdp | [2024-04-25T12:40:14.145+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Error while fetching metadata with correlation id 214 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-pap | sasl.login.retry.backoff.ms = 100 kafka | [2024-04-25 12:40:02,102] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger) policy-db-migrator | > upgrade 0940-PdpPolicyStatus_PdpGroup.sql grafana | logger=migrator t=2024-04-25T12:39:29.163109682Z level=info msg="Migration successfully executed" id="drop non-unique orgID index on alert_configuration" 
duration=1.209705ms policy-apex-pdp | [2024-04-25T12:40:14.203+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 109 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-pap | sasl.mechanism = GSSAPI kafka | [2024-04-25 12:40:03,937] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-22 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-25T12:39:29.167398299Z level=info msg="Executing migration" id="drop unique orgID index on alert_configuration if exists" policy-apex-pdp | [2024-04-25T12:40:14.248+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Error while fetching metadata with correlation id 216 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 kafka | [2024-04-25 12:40:03,937] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-30 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-db-migrator | CREATE INDEX PdpPolicyStatus_PdpGroup ON pdppolicystatus(PDPGROUP) grafana | logger=migrator t=2024-04-25T12:39:29.1682777Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop unique orgID index on alert_configuration if exists" policy-apex-pdp | [2024-04-25T12:40:14.306+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 110 : 
{policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-pap | sasl.oauthbearer.expected.audience = null kafka | [2024-04-25 12:40:03,937] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-25 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-25T12:39:29.172266634Z level=info msg="Executing migration" id="extract alertmanager configuration history to separate table" policy-apex-pdp | [2024-04-25T12:40:14.350+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Error while fetching metadata with correlation id 218 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-pap | sasl.oauthbearer.expected.issuer = null kafka | [2024-04-25 12:40:03,938] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-35 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-db-migrator | grafana | logger=migrator t=2024-04-25T12:39:29.173107284Z level=info msg="Migration successfully executed" id="extract alertmanager configuration history to separate table" duration=841.2µs policy-apex-pdp | [2024-04-25T12:40:14.410+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 111 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 kafka | [2024-04-25 12:40:03,938] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-38 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, 
leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
policy-db-migrator |
grafana | logger=migrator t=2024-04-25T12:39:29.178450465Z level=info msg="Executing migration" id="add unique index on orgID to alert_configuration"
policy-apex-pdp | [2024-04-25T12:40:14.452+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Error while fetching metadata with correlation id 220 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
kafka | [2024-04-25 12:40:03,938] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-13 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
policy-db-migrator | > upgrade 0950-TscaServiceTemplatetopologyTemplateParentLocalName.sql
grafana | logger=migrator t=2024-04-25T12:39:29.179815832Z level=info msg="Migration successfully executed" id="add unique index on orgID to alert_configuration" duration=1.364697ms
policy-apex-pdp | [2024-04-25T12:40:14.513+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 112 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
kafka | [2024-04-25 12:40:03,938] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-8 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
policy-db-migrator | --------------
grafana | logger=migrator t=2024-04-25T12:39:29.184392803Z level=info msg="Executing migration" id="add last_applied column to alert_configuration_history"
policy-apex-pdp | [2024-04-25T12:40:14.555+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Error while fetching metadata with correlation id 222 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
policy-pap | sasl.oauthbearer.jwks.endpoint.url = null
kafka | [2024-04-25 12:40:03,938] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-21 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
policy-db-migrator | CREATE INDEX TscaServiceTemplatetopologyTemplateParentLocalName ON toscaservicetemplate(topologyTemplateParentLocalName, topologyTemplateLocalName, topologyTemplateParentKeyVersion, topologyTemplateParentKeyName)
grafana | logger=migrator t=2024-04-25T12:39:29.193779466Z level=info msg="Migration successfully executed" id="add last_applied column to alert_configuration_history" duration=9.388703ms
policy-apex-pdp | [2024-04-25T12:40:14.616+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 113 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
policy-pap | sasl.oauthbearer.scope.claim.name = scope
kafka | [2024-04-25 12:40:03,938] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-4 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
policy-db-migrator | --------------
grafana | logger=migrator t=2024-04-25T12:39:29.197112681Z level=info msg="Executing migration" id="create library_element table v1"
policy-apex-pdp | [2024-04-25T12:40:14.659+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Error while fetching metadata with correlation id 224 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
policy-pap | sasl.oauthbearer.sub.claim.name = sub
kafka | [2024-04-25 12:40:03,938] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-27 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
policy-db-migrator |
grafana | logger=migrator t=2024-04-25T12:39:29.197977462Z level=info msg="Migration successfully executed" id="create library_element table v1" duration=864.791µs
policy-apex-pdp | [2024-04-25T12:40:14.720+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 114 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
policy-pap | sasl.oauthbearer.token.endpoint.url = null
kafka | [2024-04-25 12:40:03,938] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-7 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
policy-db-migrator |
grafana | logger=migrator t=2024-04-25T12:39:29.206275591Z level=info msg="Executing migration" id="add index library_element org_id-folder_id-name-kind"
policy-apex-pdp | [2024-04-25T12:40:14.761+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Error while fetching metadata with correlation id 226 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
policy-pap | security.protocol = PLAINTEXT
kafka | [2024-04-25 12:40:03,938] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-9 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
policy-db-migrator | > upgrade 0960-FK_ToscaNodeTemplate_capabilitiesName.sql
grafana | logger=migrator t=2024-04-25T12:39:29.207679339Z level=info msg="Migration successfully executed" id="add index library_element org_id-folder_id-name-kind" duration=1.406538ms
policy-apex-pdp | [2024-04-25T12:40:14.823+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 115 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
policy-pap | security.providers = null
kafka | [2024-04-25 12:40:03,938] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-46 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
policy-db-migrator | --------------
grafana | logger=migrator t=2024-04-25T12:39:29.21148184Z level=info msg="Executing migration" id="create library_element_connection table v1"
policy-apex-pdp | [2024-04-25T12:40:14.864+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Error while fetching metadata with correlation id 228 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
policy-pap | send.buffer.bytes = 131072
kafka | [2024-04-25 12:40:03,938] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-41 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
policy-db-migrator | ALTER TABLE toscanodetemplate ADD CONSTRAINT FK_ToscaNodeTemplate_capabilitiesName FOREIGN KEY (capabilitiesName, capabilitiesVersion) REFERENCES toscacapabilityassignments (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT
grafana | logger=migrator t=2024-04-25T12:39:29.212775117Z level=info msg="Migration successfully executed" id="create library_element_connection table v1" duration=1.291417ms
policy-apex-pdp | [2024-04-25T12:40:14.927+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 116 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
policy-pap | socket.connection.setup.timeout.max.ms = 30000
kafka | [2024-04-25 12:40:03,938] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-33 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
policy-db-migrator | --------------
grafana | logger=migrator t=2024-04-25T12:39:29.21755667Z level=info msg="Executing migration" id="add index library_element_connection element_id-kind-connection_id"
policy-apex-pdp | [2024-04-25T12:40:14.966+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Error while fetching metadata with correlation id 230 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
policy-pap | socket.connection.setup.timeout.ms = 10000
kafka | [2024-04-25 12:40:03,938] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-23 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
policy-db-migrator |
grafana | logger=migrator t=2024-04-25T12:39:29.219506666Z level=info msg="Migration successfully executed" id="add index library_element_connection element_id-kind-connection_id" duration=1.949316ms
policy-apex-pdp | [2024-04-25T12:40:15.030+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 117 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
policy-pap | ssl.cipher.suites = null
kafka | [2024-04-25 12:40:03,938] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-49 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
policy-db-migrator |
grafana | logger=migrator t=2024-04-25T12:39:29.224947856Z level=info msg="Executing migration" id="add unique index library_element org_id_uid"
policy-apex-pdp | [2024-04-25T12:40:15.069+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Error while fetching metadata with correlation id 232 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
kafka | [2024-04-25 12:40:03,938] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-47 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
policy-db-migrator | > upgrade 0970-FK_ToscaNodeTemplate_requirementsName.sql
grafana | logger=migrator t=2024-04-25T12:39:29.226333365Z level=info msg="Migration successfully executed" id="add unique index library_element org_id_uid" duration=1.384879ms
policy-apex-pdp | [2024-04-25T12:40:15.133+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 118 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
policy-pap | ssl.endpoint.identification.algorithm = https
kafka | [2024-04-25 12:40:03,938] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-16 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
policy-db-migrator | --------------
grafana | logger=migrator t=2024-04-25T12:39:29.23054552Z level=info msg="Executing migration" id="increase max description length to 2048"
policy-apex-pdp | [2024-04-25T12:40:15.172+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Error while fetching metadata with correlation id 234 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
policy-pap | ssl.engine.factory.class = null
kafka | [2024-04-25 12:40:03,939] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-28 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
policy-db-migrator | ALTER TABLE toscanodetemplate ADD CONSTRAINT FK_ToscaNodeTemplate_requirementsName FOREIGN KEY (requirementsName, requirementsVersion) REFERENCES toscarequirements (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT
grafana | logger=migrator t=2024-04-25T12:39:29.230635591Z level=info msg="Migration successfully executed" id="increase max description length to 2048" duration=93.061µs
policy-apex-pdp | [2024-04-25T12:40:15.236+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 119 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
policy-pap | ssl.key.password = null
kafka | [2024-04-25 12:40:03,939] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-31 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
policy-db-migrator | --------------
grafana | logger=migrator t=2024-04-25T12:39:29.23509879Z level=info msg="Executing migration" id="alter library_element model to mediumtext"
policy-apex-pdp | [2024-04-25T12:40:15.275+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Error while fetching metadata with correlation id 236 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
policy-pap | ssl.keymanager.algorithm = SunX509
kafka | [2024-04-25 12:40:03,939] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-36 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
policy-db-migrator |
grafana | logger=migrator t=2024-04-25T12:39:29.235278673Z level=info msg="Migration successfully executed" id="alter library_element model to mediumtext" duration=180.013µs
policy-apex-pdp | [2024-04-25T12:40:15.339+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 120 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
policy-pap | ssl.keystore.certificate.chain = null
kafka | [2024-04-25 12:40:03,939] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-42 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
policy-db-migrator |
grafana | logger=migrator t=2024-04-25T12:39:29.239840703Z level=info msg="Executing migration" id="clone move dashboard alerts to unified alerting"
policy-apex-pdp | [2024-04-25T12:40:15.377+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Error while fetching metadata with correlation id 238 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
policy-pap | ssl.keystore.key = null
policy-db-migrator | > upgrade 0980-FK_ToscaNodeType_requirementsName.sql
kafka | [2024-04-25 12:40:03,939] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-3 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
grafana | logger=migrator t=2024-04-25T12:39:29.240313859Z level=info msg="Migration successfully executed" id="clone move dashboard alerts to unified alerting" duration=473.106µs
policy-apex-pdp | [2024-04-25T12:40:15.443+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 121 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
policy-pap | ssl.keystore.location = null
policy-db-migrator | --------------
kafka | [2024-04-25 12:40:03,939] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-18 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
grafana | logger=migrator t=2024-04-25T12:39:29.248947783Z level=info msg="Executing migration" id="create data_keys table"
policy-apex-pdp | [2024-04-25T12:40:15.480+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Error while fetching metadata with correlation id 240 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
policy-pap | ssl.keystore.password = null
policy-db-migrator | ALTER TABLE toscanodetype ADD CONSTRAINT FK_ToscaNodeType_requirementsName FOREIGN KEY (requirementsName, requirementsVersion) REFERENCES toscarequirements (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT
kafka | [2024-04-25 12:40:03,939] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-37 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
grafana | logger=migrator t=2024-04-25T12:39:29.250695706Z level=info msg="Migration successfully executed" id="create data_keys table" duration=1.748433ms
policy-apex-pdp | [2024-04-25T12:40:15.547+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 122 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
policy-pap | ssl.keystore.type = JKS
policy-db-migrator | --------------
kafka | [2024-04-25 12:40:03,939] INFO [Controller id=1 epoch=1] Changed partition policy-pdp-pap-0 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
grafana | logger=migrator t=2024-04-25T12:39:29.2562556Z level=info msg="Executing migration" id="create secrets table"
policy-apex-pdp | [2024-04-25T12:40:15.583+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Error while fetching metadata with correlation id 242 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
policy-pap | ssl.protocol = TLSv1.3
policy-db-migrator |
kafka | [2024-04-25 12:40:03,939] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-15 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
grafana | logger=migrator t=2024-04-25T12:39:29.257227562Z level=info msg="Migration successfully executed" id="create secrets table" duration=972.162µs
policy-pap | ssl.provider = null
policy-apex-pdp | [2024-04-25T12:40:15.651+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 123 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
policy-db-migrator |
kafka | [2024-04-25 12:40:03,939] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-24 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
grafana | logger=migrator t=2024-04-25T12:39:29.261157333Z level=info msg="Executing migration" id="rename data_keys name column to id"
policy-pap | ssl.secure.random.implementation = null
policy-apex-pdp | [2024-04-25T12:40:15.685+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Error while fetching metadata with correlation id 244 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
policy-db-migrator | > upgrade 0990-FK_ToscaServiceTemplate_capabilityTypesName.sql
kafka | [2024-04-25 12:40:03,939] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-17 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
grafana | logger=migrator t=2024-04-25T12:39:29.292869972Z level=info msg="Migration successfully executed" id="rename data_keys name column to id" duration=31.711279ms
policy-pap | ssl.trustmanager.algorithm = PKIX
policy-apex-pdp | [2024-04-25T12:40:15.755+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 124 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
policy-db-migrator | --------------
kafka | [2024-04-25 12:40:03,939] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-48 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
grafana | logger=migrator t=2024-04-25T12:39:29.297830066Z level=info msg="Executing migration" id="add name column into data_keys"
policy-pap | ssl.truststore.certificates = null
policy-apex-pdp | [2024-04-25T12:40:15.788+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Error while fetching metadata with correlation id 246 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT FK_ToscaServiceTemplate_capabilityTypesName FOREIGN KEY (capabilityTypesName, capabilityTypesVersion) REFERENCES toscacapabilitytypes (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT
kafka | [2024-04-25 12:40:03,940] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-19 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
grafana | logger=migrator t=2024-04-25T12:39:29.303133596Z level=info msg="Migration successfully executed" id="add name column into data_keys" duration=5.3035ms
policy-pap | ssl.truststore.location = null
policy-apex-pdp | [2024-04-25T12:40:15.858+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 125 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
policy-db-migrator | --------------
kafka | [2024-04-25 12:40:03,940] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-11 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
grafana | logger=migrator t=2024-04-25T12:39:29.308490117Z level=info msg="Executing migration" id="copy data_keys id column values into name"
policy-pap | ssl.truststore.password = null
policy-apex-pdp | [2024-04-25T12:40:15.890+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Error while fetching metadata with correlation id 248 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
policy-db-migrator |
kafka | [2024-04-25 12:40:03,940] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-2 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
grafana | logger=migrator t=2024-04-25T12:39:29.30874169Z level=info msg="Migration successfully executed" id="copy data_keys id column values into name" duration=251.293µs
policy-pap | ssl.truststore.type = JKS
policy-apex-pdp | [2024-04-25T12:40:15.960+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 126 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
policy-db-migrator |
kafka | [2024-04-25 12:40:03,940] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-43 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
grafana | logger=migrator t=2024-04-25T12:39:29.314527626Z level=info msg="Executing migration" id="rename data_keys name column to label"
policy-pap | transaction.timeout.ms = 60000
policy-apex-pdp | [2024-04-25T12:40:15.993+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Error while fetching metadata with correlation id 250 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
policy-db-migrator | > upgrade 1000-FK_ToscaServiceTemplate_dataTypesName.sql
kafka | [2024-04-25 12:40:03,940] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-6 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
grafana | logger=migrator t=2024-04-25T12:39:29.348435494Z level=info msg="Migration successfully executed" id="rename data_keys name column to label" duration=33.905198ms
policy-pap | transactional.id = null
policy-apex-pdp | [2024-04-25T12:40:16.066+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 127 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
policy-db-migrator | --------------
kafka | [2024-04-25 12:40:03,940] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-14 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
grafana | logger=migrator t=2024-04-25T12:39:29.354624725Z level=info msg="Executing migration" id="rename data_keys id column back to name"
policy-pap | value.serializer = class org.apache.kafka.common.serialization.StringSerializer
policy-apex-pdp | [2024-04-25T12:40:16.096+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Error while fetching metadata with correlation id 252 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT FK_ToscaServiceTemplate_dataTypesName FOREIGN KEY (dataTypesName, dataTypesVersion) REFERENCES toscadatatypes (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT
kafka | [2024-04-25 12:40:03,940] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-20 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
grafana | logger=migrator t=2024-04-25T12:39:29.387303375Z level=info msg="Migration successfully executed" id="rename data_keys id column back to name" duration=32.67765ms
policy-pap |
policy-apex-pdp | [2024-04-25T12:40:16.169+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 128 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
policy-db-migrator | --------------
kafka | [2024-04-25 12:40:03,940] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-0 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
policy-pap | [2024-04-25T12:40:01.087+00:00|INFO|KafkaProducer|main] [Producer clientId=producer-1] Instantiated an idempotent producer.
grafana | logger=migrator t=2024-04-25T12:39:29.396628498Z level=info msg="Executing migration" id="create kv_store table v1"
policy-apex-pdp | [2024-04-25T12:40:16.199+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Error while fetching metadata with correlation id 254 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
policy-db-migrator |
kafka | [2024-04-25 12:40:03,940] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-44 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
policy-pap | [2024-04-25T12:40:01.103+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1
grafana | logger=migrator t=2024-04-25T12:39:29.398103917Z level=info msg="Migration successfully executed" id="create kv_store table v1" duration=1.478419ms
policy-apex-pdp | [2024-04-25T12:40:16.272+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 129 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
policy-db-migrator |
kafka | [2024-04-25 12:40:03,940] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-39 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
policy-pap | [2024-04-25T12:40:01.103+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5
grafana | logger=migrator t=2024-04-25T12:39:29.402251742Z level=info msg="Executing migration" id="add index kv_store.org_id-namespace-key"
policy-apex-pdp | [2024-04-25T12:40:16.302+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Error while fetching metadata with correlation id 256 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
policy-db-migrator | > upgrade 1010-FK_ToscaServiceTemplate_nodeTypesName.sql
kafka | [2024-04-25 12:40:03,940] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-12 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
policy-pap | [2024-04-25T12:40:01.103+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1714048801103
grafana | logger=migrator t=2024-04-25T12:39:29.404281738Z level=info msg="Migration successfully executed" id="add index kv_store.org_id-namespace-key" duration=2.030296ms
policy-apex-pdp | [2024-04-25T12:40:16.376+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 130 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
policy-db-migrator | --------------
kafka | [2024-04-25 12:40:03,940] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-45 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
policy-pap | [2024-04-25T12:40:01.104+00:00|INFO|InlineKafkaTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=4a68cc94-8fc9-4290-b3af-12928780cd05, alive=false, publisher=KafkaPublisherWrapper []]]: KAFKA SINK created
grafana | logger=migrator t=2024-04-25T12:39:29.460359927Z level=info msg="Executing migration" id="update dashboard_uid and panel_id from existing annotations"
policy-apex-pdp | [2024-04-25T12:40:16.471+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Error while fetching metadata with correlation id 258 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT FK_ToscaServiceTemplate_nodeTypesName FOREIGN KEY (nodeTypesName, nodeTypesVersion) REFERENCES toscanodetypes (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT
kafka | [2024-04-25 12:40:03,941] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-1 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
policy-pap | [2024-04-25T12:40:01.104+00:00|INFO|InlineBusTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=14d92362-e0b3-4597-b9c4-41b06f6af1c6, alive=false, publisher=null]]: starting
grafana | logger=migrator t=2024-04-25T12:39:29.460880355Z level=info msg="Migration successfully executed" id="update dashboard_uid and panel_id from existing annotations" duration=520.428µs
policy-apex-pdp | [2024-04-25T12:40:16.478+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 131 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
policy-db-migrator | --------------
kafka | [2024-04-25 12:40:03,941] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-5 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
policy-pap | [2024-04-25T12:40:01.104+00:00|INFO|ProducerConfig|main] ProducerConfig values:
grafana | logger=migrator t=2024-04-25T12:39:29.46889952Z level=info msg="Executing migration" id="create permission table"
policy-apex-pdp | [2024-04-25T12:40:16.574+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Error while fetching metadata with correlation id 260 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
policy-db-migrator |
kafka | [2024-04-25 12:40:03,941] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-26 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
policy-pap | acks = -1
grafana | logger=migrator t=2024-04-25T12:39:29.470489431Z level=info msg="Migration successfully executed" id="create permission table" duration=1.589741ms
policy-apex-pdp | [2024-04-25T12:40:16.580+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 132 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
policy-db-migrator |
kafka | [2024-04-25 12:40:03,941] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-29 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
policy-pap | auto.include.jmx.reporter = true
grafana | logger=migrator t=2024-04-25T12:39:29.478230183Z level=info msg="Executing migration" id="add unique index permission.role_id"
policy-apex-pdp | [2024-04-25T12:40:16.679+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Error while fetching metadata with correlation id 262 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
policy-db-migrator | > upgrade 1020-FK_ToscaServiceTemplate_policyTypesName.sql
kafka | [2024-04-25 12:40:03,941] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-34 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
policy-pap | batch.size = 16384
grafana | logger=migrator t=2024-04-25T12:39:29.479379717Z level=info msg="Migration successfully executed" id="add unique index permission.role_id" duration=1.150204ms
policy-apex-pdp | [2024-04-25T12:40:16.683+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 133 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
policy-db-migrator | --------------
kafka | [2024-04-25 12:40:03,941] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-10 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
policy-pap | bootstrap.servers = [kafka:9092]
grafana | logger=migrator t=2024-04-25T12:39:29.485366206Z level=info msg="Executing migration" id="add unique index role_id_action_scope"
policy-apex-pdp | [2024-04-25T12:40:16.782+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Error while fetching metadata with correlation id 264 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT FK_ToscaServiceTemplate_policyTypesName FOREIGN KEY (policyTypesName, policyTypesVersion) REFERENCES toscapolicytypes (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT
kafka | [2024-04-25 12:40:03,941] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-32 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
policy-pap | buffer.memory = 33554432
grafana | logger=migrator t=2024-04-25T12:39:29.487651806Z level=info msg="Migration successfully executed" id="add unique index role_id_action_scope" duration=2.28582ms
policy-apex-pdp | [2024-04-25T12:40:16.786+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 134 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
policy-db-migrator | --------------
kafka | [2024-04-25 12:40:03,941] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-40 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
policy-pap | client.dns.lookup = use_all_dns_ips
grafana | logger=migrator t=2024-04-25T12:39:29.493464123Z level=info msg="Executing migration" id="create role table"
policy-apex-pdp | [2024-04-25T12:40:16.886+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Error while fetching metadata with correlation id 266 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
policy-db-migrator | kafka | [2024-04-25 12:40:03,943] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=13, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-13 (state.change.logger) policy-pap | client.id = producer-2 grafana | logger=migrator t=2024-04-25T12:39:29.494590478Z level=info msg="Migration successfully executed" id="create role table" duration=1.125745ms policy-apex-pdp | [2024-04-25T12:40:16.889+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 135 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-db-migrator | kafka | [2024-04-25 12:40:03,943] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=46, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-46 (state.change.logger) policy-pap | compression.type = none grafana | logger=migrator t=2024-04-25T12:39:29.498480699Z level=info msg="Executing migration" id="add column display_name" policy-apex-pdp | [2024-04-25T12:40:16.989+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Error while fetching metadata with correlation id 268 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-db-migrator | > upgrade 1030-FK_ToscaServiceTemplate_relationshipTypesName.sql kafka | [2024-04-25 12:40:03,943] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request 
LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=9, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-9 (state.change.logger) policy-pap | connections.max.idle.ms = 540000 grafana | logger=migrator t=2024-04-25T12:39:29.506246482Z level=info msg="Migration successfully executed" id="add column display_name" duration=7.760233ms policy-apex-pdp | [2024-04-25T12:40:16.992+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 136 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-db-migrator | -------------- kafka | [2024-04-25 12:40:03,943] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=42, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-42 (state.change.logger) policy-pap | delivery.timeout.ms = 120000 grafana | logger=migrator t=2024-04-25T12:39:29.50993194Z level=info msg="Executing migration" id="add column group_name" policy-apex-pdp | [2024-04-25T12:40:17.093+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Error while fetching metadata with correlation id 270 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT FK_ToscaServiceTemplate_relationshipTypesName FOREIGN KEY (relationshipTypesName, relationshipTypesVersion) REFERENCES toscarelationshiptypes (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT kafka | [2024-04-25 12:40:03,943] TRACE [Controller id=1 epoch=1] 
Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=21, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-21 (state.change.logger) policy-pap | enable.idempotence = true grafana | logger=migrator t=2024-04-25T12:39:29.517843984Z level=info msg="Migration successfully executed" id="add column group_name" duration=7.911304ms policy-apex-pdp | [2024-04-25T12:40:17.095+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 137 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-db-migrator | -------------- kafka | [2024-04-25 12:40:03,943] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=17, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-17 (state.change.logger) policy-pap | interceptor.classes = [] grafana | logger=migrator t=2024-04-25T12:39:29.523472509Z level=info msg="Executing migration" id="add index role.org_id" policy-apex-pdp | [2024-04-25T12:40:17.197+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Error while fetching metadata with correlation id 272 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-db-migrator | kafka | [2024-04-25 12:40:03,943] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=30, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], 
addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-30 (state.change.logger) policy-pap | key.serializer = class org.apache.kafka.common.serialization.StringSerializer grafana | logger=migrator t=2024-04-25T12:39:29.524549973Z level=info msg="Migration successfully executed" id="add index role.org_id" duration=1.077433ms policy-apex-pdp | [2024-04-25T12:40:17.200+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 138 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-db-migrator | kafka | [2024-04-25 12:40:03,943] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=26, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-26 (state.change.logger) policy-pap | linger.ms = 0 grafana | logger=migrator t=2024-04-25T12:39:29.528598836Z level=info msg="Executing migration" id="add unique index role_org_id_name" policy-apex-pdp | [2024-04-25T12:40:17.299+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Error while fetching metadata with correlation id 274 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-db-migrator | > upgrade 1040-FK_ToscaTopologyTemplate_nodeTemplatesName.sql kafka | [2024-04-25 12:40:03,943] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=5, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition 
__consumer_offsets-5 (state.change.logger) policy-pap | max.block.ms = 60000 grafana | logger=migrator t=2024-04-25T12:39:29.530079725Z level=info msg="Migration successfully executed" id="add unique index role_org_id_name" duration=1.480799ms policy-apex-pdp | [2024-04-25T12:40:17.304+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 139 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-db-migrator | -------------- kafka | [2024-04-25 12:40:03,943] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=38, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-38 (state.change.logger) policy-pap | max.in.flight.requests.per.connection = 5 grafana | logger=migrator t=2024-04-25T12:39:29.533722904Z level=info msg="Executing migration" id="add index role_org_id_uid" policy-apex-pdp | [2024-04-25T12:40:17.403+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Error while fetching metadata with correlation id 276 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-db-migrator | ALTER TABLE toscatopologytemplate ADD CONSTRAINT FK_ToscaTopologyTemplate_nodeTemplatesName FOREIGN KEY (nodeTemplatesName, nodeTemplatessVersion) REFERENCES toscanodetemplates (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT kafka | [2024-04-25 12:40:03,943] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=1, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, 
leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-1 (state.change.logger) policy-pap | max.request.size = 1048576 grafana | logger=migrator t=2024-04-25T12:39:29.534837298Z level=info msg="Migration successfully executed" id="add index role_org_id_uid" duration=1.116214ms policy-apex-pdp | [2024-04-25T12:40:17.409+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 140 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-db-migrator | -------------- kafka | [2024-04-25 12:40:03,943] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=34, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-34 (state.change.logger) policy-pap | metadata.max.age.ms = 300000 grafana | logger=migrator t=2024-04-25T12:39:29.547795179Z level=info msg="Executing migration" id="create team role table" policy-apex-pdp | [2024-04-25T12:40:17.507+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Error while fetching metadata with correlation id 278 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-db-migrator | kafka | [2024-04-25 12:40:03,944] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=16, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-16 (state.change.logger) policy-pap | metadata.max.idle.ms = 300000 grafana | logger=migrator t=2024-04-25T12:39:29.549340219Z 
level=info msg="Migration successfully executed" id="create team role table" duration=1.54472ms policy-apex-pdp | [2024-04-25T12:40:17.511+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 141 : {policy-pdp-pap=UNKNOWN_TOPIC_OR_PARTITION} policy-db-migrator | kafka | [2024-04-25 12:40:03,944] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=45, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-45 (state.change.logger) policy-pap | metric.reporters = [] grafana | logger=migrator t=2024-04-25T12:39:29.55472763Z level=info msg="Executing migration" id="add index team_role.org_id" policy-apex-pdp | [2024-04-25T12:40:17.609+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Error while fetching metadata with correlation id 280 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-db-migrator | > upgrade 1050-FK_ToscaTopologyTemplate_policyName.sql kafka | [2024-04-25 12:40:03,944] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=12, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-12 (state.change.logger) policy-pap | metrics.num.samples = 2 grafana | logger=migrator t=2024-04-25T12:39:29.556618315Z level=info msg="Migration successfully executed" id="add index team_role.org_id" duration=1.890405ms policy-apex-pdp | 
[2024-04-25T12:40:17.615+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 142 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-db-migrator | -------------- kafka | [2024-04-25 12:40:03,944] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=41, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-41 (state.change.logger) policy-pap | metrics.recording.level = INFO grafana | logger=migrator t=2024-04-25T12:39:29.560604117Z level=info msg="Executing migration" id="add unique index team_role_org_id_team_id_role_id" policy-apex-pdp | [2024-04-25T12:40:17.713+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Error while fetching metadata with correlation id 282 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-db-migrator | ALTER TABLE toscatopologytemplate ADD CONSTRAINT FK_ToscaTopologyTemplate_policyName FOREIGN KEY (policyName, policyVersion) REFERENCES toscapolicies (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT policy-pap | metrics.sample.window.ms = 30000 kafka | [2024-04-25 12:40:03,944] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=24, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-24 (state.change.logger) grafana | logger=migrator t=2024-04-25T12:39:29.561868765Z level=info msg="Migration successfully executed" id="add unique index 
team_role_org_id_team_id_role_id" duration=1.264438ms policy-apex-pdp | [2024-04-25T12:40:17.717+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 143 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-db-migrator | -------------- policy-pap | partitioner.adaptive.partitioning.enable = true kafka | [2024-04-25 12:40:03,944] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=20, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-20 (state.change.logger) grafana | logger=migrator t=2024-04-25T12:39:29.641192479Z level=info msg="Executing migration" id="add index team_role.team_id" policy-apex-pdp | [2024-04-25T12:40:17.816+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Error while fetching metadata with correlation id 285 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-db-migrator | policy-pap | partitioner.availability.timeout.ms = 0 kafka | [2024-04-25 12:40:03,944] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=49, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-49 (state.change.logger) grafana | logger=migrator t=2024-04-25T12:39:29.643249856Z level=info msg="Migration successfully executed" id="add index team_role.team_id" duration=2.058177ms policy-apex-pdp | [2024-04-25T12:40:17.820+00:00|WARN|NetworkClient|kafka-producer-network-thread | 
producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 144 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-db-migrator | policy-pap | partitioner.class = null kafka | [2024-04-25 12:40:03,944] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-0 (state.change.logger) grafana | logger=migrator t=2024-04-25T12:39:29.655084642Z level=info msg="Executing migration" id="create user role table" policy-apex-pdp | [2024-04-25T12:40:17.918+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Error while fetching metadata with correlation id 287 : {policy-pdp-pap=UNKNOWN_TOPIC_OR_PARTITION} policy-db-migrator | > upgrade 1060-TscaServiceTemplatetopologyTemplateParentLocalName.sql policy-pap | partitioner.ignore.keys = false kafka | [2024-04-25 12:40:03,944] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=29, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-29 (state.change.logger) grafana | logger=migrator t=2024-04-25T12:39:29.656167026Z level=info msg="Migration successfully executed" id="create user role table" duration=1.081154ms policy-apex-pdp | [2024-04-25T12:40:17.922+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 145 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 
policy-db-migrator | -------------- policy-pap | receive.buffer.bytes = 32768 kafka | [2024-04-25 12:40:03,944] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=25, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-25 (state.change.logger) grafana | logger=migrator t=2024-04-25T12:39:29.674217994Z level=info msg="Executing migration" id="add index user_role.org_id" policy-apex-pdp | [2024-04-25T12:40:18.022+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Error while fetching metadata with correlation id 289 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT TscaServiceTemplatetopologyTemplateParentLocalName FOREIGN KEY (topologyTemplateParentLocalName, topologyTemplateLocalName, topologyTemplateParentKeyVersion, topologyTemplateParentKeyName) REFERENCES toscatopologytemplate (parentLocalName, localName, parentKeyVersion, parentKeyName) ON UPDATE RESTRICT ON DELETE RESTRICT policy-pap | reconnect.backoff.max.ms = 1000 kafka | [2024-04-25 12:40:03,944] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=8, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-8 (state.change.logger) grafana | logger=migrator t=2024-04-25T12:39:29.676251251Z level=info msg="Migration successfully executed" id="add index user_role.org_id" duration=2.033607ms policy-apex-pdp | 
[2024-04-25T12:40:18.025+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 146 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-db-migrator | -------------- policy-pap | reconnect.backoff.ms = 50 kafka | [2024-04-25 12:40:03,944] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=37, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-37 (state.change.logger) grafana | logger=migrator t=2024-04-25T12:39:29.683889421Z level=info msg="Executing migration" id="add unique index user_role_org_id_user_id_role_id" policy-apex-pdp | [2024-04-25T12:40:18.126+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Error while fetching metadata with correlation id 291 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-db-migrator | kafka | [2024-04-25 12:40:03,944] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=4, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-4 (state.change.logger) grafana | logger=migrator t=2024-04-25T12:39:29.685365531Z level=info msg="Migration successfully executed" id="add unique index user_role_org_id_user_id_role_id" duration=1.4786ms policy-pap | request.timeout.ms = 30000 policy-apex-pdp | [2024-04-25T12:40:18.128+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata 
with correlation id 147 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-db-migrator | kafka | [2024-04-25 12:40:03,944] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=33, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-33 (state.change.logger) grafana | logger=migrator t=2024-04-25T12:39:29.693985775Z level=info msg="Executing migration" id="add index user_role.user_id" policy-pap | retries = 2147483647 policy-apex-pdp | [2024-04-25T12:40:18.229+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 148 : {policy-pdp-pap=UNKNOWN_TOPIC_OR_PARTITION} policy-db-migrator | > upgrade 0100-pdp.sql kafka | [2024-04-25 12:40:03,944] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=15, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-15 (state.change.logger) grafana | logger=migrator t=2024-04-25T12:39:29.69517818Z level=info msg="Migration successfully executed" id="add index user_role.user_id" duration=1.196665ms policy-pap | retry.backoff.ms = 100 policy-apex-pdp | [2024-04-25T12:40:18.230+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Error while fetching metadata with correlation id 293 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-db-migrator | -------------- kafka | [2024-04-25 12:40:03,944] TRACE [Controller id=1 epoch=1] Sending become-leader 
LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=48, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-48 (state.change.logger) grafana | logger=migrator t=2024-04-25T12:39:29.704198229Z level=info msg="Executing migration" id="create builtin role table" policy-pap | sasl.client.callback.handler.class = null policy-apex-pdp | [2024-04-25T12:40:18.330+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 149 : {policy-pdp-pap=UNKNOWN_TOPIC_OR_PARTITION} policy-db-migrator | ALTER TABLE pdp ADD COLUMN LASTUPDATE datetime NOT NULL DEFAULT CURRENT_TIMESTAMP AFTER HEALTHY kafka | [2024-04-25 12:40:03,944] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=11, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-11 (state.change.logger) grafana | logger=migrator t=2024-04-25T12:39:29.705860311Z level=info msg="Migration successfully executed" id="create builtin role table" duration=1.662241ms policy-pap | sasl.jaas.config = null policy-apex-pdp | [2024-04-25T12:40:18.333+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Error while fetching metadata with correlation id 295 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-db-migrator | -------------- kafka | [2024-04-25 12:40:03,944] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', 
partitionIndex=44, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-44 (state.change.logger) grafana | logger=migrator t=2024-04-25T12:39:29.711526095Z level=info msg="Executing migration" id="add index builtin_role.role_id" policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit policy-apex-pdp | [2024-04-25T12:40:18.434+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 150 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-db-migrator | kafka | [2024-04-25 12:40:03,944] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=23, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-23 (state.change.logger) grafana | logger=migrator t=2024-04-25T12:39:29.712831093Z level=info msg="Migration successfully executed" id="add index builtin_role.role_id" duration=1.305578ms policy-pap | sasl.kerberos.min.time.before.relogin = 60000 policy-apex-pdp | [2024-04-25T12:40:18.436+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Error while fetching metadata with correlation id 297 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-db-migrator | kafka | [2024-04-25 12:40:03,944] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=19, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, 
leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-19 (state.change.logger) grafana | logger=migrator t=2024-04-25T12:39:29.716095046Z level=info msg="Executing migration" id="add index builtin_role.name" policy-pap | sasl.kerberos.service.name = null policy-apex-pdp | [2024-04-25T12:40:18.537+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 151 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-db-migrator | > upgrade 0110-idx_tsidx1.sql kafka | [2024-04-25 12:40:03,944] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=32, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-32 (state.change.logger) grafana | logger=migrator t=2024-04-25T12:39:29.717248421Z level=info msg="Migration successfully executed" id="add index builtin_role.name" duration=1.153494ms policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 policy-apex-pdp | [2024-04-25T12:40:18.539+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Error while fetching metadata with correlation id 299 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-db-migrator | -------------- kafka | [2024-04-25 12:40:03,944] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=28, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-28 (state.change.logger) grafana | logger=migrator 
t=2024-04-25T12:39:29.720612605Z level=info msg="Executing migration" id="Add column org_id to builtin_role table" policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 policy-apex-pdp | [2024-04-25T12:40:18.640+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 152 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-db-migrator | CREATE INDEX IDX_TSIDX1 ON pdpstatistics(timeStamp, name, version) kafka | [2024-04-25 12:40:03,944] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=7, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-7 (state.change.logger) grafana | logger=migrator t=2024-04-25T12:39:29.728839753Z level=info msg="Migration successfully executed" id="Add column org_id to builtin_role table" duration=8.227109ms policy-pap | sasl.login.callback.handler.class = null policy-apex-pdp | [2024-04-25T12:40:18.642+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Error while fetching metadata with correlation id 301 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-db-migrator | -------------- kafka | [2024-04-25 12:40:03,944] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=40, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-40 (state.change.logger) grafana | logger=migrator t=2024-04-25T12:39:29.733005098Z level=info msg="Executing migration" 
id="add index builtin_role.org_id" policy-pap | sasl.login.class = null policy-apex-pdp | [2024-04-25T12:40:18.743+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 153 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-db-migrator | grafana | logger=migrator t=2024-04-25T12:39:29.734363096Z level=info msg="Migration successfully executed" id="add index builtin_role.org_id" duration=1.357268ms kafka | [2024-04-25 12:40:03,944] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=3, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-3 (state.change.logger) policy-pap | sasl.login.connect.timeout.ms = null policy-apex-pdp | [2024-04-25T12:40:18.745+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Error while fetching metadata with correlation id 303 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-db-migrator | grafana | logger=migrator t=2024-04-25T12:39:29.738520791Z level=info msg="Executing migration" id="add unique index builtin_role_org_id_role_id_role" kafka | [2024-04-25 12:40:03,944] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=36, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-36 (state.change.logger) policy-pap | sasl.login.read.timeout.ms = null policy-db-migrator | > upgrade 0120-pk_pdpstatistics.sql policy-apex-pdp | 
[2024-04-25T12:40:18.847+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Error while fetching metadata with correlation id 305 : {policy-pdp-pap=UNKNOWN_TOPIC_OR_PARTITION} grafana | logger=migrator t=2024-04-25T12:39:29.739649966Z level=info msg="Migration successfully executed" id="add unique index builtin_role_org_id_role_id_role" duration=1.128605ms kafka | [2024-04-25 12:40:03,944] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=47, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-47 (state.change.logger) policy-pap | sasl.login.refresh.buffer.seconds = 300 policy-db-migrator | -------------- policy-apex-pdp | [2024-04-25T12:40:18.848+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 154 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} grafana | logger=migrator t=2024-04-25T12:39:29.748578013Z level=info msg="Executing migration" id="Remove unique index role_org_id_uid" kafka | [2024-04-25 12:40:03,944] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=14, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-14 (state.change.logger) policy-pap | sasl.login.refresh.min.period.seconds = 60 policy-db-migrator | ALTER TABLE pdpstatistics DROP PRIMARY KEY policy-apex-pdp | [2024-04-25T12:40:18.949+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer 
clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Error while fetching metadata with correlation id 307 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} kafka | [2024-04-25 12:40:03,944] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=43, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-43 (state.change.logger) grafana | logger=migrator t=2024-04-25T12:39:29.750156354Z level=info msg="Migration successfully executed" id="Remove unique index role_org_id_uid" duration=1.577241ms policy-pap | sasl.login.refresh.window.factor = 0.8 policy-db-migrator | -------------- policy-apex-pdp | [2024-04-25T12:40:18.951+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 155 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} kafka | [2024-04-25 12:40:03,944] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=10, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-10 (state.change.logger) grafana | logger=migrator t=2024-04-25T12:39:29.754694134Z level=info msg="Executing migration" id="add unique index role.uid" policy-pap | sasl.login.refresh.window.jitter = 0.05 policy-db-migrator | policy-apex-pdp | [2024-04-25T12:40:19.052+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Error while fetching metadata with correlation id 309 : 
{policy-pdp-pap=LEADER_NOT_AVAILABLE} kafka | [2024-04-25 12:40:03,944] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=22, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-22 (state.change.logger) grafana | logger=migrator t=2024-04-25T12:39:29.756488817Z level=info msg="Migration successfully executed" id="add unique index role.uid" duration=1.794723ms policy-pap | sasl.login.retry.backoff.max.ms = 10000 policy-db-migrator | policy-apex-pdp | [2024-04-25T12:40:19.056+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 156 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} kafka | [2024-04-25 12:40:03,945] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=18, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-18 (state.change.logger) grafana | logger=migrator t=2024-04-25T12:39:29.765332604Z level=info msg="Executing migration" id="create seed assignment table" policy-pap | sasl.login.retry.backoff.ms = 100 policy-db-migrator | > upgrade 0130-pdpstatistics.sql policy-apex-pdp | [2024-04-25T12:40:19.156+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Error while fetching metadata with correlation id 311 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} kafka | [2024-04-25 12:40:03,945] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request 
LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=31, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-31 (state.change.logger) grafana | logger=migrator t=2024-04-25T12:39:29.766165335Z level=info msg="Migration successfully executed" id="create seed assignment table" duration=841.171µs policy-pap | sasl.mechanism = GSSAPI policy-db-migrator | -------------- policy-apex-pdp | [2024-04-25T12:40:19.157+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 157 : {policy-pdp-pap=UNKNOWN_TOPIC_OR_PARTITION} kafka | [2024-04-25 12:40:03,945] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=27, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-27 (state.change.logger) grafana | logger=migrator t=2024-04-25T12:39:29.774123449Z level=info msg="Executing migration" id="add unique index builtin_role_role_name" policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 policy-db-migrator | ALTER TABLE pdpstatistics ADD COLUMN POLICYUNDEPLOYCOUNT BIGINT DEFAULT NULL AFTER POLICYEXECUTEDSUCCESSCOUNT, ADD COLUMN POLICYUNDEPLOYFAILCOUNT BIGINT DEFAULT NULL, ADD COLUMN POLICYUNDEPLOYSUCCESSCOUNT BIGINT DEFAULT NULL, ADD COLUMN ID BIGINT NOT NULL policy-apex-pdp | [2024-04-25T12:40:19.266+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Discovered group coordinator kafka:9092 (id: 2147483646 rack: null) kafka | [2024-04-25 12:40:03,945] TRACE 
[Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=39, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-39 (state.change.logger) grafana | logger=migrator t=2024-04-25T12:39:29.775983114Z level=info msg="Migration successfully executed" id="add unique index builtin_role_role_name" duration=1.859125ms policy-pap | sasl.oauthbearer.expected.audience = null policy-db-migrator | -------------- policy-apex-pdp | [2024-04-25T12:40:19.273+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] (Re-)joining group kafka | [2024-04-25 12:40:03,945] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=6, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-6 (state.change.logger) grafana | logger=migrator t=2024-04-25T12:39:29.783455613Z level=info msg="Executing migration" id="add column hidden to role table" policy-pap | sasl.oauthbearer.expected.issuer = null policy-db-migrator | policy-apex-pdp | [2024-04-25T12:40:19.300+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Request joining group due to: need to re-join with the given member-id: consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2-52209b4a-6d81-4373-80ef-9ff30791323e kafka | [2024-04-25 12:40:03,945] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request 
LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=35, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-35 (state.change.logger) grafana | logger=migrator t=2024-04-25T12:39:29.791428928Z level=info msg="Migration successfully executed" id="add column hidden to role table" duration=7.971775ms policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 policy-db-migrator | policy-apex-pdp | [2024-04-25T12:40:19.300+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Request joining group due to: rebalance failed due to 'The group member needs to have a valid member id before actually entering a consumer group.' (MemberIdRequiredException) kafka | [2024-04-25 12:40:03,945] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='policy-pdp-pap', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition policy-pdp-pap-0 (state.change.logger) grafana | logger=migrator t=2024-04-25T12:39:29.79536221Z level=info msg="Executing migration" id="permission kind migration" policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 policy-db-migrator | > upgrade 0140-pk_pdpstatistics.sql policy-apex-pdp | [2024-04-25T12:40:19.300+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] (Re-)joining group kafka | [2024-04-25 12:40:03,945] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request 
LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=2, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-2 (state.change.logger) grafana | logger=migrator t=2024-04-25T12:39:29.803394556Z level=info msg="Migration successfully executed" id="permission kind migration" duration=8.034425ms policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 policy-db-migrator | -------------- policy-apex-pdp | [2024-04-25T12:40:22.327+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Successfully joined group with generation Generation{generationId=1, memberId='consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2-52209b4a-6d81-4373-80ef-9ff30791323e', protocol='range'} kafka | [2024-04-25 12:40:03,946] INFO [Controller id=1 epoch=1] Sending LeaderAndIsr request to broker 1 with 51 become-leader and 0 become-follower partitions (state.change.logger) grafana | logger=migrator t=2024-04-25T12:39:29.80682643Z level=info msg="Executing migration" id="permission attribute migration" policy-pap | sasl.oauthbearer.jwks.endpoint.url = null policy-db-migrator | UPDATE pdpstatistics as p JOIN (SELECT name, version, timeStamp, ROW_NUMBER() OVER (ORDER BY timeStamp ASC) AS row_num FROM pdpstatistics GROUP BY name, version, timeStamp) AS t ON (p.name=t.name AND p.version=t.version AND p.timeStamp = t.timeStamp) SET p.id=t.row_num policy-apex-pdp | [2024-04-25T12:40:22.336+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Finished assignment for group at generation 1: 
{consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2-52209b4a-6d81-4373-80ef-9ff30791323e=Assignment(partitions=[policy-pdp-pap-0])} kafka | [2024-04-25 12:40:03,948] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet(1) for 51 partitions (state.change.logger) grafana | logger=migrator t=2024-04-25T12:39:29.813110223Z level=info msg="Migration successfully executed" id="permission attribute migration" duration=6.280653ms policy-pap | sasl.oauthbearer.scope.claim.name = scope policy-db-migrator | -------------- policy-apex-pdp | [2024-04-25T12:40:22.370+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Successfully synced group in generation Generation{generationId=1, memberId='consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2-52209b4a-6d81-4373-80ef-9ff30791323e', protocol='range'} kafka | [2024-04-25 12:40:03,949] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-32 from NewReplica to OnlineReplica (state.change.logger) grafana | logger=migrator t=2024-04-25T12:39:29.818544775Z level=info msg="Executing migration" id="permission identifier migration" policy-pap | sasl.oauthbearer.sub.claim.name = sub policy-db-migrator | policy-apex-pdp | [2024-04-25T12:40:22.371+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Notifying assignor about the new Assignment(partitions=[policy-pdp-pap-0]) kafka | [2024-04-25 12:40:03,949] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-5 from NewReplica to OnlineReplica (state.change.logger) grafana | logger=migrator t=2024-04-25T12:39:29.864335278Z level=info msg="Migration successfully executed" id="permission identifier migration" duration=45.789613ms policy-pap | 
sasl.oauthbearer.token.endpoint.url = null policy-db-migrator | -------------- policy-apex-pdp | [2024-04-25T12:40:22.373+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Adding newly assigned partitions: policy-pdp-pap-0 kafka | [2024-04-25 12:40:03,949] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-44 from NewReplica to OnlineReplica (state.change.logger) grafana | logger=migrator t=2024-04-25T12:39:29.90318776Z level=info msg="Executing migration" id="add permission identifier index" policy-pap | security.protocol = PLAINTEXT policy-db-migrator | ALTER TABLE pdpstatistics ADD CONSTRAINT PK_PDPSTATISTICS PRIMARY KEY (ID, name, version) policy-apex-pdp | [2024-04-25T12:40:22.392+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Found no committed offset for partition policy-pdp-pap-0 kafka | [2024-04-25 12:40:03,949] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-48 from NewReplica to OnlineReplica (state.change.logger) grafana | logger=migrator t=2024-04-25T12:39:29.905574101Z level=info msg="Migration successfully executed" id="add permission identifier index" duration=2.383581ms policy-pap | security.providers = null policy-db-migrator | -------------- policy-apex-pdp | [2024-04-25T12:40:22.407+00:00|INFO|SubscriptionState|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Resetting offset for partition policy-pdp-pap-0 to position FetchPosition{offset=1, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[kafka:9092 (id: 1 rack: null)], epoch=0}}. 
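The `policy-pap |` lines interleaved above enumerate the Kafka client properties the component was started with (`security.protocol = PLAINTEXT`, `sasl.mechanism = GSSAPI`, `ssl.enabled.protocols = [TLSv1.2, TLSv1.3]`, and so on). As a minimal sketch, the logged values can be collected into a plain client-configuration mapping — the helper name and structure here are assumptions for illustration, not part of this build:

```python
# Illustrative only: gathers the Kafka client settings printed in the
# policy-pap log output above into one config mapping. Values are taken
# verbatim from the logged properties; the function itself is hypothetical.
def pap_kafka_client_config(bootstrap_servers="kafka:9092"):
    return {
        "bootstrap.servers": bootstrap_servers,            # broker seen in the log
        "security.protocol": "PLAINTEXT",                  # logged value
        "sasl.mechanism": "GSSAPI",                        # logged default
        "sasl.kerberos.kinit.cmd": "/usr/bin/kinit",       # logged value
        "ssl.enabled.protocols": ["TLSv1.2", "TLSv1.3"],   # logged value
        "ssl.endpoint.identification.algorithm": "https",  # logged value
        "ssl.keystore.type": "JKS",                        # logged value
        "ssl.truststore.type": "JKS",                      # logged value
        "send.buffer.bytes": 131072,                       # logged value
        "socket.connection.setup.timeout.ms": 10000,       # logged value
    }
```

This mirrors the property dump only; runtime-resolved settings (callback handlers, keystore paths) are logged as `null` and are deliberately omitted.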
kafka | [2024-04-25 12:40:03,949] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-46 from NewReplica to OnlineReplica (state.change.logger) grafana | logger=migrator t=2024-04-25T12:39:29.911920535Z level=info msg="Executing migration" id="add permission action scope role_id index" policy-pap | send.buffer.bytes = 131072 policy-db-migrator | policy-apex-pdp | [2024-04-25T12:40:22.843+00:00|INFO|network|Timer-0] [OUT|KAFKA|policy-pdp-pap] kafka | [2024-04-25 12:40:03,949] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-20 from NewReplica to OnlineReplica (state.change.logger) grafana | logger=migrator t=2024-04-25T12:39:29.914722922Z level=info msg="Migration successfully executed" id="add permission action scope role_id index" duration=2.804697ms policy-pap | socket.connection.setup.timeout.max.ms = 30000 policy-db-migrator | policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"a408b809-2a21-46db-ba3c-dbdbae06aca1","timestampMs":1714048822843,"name":"apex-c1762bbf-462b-4754-b2e2-2796b5f05a40","pdpGroup":"defaultGroup"} kafka | [2024-04-25 12:40:03,949] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition policy-pdp-pap-0 from NewReplica to OnlineReplica (state.change.logger) grafana | logger=migrator t=2024-04-25T12:39:29.919122409Z level=info msg="Executing migration" id="remove permission role_id action scope index" policy-pap | socket.connection.setup.timeout.ms = 10000 policy-db-migrator | > upgrade 0150-pdpstatistics.sql policy-apex-pdp | [2024-04-25T12:40:22.902+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] kafka | [2024-04-25 12:40:03,949] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-43 from NewReplica to OnlineReplica (state.change.logger) grafana | logger=migrator 
t=2024-04-25T12:39:29.920617249Z level=info msg="Migration successfully executed" id="remove permission role_id action scope index" duration=1.49506ms policy-pap | ssl.cipher.suites = null policy-db-migrator | -------------- policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"a408b809-2a21-46db-ba3c-dbdbae06aca1","timestampMs":1714048822843,"name":"apex-c1762bbf-462b-4754-b2e2-2796b5f05a40","pdpGroup":"defaultGroup"} grafana | logger=migrator t=2024-04-25T12:39:29.928016076Z level=info msg="Executing migration" id="create query_history table v1" kafka | [2024-04-25 12:40:03,949] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-24 from NewReplica to OnlineReplica (state.change.logger) policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] policy-db-migrator | ALTER TABLE pdpstatistics MODIFY COLUMN timeStamp datetime(6) NULL policy-apex-pdp | [2024-04-25T12:40:22.905+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS grafana | logger=migrator t=2024-04-25T12:39:29.929573007Z level=info msg="Migration successfully executed" id="create query_history table v1" duration=1.556071ms kafka | [2024-04-25 12:40:03,949] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-6 from NewReplica to OnlineReplica (state.change.logger) policy-pap | ssl.endpoint.identification.algorithm = https policy-db-migrator | -------------- policy-apex-pdp | [2024-04-25T12:40:23.404+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] grafana | logger=migrator t=2024-04-25T12:39:29.937716045Z level=info msg="Executing migration" id="add index query_history.org_id-created_by-datasource_uid" kafka | [2024-04-25 12:40:03,949] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-18 from NewReplica to OnlineReplica 
(state.change.logger) policy-pap | ssl.engine.factory.class = null policy-db-migrator | policy-apex-pdp | {"source":"pap-480dd379-a703-49b2-b4a9-c44e36969f38","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"messageName":"PDP_UPDATE","requestId":"c93c9c10-4bcb-4ba5-b3ea-1a9726df0e30","timestampMs":1714048823348,"name":"apex-c1762bbf-462b-4754-b2e2-2796b5f05a40","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} grafana | logger=migrator t=2024-04-25T12:39:29.938817249Z level=info msg="Migration successfully executed" id="add index query_history.org_id-created_by-datasource_uid" duration=1.101134ms kafka | [2024-04-25 12:40:03,949] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-21 from NewReplica to OnlineReplica (state.change.logger) policy-pap | ssl.key.password = null policy-db-migrator | policy-apex-pdp | [2024-04-25T12:40:23.410+00:00|WARN|Registry|KAFKA-source-policy-pdp-pap] replacing previously registered: object:pdp/status/publisher grafana | logger=migrator t=2024-04-25T12:39:29.942819982Z level=info msg="Executing migration" id="alter table query_history alter column created_by type to bigint" kafka | [2024-04-25 12:40:03,949] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-1 from NewReplica to OnlineReplica (state.change.logger) policy-pap | ssl.keymanager.algorithm = SunX509 policy-db-migrator | > upgrade 0160-jpapdpstatistics_enginestats.sql policy-apex-pdp | [2024-04-25T12:40:23.410+00:00|INFO|network|Timer-1] [OUT|KAFKA|policy-pdp-pap] grafana | logger=migrator t=2024-04-25T12:39:29.943047845Z level=info msg="Migration successfully executed" id="alter table query_history alter column created_by type to bigint" duration=227.933µs kafka | [2024-04-25 12:40:03,949] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-14 from NewReplica to OnlineReplica (state.change.logger) policy-pap | ssl.keystore.certificate.chain = 
null
policy-db-migrator | --------------
policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"12f5c5ff-7241-48ae-ba7d-84cdf580311e","timestampMs":1714048823410,"name":"apex-c1762bbf-462b-4754-b2e2-2796b5f05a40","pdpGroup":"defaultGroup"}
grafana | logger=migrator t=2024-04-25T12:39:29.947872349Z level=info msg="Executing migration" id="rbac disabled migrator"
kafka | [2024-04-25 12:40:03,949] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-34 from NewReplica to OnlineReplica (state.change.logger)
policy-pap | ssl.keystore.key = null
policy-db-migrator | ALTER TABLE jpapdpstatistics_enginestats ADD COLUMN ID BIGINT DEFAULT NULL AFTER UPTIME
policy-apex-pdp | [2024-04-25T12:40:23.411+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [OUT|KAFKA|policy-pdp-pap]
grafana | logger=migrator t=2024-04-25T12:39:29.948022631Z level=info msg="Migration successfully executed" id="rbac disabled migrator" duration=73.891µs
kafka | [2024-04-25 12:40:03,949] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-16 from NewReplica to OnlineReplica (state.change.logger)
policy-pap | ssl.keystore.location = null
policy-db-migrator | --------------
grafana | logger=migrator t=2024-04-25T12:39:29.953941459Z level=info msg="Executing migration" id="teams permissions migration"
policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"c93c9c10-4bcb-4ba5-b3ea-1a9726df0e30","responseStatus":"SUCCESS","responseMessage":"Pdp update successful."},"messageName":"PDP_STATUS","requestId":"896f628b-2422-4a7a-9645-8164618b395e","timestampMs":1714048823411,"name":"apex-c1762bbf-462b-4754-b2e2-2796b5f05a40","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
kafka | [2024-04-25 12:40:03,949] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-29 from NewReplica to OnlineReplica (state.change.logger)
policy-pap | ssl.keystore.password = null
policy-db-migrator |
grafana | logger=migrator t=2024-04-25T12:39:29.95479831Z level=info msg="Migration successfully executed" id="teams permissions migration" duration=858.242µs
policy-apex-pdp | [2024-04-25T12:40:23.427+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
kafka | [2024-04-25 12:40:03,949] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-11 from NewReplica to OnlineReplica (state.change.logger)
policy-pap | ssl.keystore.type = JKS
policy-db-migrator |
grafana | logger=migrator t=2024-04-25T12:39:29.961236024Z level=info msg="Executing migration" id="dashboard permissions"
policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"12f5c5ff-7241-48ae-ba7d-84cdf580311e","timestampMs":1714048823410,"name":"apex-c1762bbf-462b-4754-b2e2-2796b5f05a40","pdpGroup":"defaultGroup"}
kafka | [2024-04-25 12:40:03,949] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-0 from NewReplica to OnlineReplica (state.change.logger)
policy-pap | ssl.protocol = TLSv1.3
policy-db-migrator | > upgrade 0170-jpapdpstatistics_enginestats.sql
policy-apex-pdp | [2024-04-25T12:40:23.427+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS
kafka | [2024-04-25 12:40:03,949] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-22 from NewReplica to OnlineReplica (state.change.logger)
policy-pap | ssl.provider = null
grafana | logger=migrator t=2024-04-25T12:39:29.961887123Z level=info msg="Migration successfully executed" id="dashboard permissions" duration=652.889µs
policy-db-migrator | --------------
policy-apex-pdp | [2024-04-25T12:40:23.427+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
kafka | [2024-04-25 12:40:03,950] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-47 from NewReplica to OnlineReplica (state.change.logger)
policy-pap | ssl.secure.random.implementation = null
grafana | logger=migrator t=2024-04-25T12:39:29.967859492Z level=info msg="Executing migration" id="dashboard permissions uid scopes"
policy-db-migrator | UPDATE jpapdpstatistics_enginestats a
policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"c93c9c10-4bcb-4ba5-b3ea-1a9726df0e30","responseStatus":"SUCCESS","responseMessage":"Pdp update successful."},"messageName":"PDP_STATUS","requestId":"896f628b-2422-4a7a-9645-8164618b395e","timestampMs":1714048823411,"name":"apex-c1762bbf-462b-4754-b2e2-2796b5f05a40","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
kafka | [2024-04-25 12:40:03,950] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-36 from NewReplica to OnlineReplica (state.change.logger)
policy-pap | ssl.trustmanager.algorithm = PKIX
grafana | logger=migrator t=2024-04-25T12:39:29.968573491Z level=info msg="Migration successfully executed" id="dashboard permissions uid scopes" duration=714.189µs
policy-db-migrator | JOIN pdpstatistics b
policy-apex-pdp | [2024-04-25T12:40:23.427+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS
kafka | [2024-04-25 12:40:03,950] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-28 from NewReplica to OnlineReplica (state.change.logger)
policy-pap | ssl.truststore.certificates = null
grafana | logger=migrator t=2024-04-25T12:39:29.972653764Z level=info msg="Executing migration" id="drop managed folder create actions"
policy-db-migrator | ON a.name = b.name AND a.version = b.version AND a.timeStamp = b.timeStamp
policy-apex-pdp | [2024-04-25T12:40:23.640+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
kafka | [2024-04-25 12:40:03,950] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-42 from NewReplica to OnlineReplica (state.change.logger)
policy-pap | ssl.truststore.location = null
grafana | logger=migrator t=2024-04-25T12:39:29.972938978Z level=info msg="Migration successfully executed" id="drop managed folder create actions" duration=284.644µs
policy-db-migrator | SET a.id = b.id
policy-apex-pdp | {"source":"pap-480dd379-a703-49b2-b4a9-c44e36969f38","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"9c94d082-5dc3-41dd-b822-97664ab4caac","timestampMs":1714048823350,"name":"apex-c1762bbf-462b-4754-b2e2-2796b5f05a40","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
kafka | [2024-04-25 12:40:03,950] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-9 from NewReplica to OnlineReplica (state.change.logger)
policy-pap | ssl.truststore.password = null
grafana | logger=migrator t=2024-04-25T12:39:29.98218395Z level=info msg="Executing migration" id="alerting notification permissions"
policy-db-migrator | --------------
policy-apex-pdp | [2024-04-25T12:40:23.643+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [OUT|KAFKA|policy-pdp-pap]
kafka | [2024-04-25 12:40:03,950] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-37 from NewReplica to OnlineReplica (state.change.logger)
policy-pap | ssl.truststore.type = JKS
grafana | logger=migrator t=2024-04-25T12:39:29.983065052Z level=info msg="Migration successfully executed" id="alerting notification permissions" duration=881.672µs
policy-db-migrator |
policy-apex-pdp | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpStateChange","policies":[],"response":{"responseTo":"9c94d082-5dc3-41dd-b822-97664ab4caac","responseStatus":"SUCCESS","responseMessage":"State changed to active. No policies found."},"messageName":"PDP_STATUS","requestId":"bc2bc4b3-e57a-4454-b7dc-ad8eea338f0c","timestampMs":1714048823643,"name":"apex-c1762bbf-462b-4754-b2e2-2796b5f05a40","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
kafka | [2024-04-25 12:40:03,950] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-13 from NewReplica to OnlineReplica (state.change.logger)
policy-pap | transaction.timeout.ms = 60000
grafana | logger=migrator t=2024-04-25T12:39:29.993629981Z level=info msg="Executing migration" id="create query_history_star table v1"
policy-db-migrator |
policy-apex-pdp | [2024-04-25T12:40:23.652+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
kafka | [2024-04-25 12:40:03,950] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-30 from NewReplica to OnlineReplica (state.change.logger)
policy-pap | transactional.id = null
grafana | logger=migrator t=2024-04-25T12:39:29.995068529Z level=info msg="Migration successfully executed" id="create query_history_star table v1" duration=1.437768ms
policy-db-migrator | > upgrade 0180-jpapdpstatistics_enginestats.sql
policy-apex-pdp | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpStateChange","policies":[],"response":{"responseTo":"9c94d082-5dc3-41dd-b822-97664ab4caac","responseStatus":"SUCCESS","responseMessage":"State changed to active. No policies found."},"messageName":"PDP_STATUS","requestId":"bc2bc4b3-e57a-4454-b7dc-ad8eea338f0c","timestampMs":1714048823643,"name":"apex-c1762bbf-462b-4754-b2e2-2796b5f05a40","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
kafka | [2024-04-25 12:40:03,950] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-35 from NewReplica to OnlineReplica (state.change.logger)
policy-pap | value.serializer = class org.apache.kafka.common.serialization.StringSerializer
grafana | logger=migrator t=2024-04-25T12:39:30.002043012Z level=info msg="Executing migration" id="add index query_history.user_id-query_uid"
policy-db-migrator | --------------
policy-apex-pdp | [2024-04-25T12:40:23.652+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS
kafka | [2024-04-25 12:40:03,950] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-39 from NewReplica to OnlineReplica (state.change.logger)
policy-pap |
grafana | logger=migrator t=2024-04-25T12:39:30.004274121Z level=info msg="Migration successfully executed" id="add index query_history.user_id-query_uid" duration=2.233029ms
policy-db-migrator | ALTER TABLE jpapdpstatistics_enginestats DROP COLUMN timeStamp
policy-apex-pdp | [2024-04-25T12:40:23.691+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
kafka | [2024-04-25 12:40:03,950] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-12 from NewReplica to OnlineReplica (state.change.logger)
policy-pap | [2024-04-25T12:40:01.105+00:00|INFO|KafkaProducer|main] [Producer clientId=producer-2] Instantiated an idempotent producer.
grafana | logger=migrator t=2024-04-25T12:39:30.014608687Z level=info msg="Executing migration" id="add column org_id in query_history_star"
policy-db-migrator | --------------
policy-apex-pdp | {"source":"pap-480dd379-a703-49b2-b4a9-c44e36969f38","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"7c8ff35c-cd2f-465e-9c85-bcb76f083b98","timestampMs":1714048823667,"name":"apex-c1762bbf-462b-4754-b2e2-2796b5f05a40","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
kafka | [2024-04-25 12:40:03,950] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-27 from NewReplica to OnlineReplica (state.change.logger)
policy-pap | [2024-04-25T12:40:01.108+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1
grafana | logger=migrator t=2024-04-25T12:39:30.022776004Z level=info msg="Migration successfully executed" id="add column org_id in query_history_star" duration=8.164217ms
policy-db-migrator |
policy-apex-pdp | [2024-04-25T12:40:23.693+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [OUT|KAFKA|policy-pdp-pap]
kafka | [2024-04-25 12:40:03,950] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-45 from NewReplica to OnlineReplica (state.change.logger)
policy-pap | [2024-04-25T12:40:01.108+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5
grafana | logger=migrator t=2024-04-25T12:39:30.033169771Z level=info msg="Executing migration" id="alter table query_history_star_mig column user_id type to bigint"
policy-db-migrator |
policy-apex-pdp | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"7c8ff35c-cd2f-465e-9c85-bcb76f083b98","responseStatus":"SUCCESS","responseMessage":"Pdp already updated"},"messageName":"PDP_STATUS","requestId":"9d346855-4bc1-4b25-8a3b-1b11512efc29","timestampMs":1714048823692,"name":"apex-c1762bbf-462b-4754-b2e2-2796b5f05a40","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
kafka | [2024-04-25 12:40:03,950] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-19 from NewReplica to OnlineReplica (state.change.logger)
policy-pap | [2024-04-25T12:40:01.108+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1714048801108
grafana | logger=migrator t=2024-04-25T12:39:30.033385284Z level=info msg="Migration successfully executed" id="alter table query_history_star_mig column user_id type to bigint" duration=215.263µs
policy-db-migrator | > upgrade 0190-jpapolicyaudit.sql
policy-apex-pdp | [2024-04-25T12:40:23.708+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
kafka | [2024-04-25 12:40:03,950] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-49 from NewReplica to OnlineReplica (state.change.logger)
policy-pap | [2024-04-25T12:40:01.108+00:00|INFO|InlineKafkaTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=14d92362-e0b3-4597-b9c4-41b06f6af1c6, alive=false, publisher=KafkaPublisherWrapper []]]: KAFKA SINK created
grafana | logger=migrator t=2024-04-25T12:39:30.044559751Z level=info msg="Executing migration" id="create correlation table v1"
policy-db-migrator | --------------
policy-apex-pdp | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"7c8ff35c-cd2f-465e-9c85-bcb76f083b98","responseStatus":"SUCCESS","responseMessage":"Pdp already updated"},"messageName":"PDP_STATUS","requestId":"9d346855-4bc1-4b25-8a3b-1b11512efc29","timestampMs":1714048823692,"name":"apex-c1762bbf-462b-4754-b2e2-2796b5f05a40","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
kafka | [2024-04-25 12:40:03,950] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-40 from NewReplica to OnlineReplica (state.change.logger)
policy-pap | [2024-04-25T12:40:01.108+00:00|INFO|ServiceManager|main] Policy PAP starting PAP Activator
grafana | logger=migrator t=2024-04-25T12:39:30.047008994Z level=info msg="Migration successfully executed" id="create correlation table v1" duration=2.446872ms
policy-apex-pdp | [2024-04-25T12:40:23.709+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapolicyaudit (ACTION INT DEFAULT NULL, PDPGROUP VARCHAR(255) NULL, PDPTYPE VARCHAR(255) NULL, TIMESTAMP datetime DEFAULT NULL, USER VARCHAR(255) NULL, ID BIGINT NOT NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_JPAPOLICYAUDIT (ID, name, version))
kafka | [2024-04-25 12:40:03,950] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-41 from NewReplica to OnlineReplica (state.change.logger)
policy-pap | [2024-04-25T12:40:01.108+00:00|INFO|ServiceManager|main] Policy PAP starting PDP publisher
grafana | logger=migrator t=2024-04-25T12:39:30.062864562Z level=info msg="Executing migration" id="add index correlations.uid"
policy-apex-pdp | [2024-04-25T12:40:56.182+00:00|INFO|RequestLog|qtp1863100050-33] 172.17.0.4 - policyadmin [25/Apr/2024:12:40:56 +0000] "GET /metrics HTTP/1.1" 200 10650 "-" "Prometheus/2.51.2"
policy-db-migrator | --------------
kafka | [2024-04-25 12:40:03,950] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-38 from NewReplica to OnlineReplica (state.change.logger)
policy-pap | [2024-04-25T12:40:01.109+00:00|INFO|ServiceManager|main] Policy PAP starting Policy Notification publisher
grafana | logger=migrator t=2024-04-25T12:39:30.066397329Z level=info msg="Migration successfully executed" id="add index correlations.uid" duration=3.531887ms
policy-apex-pdp | [2024-04-25T12:41:56.083+00:00|INFO|RequestLog|qtp1863100050-28] 172.17.0.4 - policyadmin [25/Apr/2024:12:41:56 +0000] "GET /metrics HTTP/1.1" 200 10651 "-" "Prometheus/2.51.2"
policy-db-migrator |
kafka | [2024-04-25 12:40:03,950] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-8 from NewReplica to OnlineReplica (state.change.logger)
policy-pap | [2024-04-25T12:40:01.110+00:00|INFO|ServiceManager|main] Policy PAP starting PDP update timers
grafana | logger=migrator t=2024-04-25T12:39:30.071249883Z level=info msg="Executing migration" id="add index correlations.source_uid"
policy-db-migrator |
kafka | [2024-04-25 12:40:03,950] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-7 from NewReplica to OnlineReplica (state.change.logger)
policy-pap | [2024-04-25T12:40:01.112+00:00|INFO|ServiceManager|main] Policy PAP starting PDP state-change timers
grafana | logger=migrator t=2024-04-25T12:39:30.072636411Z level=info msg="Migration successfully executed" id="add index correlations.source_uid" duration=1.386798ms
policy-db-migrator | > upgrade 0200-JpaPolicyAuditIndex_timestamp.sql
kafka | [2024-04-25 12:40:03,950] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-33 from NewReplica to OnlineReplica (state.change.logger)
policy-pap | [2024-04-25T12:40:01.113+00:00|INFO|TimerManager|Thread-9] timer manager update started
grafana | logger=migrator t=2024-04-25T12:39:30.109708828Z level=info msg="Executing migration" id="add correlation config column"
policy-db-migrator | --------------
kafka | [2024-04-25 12:40:03,950] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-25 from NewReplica to OnlineReplica (state.change.logger)
policy-pap | [2024-04-25T12:40:01.113+00:00|INFO|ServiceManager|main] Policy PAP starting PDP modification lock
grafana | logger=migrator t=2024-04-25T12:39:30.122800861Z level=info msg="Migration successfully executed" id="add correlation config column" duration=13.092923ms
policy-db-migrator | CREATE INDEX JpaPolicyAuditIndex_timestamp ON jpapolicyaudit(TIMESTAMP)
kafka | [2024-04-25 12:40:03,950] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-31 from NewReplica to OnlineReplica (state.change.logger)
policy-pap | [2024-04-25T12:40:01.113+00:00|INFO|ServiceManager|main] Policy PAP starting PDP modification requests
grafana | logger=migrator t=2024-04-25T12:39:30.131031099Z level=info msg="Executing migration" id="drop index IDX_correlation_uid - v1"
policy-db-migrator | --------------
kafka | [2024-04-25 12:40:03,950] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-23 from NewReplica to OnlineReplica (state.change.logger)
policy-pap | [2024-04-25T12:40:01.113+00:00|INFO|TimerManager|Thread-10] timer manager state-change started
grafana | logger=migrator t=2024-04-25T12:39:30.132473548Z level=info msg="Migration successfully executed" id="drop index IDX_correlation_uid - v1" duration=1.442159ms
policy-db-migrator |
kafka | [2024-04-25 12:40:03,950] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-10 from NewReplica to OnlineReplica (state.change.logger)
policy-pap | [2024-04-25T12:40:01.114+00:00|INFO|ServiceManager|main] Policy PAP starting PDP expiration timer
grafana | logger=migrator t=2024-04-25T12:39:30.136691384Z level=info msg="Executing migration" id="drop index IDX_correlation_source_uid - v1"
policy-db-migrator |
kafka | [2024-04-25 12:40:03,950] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-2 from NewReplica to OnlineReplica (state.change.logger)
policy-pap | [2024-04-25T12:40:01.116+00:00|INFO|ServiceManager|main] Policy PAP started
grafana | logger=migrator t=2024-04-25T12:39:30.137810708Z level=info msg="Migration successfully executed" id="drop index IDX_correlation_source_uid - v1" duration=1.118564ms
policy-db-migrator | > upgrade 0210-sequence.sql
kafka | [2024-04-25 12:40:03,950] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-17 from NewReplica to OnlineReplica (state.change.logger)
policy-pap | [2024-04-25T12:40:01.117+00:00|INFO|PolicyPapApplication|main] Started PolicyPapApplication in 9.661 seconds (process running for 10.255)
grafana | logger=migrator t=2024-04-25T12:39:30.149118607Z level=info msg="Executing migration" id="Rename table correlation to correlation_tmp_qwerty - v1"
policy-db-migrator | --------------
kafka | [2024-04-25 12:40:03,950] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-4 from NewReplica to OnlineReplica (state.change.logger)
policy-pap | [2024-04-25T12:40:01.520+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Error while fetching metadata with correlation id 2 : {policy-pdp-pap=UNKNOWN_TOPIC_OR_PARTITION}
grafana | logger=migrator t=2024-04-25T12:39:30.175859579Z level=info msg="Migration successfully executed" id="Rename table correlation to correlation_tmp_qwerty - v1" duration=26.743132ms
policy-db-migrator | CREATE TABLE IF NOT EXISTS sequence (SEQ_NAME VARCHAR(50) NOT NULL, SEQ_COUNT DECIMAL(38) DEFAULT NULL, PRIMARY KEY PK_SEQUENCE (SEQ_NAME))
kafka | [2024-04-25 12:40:03,950] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-15 from NewReplica to OnlineReplica (state.change.logger)
policy-pap | [2024-04-25T12:40:01.520+00:00|INFO|Metadata|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Cluster ID: 6HLElDkITkKpDhaqvETosg
grafana | logger=migrator t=2024-04-25T12:39:30.181373512Z level=info msg="Executing migration" id="create correlation v2"
policy-db-migrator | --------------
kafka | [2024-04-25 12:40:03,950] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-26 from NewReplica to OnlineReplica (state.change.logger)
policy-pap | [2024-04-25T12:40:01.521+00:00|INFO|Metadata|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Cluster ID: 6HLElDkITkKpDhaqvETosg
grafana | logger=migrator t=2024-04-25T12:39:30.182384085Z level=info msg="Migration successfully executed" id="create correlation v2" duration=1.010133ms
policy-db-migrator |
kafka | [2024-04-25 12:40:03,950] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-3 from NewReplica to OnlineReplica (state.change.logger)
policy-pap | [2024-04-25T12:40:01.523+00:00|INFO|Metadata|kafka-producer-network-thread | producer-2] [Producer clientId=producer-2] Cluster ID: 6HLElDkITkKpDhaqvETosg
grafana | logger=migrator t=2024-04-25T12:39:30.185515046Z level=info msg="Executing migration" id="create index IDX_correlation_uid - v2"
policy-db-migrator |
policy-pap | [2024-04-25T12:40:01.622+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Error while fetching metadata with correlation id 4 : {policy-pdp-pap=UNKNOWN_TOPIC_OR_PARTITION}
kafka | [2024-04-25 12:40:03,950] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger)
grafana | logger=migrator t=2024-04-25T12:39:30.186470579Z level=info msg="Migration successfully executed" id="create index IDX_correlation_uid - v2" duration=955.963µs
policy-db-migrator | > upgrade 0220-sequence.sql
policy-pap | [2024-04-25T12:40:01.725+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Error while fetching metadata with correlation id 5 : {policy-pdp-pap=UNKNOWN_TOPIC_OR_PARTITION}
kafka | [2024-04-25 12:40:03,953] INFO [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 for 51 partitions (state.change.logger)
grafana | logger=migrator t=2024-04-25T12:39:30.19343247Z level=info msg="Executing migration" id="create index IDX_correlation_source_uid - v2"
policy-db-migrator | --------------
policy-pap | [2024-04-25T12:40:01.828+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Error while fetching metadata with correlation id 6 : {policy-pdp-pap=UNKNOWN_TOPIC_OR_PARTITION}
kafka | [2024-04-25 12:40:03,954] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='policy-pdp-pap', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
grafana | logger=migrator t=2024-04-25T12:39:30.195513928Z level=info msg="Migration successfully executed" id="create index IDX_correlation_source_uid - v2" duration=2.079968ms
policy-db-migrator | INSERT INTO sequence(SEQ_NAME, SEQ_COUNT) VALUES('SEQ_GEN', (SELECT IFNULL(max(id),0) FROM pdpstatistics))
policy-pap | [2024-04-25T12:40:01.931+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Error while fetching metadata with correlation id 7 : {policy-pdp-pap=UNKNOWN_TOPIC_OR_PARTITION}
kafka | [2024-04-25 12:40:03,954] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=13, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
grafana | logger=migrator t=2024-04-25T12:39:30.204132951Z level=info msg="Executing migration" id="create index IDX_correlation_org_id - v2"
policy-db-migrator | --------------
policy-pap | [2024-04-25T12:40:02.032+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Error while fetching metadata with correlation id 8 : {policy-pdp-pap=UNKNOWN_TOPIC_OR_PARTITION}
kafka | [2024-04-25 12:40:03,954] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=46, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
grafana | logger=migrator t=2024-04-25T12:39:30.206660135Z level=info msg="Migration successfully executed" id="create index IDX_correlation_org_id - v2" duration=2.529263ms
policy-db-migrator |
policy-pap | [2024-04-25T12:40:02.052+00:00|INFO|TransactionManager|kafka-producer-network-thread | producer-2] [Producer clientId=producer-2] ProducerId set to 0 with epoch 0
kafka | [2024-04-25 12:40:03,954] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=9, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
grafana | logger=migrator t=2024-04-25T12:39:30.232374703Z level=info msg="Executing migration" id="copy correlation v1 to v2"
policy-db-migrator |
policy-pap | [2024-04-25T12:40:02.056+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 2 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
kafka | [2024-04-25 12:40:03,955] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=42, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
grafana | logger=migrator t=2024-04-25T12:39:30.233121372Z level=info msg="Migration successfully executed" id="copy correlation v1 to v2" duration=750.569µs
policy-db-migrator | > upgrade 0100-jpatoscapolicy_targets.sql
policy-pap | [2024-04-25T12:40:02.056+00:00|INFO|Metadata|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Cluster ID: 6HLElDkITkKpDhaqvETosg
kafka | [2024-04-25 12:40:03,955] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=21, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
grafana | logger=migrator t=2024-04-25T12:39:30.240929815Z level=info msg="Executing migration" id="drop correlation_tmp_qwerty"
policy-db-migrator | --------------
kafka | [2024-04-25 12:40:03,955] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=17, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
policy-pap | [2024-04-25T12:40:02.068+00:00|INFO|TransactionManager|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] ProducerId set to 1 with epoch 0
grafana | logger=migrator t=2024-04-25T12:39:30.243263986Z level=info msg="Migration successfully executed" id="drop correlation_tmp_qwerty" duration=2.334211ms
policy-db-migrator | ALTER TABLE jpatoscapolicy_targets ADD COLUMN toscaPolicyName VARCHAR(120) NOT NULL, ADD COLUMN toscaPolicyVersion VARCHAR(20) NOT NULL, ADD CONSTRAINT PK_JPATOSCAPOLICY_TARGETS PRIMARY KEY (toscaPolicyName, toscaPolicyVersion)
kafka | [2024-04-25 12:40:03,955] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=30, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
policy-pap | [2024-04-25T12:40:02.168+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 4 : {policy-pdp-pap=UNKNOWN_TOPIC_OR_PARTITION}
grafana | logger=migrator t=2024-04-25T12:39:30.252635259Z level=info msg="Executing migration" id="add provisioning column"
policy-db-migrator | --------------
kafka | [2024-04-25 12:40:03,955] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=26, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
policy-pap | [2024-04-25T12:40:02.228+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Error while fetching metadata with correlation id 9 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
grafana | logger=migrator t=2024-04-25T12:39:30.261813971Z level=info msg="Migration successfully executed" id="add provisioning column" duration=9.178862ms
policy-db-migrator |
kafka | [2024-04-25 12:40:03,955] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=5, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
policy-pap | [2024-04-25T12:40:02.664+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 6 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
grafana | logger=migrator t=2024-04-25T12:39:30.265129564Z level=info msg="Executing migration" id="create entity_events table"
policy-db-migrator |
kafka | [2024-04-25 12:40:03,956] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=38, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
policy-pap | [2024-04-25T12:40:02.883+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Error while fetching metadata with correlation id 11 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
grafana | logger=migrator t=2024-04-25T12:39:30.266244158Z level=info msg="Migration successfully executed" id="create entity_events table" duration=1.114294ms
policy-db-migrator | > upgrade 0110-jpatoscapolicytype_targets.sql
kafka | [2024-04-25 12:40:03,956] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=1, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
policy-pap | [2024-04-25T12:40:02.956+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 8 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
grafana | logger=migrator t=2024-04-25T12:39:30.294548481Z level=info msg="Executing migration" id="create dashboard public config v1"
policy-db-migrator | --------------
kafka | [2024-04-25 12:40:03,956] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=34, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
policy-pap | [2024-04-25T12:40:03.325+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Error while fetching metadata with correlation id 13 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
grafana | logger=migrator t=2024-04-25T12:39:30.296821821Z level=info msg="Migration successfully executed" id="create dashboard public config v1" duration=2.273311ms
policy-db-migrator | ALTER TABLE jpatoscapolicytype_targets ADD COLUMN toscaPolicyTypeName VARCHAR(120) NOT NULL, ADD COLUMN toscaPolicyTypeVersion VARCHAR(20) NOT NULL, ADD CONSTRAINT PK_JPATOSCAPOLICYTYPE_TARGETS PRIMARY KEY (toscaPolicyTypeName, toscaPolicyTypeVersion)
kafka | [2024-04-25 12:40:03,956] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=16, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
policy-pap | [2024-04-25T12:40:03.628+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 10 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
grafana | logger=migrator t=2024-04-25T12:39:30.304111457Z level=info msg="Executing migration" id="drop index UQE_dashboard_public_config_uid - v1"
policy-db-migrator | --------------
kafka | [2024-04-25 12:40:03,956] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=45, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
policy-pap | [2024-04-25T12:40:03.924+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Error while fetching metadata with correlation id 15 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
grafana | logger=migrator t=2024-04-25T12:39:30.304646904Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop index UQE_dashboard_public_config_uid - v1"
policy-db-migrator |
kafka | [2024-04-25 12:40:03,956] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=12, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller
1 epoch 1 (state.change.logger) policy-pap | [2024-04-25T12:40:03.934+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 12 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} grafana | logger=migrator t=2024-04-25T12:39:30.30817953Z level=info msg="Executing migration" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v1" policy-db-migrator | kafka | [2024-04-25 12:40:03,957] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=41, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-pap | [2024-04-25T12:40:04.029+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Error while fetching metadata with correlation id 17 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} grafana | logger=migrator t=2024-04-25T12:39:30.309059152Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v1" policy-db-migrator | > upgrade 0120-toscatrigger.sql kafka | [2024-04-25 12:40:03,957] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=24, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-pap | [2024-04-25T12:40:04.040+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata 
with correlation id 14 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} grafana | logger=migrator t=2024-04-25T12:39:30.317351661Z level=info msg="Executing migration" id="Drop old dashboard public config table" policy-db-migrator | -------------- kafka | [2024-04-25 12:40:03,957] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=20, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-pap | [2024-04-25T12:40:04.134+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Error while fetching metadata with correlation id 19 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} grafana | logger=migrator t=2024-04-25T12:39:30.31882527Z level=info msg="Migration successfully executed" id="Drop old dashboard public config table" duration=1.47299ms policy-db-migrator | DROP TABLE IF EXISTS toscatrigger kafka | [2024-04-25 12:40:03,957] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=49, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-pap | [2024-04-25T12:40:04.145+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 16 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} grafana | logger=migrator t=2024-04-25T12:39:30.32491926Z level=info msg="Executing migration" id="recreate dashboard public config v1" policy-db-migrator | -------------- policy-pap | 
[2024-04-25T12:40:04.240+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Error while fetching metadata with correlation id 21 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} kafka | [2024-04-25 12:40:03,957] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) grafana | logger=migrator t=2024-04-25T12:39:30.326184588Z level=info msg="Migration successfully executed" id="recreate dashboard public config v1" duration=1.302958ms policy-db-migrator | policy-pap | [2024-04-25T12:40:04.249+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 18 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} kafka | [2024-04-25 12:40:03,958] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=29, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) grafana | logger=migrator t=2024-04-25T12:39:30.331153092Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_uid - v1" policy-db-migrator | policy-pap | [2024-04-25T12:40:04.345+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Error while fetching metadata with correlation id 23 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} kafka | [2024-04-25 
12:40:03,958] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=25, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) grafana | logger=migrator t=2024-04-25T12:39:30.332394559Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_uid - v1" duration=1.241127ms policy-db-migrator | > upgrade 0130-jpatoscapolicytype_triggers.sql policy-pap | [2024-04-25T12:40:04.354+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 20 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} kafka | [2024-04-25 12:40:03,958] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=8, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) grafana | logger=migrator t=2024-04-25T12:39:30.34083585Z level=info msg="Executing migration" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v1" policy-db-migrator | -------------- policy-pap | [2024-04-25T12:40:04.451+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Error while fetching metadata with correlation id 25 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} kafka | [2024-04-25 12:40:03,958] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=37, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, 
replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) grafana | logger=migrator t=2024-04-25T12:39:30.342630063Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v1" duration=1.792063ms policy-db-migrator | ALTER TABLE jpatoscapolicytype_triggers MODIFY COLUMN triggers LONGBLOB policy-pap | [2024-04-25T12:40:04.459+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 22 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} kafka | [2024-04-25 12:40:03,958] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=4, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) grafana | logger=migrator t=2024-04-25T12:39:30.347068912Z level=info msg="Executing migration" id="drop index UQE_dashboard_public_config_uid - v2" policy-db-migrator | -------------- policy-pap | [2024-04-25T12:40:04.557+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Error while fetching metadata with correlation id 27 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} kafka | [2024-04-25 12:40:03,958] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=33, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) grafana | logger=migrator 
t=2024-04-25T12:39:30.348845926Z level=info msg="Migration successfully executed" id="drop index UQE_dashboard_public_config_uid - v2" duration=1.779904ms policy-db-migrator | policy-pap | [2024-04-25T12:40:04.564+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 24 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} kafka | [2024-04-25 12:40:03,958] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=15, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) grafana | logger=migrator t=2024-04-25T12:39:30.353851491Z level=info msg="Executing migration" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v2" policy-db-migrator | policy-pap | [2024-04-25T12:40:04.661+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Error while fetching metadata with correlation id 29 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} kafka | [2024-04-25 12:40:03,959] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=48, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) grafana | logger=migrator t=2024-04-25T12:39:30.355143568Z level=info msg="Migration successfully executed" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v2" duration=1.292167ms policy-db-migrator | > upgrade 0140-toscaparameter.sql policy-pap | 
[2024-04-25T12:40:04.669+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 26 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} kafka | [2024-04-25 12:40:03,959] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=11, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-db-migrator | -------------- policy-pap | [2024-04-25T12:40:04.766+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Error while fetching metadata with correlation id 31 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} kafka | [2024-04-25 12:40:03,959] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=44, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) grafana | logger=migrator t=2024-04-25T12:39:30.360201645Z level=info msg="Executing migration" id="Drop public config table" policy-db-migrator | DROP TABLE IF EXISTS toscaparameter policy-pap | [2024-04-25T12:40:04.774+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 28 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} kafka | [2024-04-25 12:40:03,959] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=23, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], 
partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) grafana | logger=migrator t=2024-04-25T12:39:30.361661144Z level=info msg="Migration successfully executed" id="Drop public config table" duration=1.458639ms policy-pap | [2024-04-25T12:40:04.872+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Error while fetching metadata with correlation id 33 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} kafka | [2024-04-25 12:40:03,959] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=19, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-25T12:39:30.366585559Z level=info msg="Executing migration" id="Recreate dashboard public config v2" kafka | [2024-04-25 12:40:03,959] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=32, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-db-migrator | policy-pap | [2024-04-25T12:40:04.878+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 30 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} grafana | logger=migrator t=2024-04-25T12:39:30.367821734Z level=info msg="Migration successfully executed" id="Recreate dashboard public config v2" 
duration=1.237265ms policy-db-migrator | policy-pap | [2024-04-25T12:40:04.977+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Error while fetching metadata with correlation id 35 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} kafka | [2024-04-25 12:40:03,960] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=28, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) grafana | logger=migrator t=2024-04-25T12:39:30.373656282Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_uid - v2" policy-pap | [2024-04-25T12:40:04.982+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 32 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} kafka | [2024-04-25 12:40:03,960] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=7, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-db-migrator | > upgrade 0150-toscaproperty.sql grafana | logger=migrator t=2024-04-25T12:39:30.374770517Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_uid - v2" duration=1.112595ms kafka | [2024-04-25 12:40:03,960] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=40, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], 
addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-db-migrator | -------------- policy-pap | [2024-04-25T12:40:05.082+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Error while fetching metadata with correlation id 37 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} grafana | logger=migrator t=2024-04-25T12:39:30.379265346Z level=info msg="Executing migration" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v2" kafka | [2024-04-25 12:40:03,960] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=3, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-db-migrator | DROP TABLE IF EXISTS jpatoscaproperty_constraints policy-pap | [2024-04-25T12:40:05.090+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 34 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} grafana | logger=migrator t=2024-04-25T12:39:30.380492412Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v2" duration=1.226756ms kafka | [2024-04-25 12:40:03,960] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=36, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-db-migrator | -------------- policy-pap | 
[2024-04-25T12:40:05.186+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Error while fetching metadata with correlation id 39 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} grafana | logger=migrator t=2024-04-25T12:39:30.387180669Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_access_token - v2" kafka | [2024-04-25 12:40:03,960] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=47, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-db-migrator | policy-pap | [2024-04-25T12:40:05.195+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 36 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} grafana | logger=migrator t=2024-04-25T12:39:30.389173666Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_access_token - v2" duration=1.993007ms kafka | [2024-04-25 12:40:03,960] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=14, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-db-migrator | -------------- policy-pap | [2024-04-25T12:40:05.291+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Error while fetching metadata with correlation id 41 : 
{policy-pdp-pap=LEADER_NOT_AVAILABLE} grafana | logger=migrator t=2024-04-25T12:39:30.395708792Z level=info msg="Executing migration" id="Rename table dashboard_public_config to dashboard_public - v2" policy-db-migrator | DROP TABLE IF EXISTS jpatoscaproperty_metadata kafka | [2024-04-25 12:40:03,961] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=43, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-pap | [2024-04-25T12:40:05.298+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 38 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} grafana | logger=migrator t=2024-04-25T12:39:30.419407144Z level=info msg="Migration successfully executed" id="Rename table dashboard_public_config to dashboard_public - v2" duration=23.699372ms policy-db-migrator | -------------- kafka | [2024-04-25 12:40:03,961] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=10, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-pap | [2024-04-25T12:40:05.396+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Error while fetching metadata with correlation id 43 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} grafana | logger=migrator t=2024-04-25T12:39:30.429487446Z level=info msg="Executing migration" id="add annotations_enabled column" policy-db-migrator | kafka | [2024-04-25 
12:40:03,961] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=22, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-pap | [2024-04-25T12:40:05.403+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 40 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} grafana | logger=migrator t=2024-04-25T12:39:30.442507818Z level=info msg="Migration successfully executed" id="add annotations_enabled column" duration=13.021182ms policy-db-migrator | -------------- kafka | [2024-04-25 12:40:03,961] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=18, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-pap | [2024-04-25T12:40:05.500+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Error while fetching metadata with correlation id 45 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} grafana | logger=migrator t=2024-04-25T12:39:30.451095221Z level=info msg="Executing migration" id="add time_selection_enabled column" policy-db-migrator | DROP TABLE IF EXISTS toscaproperty kafka | [2024-04-25 12:40:03,961] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=31, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, 
leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-pap | [2024-04-25T12:40:05.508+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 42 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} grafana | logger=migrator t=2024-04-25T12:39:30.459596653Z level=info msg="Migration successfully executed" id="add time_selection_enabled column" duration=8.499952ms policy-db-migrator | -------------- policy-pap | [2024-04-25T12:40:05.606+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Error while fetching metadata with correlation id 47 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} grafana | logger=migrator t=2024-04-25T12:39:30.466820638Z level=info msg="Executing migration" id="delete orphaned public dashboards" policy-db-migrator | kafka | [2024-04-25 12:40:03,961] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=27, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-pap | [2024-04-25T12:40:05.614+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 44 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} grafana | logger=migrator t=2024-04-25T12:39:30.4670026Z level=info msg="Migration successfully executed" id="delete orphaned public dashboards" duration=182.142µs policy-db-migrator | kafka | [2024-04-25 12:40:03,961] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=39, controllerEpoch=1, 
leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
policy-pap | [2024-04-25T12:40:05.710+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Error while fetching metadata with correlation id 49 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
grafana | logger=migrator t=2024-04-25T12:39:30.471892045Z level=info msg="Executing migration" id="add share column"
policy-db-migrator | > upgrade 0160-jpapolicyaudit_pk.sql
kafka | [2024-04-25 12:40:03,962] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=6, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
policy-pap | [2024-04-25T12:40:05.717+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 46 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
grafana | logger=migrator t=2024-04-25T12:39:30.481962247Z level=info msg="Migration successfully executed" id="add share column" duration=10.073482ms
policy-db-migrator | --------------
kafka | [2024-04-25 12:40:03,962] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=35, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
policy-pap | [2024-04-25T12:40:05.814+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Error while fetching metadata with correlation id 51 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
grafana | logger=migrator t=2024-04-25T12:39:30.486958923Z level=info msg="Executing migration" id="backfill empty share column fields with default of public"
policy-db-migrator | ALTER TABLE jpapolicyaudit DROP PRIMARY KEY
kafka | [2024-04-25 12:40:03,962] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=2, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
policy-pap | [2024-04-25T12:40:05.820+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 48 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
grafana | logger=migrator t=2024-04-25T12:39:30.487181146Z level=info msg="Migration successfully executed" id="backfill empty share column fields with default of public" duration=222.393µs
policy-db-migrator | --------------
kafka | [2024-04-25 12:40:03,997] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-3 (state.change.logger)
policy-pap | [2024-04-25T12:40:05.918+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Error while fetching metadata with correlation id 53 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
grafana | logger=migrator t=2024-04-25T12:39:30.49812085Z level=info msg="Executing migration" id="create file table"
policy-db-migrator |
kafka | [2024-04-25 12:40:03,997] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-18 (state.change.logger)
policy-pap | [2024-04-25T12:40:05.927+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 50 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
grafana | logger=migrator t=2024-04-25T12:39:30.499789382Z level=info msg="Migration successfully executed" id="create file table" duration=1.668212ms
policy-db-migrator | --------------
kafka | [2024-04-25 12:40:03,997] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-41 (state.change.logger)
policy-pap | [2024-04-25T12:40:06.024+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Error while fetching metadata with correlation id 55 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
grafana | logger=migrator t=2024-04-25T12:39:30.506136415Z level=info msg="Executing migration" id="file table idx: path natural pk"
policy-db-migrator | ALTER TABLE jpapolicyaudit ADD CONSTRAINT PK_JPAPOLICYAUDIT PRIMARY KEY (ID)
kafka | [2024-04-25 12:40:03,997] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-10 (state.change.logger)
policy-pap | [2024-04-25T12:40:06.030+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 52 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
grafana | logger=migrator t=2024-04-25T12:39:30.507991189Z level=info msg="Migration successfully executed" id="file table idx: path natural pk" duration=1.851794ms
policy-db-migrator | --------------
kafka | [2024-04-25 12:40:03,997] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-33 (state.change.logger)
policy-pap | [2024-04-25T12:40:06.135+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Error while fetching metadata with correlation id 57 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
grafana | logger=migrator t=2024-04-25T12:39:30.514912071Z level=info msg="Executing migration" id="file table idx: parent_folder_path_hash fast folder retrieval"
policy-db-migrator |
kafka | [2024-04-25 12:40:03,997] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-48 (state.change.logger)
policy-pap | [2024-04-25T12:40:06.135+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 54 : {policy-pdp-pap=UNKNOWN_TOPIC_OR_PARTITION}
grafana | logger=migrator t=2024-04-25T12:39:30.516170027Z level=info msg="Migration successfully executed" id="file table idx: parent_folder_path_hash fast folder retrieval" duration=1.258356ms
policy-db-migrator |
kafka | [2024-04-25 12:40:03,997] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-19 (state.change.logger)
policy-pap | [2024-04-25T12:40:06.238+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Error while fetching metadata with correlation id 59 : {policy-pdp-pap=UNKNOWN_TOPIC_OR_PARTITION}
grafana | logger=migrator t=2024-04-25T12:39:30.521086632Z level=info msg="Executing migration" id="create file_meta table"
policy-db-migrator | > upgrade 0170-pdpstatistics_pk.sql
kafka | [2024-04-25 12:40:03,997] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-34 (state.change.logger)
policy-pap | [2024-04-25T12:40:06.242+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 56 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
grafana | logger=migrator t=2024-04-25T12:39:30.521929393Z level=info msg="Migration successfully executed" id="create file_meta table" duration=842.831µs
policy-db-migrator | --------------
kafka | [2024-04-25 12:40:03,997] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-4 (state.change.logger)
policy-pap | [2024-04-25T12:40:06.343+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Error while fetching metadata with correlation id 61 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
grafana | logger=migrator t=2024-04-25T12:39:30.525682993Z level=info msg="Executing migration" id="file table idx: path key"
policy-db-migrator | ALTER TABLE pdpstatistics DROP PRIMARY KEY
kafka | [2024-04-25 12:40:03,997] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-11 (state.change.logger)
policy-pap | [2024-04-25T12:40:06.348+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 58 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
grafana | logger=migrator t=2024-04-25T12:39:30.526875688Z level=info msg="Migration successfully executed" id="file table idx: path key" duration=1.192455ms
policy-db-migrator | --------------
kafka | [2024-04-25 12:40:03,997] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-26 (state.change.logger)
policy-pap | [2024-04-25T12:40:06.447+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Error while fetching metadata with correlation id 63 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
grafana | logger=migrator t=2024-04-25T12:39:30.580095168Z level=info msg="Executing migration" id="set path collation in file table"
policy-db-migrator |
kafka | [2024-04-25 12:40:03,997] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-49 (state.change.logger)
policy-pap | [2024-04-25T12:40:06.453+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 60 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
grafana | logger=migrator t=2024-04-25T12:39:30.580377642Z level=info msg="Migration successfully executed" id="set path collation in file table" duration=285.784µs
policy-db-migrator | --------------
kafka | [2024-04-25 12:40:03,997] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-39 (state.change.logger)
policy-pap | [2024-04-25T12:40:06.551+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Error while fetching metadata with correlation id 65 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
grafana | logger=migrator t=2024-04-25T12:39:30.588992765Z level=info msg="Executing migration" id="migrate contents column to mediumblob for MySQL"
policy-db-migrator | ALTER TABLE pdpstatistics ADD CONSTRAINT PK_PDPSTATISTICS PRIMARY KEY (ID)
kafka | [2024-04-25 12:40:03,997] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-9 (state.change.logger)
policy-pap | [2024-04-25T12:40:06.557+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 62 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
grafana | logger=migrator t=2024-04-25T12:39:30.589257659Z level=info msg="Migration successfully executed" id="migrate contents column to mediumblob for MySQL" duration=264.374µs
policy-db-migrator | --------------
kafka | [2024-04-25 12:40:03,997] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-24 (state.change.logger)
grafana | logger=migrator t=2024-04-25T12:39:30.599370662Z level=info msg="Executing migration" id="managed permissions migration"
policy-pap | [2024-04-25T12:40:06.655+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Error while fetching metadata with correlation id 67 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
policy-db-migrator |
kafka | [2024-04-25 12:40:03,997] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-31 (state.change.logger)
grafana | logger=migrator t=2024-04-25T12:39:30.600563268Z level=info msg="Migration successfully executed" id="managed permissions migration" duration=1.195216ms
policy-pap | [2024-04-25T12:40:06.660+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 64 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
policy-db-migrator |
kafka | [2024-04-25 12:40:03,997] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-46 (state.change.logger)
grafana | logger=migrator t=2024-04-25T12:39:30.605518573Z level=info msg="Executing migration" id="managed folder permissions alert actions migration"
policy-pap | [2024-04-25T12:40:06.761+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Error while fetching metadata with correlation id 69 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
policy-db-migrator | > upgrade 0180-jpatoscanodetemplate_metadata.sql
kafka | [2024-04-25 12:40:03,997] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-1 (state.change.logger)
grafana | logger=migrator t=2024-04-25T12:39:30.605918228Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions migration" duration=402.405µs
policy-pap | [2024-04-25T12:40:06.766+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 66 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
policy-db-migrator | --------------
grafana | logger=migrator t=2024-04-25T12:39:30.610691881Z level=info msg="Executing migration" id="RBAC action name migrator"
policy-pap | [2024-04-25T12:40:06.864+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Error while fetching metadata with correlation id 71 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
kafka | [2024-04-25 12:40:03,997] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-16 (state.change.logger)
policy-db-migrator | ALTER TABLE jpatoscanodetemplate_metadata MODIFY COLUMN METADATA LONGTEXT
grafana | logger=migrator t=2024-04-25T12:39:30.612271892Z level=info msg="Migration successfully executed" id="RBAC action name migrator" duration=1.581491ms
policy-pap | [2024-04-25T12:40:06.870+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 68 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
kafka | [2024-04-25 12:40:03,997] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-2 (state.change.logger)
policy-db-migrator | --------------
grafana | logger=migrator t=2024-04-25T12:39:30.619388735Z level=info msg="Executing migration" id="Add UID column to playlist"
policy-pap | [2024-04-25T12:40:06.968+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Error while fetching metadata with correlation id 73 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
kafka | [2024-04-25 12:40:03,997] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-25 (state.change.logger)
policy-db-migrator |
grafana | logger=migrator t=2024-04-25T12:39:30.62813033Z level=info msg="Migration successfully executed" id="Add UID column to playlist" duration=8.737845ms
policy-pap | [2024-04-25T12:40:06.973+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 70 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
kafka | [2024-04-25 12:40:03,997] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-40 (state.change.logger)
policy-db-migrator |
grafana | logger=migrator t=2024-04-25T12:39:30.633741504Z level=info msg="Executing migration" id="Update uid column values in playlist"
policy-pap | [2024-04-25T12:40:07.071+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Error while fetching metadata with correlation id 75 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
kafka | [2024-04-25 12:40:03,997] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-47 (state.change.logger)
policy-db-migrator | > upgrade 0100-upgrade.sql
grafana | logger=migrator t=2024-04-25T12:39:30.634079069Z level=info msg="Migration successfully executed" id="Update uid column values in playlist" duration=339.985µs
policy-pap | [2024-04-25T12:40:07.080+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 72 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
kafka | [2024-04-25 12:40:03,997] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-17 (state.change.logger)
policy-db-migrator | --------------
grafana | logger=migrator t=2024-04-25T12:39:30.691110619Z level=info msg="Executing migration" id="Add index for uid in playlist"
policy-pap | [2024-04-25T12:40:07.177+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Error while fetching metadata with correlation id 77 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
kafka | [2024-04-25 12:40:03,997] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-32 (state.change.logger)
policy-db-migrator | select 'upgrade to 1100 completed' as msg
grafana | logger=migrator t=2024-04-25T12:39:30.693658683Z level=info msg="Migration successfully executed" id="Add index for uid in playlist" duration=2.550994ms
policy-pap | [2024-04-25T12:40:07.183+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 74 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
kafka | [2024-04-25 12:40:03,997] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-37 (state.change.logger)
policy-db-migrator | --------------
grafana | logger=migrator t=2024-04-25T12:39:30.702334027Z level=info msg="Executing migration" id="update group index for alert rules"
policy-pap | [2024-04-25T12:40:07.281+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Error while fetching metadata with correlation id 79 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
kafka | [2024-04-25 12:40:03,997] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-7 (state.change.logger)
policy-db-migrator |
grafana | logger=migrator t=2024-04-25T12:39:30.702951465Z level=info msg="Migration successfully executed" id="update group index for alert rules" duration=621.798µs
policy-pap | [2024-04-25T12:40:07.286+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 76 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
kafka | [2024-04-25 12:40:03,997] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-22 (state.change.logger)
policy-db-migrator | msg
grafana | logger=migrator t=2024-04-25T12:39:30.70944039Z level=info msg="Executing migration" id="managed folder permissions alert actions repeated migration"
policy-pap | [2024-04-25T12:40:07.385+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Error while fetching metadata with correlation id 81 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
kafka | [2024-04-25 12:40:03,997] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-29 (state.change.logger)
policy-db-migrator | upgrade to 1100 completed
grafana | logger=migrator t=2024-04-25T12:39:30.709833536Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions repeated migration" duration=395.946µs
policy-pap | [2024-04-25T12:40:07.389+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 78 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
kafka | [2024-04-25 12:40:03,997] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-44 (state.change.logger)
policy-db-migrator |
grafana | logger=migrator t=2024-04-25T12:39:30.716882029Z level=info msg="Executing migration" id="admin only folder/dashboard permission"
policy-pap | [2024-04-25T12:40:07.490+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Error while fetching metadata with correlation id 83 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
kafka | [2024-04-25 12:40:03,997] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-14 (state.change.logger)
policy-db-migrator | > upgrade 0100-jpapolicyaudit_renameuser.sql
grafana | logger=migrator t=2024-04-25T12:39:30.717567967Z level=info msg="Migration successfully executed" id="admin only folder/dashboard permission" duration=674.219µs
policy-pap | [2024-04-25T12:40:07.493+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 80 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
kafka | [2024-04-25 12:40:03,997] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-23 (state.change.logger)
policy-db-migrator | --------------
grafana | logger=migrator t=2024-04-25T12:39:30.724361467Z level=info msg="Executing migration" id="add action column to seed_assignment"
policy-pap | [2024-04-25T12:40:07.593+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Error while fetching metadata with correlation id 85 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
kafka | [2024-04-25 12:40:03,997] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-38 (state.change.logger)
policy-db-migrator | ALTER TABLE jpapolicyaudit RENAME COLUMN USER TO USERNAME
grafana | logger=migrator t=2024-04-25T12:39:30.732853839Z level=info msg="Migration successfully executed" id="add action column to seed_assignment" duration=8.488622ms
policy-pap | [2024-04-25T12:40:07.601+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 82 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
kafka | [2024-04-25 12:40:03,997] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-8 (state.change.logger)
policy-db-migrator | --------------
grafana | logger=migrator t=2024-04-25T12:39:30.737316907Z level=info msg="Executing migration" id="add scope column to seed_assignment"
policy-pap | [2024-04-25T12:40:07.697+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Error while fetching metadata with correlation id 87 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
kafka | [2024-04-25 12:40:03,997] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition policy-pdp-pap-0 (state.change.logger)
policy-db-migrator |
grafana | logger=migrator t=2024-04-25T12:39:30.745788829Z level=info msg="Migration successfully executed" id="add scope column to seed_assignment" duration=8.461851ms
policy-pap | [2024-04-25T12:40:07.704+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 84 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
kafka | [2024-04-25 12:40:03,997] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-45 (state.change.logger)
policy-db-migrator |
grafana | logger=migrator t=2024-04-25T12:39:30.751024987Z level=info msg="Executing migration" id="remove unique index builtin_role_role_name before nullable update"
policy-pap | [2024-04-25T12:40:07.800+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Error while fetching metadata with correlation id 89 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
kafka | [2024-04-25 12:40:03,997] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-15 (state.change.logger)
policy-db-migrator | > upgrade 0110-idx_tsidx1.sql
policy-pap | [2024-04-25T12:40:07.810+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 86 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
grafana | logger=migrator t=2024-04-25T12:39:30.752635849Z level=info msg="Migration successfully executed" id="remove unique index builtin_role_role_name before nullable update" duration=1.612792ms
kafka | [2024-04-25 12:40:03,997] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-30 (state.change.logger)
policy-db-migrator | --------------
policy-pap | [2024-04-25T12:40:07.904+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Error while fetching metadata with correlation id 91 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
grafana | logger=migrator t=2024-04-25T12:39:30.784370907Z level=info msg="Executing migration" id="update seed_assignment role_name column to nullable"
policy-db-migrator | DROP INDEX IDX_TSIDX1 ON pdpstatistics
grafana | logger=migrator t=2024-04-25T12:39:30.862882389Z level=info msg="Migration successfully executed" id="update seed_assignment role_name column to nullable" duration=78.509952ms
kafka | [2024-04-25 12:40:03,997] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-0 (state.change.logger)
policy-pap | [2024-04-25T12:40:07.914+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 88 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
policy-db-migrator | --------------
grafana | logger=migrator t=2024-04-25T12:39:30.867880286Z level=info msg="Executing migration" id="add unique index builtin_role_name back"
kafka | [2024-04-25 12:40:03,997] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-35 (state.change.logger)
policy-pap | [2024-04-25T12:40:08.008+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Error while fetching metadata with correlation id 93 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
policy-db-migrator |
grafana | logger=migrator t=2024-04-25T12:39:30.868913389Z level=info msg="Migration successfully executed" id="add unique index builtin_role_name back" duration=1.035303ms
kafka | [2024-04-25 12:40:03,997] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-5 (state.change.logger)
policy-pap | [2024-04-25T12:40:08.016+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 90 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
policy-db-migrator | --------------
grafana | logger=migrator t=2024-04-25T12:39:30.873722242Z level=info msg="Executing migration" id="add unique index builtin_role_action_scope"
kafka | [2024-04-25 12:40:03,997] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-20 (state.change.logger)
policy-pap | [2024-04-25T12:40:08.113+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Error while fetching metadata with correlation id 95 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
policy-db-migrator | CREATE INDEX IDXTSIDX1 ON pdpstatistics(timeStamp, name, version)
grafana | logger=migrator t=2024-04-25T12:39:30.874730945Z level=info msg="Migration successfully executed" id="add unique index builtin_role_action_scope" duration=1.006613ms
kafka | [2024-04-25 12:40:03,997] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-27 (state.change.logger)
policy-pap | [2024-04-25T12:40:08.121+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 92 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
policy-db-migrator | --------------
grafana | logger=migrator t=2024-04-25T12:39:30.884731887Z level=info msg="Executing migration" id="add primary key to seed_assigment"
policy-pap | [2024-04-25T12:40:08.217+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Error while fetching metadata with correlation id 97 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
kafka | [2024-04-25 12:40:03,997] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-42 (state.change.logger)
policy-db-migrator |
grafana | logger=migrator t=2024-04-25T12:39:30.905711723Z level=info msg="Migration successfully executed" id="add primary key to seed_assigment" duration=20.978316ms
policy-pap | [2024-04-25T12:40:08.228+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 94 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
kafka | [2024-04-25 12:40:03,998] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-12 (state.change.logger)
policy-db-migrator |
grafana | logger=migrator t=2024-04-25T12:39:30.909011517Z level=info msg="Executing migration" id="add origin column to seed_assignment"
policy-pap | [2024-04-25T12:40:08.321+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Error while fetching metadata with correlation id 99 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
kafka | [2024-04-25 12:40:03,998] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-21 (state.change.logger)
policy-db-migrator | > upgrade 0120-audit_sequence.sql
grafana | logger=migrator t=2024-04-25T12:39:30.915502662Z level=info msg="Migration successfully executed" id="add origin column to seed_assignment" duration=6.487495ms
policy-pap | [2024-04-25T12:40:08.333+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 96 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
kafka | [2024-04-25 12:40:03,998] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-36 (state.change.logger)
policy-db-migrator | --------------
grafana | logger=migrator t=2024-04-25T12:39:30.920308406Z level=info msg="Executing migration" id="add origin to plugin seed_assignment"
policy-pap | [2024-04-25T12:40:08.425+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Error while fetching metadata with correlation id 101 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
kafka | [2024-04-25 12:40:03,998] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-6 (state.change.logger)
policy-db-migrator | CREATE TABLE IF NOT EXISTS audit_sequence (SEQ_NAME VARCHAR(50) NOT NULL, SEQ_COUNT DECIMAL(38) DEFAULT NULL, PRIMARY KEY PK_SEQUENCE (SEQ_NAME))
grafana | logger=migrator t=2024-04-25T12:39:30.92065881Z level=info msg="Migration successfully executed" id="add origin to plugin seed_assignment" duration=352.954µs
policy-pap | [2024-04-25T12:40:08.436+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 98 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
kafka | [2024-04-25 12:40:03,998] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-43 (state.change.logger)
policy-db-migrator | --------------
grafana | logger=migrator t=2024-04-25T12:39:30.926940413Z level=info msg="Executing migration" id="prevent seeding OnCall access"
policy-pap | [2024-04-25T12:40:08.536+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Error while fetching metadata with correlation id 103 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
kafka | [2024-04-25 12:40:03,998] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-13 (state.change.logger)
policy-db-migrator |
grafana | logger=migrator t=2024-04-25T12:39:30.927192336Z level=info msg="Migration successfully executed" id="prevent seeding OnCall access" duration=254.343µs
policy-pap | [2024-04-25T12:40:08.538+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 100 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
kafka | [2024-04-25 12:40:03,998] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-28 (state.change.logger)
policy-db-migrator | --------------
grafana | logger=migrator t=2024-04-25T12:39:30.931674245Z level=info msg="Executing migration" id="managed folder permissions alert actions repeated fixed migration"
policy-pap | [2024-04-25T12:40:08.639+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 102 : {policy-pdp-pap=UNKNOWN_TOPIC_OR_PARTITION}
policy-db-migrator | INSERT INTO audit_sequence(SEQ_NAME, SEQ_COUNT) VALUES('SEQ_GEN', (SELECT IFNULL(max(id),0) FROM jpapolicyaudit))
kafka | [2024-04-25 12:40:03,999] INFO [ReplicaFetcherManager on broker 1] Removed fetcher for partitions HashSet(__consumer_offsets-22, __consumer_offsets-30, __consumer_offsets-25, __consumer_offsets-35, __consumer_offsets-38, __consumer_offsets-13, __consumer_offsets-8, __consumer_offsets-21, __consumer_offsets-4, __consumer_offsets-27, __consumer_offsets-7, __consumer_offsets-9, __consumer_offsets-46, __consumer_offsets-41, __consumer_offsets-33, __consumer_offsets-23, __consumer_offsets-49, __consumer_offsets-47, __consumer_offsets-16, __consumer_offsets-28, __consumer_offsets-31, __consumer_offsets-36, __consumer_offsets-42, __consumer_offsets-3, __consumer_offsets-18, __consumer_offsets-37, policy-pdp-pap-0, __consumer_offsets-15, __consumer_offsets-24, __consumer_offsets-17, __consumer_offsets-48, __consumer_offsets-19, __consumer_offsets-11, __consumer_offsets-2, __consumer_offsets-43, __consumer_offsets-6, __consumer_offsets-14, __consumer_offsets-20, __consumer_offsets-0, __consumer_offsets-44, __consumer_offsets-39, __consumer_offsets-12, __consumer_offsets-45, __consumer_offsets-1, __consumer_offsets-5, __consumer_offsets-26, __consumer_offsets-29, __consumer_offsets-34, __consumer_offsets-10, __consumer_offsets-32, __consumer_offsets-40) (kafka.server.ReplicaFetcherManager)
grafana | logger=migrator t=2024-04-25T12:39:30.932177182Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions repeated fixed migration" duration=504.677µs
policy-pap | [2024-04-25T12:40:08.642+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Error while fetching metadata with correlation id 105 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
policy-db-migrator | --------------
kafka | [2024-04-25 12:40:03,999] INFO [Broker id=1] Stopped fetchers as part of LeaderAndIsr request correlationId 1 from controller 1 epoch 1 as part of the become-leader transition for 51 partitions (state.change.logger)
grafana | logger=migrator
t=2024-04-25T12:39:30.938233941Z level=info msg="Executing migration" id="managed folder permissions library panel actions migration" policy-pap | [2024-04-25T12:40:08.744+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 104 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} kafka | [2024-04-25 12:40:04,036] INFO [LogLoader partition=__consumer_offsets-3, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) grafana | logger=migrator t=2024-04-25T12:39:30.938577546Z level=info msg="Migration successfully executed" id="managed folder permissions library panel actions migration" duration=347.235µs policy-db-migrator | policy-pap | [2024-04-25T12:40:08.744+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Error while fetching metadata with correlation id 107 : {policy-pdp-pap=UNKNOWN_TOPIC_OR_PARTITION} kafka | [2024-04-25 12:40:04,047] INFO Created log for partition __consumer_offsets-3 in /var/lib/kafka/data/__consumer_offsets-3 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) grafana | logger=migrator t=2024-04-25T12:39:30.94571675Z level=info msg="Executing migration" id="migrate external alertmanagers to datsourcse" policy-db-migrator | policy-pap | [2024-04-25T12:40:08.846+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Error while fetching metadata with correlation id 109 : {policy-pdp-pap=UNKNOWN_TOPIC_OR_PARTITION} kafka | [2024-04-25 12:40:04,048] INFO [Partition __consumer_offsets-3 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-3 
(kafka.cluster.Partition) grafana | logger=migrator t=2024-04-25T12:39:30.946107465Z level=info msg="Migration successfully executed" id="migrate external alertmanagers to datsourcse" duration=394.035µs policy-db-migrator | > upgrade 0130-statistics_sequence.sql policy-pap | [2024-04-25T12:40:08.848+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 106 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} kafka | [2024-04-25 12:40:04,049] INFO [Partition __consumer_offsets-3 broker=1] Log loaded for partition __consumer_offsets-3 with initial high watermark 0 (kafka.cluster.Partition) grafana | logger=migrator t=2024-04-25T12:39:30.951745199Z level=info msg="Executing migration" id="create folder table" policy-db-migrator | -------------- policy-pap | [2024-04-25T12:40:08.948+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 108 : {policy-pdp-pap=UNKNOWN_TOPIC_OR_PARTITION} kafka | [2024-04-25 12:40:04,050] INFO [Broker id=1] Leader __consumer_offsets-3 with topic id Some(hlyPC_3zQpGmePqsd4AOeA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) grafana | logger=migrator t=2024-04-25T12:39:30.953132527Z level=info msg="Migration successfully executed" id="create folder table" duration=1.389418ms policy-db-migrator | CREATE TABLE IF NOT EXISTS statistics_sequence (SEQ_NAME VARCHAR(50) NOT NULL, SEQ_COUNT DECIMAL(38) DEFAULT NULL, PRIMARY KEY PK_SEQUENCE (SEQ_NAME)) policy-pap | [2024-04-25T12:40:08.949+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Error while fetching metadata with correlation id 111 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} kafka | [2024-04-25 12:40:04,395] INFO [LogLoader partition=__consumer_offsets-18, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) grafana | logger=migrator t=2024-04-25T12:39:30.95709201Z level=info msg="Executing migration" id="Add index for parent_uid" policy-db-migrator | -------------- policy-pap | [2024-04-25T12:40:09.052+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Error while fetching metadata with correlation id 113 : {policy-pdp-pap=UNKNOWN_TOPIC_OR_PARTITION} kafka | [2024-04-25 12:40:04,396] INFO Created log for partition __consumer_offsets-18 in /var/lib/kafka/data/__consumer_offsets-18 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) grafana | logger=migrator t=2024-04-25T12:39:30.958481858Z level=info msg="Migration successfully executed" id="Add index for parent_uid" duration=1.390478ms policy-db-migrator | policy-pap | [2024-04-25T12:40:09.053+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 110 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} kafka | 
[2024-04-25 12:40:04,396] INFO [Partition __consumer_offsets-18 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-18 (kafka.cluster.Partition) grafana | logger=migrator t=2024-04-25T12:39:30.963392522Z level=info msg="Executing migration" id="Add unique index for folder.uid and folder.org_id" policy-db-migrator | -------------- policy-pap | [2024-04-25T12:40:09.155+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 112 : {policy-pdp-pap=UNKNOWN_TOPIC_OR_PARTITION} kafka | [2024-04-25 12:40:04,396] INFO [Partition __consumer_offsets-18 broker=1] Log loaded for partition __consumer_offsets-18 with initial high watermark 0 (kafka.cluster.Partition) grafana | logger=migrator t=2024-04-25T12:39:30.96472451Z level=info msg="Migration successfully executed" id="Add unique index for folder.uid and folder.org_id" duration=1.333108ms policy-db-migrator | INSERT INTO statistics_sequence(SEQ_NAME, SEQ_COUNT) VALUES('SEQ_GEN', (SELECT IFNULL(max(id),0) FROM pdpstatistics)) policy-pap | [2024-04-25T12:40:09.156+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Error while fetching metadata with correlation id 115 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} kafka | [2024-04-25 12:40:04,396] INFO [Broker id=1] Leader __consumer_offsets-18 with topic id Some(hlyPC_3zQpGmePqsd4AOeA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) grafana | logger=migrator t=2024-04-25T12:39:30.969433712Z level=info msg="Executing migration" id="Update folder title length" policy-db-migrator | -------------- kafka | [2024-04-25 12:40:04,723] INFO [LogLoader partition=__consumer_offsets-41, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) grafana | logger=migrator t=2024-04-25T12:39:30.969482622Z level=info msg="Migration successfully executed" id="Update folder title length" duration=51.2µs policy-pap | [2024-04-25T12:40:09.258+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Error while fetching metadata with correlation id 117 : {policy-pdp-pap=UNKNOWN_TOPIC_OR_PARTITION} policy-db-migrator | kafka | [2024-04-25 12:40:04,724] INFO Created log for partition __consumer_offsets-41 in /var/lib/kafka/data/__consumer_offsets-41 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) grafana | logger=migrator t=2024-04-25T12:39:30.974997235Z level=info msg="Executing migration" id="Add unique index for folder.title and folder.parent_uid" policy-pap | [2024-04-25T12:40:09.263+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 114 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-db-migrator | -------------- kafka | [2024-04-25 12:40:04,725] INFO [Partition __consumer_offsets-41 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-41 (kafka.cluster.Partition) grafana | logger=migrator t=2024-04-25T12:39:30.976355664Z level=info msg="Migration successfully executed" id="Add unique index for folder.title and folder.parent_uid" duration=1.359909ms policy-pap | 
[2024-04-25T12:40:09.362+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Error while fetching metadata with correlation id 119 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-db-migrator | TRUNCATE TABLE sequence kafka | [2024-04-25 12:40:04,725] INFO [Partition __consumer_offsets-41 broker=1] Log loaded for partition __consumer_offsets-41 with initial high watermark 0 (kafka.cluster.Partition) grafana | logger=migrator t=2024-04-25T12:39:30.981215757Z level=info msg="Executing migration" id="Remove unique index for folder.title and folder.parent_uid" policy-pap | [2024-04-25T12:40:09.366+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 116 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-db-migrator | -------------- kafka | [2024-04-25 12:40:04,725] INFO [Broker id=1] Leader __consumer_offsets-41 with topic id Some(hlyPC_3zQpGmePqsd4AOeA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) grafana | logger=migrator t=2024-04-25T12:39:30.983126252Z level=info msg="Migration successfully executed" id="Remove unique index for folder.title and folder.parent_uid" duration=1.912245ms policy-pap | [2024-04-25T12:40:09.468+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 118 : {policy-pdp-pap=UNKNOWN_TOPIC_OR_PARTITION} policy-db-migrator | kafka | [2024-04-25 12:40:05,183] INFO [LogLoader partition=__consumer_offsets-10, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) grafana | logger=migrator t=2024-04-25T12:39:30.989307873Z level=info msg="Executing migration" id="Add unique index for title, parent_uid, and org_id" policy-pap | [2024-04-25T12:40:09.471+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Error while fetching metadata with correlation id 121 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-db-migrator | kafka | [2024-04-25 12:40:05,183] INFO Created log for partition __consumer_offsets-10 in /var/lib/kafka/data/__consumer_offsets-10 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) grafana | logger=migrator t=2024-04-25T12:39:30.991262419Z level=info msg="Migration successfully executed" id="Add unique index for title, parent_uid, and org_id" duration=1.955616ms policy-pap | [2024-04-25T12:40:09.572+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Error while fetching metadata with correlation id 123 : {policy-pdp-pap=UNKNOWN_TOPIC_OR_PARTITION} policy-db-migrator | > upgrade 0100-pdpstatistics.sql kafka | [2024-04-25 12:40:05,183] INFO 
[Partition __consumer_offsets-10 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-10 (kafka.cluster.Partition) grafana | logger=migrator t=2024-04-25T12:39:30.998322692Z level=info msg="Executing migration" id="Sync dashboard and folder table" policy-pap | [2024-04-25T12:40:09.573+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 120 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-db-migrator | -------------- kafka | [2024-04-25 12:40:05,184] INFO [Partition __consumer_offsets-10 broker=1] Log loaded for partition __consumer_offsets-10 with initial high watermark 0 (kafka.cluster.Partition) grafana | logger=migrator t=2024-04-25T12:39:30.999275095Z level=info msg="Migration successfully executed" id="Sync dashboard and folder table" duration=954.184µs policy-pap | [2024-04-25T12:40:09.674+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Error while fetching metadata with correlation id 125 : {policy-pdp-pap=UNKNOWN_TOPIC_OR_PARTITION} policy-db-migrator | DROP INDEX IDXTSIDX1 ON pdpstatistics kafka | [2024-04-25 12:40:05,184] INFO [Broker id=1] Leader __consumer_offsets-10 with topic id Some(hlyPC_3zQpGmePqsd4AOeA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) grafana | logger=migrator t=2024-04-25T12:39:31.003392688Z level=info msg="Executing migration" id="Remove ghost folders from the folder table" policy-db-migrator | -------------- policy-pap | [2024-04-25T12:40:09.686+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 122 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} kafka | [2024-04-25 12:40:05,913] INFO [LogLoader partition=__consumer_offsets-33, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) grafana | logger=migrator t=2024-04-25T12:39:31.003981927Z level=info msg="Migration successfully executed" id="Remove ghost folders from the folder table" duration=588.389µs policy-db-migrator | policy-pap | [2024-04-25T12:40:09.778+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Error while fetching metadata with correlation id 127 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} kafka | [2024-04-25 12:40:05,914] INFO Created log for partition __consumer_offsets-33 in /var/lib/kafka/data/__consumer_offsets-33 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) grafana | logger=migrator t=2024-04-25T12:39:31.105012955Z level=info msg="Executing migration" id="Remove unique index UQE_folder_uid_org_id" policy-db-migrator | -------------- policy-pap | [2024-04-25T12:40:09.789+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 124 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} kafka | [2024-04-25 12:40:05,914] INFO [Partition __consumer_offsets-33 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-33 
(kafka.cluster.Partition) grafana | logger=migrator t=2024-04-25T12:39:31.106044739Z level=info msg="Migration successfully executed" id="Remove unique index UQE_folder_uid_org_id" duration=1.034794ms policy-db-migrator | DROP TABLE pdpstatistics policy-pap | [2024-04-25T12:40:09.881+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Error while fetching metadata with correlation id 129 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} kafka | [2024-04-25 12:40:05,914] INFO [Partition __consumer_offsets-33 broker=1] Log loaded for partition __consumer_offsets-33 with initial high watermark 0 (kafka.cluster.Partition) grafana | logger=migrator t=2024-04-25T12:39:31.110561378Z level=info msg="Executing migration" id="Add unique index UQE_folder_org_id_uid" policy-db-migrator | -------------- policy-pap | [2024-04-25T12:40:09.892+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 126 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} kafka | [2024-04-25 12:40:05,914] INFO [Broker id=1] Leader __consumer_offsets-33 with topic id Some(hlyPC_3zQpGmePqsd4AOeA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) grafana | logger=migrator t=2024-04-25T12:39:31.11148378Z level=info msg="Migration successfully executed" id="Add unique index UQE_folder_org_id_uid" duration=920.402µs policy-db-migrator | policy-pap | [2024-04-25T12:40:09.984+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Error while fetching metadata with correlation id 131 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} kafka | [2024-04-25 12:40:06,577] INFO [LogLoader partition=__consumer_offsets-48, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) grafana | logger=migrator t=2024-04-25T12:39:31.115829947Z level=info msg="Executing migration" id="Remove unique index UQE_folder_title_parent_uid_org_id" policy-db-migrator | policy-pap | [2024-04-25T12:40:09.994+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 128 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} kafka | [2024-04-25 12:40:06,578] INFO Created log for partition __consumer_offsets-48 in /var/lib/kafka/data/__consumer_offsets-48 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) policy-db-migrator | > upgrade 0110-jpapdpstatistics_enginestats.sql policy-pap | [2024-04-25T12:40:10.088+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Error while fetching metadata with correlation id 133 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} kafka | [2024-04-25 12:40:06,578] INFO [Partition __consumer_offsets-48 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-48 (kafka.cluster.Partition) grafana | logger=migrator t=2024-04-25T12:39:31.117845873Z 
level=info msg="Migration successfully executed" id="Remove unique index UQE_folder_title_parent_uid_org_id" duration=2.015776ms policy-db-migrator | -------------- policy-pap | [2024-04-25T12:40:10.097+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 130 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} kafka | [2024-04-25 12:40:06,578] INFO [Partition __consumer_offsets-48 broker=1] Log loaded for partition __consumer_offsets-48 with initial high watermark 0 (kafka.cluster.Partition) grafana | logger=migrator t=2024-04-25T12:39:31.125686017Z level=info msg="Executing migration" id="Add unique index UQE_folder_org_id_parent_uid_title" policy-db-migrator | DROP TABLE jpapdpstatistics_enginestats policy-pap | [2024-04-25T12:40:10.191+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Error while fetching metadata with correlation id 135 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} kafka | [2024-04-25 12:40:06,578] INFO [Broker id=1] Leader __consumer_offsets-48 with topic id Some(hlyPC_3zQpGmePqsd4AOeA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) grafana | logger=migrator t=2024-04-25T12:39:31.128297821Z level=info msg="Migration successfully executed" id="Add unique index UQE_folder_org_id_parent_uid_title" duration=2.607844ms policy-db-migrator | -------------- policy-pap | [2024-04-25T12:40:10.199+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 132 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} kafka | [2024-04-25 12:40:07,129] INFO [LogLoader partition=__consumer_offsets-19, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) grafana | logger=migrator t=2024-04-25T12:39:31.132903082Z level=info msg="Executing migration" id="Remove index IDX_folder_parent_uid_org_id" policy-db-migrator | policy-pap | [2024-04-25T12:40:10.294+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Error while fetching metadata with correlation id 137 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} kafka | [2024-04-25 12:40:07,130] INFO Created log for partition __consumer_offsets-19 in /var/lib/kafka/data/__consumer_offsets-19 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) policy-db-migrator | policy-pap | [2024-04-25T12:40:10.302+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 134 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} grafana | logger=migrator t=2024-04-25T12:39:31.134925968Z level=info msg="Migration successfully executed" id="Remove index IDX_folder_parent_uid_org_id" duration=2.019726ms kafka | [2024-04-25 12:40:07,130] INFO [Partition __consumer_offsets-19 broker=1] No checkpointed highwatermark is found for partition 
__consumer_offsets-19 (kafka.cluster.Partition) policy-db-migrator | > upgrade 0120-statistics_sequence.sql policy-pap | [2024-04-25T12:40:10.398+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Error while fetching metadata with correlation id 139 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} grafana | logger=migrator t=2024-04-25T12:39:31.140346549Z level=info msg="Executing migration" id="create anon_device table" kafka | [2024-04-25 12:40:07,130] INFO [Partition __consumer_offsets-19 broker=1] Log loaded for partition __consumer_offsets-19 with initial high watermark 0 (kafka.cluster.Partition) policy-db-migrator | -------------- policy-pap | [2024-04-25T12:40:10.405+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 136 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} kafka | [2024-04-25 12:40:07,131] INFO [Broker id=1] Leader __consumer_offsets-19 with topic id Some(hlyPC_3zQpGmePqsd4AOeA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) policy-db-migrator | DROP TABLE statistics_sequence grafana | logger=migrator t=2024-04-25T12:39:31.141310833Z level=info msg="Migration successfully executed" id="create anon_device table" duration=964.134µs policy-pap | [2024-04-25T12:40:10.501+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Error while fetching metadata with correlation id 141 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} kafka | [2024-04-25 12:40:07,840] INFO [LogLoader partition=__consumer_offsets-34, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-25T12:39:31.174996836Z level=info msg="Executing migration" id="add unique index anon_device.device_id" policy-pap | [2024-04-25T12:40:10.507+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 138 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} kafka | [2024-04-25 12:40:07,841] INFO Created log for partition __consumer_offsets-34 in /var/lib/kafka/data/__consumer_offsets-34 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) policy-db-migrator | grafana | logger=migrator t=2024-04-25T12:39:31.177487248Z level=info msg="Migration successfully executed" id="add unique index anon_device.device_id" duration=2.486621ms policy-pap | [2024-04-25T12:40:10.605+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Error while fetching metadata with correlation id 143 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} kafka | [2024-04-25 12:40:07,841] INFO [Partition __consumer_offsets-34 broker=1] No checkpointed 
highwatermark is found for partition __consumer_offsets-34 (kafka.cluster.Partition)
policy-db-migrator | policyadmin: OK: upgrade (1300)
grafana | logger=migrator t=2024-04-25T12:39:31.183167482Z level=info msg="Executing migration" id="add index anon_device.updated_at"
policy-pap | [2024-04-25T12:40:10.611+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 140 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
kafka | [2024-04-25 12:40:07,841] INFO [Partition __consumer_offsets-34 broker=1] Log loaded for partition __consumer_offsets-34 with initial high watermark 0 (kafka.cluster.Partition)
policy-db-migrator | name version
grafana | logger=migrator t=2024-04-25T12:39:31.184763933Z level=info msg="Migration successfully executed" id="add index anon_device.updated_at" duration=1.597071ms
policy-pap | [2024-04-25T12:40:10.708+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Error while fetching metadata with correlation id 145 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
kafka | [2024-04-25 12:40:07,841] INFO [Broker id=1] Leader __consumer_offsets-34 with topic id Some(hlyPC_3zQpGmePqsd4AOeA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
policy-db-migrator | policyadmin 1300
grafana | logger=migrator t=2024-04-25T12:39:31.189922572Z level=info msg="Executing migration" id="create signing_key table"
policy-pap | [2024-04-25T12:40:10.714+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 142 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
kafka | [2024-04-25 12:40:07,976] INFO [LogLoader partition=__consumer_offsets-4, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
policy-db-migrator | ID script operation from_version to_version tag success atTime
grafana | logger=migrator t=2024-04-25T12:39:31.191208818Z level=info msg="Migration successfully executed" id="create signing_key table" duration=1.285766ms
policy-pap | [2024-04-25T12:40:10.811+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Error while fetching metadata with correlation id 147 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
kafka | [2024-04-25 12:40:07,977] INFO Created log for partition __consumer_offsets-4 in /var/lib/kafka/data/__consumer_offsets-4 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
policy-db-migrator | 1 0100-jpapdpgroup_properties.sql upgrade 0 0800 2504241239220800u 1 2024-04-25 12:39:22
grafana | logger=migrator t=2024-04-25T12:39:31.198034098Z level=info msg="Executing migration" id="add unique index signing_key.key_id"
policy-pap | [2024-04-25T12:40:10.817+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 144 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
kafka | [2024-04-25 12:40:07,977] INFO [Partition __consumer_offsets-4 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-4 (kafka.cluster.Partition)
policy-db-migrator | 2 0110-jpapdpstatistics_enginestats.sql upgrade 0 0800 2504241239220800u 1 2024-04-25 12:39:23
grafana | logger=migrator t=2024-04-25T12:39:31.199252674Z level=info msg="Migration successfully executed" id="add unique index signing_key.key_id" duration=1.218906ms
policy-pap | [2024-04-25T12:40:10.914+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Error while fetching metadata with correlation id 149 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
kafka | [2024-04-25 12:40:07,977] INFO [Partition __consumer_offsets-4 broker=1] Log loaded for partition __consumer_offsets-4 with initial high watermark 0 (kafka.cluster.Partition)
policy-db-migrator | 3 0120-jpapdpsubgroup_policies.sql upgrade 0 0800 2504241239220800u 1 2024-04-25 12:39:23
grafana | logger=migrator t=2024-04-25T12:39:31.203212777Z level=info msg="Executing migration" id="set legacy alert migration status in kvstore"
policy-pap | [2024-04-25T12:40:10.920+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 146 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
kafka | [2024-04-25 12:40:07,977] INFO [Broker id=1] Leader __consumer_offsets-4 with topic id Some(hlyPC_3zQpGmePqsd4AOeA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
policy-db-migrator | 4 0130-jpapdpsubgroup_properties.sql upgrade 0 0800 2504241239220800u 1 2024-04-25 12:39:23
grafana | logger=migrator t=2024-04-25T12:39:31.204543514Z level=info msg="Migration successfully executed" id="set legacy alert migration status in kvstore" duration=1.330898ms
policy-pap | [2024-04-25T12:40:11.018+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Error while fetching metadata with correlation id 151 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
kafka | [2024-04-25 12:40:08,566] INFO [LogLoader partition=__consumer_offsets-11, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
policy-db-migrator | 5 0140-jpapdpsubgroup_supportedpolicytypes.sql upgrade 0 0800 2504241239220800u 1 2024-04-25 12:39:23
grafana | logger=migrator t=2024-04-25T12:39:31.209385407Z level=info msg="Executing migration" id="migrate record of created folders during legacy migration to kvstore"
policy-pap | [2024-04-25T12:40:11.023+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 148 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
kafka | [2024-04-25 12:40:08,566] INFO Created log for partition __consumer_offsets-11 in /var/lib/kafka/data/__consumer_offsets-11 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
policy-db-migrator | 6 0150-jpatoscacapabilityassignment_attributes.sql upgrade 0 0800 2504241239220800u 1 2024-04-25 12:39:23
grafana | logger=migrator t=2024-04-25T12:39:31.209759312Z level=info msg="Migration successfully executed" id="migrate record of created folders during legacy migration to kvstore" duration=375.105µs
policy-pap | [2024-04-25T12:40:11.121+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Error while fetching metadata with correlation id 153 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
kafka | [2024-04-25 12:40:08,566] INFO [Partition __consumer_offsets-11 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-11 (kafka.cluster.Partition)
policy-db-migrator | 7 0160-jpatoscacapabilityassignment_metadata.sql upgrade 0 0800 2504241239220800u 1 2024-04-25 12:39:23
grafana | logger=migrator t=2024-04-25T12:39:31.215253305Z level=info msg="Executing migration" id="Add folder_uid for dashboard"
policy-pap | [2024-04-25T12:40:11.126+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 150 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
kafka | [2024-04-25 12:40:08,566] INFO [Partition __consumer_offsets-11 broker=1] Log loaded for partition __consumer_offsets-11 with initial high watermark 0 (kafka.cluster.Partition)
policy-db-migrator | 8 0170-jpatoscacapabilityassignment_occurrences.sql upgrade 0 0800 2504241239220800u 1 2024-04-25 12:39:23
grafana | logger=migrator t=2024-04-25T12:39:31.226010166Z level=info msg="Migration successfully executed" id="Add folder_uid for dashboard" duration=10.755771ms
policy-pap | [2024-04-25T12:40:11.229+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Error while fetching metadata with correlation id 155 : {policy-pdp-pap=UNKNOWN_TOPIC_OR_PARTITION}
kafka | [2024-04-25 12:40:08,566] INFO [Broker id=1] Leader __consumer_offsets-11 with topic id Some(hlyPC_3zQpGmePqsd4AOeA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
policy-db-migrator | 9 0180-jpatoscacapabilityassignment_properties.sql upgrade 0 0800 2504241239220800u 1 2024-04-25 12:39:23
grafana | logger=migrator t=2024-04-25T12:39:31.231689881Z level=info msg="Executing migration" id="Populate dashboard folder_uid column"
kafka | [2024-04-25 12:40:08,691] INFO [LogLoader partition=__consumer_offsets-26, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
policy-pap | [2024-04-25T12:40:11.230+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 152 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
policy-db-migrator | 10 0190-jpatoscacapabilitytype_metadata.sql upgrade 0 0800 2504241239220800u 1 2024-04-25 12:39:23
grafana | logger=migrator t=2024-04-25T12:39:31.233019178Z level=info msg="Migration successfully executed" id="Populate dashboard folder_uid column" duration=1.334317ms
kafka | [2024-04-25 12:40:08,692] INFO Created log for partition __consumer_offsets-26 in /var/lib/kafka/data/__consumer_offsets-26 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
policy-pap | [2024-04-25T12:40:11.332+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 154 : {policy-pdp-pap=UNKNOWN_TOPIC_OR_PARTITION}
policy-db-migrator | 11 0200-jpatoscacapabilitytype_properties.sql upgrade 0 0800 2504241239220800u 1 2024-04-25 12:39:23
grafana | logger=migrator t=2024-04-25T12:39:31.239217869Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_folder_uid_title"
kafka | [2024-04-25 12:40:08,693] INFO [Partition __consumer_offsets-26 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-26 (kafka.cluster.Partition)
policy-pap | [2024-04-25T12:40:11.333+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Error while fetching metadata with correlation id 157 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
policy-db-migrator | 12 0210-jpatoscadatatype_constraints.sql upgrade 0 0800 2504241239220800u 1 2024-04-25 12:39:23
grafana | logger=migrator t=2024-04-25T12:39:31.241368087Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_folder_uid_title" duration=2.150598ms
kafka | [2024-04-25 12:40:08,693] INFO [Partition __consumer_offsets-26 broker=1] Log loaded for partition __consumer_offsets-26 with initial high watermark 0 (kafka.cluster.Partition)
policy-pap | [2024-04-25T12:40:11.434+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Error while fetching metadata with correlation id 159 : {policy-pdp-pap=UNKNOWN_TOPIC_OR_PARTITION}
policy-db-migrator | 13 0220-jpatoscadatatype_metadata.sql upgrade 0 0800 2504241239220800u 1 2024-04-25 12:39:23
grafana | logger=migrator t=2024-04-25T12:39:31.245683615Z level=info msg="Executing migration" id="Delete unique index for dashboard_org_id_folder_id_title"
kafka | [2024-04-25 12:40:08,693] INFO [Broker id=1] Leader __consumer_offsets-26 with topic id Some(hlyPC_3zQpGmePqsd4AOeA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
policy-pap | [2024-04-25T12:40:11.437+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 156 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
policy-db-migrator | 14 0230-jpatoscadatatype_properties.sql upgrade 0 0800 2504241239220800u 1 2024-04-25 12:39:23
grafana | logger=migrator t=2024-04-25T12:39:31.247703082Z level=info msg="Migration successfully executed" id="Delete unique index for dashboard_org_id_folder_id_title" duration=2.019696ms
kafka | [2024-04-25 12:40:09,232] INFO [LogLoader partition=__consumer_offsets-49, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
policy-pap | [2024-04-25T12:40:11.537+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 158 : {policy-pdp-pap=UNKNOWN_TOPIC_OR_PARTITION}
policy-db-migrator | 15 0240-jpatoscanodetemplate_metadata.sql upgrade 0 0800 2504241239220800u 1 2024-04-25 12:39:23
grafana | logger=migrator t=2024-04-25T12:39:31.251488681Z level=info msg="Executing migration" id="Delete unique index for dashboard_org_id_folder_uid_title"
kafka | [2024-04-25 12:40:09,233] INFO Created log for partition __consumer_offsets-49 in /var/lib/kafka/data/__consumer_offsets-49 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
policy-pap | [2024-04-25T12:40:11.539+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Error while fetching metadata with correlation id 161 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
policy-db-migrator | 16 0250-jpatoscanodetemplate_properties.sql upgrade 0 0800 2504241239220800u 1 2024-04-25 12:39:23
kafka | [2024-04-25 12:40:09,233] INFO [Partition __consumer_offsets-49 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-49 (kafka.cluster.Partition)
policy-pap | [2024-04-25T12:40:11.640+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Error while fetching metadata with correlation id 163 : {policy-pdp-pap=UNKNOWN_TOPIC_OR_PARTITION}
grafana | logger=migrator t=2024-04-25T12:39:31.252585925Z level=info msg="Migration successfully executed" id="Delete unique index for dashboard_org_id_folder_uid_title" duration=1.097864ms
policy-db-migrator | 17 0260-jpatoscanodetype_metadata.sql upgrade 0 0800 2504241239220800u 1 2024-04-25 12:39:24
policy-pap | [2024-04-25T12:40:11.640+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 160 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
grafana | logger=migrator t=2024-04-25T12:39:31.26059158Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_folder_uid_title_is_folder"
kafka | [2024-04-25 12:40:09,233] INFO [Partition __consumer_offsets-49 broker=1] Log loaded for partition __consumer_offsets-49 with initial high watermark 0 (kafka.cluster.Partition)
policy-db-migrator | 18 0270-jpatoscanodetype_properties.sql upgrade 0 0800 2504241239220800u 1 2024-04-25 12:39:24
grafana | logger=migrator t=2024-04-25T12:39:31.262413075Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_folder_uid_title_is_folder" duration=1.821145ms
kafka | [2024-04-25 12:40:09,234] INFO [Broker id=1] Leader __consumer_offsets-49 with topic id Some(hlyPC_3zQpGmePqsd4AOeA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
policy-pap | [2024-04-25T12:40:11.644+00:00|INFO|[/policy/pap/v1]|http-nio-6969-exec-3] Initializing Spring DispatcherServlet 'dispatcherServlet'
policy-db-migrator | 19 0280-jpatoscapolicy_metadata.sql upgrade 0 0800 2504241239220800u 1 2024-04-25 12:39:24
kafka | [2024-04-25 12:40:09,764] INFO [LogLoader partition=__consumer_offsets-39, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
policy-pap | [2024-04-25T12:40:11.644+00:00|INFO|DispatcherServlet|http-nio-6969-exec-3] Initializing Servlet 'dispatcherServlet'
grafana | logger=migrator t=2024-04-25T12:39:31.269573978Z level=info msg="Executing migration" id="Restore index for dashboard_org_id_folder_id_title"
policy-db-migrator | 20 0290-jpatoscapolicy_properties.sql upgrade 0 0800 2504241239220800u 1 2024-04-25 12:39:24
kafka | [2024-04-25 12:40:09,765] INFO Created log for partition __consumer_offsets-39 in /var/lib/kafka/data/__consumer_offsets-39 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
policy-pap | [2024-04-25T12:40:11.646+00:00|INFO|DispatcherServlet|http-nio-6969-exec-3] Completed initialization in 2 ms
grafana | logger=migrator t=2024-04-25T12:39:31.270728134Z level=info msg="Migration successfully executed" id="Restore index for dashboard_org_id_folder_id_title" duration=1.154306ms
policy-db-migrator | 21 0300-jpatoscapolicy_targets.sql upgrade 0 0800 2504241239220800u 1 2024-04-25 12:39:24
policy-pap | [2024-04-25T12:40:11.741+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 162 : {policy-pdp-pap=UNKNOWN_TOPIC_OR_PARTITION}
grafana | logger=migrator t=2024-04-25T12:39:31.276858494Z level=info msg="Executing migration" id="create sso_setting table"
kafka | [2024-04-25 12:40:09,765] INFO [Partition __consumer_offsets-39 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-39 (kafka.cluster.Partition)
policy-pap | [2024-04-25T12:40:11.742+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Error while fetching metadata with correlation id 165 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
grafana | logger=migrator t=2024-04-25T12:39:31.278429664Z level=info msg="Migration successfully executed" id="create sso_setting table" duration=1.57083ms
kafka | [2024-04-25 12:40:09,765] INFO [Partition __consumer_offsets-39 broker=1] Log loaded for partition __consumer_offsets-39 with initial high watermark 0 (kafka.cluster.Partition)
policy-db-migrator | 22 0310-jpatoscapolicytype_metadata.sql upgrade 0 0800 2504241239220800u 1 2024-04-25 12:39:24
policy-pap | [2024-04-25T12:40:11.844+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Error while fetching metadata with correlation id 167 : {policy-pdp-pap=UNKNOWN_TOPIC_OR_PARTITION}
grafana | logger=migrator t=2024-04-25T12:39:31.284606497Z level=info msg="Executing migration" id="copy kvstore migration status to each org"
kafka | [2024-04-25 12:40:09,765] INFO [Broker id=1] Leader __consumer_offsets-39 with topic id Some(hlyPC_3zQpGmePqsd4AOeA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
policy-db-migrator | 23 0320-jpatoscapolicytype_properties.sql upgrade 0 0800 2504241239220800u 1 2024-04-25 12:39:24
policy-pap | [2024-04-25T12:40:11.846+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 164 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
grafana | logger=migrator t=2024-04-25T12:39:31.285718321Z level=info msg="Migration successfully executed" id="copy kvstore migration status to each org" duration=1.112844ms
kafka | [2024-04-25 12:40:10,493] INFO [LogLoader partition=__consumer_offsets-9, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
policy-db-migrator | 24 0330-jpatoscapolicytype_targets.sql upgrade 0 0800 2504241239220800u 1 2024-04-25 12:39:24
policy-pap | [2024-04-25T12:40:11.948+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Error while fetching metadata with correlation id 169 : {policy-pdp-pap=UNKNOWN_TOPIC_OR_PARTITION}
grafana | logger=migrator t=2024-04-25T12:39:31.28947242Z level=info msg="Executing migration" id="add back entry for orgid=0 migrated status"
policy-db-migrator | 25 0340-jpatoscapolicytype_triggers.sql upgrade 0 0800 2504241239220800u 1 2024-04-25 12:39:24
kafka | [2024-04-25 12:40:10,494] INFO Created log for partition __consumer_offsets-9 in /var/lib/kafka/data/__consumer_offsets-9 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
policy-pap | [2024-04-25T12:40:11.949+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 166 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
grafana | logger=migrator t=2024-04-25T12:39:31.289945487Z level=info msg="Migration successfully executed" id="add back entry for orgid=0 migrated status" duration=474.697µs
policy-db-migrator | 26 0350-jpatoscaproperty_constraints.sql upgrade 0 0800 2504241239220800u 1 2024-04-25 12:39:24
kafka | [2024-04-25 12:40:10,494] INFO [Partition __consumer_offsets-9 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-9 (kafka.cluster.Partition)
policy-pap | [2024-04-25T12:40:12.052+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 168 : {policy-pdp-pap=UNKNOWN_TOPIC_OR_PARTITION}
grafana | logger=migrator t=2024-04-25T12:39:31.340500102Z level=info msg="Executing migration" id="alter kv_store.value to longtext"
policy-db-migrator | 27 0360-jpatoscaproperty_metadata.sql upgrade 0 0800 2504241239220800u 1 2024-04-25 12:39:24
kafka | [2024-04-25 12:40:10,494] INFO [Partition __consumer_offsets-9 broker=1] Log loaded for partition __consumer_offsets-9 with initial high watermark 0 (kafka.cluster.Partition)
policy-pap | [2024-04-25T12:40:12.054+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Error while fetching metadata with correlation id 171 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
grafana | logger=migrator t=2024-04-25T12:39:31.340621713Z level=info msg="Migration successfully executed" id="alter kv_store.value to longtext" duration=122.602µs
policy-db-migrator | 28 0370-jpatoscarelationshiptype_metadata.sql upgrade 0 0800 2504241239220800u 1 2024-04-25 12:39:24
kafka | [2024-04-25 12:40:10,494] INFO [Broker id=1] Leader __consumer_offsets-9 with topic id Some(hlyPC_3zQpGmePqsd4AOeA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
policy-pap | [2024-04-25T12:40:12.156+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Error while fetching metadata with correlation id 173 : {policy-pdp-pap=UNKNOWN_TOPIC_OR_PARTITION}
grafana | logger=migrator t=2024-04-25T12:39:31.348177902Z level=info msg="Executing migration" id="add notification_settings column to alert_rule table"
policy-db-migrator | 29 0380-jpatoscarelationshiptype_properties.sql upgrade 0 0800 2504241239220800u 1 2024-04-25 12:39:24
kafka | [2024-04-25 12:40:11,262] INFO [LogLoader partition=__consumer_offsets-24, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
policy-pap | [2024-04-25T12:40:12.160+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 170 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
grafana | logger=migrator t=2024-04-25T12:39:31.359019484Z level=info msg="Migration successfully executed" id="add notification_settings column to alert_rule table" duration=10.839512ms
policy-db-migrator | 30 0390-jpatoscarequirement_metadata.sql upgrade 0 0800 2504241239220800u 1 2024-04-25 12:39:24
kafka | [2024-04-25 12:40:11,262] INFO Created log for partition __consumer_offsets-24 in /var/lib/kafka/data/__consumer_offsets-24 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
policy-pap | [2024-04-25T12:40:12.260+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Error while fetching metadata with correlation id 175 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
grafana | logger=migrator t=2024-04-25T12:39:31.364308684Z level=info msg="Executing migration" id="add notification_settings column to alert_rule_version table"
policy-db-migrator | 31 0400-jpatoscarequirement_occurrences.sql upgrade 0 0800 2504241239220800u 1 2024-04-25 12:39:24
kafka | [2024-04-25 12:40:11,262] INFO [Partition __consumer_offsets-24 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-24 (kafka.cluster.Partition)
policy-pap | [2024-04-25T12:40:12.268+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 172 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
grafana | logger=migrator t=2024-04-25T12:39:31.371156484Z level=info msg="Migration successfully executed" id="add notification_settings column to alert_rule_version table" duration=6.84764ms
policy-db-migrator | 32 0410-jpatoscarequirement_properties.sql upgrade 0 0800 2504241239220800u 1 2024-04-25 12:39:25
kafka | [2024-04-25 12:40:11,262] INFO [Partition __consumer_offsets-24 broker=1] Log loaded for partition __consumer_offsets-24 with initial high watermark 0 (kafka.cluster.Partition)
policy-pap | [2024-04-25T12:40:12.364+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Error while fetching metadata with correlation id 177 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
grafana | logger=migrator t=2024-04-25T12:39:31.375135647Z level=info msg="Executing migration" id="removing scope from alert.instances:read action migration"
policy-db-migrator | 33 0420-jpatoscaservicetemplate_metadata.sql upgrade 0 0800 2504241239220800u 1 2024-04-25 12:39:25
kafka | [2024-04-25 12:40:11,262] INFO [Broker id=1] Leader __consumer_offsets-24 with topic id Some(hlyPC_3zQpGmePqsd4AOeA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
policy-pap | [2024-04-25T12:40:12.371+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 174 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
grafana | logger=migrator t=2024-04-25T12:39:31.375492051Z level=info msg="Migration successfully executed" id="removing scope from alert.instances:read action migration" duration=354.914µs
policy-db-migrator | 34 0430-jpatoscatopologytemplate_inputs.sql upgrade 0 0800 2504241239220800u 1 2024-04-25 12:39:25
kafka | [2024-04-25 12:40:11,757] INFO [LogLoader partition=__consumer_offsets-31, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
policy-pap | [2024-04-25T12:40:12.468+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Error while fetching metadata with correlation id 179 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
grafana | logger=migrator t=2024-04-25T12:39:31.3860134Z level=info msg="migrations completed" performed=548 skipped=0 duration=8.394833696s
policy-db-migrator | 35 0440-pdpgroup_pdpsubgroup.sql upgrade 0 0800 2504241239220800u 1 2024-04-25 12:39:25
kafka | [2024-04-25 12:40:11,758] INFO Created log for partition __consumer_offsets-31 in /var/lib/kafka/data/__consumer_offsets-31 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
policy-pap | [2024-04-25T12:40:12.473+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 176 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
grafana | logger=sqlstore t=2024-04-25T12:39:31.398178949Z level=info msg="Created default admin" user=admin
policy-db-migrator | 36 0450-pdpgroup.sql upgrade 0 0800 2504241239220800u 1 2024-04-25 12:39:25
kafka | [2024-04-25 12:40:11,758] INFO [Partition __consumer_offsets-31 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-31 (kafka.cluster.Partition)
policy-pap | [2024-04-25T12:40:12.570+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Error while fetching metadata with correlation id 181 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
grafana | logger=sqlstore t=2024-04-25T12:39:31.398456313Z level=info msg="Created default organization"
policy-db-migrator | 37 0460-pdppolicystatus.sql upgrade 0 0800 2504241239220800u 1 2024-04-25 12:39:25
kafka | [2024-04-25 12:40:11,758] INFO [Partition __consumer_offsets-31 broker=1] Log loaded for partition __consumer_offsets-31 with initial high watermark 0 (kafka.cluster.Partition)
policy-pap | [2024-04-25T12:40:12.577+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 178 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
grafana | logger=secrets t=2024-04-25T12:39:31.40273188Z level=info msg="Envelope encryption state" enabled=true currentprovider=secretKey.v1
policy-db-migrator | 38 0470-pdp.sql upgrade 0 0800 2504241239220800u 1 2024-04-25 12:39:25
kafka | [2024-04-25 12:40:11,758] INFO [Broker id=1] Leader __consumer_offsets-31 with topic id Some(hlyPC_3zQpGmePqsd4AOeA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
grafana | logger=plugin.store t=2024-04-25T12:39:31.4218028Z level=info msg="Loading plugins..."
policy-pap | [2024-04-25T12:40:12.674+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Error while fetching metadata with correlation id 183 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
policy-db-migrator | 39 0480-pdpstatistics.sql upgrade 0 0800 2504241239220800u 1 2024-04-25 12:39:25
kafka | [2024-04-25 12:40:12,084] INFO [LogLoader partition=__consumer_offsets-46, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
grafana | logger=local.finder t=2024-04-25T12:39:31.462594686Z level=warn msg="Skipping finding plugins as directory does not exist" path=/usr/share/grafana/plugins-bundled
policy-pap | [2024-04-25T12:40:12.681+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 180 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
policy-db-migrator | 40 0490-pdpsubgroup_pdp.sql upgrade 0 0800 2504241239220800u 1 2024-04-25 12:39:25
kafka | [2024-04-25 12:40:12,085] INFO Created log for partition __consumer_offsets-46 in /var/lib/kafka/data/__consumer_offsets-46 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
grafana | logger=plugin.store t=2024-04-25T12:39:31.462620046Z level=info msg="Plugins loaded" count=55 duration=40.817816ms
policy-pap | [2024-04-25T12:40:12.776+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Error while fetching metadata with correlation id 185 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
policy-db-migrator | 41 0500-pdpsubgroup.sql upgrade 0 0800 2504241239220800u 1 2024-04-25 12:39:25
kafka | [2024-04-25 12:40:12,085] INFO [Partition __consumer_offsets-46 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-46 (kafka.cluster.Partition)
grafana | logger=query_data t=2024-04-25T12:39:31.465251441Z level=info msg="Query Service initialization"
policy-pap | [2024-04-25T12:40:12.783+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 182 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
policy-db-migrator | 42 0510-toscacapabilityassignment.sql upgrade 0 0800 2504241239220800u 1 2024-04-25 12:39:25
kafka | [2024-04-25 12:40:12,086] INFO [Partition __consumer_offsets-46 broker=1] Log loaded for partition __consumer_offsets-46 with initial high watermark 0 (kafka.cluster.Partition)
grafana | logger=live.push_http t=2024-04-25T12:39:31.472439496Z level=info msg="Live Push Gateway initialization"
policy-pap | [2024-04-25T12:40:12.879+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Error while fetching metadata with correlation id 187 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
policy-db-migrator | 43 0520-toscacapabilityassignments.sql upgrade 0 0800 2504241239220800u 1 2024-04-25 12:39:25
kafka | [2024-04-25 12:40:12,086] INFO [Broker id=1] Leader __consumer_offsets-46 with topic id Some(hlyPC_3zQpGmePqsd4AOeA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
grafana | logger=ngalert.migration t=2024-04-25T12:39:31.536471908Z level=info msg=Starting
policy-pap | [2024-04-25T12:40:12.885+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 184 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
policy-db-migrator | 44 0530-toscacapabilityassignments_toscacapabilityassignment.sql upgrade 0 0800 2504241239220800u 1 2024-04-25 12:39:25
kafka | [2024-04-25 12:40:12,191] INFO [LogLoader partition=__consumer_offsets-1, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
grafana | logger=ngalert.migration t=2024-04-25T12:39:31.537286959Z level=info msg="Applying transition" currentType=Legacy desiredType=UnifiedAlerting cleanOnDowngrade=false cleanOnUpgrade=false
policy-pap | [2024-04-25T12:40:12.982+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Error while fetching metadata with correlation id 189 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
policy-db-migrator | 45 0540-toscacapabilitytype.sql upgrade 0 0800 2504241239220800u 1 2024-04-25 12:39:25
kafka | [2024-04-25 12:40:12,192] INFO Created log for partition __consumer_offsets-1 in /var/lib/kafka/data/__consumer_offsets-1 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
grafana | logger=ngalert.migration orgID=1 t=2024-04-25T12:39:31.53816154Z level=info msg="Migrating alerts for organisation"
policy-pap | [2024-04-25T12:40:12.987+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 186 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
policy-db-migrator | 46 0550-toscacapabilitytypes.sql upgrade 0 0800
2504241239220800u 1 2024-04-25 12:39:25 kafka | [2024-04-25 12:40:12,192] INFO [Partition __consumer_offsets-1 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-1 (kafka.cluster.Partition) grafana | logger=ngalert.migration orgID=1 t=2024-04-25T12:39:31.539380346Z level=info msg="Alerts found to migrate" alerts=0 policy-pap | [2024-04-25T12:40:13.086+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Error while fetching metadata with correlation id 191 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-db-migrator | 47 0560-toscacapabilitytypes_toscacapabilitytype.sql upgrade 0 0800 2504241239220800u 1 2024-04-25 12:39:26 kafka | [2024-04-25 12:40:12,192] INFO [Partition __consumer_offsets-1 broker=1] Log loaded for partition __consumer_offsets-1 with initial high watermark 0 (kafka.cluster.Partition) grafana | logger=ngalert.migration t=2024-04-25T12:39:31.542630099Z level=info msg="Completed alerting migration" policy-pap | [2024-04-25T12:40:13.092+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 188 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-db-migrator | 48 0570-toscadatatype.sql upgrade 0 0800 2504241239220800u 1 2024-04-25 12:39:26 kafka | [2024-04-25 12:40:12,193] INFO [Broker id=1] Leader __consumer_offsets-1 with topic id Some(hlyPC_3zQpGmePqsd4AOeA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) grafana | logger=ngalert.state.manager t=2024-04-25T12:39:31.581652832Z level=info msg="Running in alternative execution of Error/NoData mode" policy-pap | [2024-04-25T12:40:13.189+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Error while fetching metadata with correlation id 193 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-db-migrator | 49 0580-toscadatatypes.sql upgrade 0 0800 2504241239220800u 1 2024-04-25 12:39:26 kafka | [2024-04-25 12:40:12,227] INFO [LogLoader partition=__consumer_offsets-16, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) grafana | logger=infra.usagestats.collector t=2024-04-25T12:39:31.583393844Z level=info msg="registering usage stat providers" usageStatsProvidersLen=2 policy-pap | [2024-04-25T12:40:13.195+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 190 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-db-migrator | 50 0590-toscadatatypes_toscadatatype.sql upgrade 0 0800 2504241239220800u 1 2024-04-25 12:39:26 kafka | [2024-04-25 12:40:12,228] INFO Created log for partition __consumer_offsets-16 in /var/lib/kafka/data/__consumer_offsets-16 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) grafana | logger=provisioning.datasources t=2024-04-25T12:39:31.585647374Z level=info msg="inserting datasource from configuration" name=PolicyPrometheus uid=dkSf71fnz policy-pap | [2024-04-25T12:40:13.291+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Error while fetching metadata with correlation id 195 : 
{policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-db-migrator | 51 0600-toscanodetemplate.sql upgrade 0 0800 2504241239220800u 1 2024-04-25 12:39:26 kafka | [2024-04-25 12:40:12,228] INFO [Partition __consumer_offsets-16 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-16 (kafka.cluster.Partition) grafana | logger=provisioning.alerting t=2024-04-25T12:39:31.598573374Z level=info msg="starting to provision alerting" policy-pap | [2024-04-25T12:40:13.297+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 192 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-db-migrator | 52 0610-toscanodetemplates.sql upgrade 0 0800 2504241239220800u 1 2024-04-25 12:39:26 kafka | [2024-04-25 12:40:12,229] INFO [Partition __consumer_offsets-16 broker=1] Log loaded for partition __consumer_offsets-16 with initial high watermark 0 (kafka.cluster.Partition) grafana | logger=provisioning.alerting t=2024-04-25T12:39:31.598588285Z level=info msg="finished to provision alerting" policy-pap | [2024-04-25T12:40:13.394+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Error while fetching metadata with correlation id 197 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-db-migrator | 53 0620-toscanodetemplates_toscanodetemplate.sql upgrade 0 0800 2504241239220800u 1 2024-04-25 12:39:26 kafka | [2024-04-25 12:40:12,229] INFO [Broker id=1] Leader __consumer_offsets-16 with topic id Some(hlyPC_3zQpGmePqsd4AOeA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) grafana | logger=ngalert.state.manager t=2024-04-25T12:39:31.598689406Z level=info msg="Warming state cache for startup" policy-pap | [2024-04-25T12:40:13.400+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 194 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-db-migrator | 54 0630-toscanodetype.sql upgrade 0 0800 2504241239220800u 1 2024-04-25 12:39:26 kafka | [2024-04-25 12:40:12,335] INFO [LogLoader partition=__consumer_offsets-2, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) grafana | logger=ngalert.state.manager t=2024-04-25T12:39:31.598901619Z level=info msg="State cache has been initialized" states=0 duration=210.143µs policy-pap | [2024-04-25T12:40:13.498+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Error while fetching metadata with correlation id 199 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-db-migrator | 55 0640-toscanodetypes.sql upgrade 0 0800 2504241239220800u 1 2024-04-25 12:39:26 kafka | [2024-04-25 12:40:12,337] INFO Created log for partition __consumer_offsets-2 in /var/lib/kafka/data/__consumer_offsets-2 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) grafana | logger=grafanaStorageLogger t=2024-04-25T12:39:31.598964079Z level=info msg="Storage starting" policy-pap | [2024-04-25T12:40:13.503+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 196 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-db-migrator | 56 0650-toscanodetypes_toscanodetype.sql upgrade 0 0800 2504241239220800u 1 2024-04-25 12:39:26 kafka | [2024-04-25 12:40:12,337] 
INFO [Partition __consumer_offsets-2 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-2 (kafka.cluster.Partition) grafana | logger=ngalert.multiorg.alertmanager t=2024-04-25T12:39:31.600223315Z level=info msg="Starting MultiOrg Alertmanager" policy-pap | [2024-04-25T12:40:13.602+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Error while fetching metadata with correlation id 201 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-db-migrator | 57 0660-toscaparameter.sql upgrade 0 0800 2504241239220800u 1 2024-04-25 12:39:26 kafka | [2024-04-25 12:40:12,337] INFO [Partition __consumer_offsets-2 broker=1] Log loaded for partition __consumer_offsets-2 with initial high watermark 0 (kafka.cluster.Partition) grafana | logger=ngalert.scheduler t=2024-04-25T12:39:31.600273146Z level=info msg="Starting scheduler" tickInterval=10s maxAttempts=1 policy-pap | [2024-04-25T12:40:13.606+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 198 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-db-migrator | 58 0670-toscapolicies.sql upgrade 0 0800 2504241239220800u 1 2024-04-25 12:39:26 kafka | [2024-04-25 12:40:12,338] INFO [Broker id=1] Leader __consumer_offsets-2 with topic id Some(hlyPC_3zQpGmePqsd4AOeA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) grafana | logger=ticker t=2024-04-25T12:39:31.600460858Z level=info msg=starting first_tick=2024-04-25T12:39:40Z policy-pap | [2024-04-25T12:40:13.704+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Error while fetching metadata with correlation id 203 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-db-migrator | 59 0680-toscapolicies_toscapolicy.sql upgrade 0 0800 2504241239220800u 1 2024-04-25 12:39:26 kafka | [2024-04-25 12:40:12,550] INFO [LogLoader partition=__consumer_offsets-25, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) grafana | logger=http.server t=2024-04-25T12:39:31.603037203Z level=info msg="HTTP Server Listen" address=[::]:3000 protocol=http subUrl= socket= policy-pap | [2024-04-25T12:40:13.709+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 200 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-db-migrator | 60 0690-toscapolicy.sql upgrade 0 0800 2504241239220800u 1 2024-04-25 12:39:26 grafana | logger=provisioning.dashboard t=2024-04-25T12:39:31.689025333Z level=info msg="starting to provision dashboards" kafka | [2024-04-25 12:40:12,551] INFO Created log for partition __consumer_offsets-25 in /var/lib/kafka/data/__consumer_offsets-25 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) policy-pap | [2024-04-25T12:40:13.807+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Error while fetching metadata with correlation id 205 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-db-migrator | 61 0700-toscapolicytype.sql upgrade 0 0800 2504241239220800u 
1 2024-04-25 12:39:26 grafana | logger=plugins.update.checker t=2024-04-25T12:39:31.706375811Z level=info msg="Update check succeeded" duration=95.234932ms kafka | [2024-04-25 12:40:12,551] INFO [Partition __consumer_offsets-25 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-25 (kafka.cluster.Partition) policy-pap | [2024-04-25T12:40:13.813+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 202 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-db-migrator | 62 0710-toscapolicytypes.sql upgrade 0 0800 2504241239220800u 1 2024-04-25 12:39:26 grafana | logger=grafana.update.checker t=2024-04-25T12:39:31.718030675Z level=info msg="Update check succeeded" duration=105.919023ms kafka | [2024-04-25 12:40:12,552] INFO [Partition __consumer_offsets-25 broker=1] Log loaded for partition __consumer_offsets-25 with initial high watermark 0 (kafka.cluster.Partition) policy-pap | [2024-04-25T12:40:13.908+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Error while fetching metadata with correlation id 207 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} grafana | logger=sqlstore.transactions t=2024-04-25T12:39:31.803072313Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=0 code="database is locked" policy-db-migrator | 63 0720-toscapolicytypes_toscapolicytype.sql upgrade 0 0800 2504241239220800u 1 2024-04-25 12:39:27 kafka | [2024-04-25 12:40:12,552] INFO [Broker id=1] Leader __consumer_offsets-25 with topic id Some(hlyPC_3zQpGmePqsd4AOeA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) policy-pap | [2024-04-25T12:40:13.915+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 204 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} grafana | logger=grafana-apiserver t=2024-04-25T12:39:32.17256562Z level=info msg="Adding GroupVersion playlist.grafana.app v0alpha1 to ResourceManager" policy-db-migrator | 64 0730-toscaproperty.sql upgrade 0 0800 2504241239220800u 1 2024-04-25 12:39:27 kafka | [2024-04-25 12:40:12,652] INFO [LogLoader partition=__consumer_offsets-40, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) policy-pap | [2024-04-25T12:40:14.012+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Error while fetching metadata with correlation id 209 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} grafana | logger=grafana-apiserver t=2024-04-25T12:39:32.173112537Z level=info msg="Adding GroupVersion featuretoggle.grafana.app v0alpha1 to ResourceManager" policy-db-migrator | 65 0740-toscarelationshiptype.sql upgrade 0 0800 2504241239220800u 1 2024-04-25 12:39:28 kafka | [2024-04-25 12:40:12,653] INFO Created log for partition __consumer_offsets-40 in /var/lib/kafka/data/__consumer_offsets-40 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) policy-pap | [2024-04-25T12:40:14.017+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 206 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} grafana | logger=provisioning.dashboard t=2024-04-25T12:39:32.841126072Z level=info msg="finished to provision dashboards" policy-db-migrator | 66 0750-toscarelationshiptypes.sql upgrade 0 0800 
2504241239220800u 1 2024-04-25 12:39:28 kafka | [2024-04-25 12:40:12,653] INFO [Partition __consumer_offsets-40 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-40 (kafka.cluster.Partition) policy-pap | [2024-04-25T12:40:14.114+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Error while fetching metadata with correlation id 211 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} grafana | logger=infra.usagestats t=2024-04-25T12:40:01.607204604Z level=info msg="Usage stats are ready to report" policy-db-migrator | 67 0760-toscarelationshiptypes_toscarelationshiptype.sql upgrade 0 0800 2504241239220800u 1 2024-04-25 12:39:28 kafka | [2024-04-25 12:40:12,653] INFO [Partition __consumer_offsets-40 broker=1] Log loaded for partition __consumer_offsets-40 with initial high watermark 0 (kafka.cluster.Partition) policy-pap | [2024-04-25T12:40:14.120+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 208 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-db-migrator | 68 0770-toscarequirement.sql upgrade 0 0800 2504241239220800u 1 2024-04-25 12:39:28 kafka | [2024-04-25 12:40:12,654] INFO [Broker id=1] Leader __consumer_offsets-40 with topic id Some(hlyPC_3zQpGmePqsd4AOeA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) policy-db-migrator | 69 0780-toscarequirements.sql upgrade 0 0800 2504241239220800u 1 2024-04-25 12:39:28 policy-pap | [2024-04-25T12:40:14.217+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Error while fetching metadata with correlation id 213 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} kafka | [2024-04-25 12:40:12,802] INFO [LogLoader partition=__consumer_offsets-47, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) policy-db-migrator | 70 0790-toscarequirements_toscarequirement.sql upgrade 0 0800 2504241239220800u 1 2024-04-25 12:39:28 policy-pap | [2024-04-25T12:40:14.223+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 210 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} kafka | [2024-04-25 12:40:12,803] INFO Created log for partition __consumer_offsets-47 in /var/lib/kafka/data/__consumer_offsets-47 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) policy-db-migrator | 71 0800-toscaservicetemplate.sql upgrade 0 0800 2504241239220800u 1 2024-04-25 12:39:28 policy-pap | [2024-04-25T12:40:14.320+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Error while fetching metadata with correlation id 215 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} kafka | [2024-04-25 12:40:12,803] INFO [Partition __consumer_offsets-47 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-47 (kafka.cluster.Partition) policy-db-migrator | 72 0810-toscatopologytemplate.sql upgrade 0 0800 2504241239220800u 1 2024-04-25 12:39:28 policy-pap | 
[2024-04-25T12:40:14.326+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 212 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} kafka | [2024-04-25 12:40:12,803] INFO [Partition __consumer_offsets-47 broker=1] Log loaded for partition __consumer_offsets-47 with initial high watermark 0 (kafka.cluster.Partition) policy-db-migrator | 73 0820-toscatrigger.sql upgrade 0 0800 2504241239220800u 1 2024-04-25 12:39:28 policy-pap | [2024-04-25T12:40:14.423+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Error while fetching metadata with correlation id 217 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} kafka | [2024-04-25 12:40:12,804] INFO [Broker id=1] Leader __consumer_offsets-47 with topic id Some(hlyPC_3zQpGmePqsd4AOeA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) policy-db-migrator | 74 0830-FK_ToscaNodeTemplate_capabilitiesName.sql upgrade 0 0800 2504241239220800u 1 2024-04-25 12:39:28 policy-pap | [2024-04-25T12:40:14.429+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 214 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} kafka | [2024-04-25 12:40:13,065] INFO [LogLoader partition=__consumer_offsets-17, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) policy-db-migrator | 75 0840-FK_ToscaNodeTemplate_requirementsName.sql upgrade 0 0800 2504241239220800u 1 2024-04-25 12:39:28 policy-pap | [2024-04-25T12:40:14.526+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Error while fetching metadata with correlation id 219 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} kafka | [2024-04-25 12:40:13,065] INFO Created log for partition __consumer_offsets-17 in /var/lib/kafka/data/__consumer_offsets-17 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) policy-db-migrator | 76 0850-FK_ToscaNodeType_requirementsName.sql upgrade 0 0800 2504241239220800u 1 2024-04-25 12:39:28 policy-pap | [2024-04-25T12:40:14.531+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 216 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} kafka | [2024-04-25 12:40:13,065] INFO [Partition __consumer_offsets-17 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-17 (kafka.cluster.Partition) policy-db-migrator | 77 0860-FK_ToscaServiceTemplate_capabilityTypesName.sql upgrade 0 0800 2504241239220800u 1 2024-04-25 12:39:28 policy-pap | 
[2024-04-25T12:40:14.629+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Error while fetching metadata with correlation id 221 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} kafka | [2024-04-25 12:40:13,066] INFO [Partition __consumer_offsets-17 broker=1] Log loaded for partition __consumer_offsets-17 with initial high watermark 0 (kafka.cluster.Partition) policy-db-migrator | 78 0870-FK_ToscaServiceTemplate_dataTypesName.sql upgrade 0 0800 2504241239220800u 1 2024-04-25 12:39:28 policy-pap | [2024-04-25T12:40:14.634+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 218 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} kafka | [2024-04-25 12:40:13,066] INFO [Broker id=1] Leader __consumer_offsets-17 with topic id Some(hlyPC_3zQpGmePqsd4AOeA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) policy-db-migrator | 79 0880-FK_ToscaServiceTemplate_nodeTypesName.sql upgrade 0 0800 2504241239220800u 1 2024-04-25 12:39:28 kafka | [2024-04-25 12:40:13,300] INFO [LogLoader partition=__consumer_offsets-32, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) policy-db-migrator | 80 0890-FK_ToscaServiceTemplate_policyTypesName.sql upgrade 0 0800 2504241239220800u 1 2024-04-25 12:39:29 policy-pap | [2024-04-25T12:40:14.733+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Error while fetching metadata with correlation id 223 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} kafka | [2024-04-25 12:40:13,301] INFO Created log for partition __consumer_offsets-32 in /var/lib/kafka/data/__consumer_offsets-32 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) policy-db-migrator | 81 0900-FK_ToscaServiceTemplate_relationshipTypesName.sql upgrade 0 0800 2504241239220800u 1 2024-04-25 12:39:29 policy-pap | [2024-04-25T12:40:14.737+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 220 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} kafka | [2024-04-25 12:40:13,301] INFO [Partition __consumer_offsets-32 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-32 (kafka.cluster.Partition) policy-db-migrator | 82 0910-FK_ToscaTopologyTemplate_nodeTemplatesName.sql upgrade 0 0800 2504241239220800u 1 2024-04-25 12:39:29 policy-pap | [2024-04-25T12:40:14.835+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Error while fetching metadata with correlation id 225 : 
{policy-pdp-pap=LEADER_NOT_AVAILABLE} kafka | [2024-04-25 12:40:13,301] INFO [Partition __consumer_offsets-32 broker=1] Log loaded for partition __consumer_offsets-32 with initial high watermark 0 (kafka.cluster.Partition) policy-db-migrator | 83 0920-FK_ToscaTopologyTemplate_policyName.sql upgrade 0 0800 2504241239220800u 1 2024-04-25 12:39:29 policy-pap | [2024-04-25T12:40:14.841+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 222 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} kafka | [2024-04-25 12:40:13,301] INFO [Broker id=1] Leader __consumer_offsets-32 with topic id Some(hlyPC_3zQpGmePqsd4AOeA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) policy-db-migrator | 84 0940-PdpPolicyStatus_PdpGroup.sql upgrade 0 0800 2504241239220800u 1 2024-04-25 12:39:29 policy-pap | [2024-04-25T12:40:14.939+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Error while fetching metadata with correlation id 227 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} kafka | [2024-04-25 12:40:13,775] INFO [LogLoader partition=__consumer_offsets-37, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) policy-db-migrator | 85 0950-TscaServiceTemplatetopologyTemplateParentLocalName.sql upgrade 0 0800 2504241239220800u 1 2024-04-25 12:39:29 policy-pap | [2024-04-25T12:40:14.944+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 224 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} kafka | [2024-04-25 12:40:13,776] INFO Created log for 
partition __consumer_offsets-37 in /var/lib/kafka/data/__consumer_offsets-37 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
policy-db-migrator | 86 0960-FK_ToscaNodeTemplate_capabilitiesName.sql upgrade 0 0800 2504241239220800u 1 2024-04-25 12:39:29
policy-pap | [2024-04-25T12:40:15.043+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Error while fetching metadata with correlation id 229 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
kafka | [2024-04-25 12:40:13,776] INFO [Partition __consumer_offsets-37 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-37 (kafka.cluster.Partition)
policy-db-migrator | 87 0970-FK_ToscaNodeTemplate_requirementsName.sql upgrade 0 0800 2504241239220800u 1 2024-04-25 12:39:29
policy-pap | [2024-04-25T12:40:15.047+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 226 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
kafka | [2024-04-25 12:40:13,776] INFO [Partition __consumer_offsets-37 broker=1] Log loaded for partition __consumer_offsets-37 with initial high watermark 0 (kafka.cluster.Partition)
policy-db-migrator | 88 0980-FK_ToscaNodeType_requirementsName.sql upgrade 0 0800 2504241239220800u 1 2024-04-25 12:39:29
policy-pap | [2024-04-25T12:40:15.145+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Error while fetching metadata with correlation id 231 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
kafka | [2024-04-25 12:40:13,776] INFO [Broker id=1] Leader __consumer_offsets-37 with topic id Some(hlyPC_3zQpGmePqsd4AOeA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
policy-db-migrator | 89 0990-FK_ToscaServiceTemplate_capabilityTypesName.sql upgrade 0 0800 2504241239220800u 1 2024-04-25 12:39:29
policy-pap | [2024-04-25T12:40:15.149+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 228 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
kafka | [2024-04-25 12:40:13,867] INFO [LogLoader partition=__consumer_offsets-7, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
policy-db-migrator | 90 1000-FK_ToscaServiceTemplate_dataTypesName.sql upgrade 0 0800 2504241239220800u 1 2024-04-25 12:39:29
policy-pap | [2024-04-25T12:40:15.248+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Error while fetching metadata with correlation id 233 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
kafka | [2024-04-25 12:40:13,868] INFO Created log for partition __consumer_offsets-7 in /var/lib/kafka/data/__consumer_offsets-7 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
policy-db-migrator | 91 1010-FK_ToscaServiceTemplate_nodeTypesName.sql upgrade 0 0800 2504241239220800u 1 2024-04-25 12:39:30
policy-pap | [2024-04-25T12:40:15.252+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 230 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
kafka | [2024-04-25 12:40:13,868] INFO [Partition __consumer_offsets-7 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-7 (kafka.cluster.Partition)
policy-db-migrator | 92 1020-FK_ToscaServiceTemplate_policyTypesName.sql upgrade 0 0800 2504241239220800u 1 2024-04-25 12:39:30
policy-pap | [2024-04-25T12:40:15.351+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Error while fetching metadata with correlation id 235 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
kafka | [2024-04-25 12:40:13,868] INFO [Partition __consumer_offsets-7 broker=1] Log loaded for partition __consumer_offsets-7 with initial high watermark 0 (kafka.cluster.Partition)
policy-db-migrator | 93 1030-FK_ToscaServiceTemplate_relationshipTypesName.sql upgrade 0 0800 2504241239220800u 1 2024-04-25 12:39:30
policy-pap | [2024-04-25T12:40:15.355+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 232 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
kafka | [2024-04-25 12:40:13,868] INFO [Broker id=1] Leader __consumer_offsets-7 with topic id Some(hlyPC_3zQpGmePqsd4AOeA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
policy-db-migrator | 94 1040-FK_ToscaTopologyTemplate_nodeTemplatesName.sql upgrade 0 0800 2504241239220800u 1 2024-04-25 12:39:30
policy-pap | [2024-04-25T12:40:15.455+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Error while fetching metadata with correlation id 237 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
policy-db-migrator | 95 1050-FK_ToscaTopologyTemplate_policyName.sql upgrade 0 0800 2504241239220800u 1 2024-04-25 12:39:30
policy-pap | [2024-04-25T12:40:15.459+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 234 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
kafka | [2024-04-25 12:40:13,882] INFO [LogLoader partition=__consumer_offsets-22, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
policy-db-migrator | 96 1060-TscaServiceTemplatetopologyTemplateParentLocalName.sql upgrade 0 0800 2504241239220800u 1 2024-04-25 12:39:30
policy-pap | [2024-04-25T12:40:15.558+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Error while fetching metadata with correlation id 239 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
kafka | [2024-04-25 12:40:13,883] INFO Created log for partition __consumer_offsets-22 in /var/lib/kafka/data/__consumer_offsets-22 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
policy-db-migrator | 97 0100-pdp.sql upgrade 0800 0900 2504241239220900u 1 2024-04-25 12:39:30
policy-pap | [2024-04-25T12:40:15.561+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 236 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
kafka | [2024-04-25 12:40:13,883] INFO [Partition __consumer_offsets-22 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-22 (kafka.cluster.Partition)
policy-db-migrator | 98 0110-idx_tsidx1.sql upgrade 0800 0900 2504241239220900u 1 2024-04-25 12:39:30
policy-pap | [2024-04-25T12:40:15.660+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Error while fetching metadata with correlation id 241 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
kafka | [2024-04-25 12:40:13,883] INFO [Partition __consumer_offsets-22 broker=1] Log loaded for partition __consumer_offsets-22 with initial high watermark 0 (kafka.cluster.Partition)
policy-db-migrator | 99 0120-pk_pdpstatistics.sql upgrade 0800 0900 2504241239220900u 1 2024-04-25 12:39:30
policy-pap | [2024-04-25T12:40:15.664+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 238 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
kafka | [2024-04-25 12:40:13,884] INFO [Broker id=1] Leader __consumer_offsets-22 with topic id Some(hlyPC_3zQpGmePqsd4AOeA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
policy-db-migrator | 100 0130-pdpstatistics.sql upgrade 0800 0900 2504241239220900u 1 2024-04-25 12:39:30
kafka | [2024-04-25 12:40:13,955] INFO [LogLoader partition=__consumer_offsets-29, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
policy-pap | [2024-04-25T12:40:15.763+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Error while fetching metadata with correlation id 243 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
policy-db-migrator | 101 0140-pk_pdpstatistics.sql upgrade 0800 0900 2504241239220900u 1 2024-04-25 12:39:30
kafka | [2024-04-25 12:40:13,956] INFO Created log for partition __consumer_offsets-29 in /var/lib/kafka/data/__consumer_offsets-29 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
policy-pap | [2024-04-25T12:40:15.767+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 240 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
policy-db-migrator | 102 0150-pdpstatistics.sql upgrade 0800 0900 2504241239220900u 1 2024-04-25 12:39:30
kafka | [2024-04-25 12:40:13,956] INFO [Partition __consumer_offsets-29 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-29 (kafka.cluster.Partition)
policy-pap | [2024-04-25T12:40:15.867+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Error while fetching metadata with correlation id 245 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
policy-db-migrator | 103 0160-jpapdpstatistics_enginestats.sql upgrade 0800 0900 2504241239220900u 1 2024-04-25 12:39:31
kafka | [2024-04-25 12:40:13,956] INFO [Partition __consumer_offsets-29 broker=1] Log loaded for partition __consumer_offsets-29 with initial high watermark 0 (kafka.cluster.Partition)
policy-pap | [2024-04-25T12:40:15.869+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 242 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
policy-db-migrator | 104 0170-jpapdpstatistics_enginestats.sql upgrade 0800 0900 2504241239220900u 1 2024-04-25 12:39:31
kafka | [2024-04-25 12:40:13,957] INFO [Broker id=1] Leader __consumer_offsets-29 with topic id Some(hlyPC_3zQpGmePqsd4AOeA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
policy-pap | [2024-04-25T12:40:15.969+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Error while fetching metadata with correlation id 247 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
policy-db-migrator | 105 0180-jpapdpstatistics_enginestats.sql upgrade 0800 0900 2504241239220900u 1 2024-04-25 12:39:31
kafka | [2024-04-25 12:40:14,283] INFO [LogLoader partition=__consumer_offsets-44, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
policy-pap | [2024-04-25T12:40:15.973+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 244 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
policy-db-migrator | 106 0190-jpapolicyaudit.sql upgrade 0800 0900 2504241239220900u 1 2024-04-25 12:39:31
kafka | [2024-04-25 12:40:14,284] INFO Created log for partition __consumer_offsets-44 in /var/lib/kafka/data/__consumer_offsets-44 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
policy-pap | [2024-04-25T12:40:16.071+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Error while fetching metadata with correlation id 250 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
policy-db-migrator | 107 0200-JpaPolicyAuditIndex_timestamp.sql upgrade 0800 0900 2504241239220900u 1 2024-04-25 12:39:31
kafka | [2024-04-25 12:40:14,285] INFO [Partition __consumer_offsets-44 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-44 (kafka.cluster.Partition)
policy-pap | [2024-04-25T12:40:16.074+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 247 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
policy-db-migrator | 108 0210-sequence.sql upgrade 0800 0900 2504241239220900u 1 2024-04-25 12:39:31
kafka | [2024-04-25 12:40:14,285] INFO [Partition __consumer_offsets-44 broker=1] Log loaded for partition __consumer_offsets-44 with initial high watermark 0 (kafka.cluster.Partition)
policy-pap | [2024-04-25T12:40:16.173+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Error while fetching metadata with correlation id 252 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
policy-db-migrator | 109 0220-sequence.sql upgrade 0800 0900 2504241239220900u 1 2024-04-25 12:39:31
kafka | [2024-04-25 12:40:14,285] INFO [Broker id=1] Leader __consumer_offsets-44 with topic id Some(hlyPC_3zQpGmePqsd4AOeA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
policy-pap | [2024-04-25T12:40:16.176+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 249 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
policy-db-migrator | 110 0100-jpatoscapolicy_targets.sql upgrade 0900 1000 2504241239221000u 1 2024-04-25 12:39:31
kafka | [2024-04-25 12:40:14,315] INFO [LogLoader partition=__consumer_offsets-14, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
policy-pap | [2024-04-25T12:40:16.276+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Error while fetching metadata with correlation id 254 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
policy-db-migrator | 111 0110-jpatoscapolicytype_targets.sql upgrade 0900 1000 2504241239221000u 1 2024-04-25 12:39:31
kafka | [2024-04-25 12:40:14,316] INFO Created log for partition __consumer_offsets-14 in /var/lib/kafka/data/__consumer_offsets-14 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
policy-pap | [2024-04-25T12:40:16.279+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 251 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
policy-db-migrator | 112 0120-toscatrigger.sql upgrade 0900 1000 2504241239221000u 1 2024-04-25 12:39:31
kafka | [2024-04-25 12:40:14,316] INFO [Partition __consumer_offsets-14 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-14 (kafka.cluster.Partition)
policy-pap | [2024-04-25T12:40:16.380+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Error while fetching metadata with correlation id 256 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
policy-db-migrator | 113 0130-jpatoscapolicytype_triggers.sql upgrade 0900 1000 2504241239221000u 1 2024-04-25 12:39:31
kafka | [2024-04-25 12:40:14,317] INFO [Partition __consumer_offsets-14 broker=1] Log loaded for partition __consumer_offsets-14 with initial high watermark 0 (kafka.cluster.Partition)
policy-pap | [2024-04-25T12:40:16.385+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 253 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
policy-db-migrator | 114 0140-toscaparameter.sql upgrade 0900 1000 2504241239221000u 1 2024-04-25 12:39:31
kafka | [2024-04-25 12:40:14,317] INFO [Broker id=1] Leader __consumer_offsets-14 with topic id Some(hlyPC_3zQpGmePqsd4AOeA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
policy-db-migrator | 115 0150-toscaproperty.sql upgrade 0900 1000 2504241239221000u 1 2024-04-25 12:39:31
kafka | [2024-04-25 12:40:14,335] INFO [LogLoader partition=__consumer_offsets-23, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
policy-pap | [2024-04-25T12:40:16.483+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Error while fetching metadata with correlation id 258 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
policy-db-migrator | 116 0160-jpapolicyaudit_pk.sql upgrade 0900 1000 2504241239221000u 1 2024-04-25 12:39:31
kafka | [2024-04-25 12:40:14,336] INFO Created log for partition __consumer_offsets-23 in /var/lib/kafka/data/__consumer_offsets-23 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
policy-pap | [2024-04-25T12:40:16.488+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 255 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
policy-db-migrator | 117 0170-pdpstatistics_pk.sql upgrade 0900 1000 2504241239221000u 1 2024-04-25 12:39:32
kafka | [2024-04-25 12:40:14,336] INFO [Partition __consumer_offsets-23 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-23 (kafka.cluster.Partition)
policy-pap | [2024-04-25T12:40:16.585+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Error while fetching metadata with correlation id 260 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
policy-db-migrator | 118 0180-jpatoscanodetemplate_metadata.sql upgrade 0900 1000 2504241239221000u 1 2024-04-25 12:39:32
kafka | [2024-04-25 12:40:14,337] INFO [Partition __consumer_offsets-23 broker=1] Log loaded for partition __consumer_offsets-23 with initial high watermark 0 (kafka.cluster.Partition)
policy-pap | [2024-04-25T12:40:16.590+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 257 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
policy-db-migrator | 119 0100-upgrade.sql upgrade 1000 1100 2504241239221100u 1 2024-04-25 12:39:32
kafka | [2024-04-25 12:40:14,337] INFO [Broker id=1] Leader __consumer_offsets-23 with topic id Some(hlyPC_3zQpGmePqsd4AOeA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
policy-pap | [2024-04-25T12:40:16.688+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Error while fetching metadata with correlation id 262 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
policy-db-migrator | 120 0100-jpapolicyaudit_renameuser.sql upgrade 1100 1200 2504241239221200u 1 2024-04-25 12:39:33
kafka | [2024-04-25 12:40:14,632] INFO [LogLoader partition=__consumer_offsets-38, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
policy-pap | [2024-04-25T12:40:16.693+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 259 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
policy-db-migrator | 121 0110-idx_tsidx1.sql upgrade 1100 1200 2504241239221200u 1 2024-04-25 12:39:33
kafka | [2024-04-25 12:40:14,633] INFO Created log for partition __consumer_offsets-38 in /var/lib/kafka/data/__consumer_offsets-38 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
policy-pap | [2024-04-25T12:40:16.790+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Error while fetching metadata with correlation id 264 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
policy-db-migrator | 122 0120-audit_sequence.sql upgrade 1100 1200 2504241239221200u 1 2024-04-25 12:39:33
policy-pap | [2024-04-25T12:40:16.795+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 261 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
policy-db-migrator | 123 0130-statistics_sequence.sql upgrade 1100 1200 2504241239221200u 1 2024-04-25 12:39:33
kafka | [2024-04-25 12:40:14,634] INFO [Partition __consumer_offsets-38 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-38 (kafka.cluster.Partition)
policy-pap | [2024-04-25T12:40:16.894+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Error while fetching metadata with correlation id 266 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
policy-db-migrator | 124 0100-pdpstatistics.sql upgrade 1200 1300 2504241239221300u 1 2024-04-25 12:39:33
kafka | [2024-04-25 12:40:14,634] INFO [Partition __consumer_offsets-38 broker=1] Log loaded for partition __consumer_offsets-38 with initial high watermark 0 (kafka.cluster.Partition)
policy-pap | [2024-04-25T12:40:16.898+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 263 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
policy-db-migrator | 125 0110-jpapdpstatistics_enginestats.sql upgrade 1200 1300 2504241239221300u 1 2024-04-25 12:39:33
kafka | [2024-04-25 12:40:14,634] INFO [Broker id=1] Leader __consumer_offsets-38 with topic id Some(hlyPC_3zQpGmePqsd4AOeA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
policy-pap | [2024-04-25T12:40:16.996+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Error while fetching metadata with correlation id 268 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
policy-db-migrator | 126 0120-statistics_sequence.sql upgrade 1200 1300 2504241239221300u 1 2024-04-25 12:39:33
kafka | [2024-04-25 12:40:14,842] INFO [LogLoader partition=__consumer_offsets-8, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
policy-pap | [2024-04-25T12:40:16.999+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 265 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
policy-db-migrator | policyadmin: OK @ 1300
kafka | [2024-04-25 12:40:14,843] INFO Created log for partition __consumer_offsets-8 in /var/lib/kafka/data/__consumer_offsets-8 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
policy-pap | [2024-04-25T12:40:17.099+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Error while fetching metadata with correlation id 270 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
kafka | [2024-04-25 12:40:14,844] INFO [Partition __consumer_offsets-8 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-8 (kafka.cluster.Partition)
policy-pap | [2024-04-25T12:40:17.102+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 267 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
kafka | [2024-04-25 12:40:14,844] INFO [Partition __consumer_offsets-8 broker=1] Log loaded for partition __consumer_offsets-8 with initial high watermark 0 (kafka.cluster.Partition)
policy-pap | [2024-04-25T12:40:17.203+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Error while fetching metadata with correlation id 272 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
kafka | [2024-04-25 12:40:14,844] INFO [Broker id=1] Leader __consumer_offsets-8 with topic id Some(hlyPC_3zQpGmePqsd4AOeA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
policy-pap | [2024-04-25T12:40:17.205+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 269 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
policy-pap | [2024-04-25T12:40:17.306+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Error while fetching metadata with correlation id 274 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
kafka | [2024-04-25 12:40:15,124] INFO [LogLoader partition=policy-pdp-pap-0, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
policy-pap | [2024-04-25T12:40:17.308+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 271 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
kafka | [2024-04-25 12:40:15,125] INFO Created log for partition policy-pdp-pap-0 in /var/lib/kafka/data/policy-pdp-pap-0 with properties {} (kafka.log.LogManager)
policy-pap | [2024-04-25T12:40:17.408+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Error while fetching metadata with correlation id 276 : {policy-pdp-pap=UNKNOWN_TOPIC_OR_PARTITION}
kafka | [2024-04-25 12:40:15,125] INFO [Partition policy-pdp-pap-0 broker=1] No checkpointed highwatermark is found for partition policy-pdp-pap-0 (kafka.cluster.Partition)
policy-pap | [2024-04-25T12:40:17.411+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 273 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
kafka | [2024-04-25 12:40:15,126] INFO [Partition policy-pdp-pap-0 broker=1] Log loaded for partition policy-pdp-pap-0 with initial high watermark 0 (kafka.cluster.Partition)
policy-pap | [2024-04-25T12:40:17.511+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Error while fetching metadata with correlation id 278 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
kafka | [2024-04-25 12:40:15,126] INFO [Broker id=1] Leader policy-pdp-pap-0 with topic id Some(HOyl9LomSW2VRWzaH4p5QQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
policy-pap | [2024-04-25T12:40:17.515+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 275 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
kafka | [2024-04-25 12:40:15,219] INFO [LogLoader partition=__consumer_offsets-45, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
policy-pap | [2024-04-25T12:40:17.612+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Error while fetching metadata with correlation id 280 : {policy-pdp-pap=UNKNOWN_TOPIC_OR_PARTITION}
kafka | [2024-04-25 12:40:15,220] INFO Created log for partition __consumer_offsets-45 in /var/lib/kafka/data/__consumer_offsets-45 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
policy-pap | [2024-04-25T12:40:17.617+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 277 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
policy-pap | [2024-04-25T12:40:17.715+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Error while fetching metadata with correlation id 282 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
kafka | [2024-04-25 12:40:15,220] INFO [Partition __consumer_offsets-45 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-45 (kafka.cluster.Partition)
policy-pap | [2024-04-25T12:40:17.720+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 279 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
kafka | [2024-04-25 12:40:15,220] INFO [Partition __consumer_offsets-45 broker=1] Log loaded for partition __consumer_offsets-45 with initial high watermark 0 (kafka.cluster.Partition)
policy-pap | [2024-04-25T12:40:17.815+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Error while fetching metadata with correlation id 284 : {policy-pdp-pap=UNKNOWN_TOPIC_OR_PARTITION}
kafka | [2024-04-25 12:40:15,220] INFO [Broker id=1] Leader __consumer_offsets-45 with topic id Some(hlyPC_3zQpGmePqsd4AOeA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
policy-pap | [2024-04-25T12:40:17.824+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 281 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
kafka | [2024-04-25 12:40:15,702] INFO [LogLoader partition=__consumer_offsets-15, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
policy-pap | [2024-04-25T12:40:17.918+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Error while fetching metadata with correlation id 286 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
kafka | [2024-04-25 12:40:15,702] INFO Created log for partition __consumer_offsets-15 in /var/lib/kafka/data/__consumer_offsets-15 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
policy-pap | [2024-04-25T12:40:17.926+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 283 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
kafka | [2024-04-25 12:40:15,703] INFO [Partition __consumer_offsets-15 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-15 (kafka.cluster.Partition)
policy-pap | [2024-04-25T12:40:18.020+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Error while fetching metadata with correlation id 288 : {policy-pdp-pap=UNKNOWN_TOPIC_OR_PARTITION}
kafka | [2024-04-25 12:40:15,703] INFO [Partition __consumer_offsets-15 broker=1] Log loaded for partition __consumer_offsets-15 with initial high watermark 0 (kafka.cluster.Partition)
policy-pap | [2024-04-25T12:40:18.027+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 285 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
kafka | [2024-04-25 12:40:15,703] INFO [Broker id=1] Leader __consumer_offsets-15 with topic id Some(hlyPC_3zQpGmePqsd4AOeA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
policy-pap | [2024-04-25T12:40:18.124+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Error while fetching metadata with correlation id 290 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
kafka | [2024-04-25 12:40:15,853] INFO [LogLoader partition=__consumer_offsets-30, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
policy-pap | [2024-04-25T12:40:18.130+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 287 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
kafka | [2024-04-25 12:40:15,853] INFO Created log for partition __consumer_offsets-30 in /var/lib/kafka/data/__consumer_offsets-30 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
policy-pap | [2024-04-25T12:40:18.227+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Error while fetching metadata with correlation id 292 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
kafka | [2024-04-25 12:40:15,854] INFO [Partition __consumer_offsets-30 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-30 (kafka.cluster.Partition)
policy-pap | [2024-04-25T12:40:18.233+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 289 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
kafka | [2024-04-25 12:40:15,854] INFO [Partition __consumer_offsets-30 broker=1] Log loaded for partition __consumer_offsets-30 with initial high watermark 0 (kafka.cluster.Partition)
policy-pap | [2024-04-25T12:40:18.331+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Error while fetching metadata with correlation id 294 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
kafka | [2024-04-25 12:40:15,854] INFO [Broker id=1] Leader __consumer_offsets-30 with topic id Some(hlyPC_3zQpGmePqsd4AOeA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
policy-pap | [2024-04-25T12:40:18.335+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 291 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
kafka | [2024-04-25 12:40:16,004] INFO [LogLoader partition=__consumer_offsets-0, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
policy-pap | [2024-04-25T12:40:18.433+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Error while fetching metadata with correlation id 296 : {policy-pdp-pap=UNKNOWN_TOPIC_OR_PARTITION}
kafka | [2024-04-25 12:40:16,005] INFO Created log for partition __consumer_offsets-0 in /var/lib/kafka/data/__consumer_offsets-0 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
policy-pap | [2024-04-25T12:40:18.445+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 293 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
kafka | [2024-04-25 12:40:16,005] INFO [Partition __consumer_offsets-0 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-0 (kafka.cluster.Partition)
policy-pap | [2024-04-25T12:40:18.535+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Error while fetching metadata with correlation id 298 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
kafka | [2024-04-25 12:40:16,006] INFO [Partition __consumer_offsets-0 broker=1] Log loaded for partition __consumer_offsets-0 with initial high watermark 0 (kafka.cluster.Partition)
policy-pap | 
[2024-04-25T12:40:18.547+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 295 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} kafka | [2024-04-25 12:40:16,006] INFO [Broker id=1] Leader __consumer_offsets-0 with topic id Some(hlyPC_3zQpGmePqsd4AOeA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) policy-pap | [2024-04-25T12:40:18.637+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Error while fetching metadata with correlation id 300 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} kafka | [2024-04-25 12:40:16,239] INFO [LogLoader partition=__consumer_offsets-35, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) policy-pap | [2024-04-25T12:40:18.649+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 297 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} kafka | [2024-04-25 12:40:16,239] INFO Created log for partition __consumer_offsets-35 in /var/lib/kafka/data/__consumer_offsets-35 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) policy-pap | [2024-04-25T12:40:18.741+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Error while fetching metadata with correlation id 302 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} kafka | [2024-04-25 12:40:16,239] INFO [Partition __consumer_offsets-35 broker=1] No checkpointed highwatermark is found for 
partition __consumer_offsets-35 (kafka.cluster.Partition) kafka | [2024-04-25 12:40:16,239] INFO [Partition __consumer_offsets-35 broker=1] Log loaded for partition __consumer_offsets-35 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-04-25 12:40:16,240] INFO [Broker id=1] Leader __consumer_offsets-35 with topic id Some(hlyPC_3zQpGmePqsd4AOeA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) kafka | [2024-04-25 12:40:16,708] INFO [LogLoader partition=__consumer_offsets-5, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-04-25 12:40:16,709] INFO Created log for partition __consumer_offsets-5 in /var/lib/kafka/data/__consumer_offsets-5 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-04-25 12:40:16,709] INFO [Partition __consumer_offsets-5 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-5 (kafka.cluster.Partition) kafka | [2024-04-25 12:40:16,710] INFO [Partition __consumer_offsets-5 broker=1] Log loaded for partition __consumer_offsets-5 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-04-25 12:40:16,710] INFO [Broker id=1] Leader __consumer_offsets-5 with topic id Some(hlyPC_3zQpGmePqsd4AOeA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) kafka | [2024-04-25 12:40:16,787] INFO [LogLoader partition=__consumer_offsets-20, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-04-25 12:40:16,788] INFO Created log for partition __consumer_offsets-20 in /var/lib/kafka/data/__consumer_offsets-20 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-04-25 12:40:16,788] INFO [Partition __consumer_offsets-20 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-20 (kafka.cluster.Partition) kafka | [2024-04-25 12:40:16,788] INFO [Partition __consumer_offsets-20 broker=1] Log loaded for partition __consumer_offsets-20 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-04-25 12:40:16,788] INFO [Broker id=1] Leader __consumer_offsets-20 with topic id Some(hlyPC_3zQpGmePqsd4AOeA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) kafka | [2024-04-25 12:40:17,214] INFO [LogLoader partition=__consumer_offsets-27, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-04-25 12:40:17,215] INFO Created log for partition __consumer_offsets-27 in /var/lib/kafka/data/__consumer_offsets-27 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-04-25 12:40:17,215] INFO [Partition __consumer_offsets-27 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-27 (kafka.cluster.Partition) kafka | [2024-04-25 12:40:17,216] INFO [Partition __consumer_offsets-27 broker=1] Log loaded for partition __consumer_offsets-27 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-04-25 12:40:17,216] INFO [Broker id=1] Leader __consumer_offsets-27 with topic id Some(hlyPC_3zQpGmePqsd4AOeA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) kafka | [2024-04-25 12:40:17,506] INFO [LogLoader partition=__consumer_offsets-42, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-04-25 12:40:17,508] INFO Created log for partition __consumer_offsets-42 in /var/lib/kafka/data/__consumer_offsets-42 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-04-25 12:40:17,508] INFO [Partition __consumer_offsets-42 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-42 (kafka.cluster.Partition) kafka | [2024-04-25 12:40:17,508] INFO [Partition __consumer_offsets-42 broker=1] Log loaded for partition __consumer_offsets-42 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-04-25 12:40:17,509] INFO [Broker id=1] Leader __consumer_offsets-42 with topic id Some(hlyPC_3zQpGmePqsd4AOeA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) kafka | [2024-04-25 12:40:17,691] INFO [LogLoader partition=__consumer_offsets-12, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-04-25 12:40:17,691] INFO Created log for partition __consumer_offsets-12 in /var/lib/kafka/data/__consumer_offsets-12 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-04-25 12:40:17,691] INFO [Partition __consumer_offsets-12 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-12 (kafka.cluster.Partition) kafka | [2024-04-25 12:40:17,691] INFO [Partition __consumer_offsets-12 broker=1] Log loaded for partition __consumer_offsets-12 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-04-25 12:40:17,692] INFO [Broker id=1] Leader __consumer_offsets-12 with topic id Some(hlyPC_3zQpGmePqsd4AOeA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) kafka | [2024-04-25 12:40:17,819] INFO [LogLoader partition=__consumer_offsets-21, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-04-25 12:40:17,820] INFO Created log for partition __consumer_offsets-21 in /var/lib/kafka/data/__consumer_offsets-21 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-04-25 12:40:17,821] INFO [Partition __consumer_offsets-21 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-21 (kafka.cluster.Partition) kafka | [2024-04-25 12:40:17,821] INFO [Partition __consumer_offsets-21 broker=1] Log loaded for partition __consumer_offsets-21 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-04-25 12:40:17,821] INFO [Broker id=1] Leader __consumer_offsets-21 with topic id Some(hlyPC_3zQpGmePqsd4AOeA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) kafka | [2024-04-25 12:40:17,991] INFO [LogLoader partition=__consumer_offsets-36, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-04-25 12:40:17,992] INFO Created log for partition __consumer_offsets-36 in /var/lib/kafka/data/__consumer_offsets-36 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-04-25 12:40:17,992] INFO [Partition __consumer_offsets-36 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-36 (kafka.cluster.Partition) kafka | [2024-04-25 12:40:17,992] INFO [Partition __consumer_offsets-36 broker=1] Log loaded for partition __consumer_offsets-36 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-04-25 12:40:17,992] INFO [Broker id=1] Leader __consumer_offsets-36 with topic id Some(hlyPC_3zQpGmePqsd4AOeA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) kafka | [2024-04-25 12:40:18,294] INFO [LogLoader partition=__consumer_offsets-6, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-04-25 12:40:18,295] INFO Created log for partition __consumer_offsets-6 in /var/lib/kafka/data/__consumer_offsets-6 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-04-25 12:40:18,295] INFO [Partition __consumer_offsets-6 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-6 (kafka.cluster.Partition) kafka | [2024-04-25 12:40:18,295] INFO [Partition __consumer_offsets-6 broker=1] Log loaded for partition __consumer_offsets-6 with initial high watermark 0 (kafka.cluster.Partition) policy-pap | [2024-04-25T12:40:18.751+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 299 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-pap | [2024-04-25T12:40:18.844+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Error while fetching metadata with correlation id 304 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-pap | [2024-04-25T12:40:18.853+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 301 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-pap | [2024-04-25T12:40:18.946+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Error while fetching metadata with correlation id 306 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-pap | 
[2024-04-25T12:40:18.955+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 303 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-pap | [2024-04-25T12:40:19.050+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Error while fetching metadata with correlation id 308 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-pap | [2024-04-25T12:40:19.055+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 305 : {policy-pdp-pap=UNKNOWN_TOPIC_OR_PARTITION} policy-pap | [2024-04-25T12:40:19.154+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Error while fetching metadata with correlation id 310 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-pap | [2024-04-25T12:40:19.159+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 307 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-pap | [2024-04-25T12:40:19.267+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Discovered group coordinator kafka:9092 (id: 2147483646 rack: null) policy-pap | [2024-04-25T12:40:19.268+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Discovered group coordinator kafka:9092 (id: 2147483646 rack: null) policy-pap | [2024-04-25T12:40:19.276+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer 
clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] (Re-)joining group policy-pap | [2024-04-25T12:40:19.276+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] (Re-)joining group policy-pap | [2024-04-25T12:40:19.298+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Request joining group due to: need to re-join with the given member-id: consumer-policy-pap-4-8984bd6d-ba2b-4123-8965-111129945dd5 policy-pap | [2024-04-25T12:40:19.299+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Request joining group due to: rebalance failed due to 'The group member needs to have a valid member id before actually entering a consumer group.' (MemberIdRequiredException) policy-pap | [2024-04-25T12:40:19.299+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] (Re-)joining group policy-pap | [2024-04-25T12:40:19.299+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Request joining group due to: need to re-join with the given member-id: consumer-53d3b957-3026-4843-bc4f-55d426241089-3-ecd690b6-cba2-4ec7-bd80-418107943836 policy-pap | [2024-04-25T12:40:19.299+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Request joining group due to: rebalance failed due to 'The group member needs to have a valid member id before actually entering a consumer group.' 
(MemberIdRequiredException) policy-pap | [2024-04-25T12:40:19.299+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] (Re-)joining group policy-pap | [2024-04-25T12:40:22.323+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Successfully joined group with generation Generation{generationId=1, memberId='consumer-policy-pap-4-8984bd6d-ba2b-4123-8965-111129945dd5', protocol='range'} policy-pap | [2024-04-25T12:40:22.325+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Successfully joined group with generation Generation{generationId=1, memberId='consumer-53d3b957-3026-4843-bc4f-55d426241089-3-ecd690b6-cba2-4ec7-bd80-418107943836', protocol='range'} policy-pap | [2024-04-25T12:40:22.334+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Finished assignment for group at generation 1: {consumer-53d3b957-3026-4843-bc4f-55d426241089-3-ecd690b6-cba2-4ec7-bd80-418107943836=Assignment(partitions=[policy-pdp-pap-0])} policy-pap | [2024-04-25T12:40:22.334+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Finished assignment for group at generation 1: {consumer-policy-pap-4-8984bd6d-ba2b-4123-8965-111129945dd5=Assignment(partitions=[policy-pdp-pap-0])} policy-pap | [2024-04-25T12:40:22.365+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Successfully synced group in generation Generation{generationId=1, 
memberId='consumer-53d3b957-3026-4843-bc4f-55d426241089-3-ecd690b6-cba2-4ec7-bd80-418107943836', protocol='range'} policy-pap | [2024-04-25T12:40:22.365+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Notifying assignor about the new Assignment(partitions=[policy-pdp-pap-0]) policy-pap | [2024-04-25T12:40:22.366+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Successfully synced group in generation Generation{generationId=1, memberId='consumer-policy-pap-4-8984bd6d-ba2b-4123-8965-111129945dd5', protocol='range'} policy-pap | [2024-04-25T12:40:22.366+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Notifying assignor about the new Assignment(partitions=[policy-pdp-pap-0]) policy-pap | [2024-04-25T12:40:22.369+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Adding newly assigned partitions: policy-pdp-pap-0 policy-pap | [2024-04-25T12:40:22.369+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Adding newly assigned partitions: policy-pdp-pap-0 policy-pap | [2024-04-25T12:40:22.390+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Found no committed offset for partition policy-pdp-pap-0 policy-pap | [2024-04-25T12:40:22.390+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Found no committed offset for partition policy-pdp-pap-0 policy-pap | 
[2024-04-25T12:40:22.408+00:00|INFO|SubscriptionState|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Resetting offset for partition policy-pdp-pap-0 to position FetchPosition{offset=1, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[kafka:9092 (id: 1 rack: null)], epoch=0}}. policy-pap | [2024-04-25T12:40:22.408+00:00|INFO|SubscriptionState|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Resetting offset for partition policy-pdp-pap-0 to position FetchPosition{offset=1, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[kafka:9092 (id: 1 rack: null)], epoch=0}}. policy-pap | [2024-04-25T12:40:22.905+00:00|INFO|OrderedServiceImpl|KAFKA-source-policy-pdp-pap] ***** OrderedServiceImpl implementers: policy-pap | [] policy-pap | [2024-04-25T12:40:22.906+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"a408b809-2a21-46db-ba3c-dbdbae06aca1","timestampMs":1714048822843,"name":"apex-c1762bbf-462b-4754-b2e2-2796b5f05a40","pdpGroup":"defaultGroup"} policy-pap | [2024-04-25T12:40:22.908+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"a408b809-2a21-46db-ba3c-dbdbae06aca1","timestampMs":1714048822843,"name":"apex-c1762bbf-462b-4754-b2e2-2796b5f05a40","pdpGroup":"defaultGroup"} policy-pap | [2024-04-25T12:40:22.915+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-pdp-pap] no listeners for autonomous message of type PdpStatus policy-pap | [2024-04-25T12:40:23.366+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-c1762bbf-462b-4754-b2e2-2796b5f05a40 
PdpUpdate starting policy-pap | [2024-04-25T12:40:23.366+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-c1762bbf-462b-4754-b2e2-2796b5f05a40 PdpUpdate starting listener policy-pap | [2024-04-25T12:40:23.366+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-c1762bbf-462b-4754-b2e2-2796b5f05a40 PdpUpdate starting timer policy-pap | [2024-04-25T12:40:23.367+00:00|INFO|TimerManager|KAFKA-source-policy-heartbeat] update timer registered Timer [name=c93c9c10-4bcb-4ba5-b3ea-1a9726df0e30, expireMs=1714048853367] policy-pap | [2024-04-25T12:40:23.368+00:00|INFO|TimerManager|Thread-9] update timer waiting 29999ms Timer [name=c93c9c10-4bcb-4ba5-b3ea-1a9726df0e30, expireMs=1714048853367] policy-pap | [2024-04-25T12:40:23.368+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-c1762bbf-462b-4754-b2e2-2796b5f05a40 PdpUpdate starting enqueue policy-pap | [2024-04-25T12:40:23.369+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-c1762bbf-462b-4754-b2e2-2796b5f05a40 PdpUpdate started policy-pap | [2024-04-25T12:40:23.371+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap] policy-pap | {"source":"pap-480dd379-a703-49b2-b4a9-c44e36969f38","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"messageName":"PDP_UPDATE","requestId":"c93c9c10-4bcb-4ba5-b3ea-1a9726df0e30","timestampMs":1714048823348,"name":"apex-c1762bbf-462b-4754-b2e2-2796b5f05a40","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-pap | [2024-04-25T12:40:23.403+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] policy-pap | {"source":"pap-480dd379-a703-49b2-b4a9-c44e36969f38","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"messageName":"PDP_UPDATE","requestId":"c93c9c10-4bcb-4ba5-b3ea-1a9726df0e30","timestampMs":1714048823348,"name":"apex-c1762bbf-462b-4754-b2e2-2796b5f05a40","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-pap | [2024-04-25T12:40:23.403+00:00|INFO|network|KAFKA-source-policy-pdp-pap] 
[IN|KAFKA|policy-pdp-pap] policy-pap | {"source":"pap-480dd379-a703-49b2-b4a9-c44e36969f38","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"messageName":"PDP_UPDATE","requestId":"c93c9c10-4bcb-4ba5-b3ea-1a9726df0e30","timestampMs":1714048823348,"name":"apex-c1762bbf-462b-4754-b2e2-2796b5f05a40","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-pap | [2024-04-25T12:40:23.403+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_UPDATE policy-pap | [2024-04-25T12:40:23.404+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_UPDATE policy-pap | [2024-04-25T12:40:23.427+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"12f5c5ff-7241-48ae-ba7d-84cdf580311e","timestampMs":1714048823410,"name":"apex-c1762bbf-462b-4754-b2e2-2796b5f05a40","pdpGroup":"defaultGroup"} policy-pap | [2024-04-25T12:40:23.428+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-pdp-pap] no listeners for autonomous message of type PdpStatus policy-pap | [2024-04-25T12:40:23.430+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"12f5c5ff-7241-48ae-ba7d-84cdf580311e","timestampMs":1714048823410,"name":"apex-c1762bbf-462b-4754-b2e2-2796b5f05a40","pdpGroup":"defaultGroup"} policy-pap | [2024-04-25T12:40:23.435+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"c93c9c10-4bcb-4ba5-b3ea-1a9726df0e30","responseStatus":"SUCCESS","responseMessage":"Pdp update 
successful."},"messageName":"PDP_STATUS","requestId":"896f628b-2422-4a7a-9645-8164618b395e","timestampMs":1714048823411,"name":"apex-c1762bbf-462b-4754-b2e2-2796b5f05a40","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
policy-pap | [2024-04-25T12:40:23.620+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-c1762bbf-462b-4754-b2e2-2796b5f05a40 PdpUpdate stopping
policy-pap | [2024-04-25T12:40:23.620+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-c1762bbf-462b-4754-b2e2-2796b5f05a40 PdpUpdate stopping enqueue
policy-pap | [2024-04-25T12:40:23.620+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-c1762bbf-462b-4754-b2e2-2796b5f05a40 PdpUpdate stopping timer
policy-pap | [2024-04-25T12:40:23.621+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer cancelled Timer [name=c93c9c10-4bcb-4ba5-b3ea-1a9726df0e30, expireMs=1714048853367]
policy-pap | [2024-04-25T12:40:23.621+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-c1762bbf-462b-4754-b2e2-2796b5f05a40 PdpUpdate stopping listener
policy-pap | [2024-04-25T12:40:23.621+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-c1762bbf-462b-4754-b2e2-2796b5f05a40 PdpUpdate stopped
policy-pap | [2024-04-25T12:40:23.624+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat]
kafka | [2024-04-25 12:40:18,295] INFO [Broker id=1] Leader __consumer_offsets-6 with topic id Some(hlyPC_3zQpGmePqsd4AOeA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
kafka | [2024-04-25 12:40:18,409] INFO [LogLoader partition=__consumer_offsets-43, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2024-04-25 12:40:18,410] INFO Created log for partition __consumer_offsets-43 in /var/lib/kafka/data/__consumer_offsets-43 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2024-04-25 12:40:18,410] INFO [Partition __consumer_offsets-43 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-43 (kafka.cluster.Partition)
kafka | [2024-04-25 12:40:18,410] INFO [Partition __consumer_offsets-43 broker=1] Log loaded for partition __consumer_offsets-43 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2024-04-25 12:40:18,410] INFO [Broker id=1] Leader __consumer_offsets-43 with topic id Some(hlyPC_3zQpGmePqsd4AOeA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
kafka | [2024-04-25 12:40:18,585] INFO [LogLoader partition=__consumer_offsets-13, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2024-04-25 12:40:18,586] INFO Created log for partition __consumer_offsets-13 in /var/lib/kafka/data/__consumer_offsets-13 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2024-04-25 12:40:18,586] INFO [Partition __consumer_offsets-13 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-13 (kafka.cluster.Partition)
kafka | [2024-04-25 12:40:18,586] INFO [Partition __consumer_offsets-13 broker=1] Log loaded for partition __consumer_offsets-13 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2024-04-25 12:40:18,586] INFO [Broker id=1] Leader __consumer_offsets-13 with topic id Some(hlyPC_3zQpGmePqsd4AOeA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
kafka | [2024-04-25 12:40:18,826] INFO [LogLoader partition=__consumer_offsets-28, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2024-04-25 12:40:18,826] INFO Created log for partition __consumer_offsets-28 in /var/lib/kafka/data/__consumer_offsets-28 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2024-04-25 12:40:18,827] INFO [Partition __consumer_offsets-28 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-28 (kafka.cluster.Partition)
kafka | [2024-04-25 12:40:18,827] INFO [Partition __consumer_offsets-28 broker=1] Log loaded for partition __consumer_offsets-28 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2024-04-25 12:40:18,827] INFO [Broker id=1] Leader __consumer_offsets-28 with topic id Some(hlyPC_3zQpGmePqsd4AOeA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
kafka | [2024-04-25 12:40:19,169] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-3 (state.change.logger)
kafka | [2024-04-25 12:40:19,169] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-18 (state.change.logger)
kafka | [2024-04-25 12:40:19,169] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-41 (state.change.logger)
kafka | [2024-04-25 12:40:19,169] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-10 (state.change.logger)
kafka | [2024-04-25 12:40:19,169] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-33 (state.change.logger)
kafka | [2024-04-25 12:40:19,169] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-48 (state.change.logger)
kafka | [2024-04-25 12:40:19,169] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-19 (state.change.logger)
kafka | [2024-04-25 12:40:19,169] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-34 (state.change.logger)
kafka | [2024-04-25 12:40:19,169] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-4 (state.change.logger)
kafka | [2024-04-25 12:40:19,169] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-11 (state.change.logger)
kafka | [2024-04-25 12:40:19,169] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-26 (state.change.logger)
kafka | [2024-04-25 12:40:19,169] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-49 (state.change.logger)
kafka | [2024-04-25 12:40:19,169] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-39 (state.change.logger)
kafka | [2024-04-25 12:40:19,169] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-9 (state.change.logger)
kafka | [2024-04-25 12:40:19,169] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-24 (state.change.logger)
kafka | [2024-04-25 12:40:19,169] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-31 (state.change.logger)
kafka | [2024-04-25 12:40:19,169] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-46 (state.change.logger)
kafka | [2024-04-25 12:40:19,169] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-1 (state.change.logger)
kafka | [2024-04-25 12:40:19,169] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-16 (state.change.logger)
kafka | [2024-04-25 12:40:19,169] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-2 (state.change.logger)
kafka | [2024-04-25 12:40:19,169] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-25 (state.change.logger)
kafka | [2024-04-25 12:40:19,169] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-40 (state.change.logger)
kafka | [2024-04-25 12:40:19,169] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-47 (state.change.logger)
kafka | [2024-04-25 12:40:19,169] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-17 (state.change.logger)
kafka | [2024-04-25 12:40:19,169] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-32 (state.change.logger)
kafka | [2024-04-25 12:40:19,169] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-37 (state.change.logger)
kafka | [2024-04-25 12:40:19,169] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-7 (state.change.logger)
kafka | [2024-04-25 12:40:19,169] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-22 (state.change.logger)
kafka | [2024-04-25 12:40:19,169] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-29 (state.change.logger)
kafka | [2024-04-25 12:40:19,169] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-44 (state.change.logger)
kafka | [2024-04-25 12:40:19,169] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-14 (state.change.logger)
kafka | [2024-04-25 12:40:19,169] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-23 (state.change.logger)
kafka | [2024-04-25 12:40:19,169] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-38 (state.change.logger)
kafka | [2024-04-25 12:40:19,170] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-8 (state.change.logger)
kafka | [2024-04-25 12:40:19,170] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition policy-pdp-pap-0 (state.change.logger)
kafka | [2024-04-25 12:40:19,170] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-45 (state.change.logger)
kafka | [2024-04-25 12:40:19,170] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-15 (state.change.logger)
kafka | [2024-04-25 12:40:19,170] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-30 (state.change.logger)
kafka | [2024-04-25 12:40:19,170] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-0 (state.change.logger)
kafka | [2024-04-25 12:40:19,170] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-35 (state.change.logger)
kafka | [2024-04-25 12:40:19,170] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-5 (state.change.logger)
kafka | [2024-04-25 12:40:19,170] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-20 (state.change.logger)
kafka | [2024-04-25 12:40:19,170] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-27 (state.change.logger)
kafka | [2024-04-25 12:40:19,170] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-42 (state.change.logger)
kafka | [2024-04-25 12:40:19,170] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-12 (state.change.logger)
kafka | [2024-04-25 12:40:19,170] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-21 (state.change.logger)
kafka | [2024-04-25 12:40:19,170] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-36 (state.change.logger)
kafka | [2024-04-25 12:40:19,170] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-6 (state.change.logger)
kafka | [2024-04-25 12:40:19,170] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-43 (state.change.logger)
kafka | [2024-04-25 12:40:19,170] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-13 (state.change.logger)
kafka | [2024-04-25 12:40:19,170] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-28 (state.change.logger)
kafka | [2024-04-25 12:40:19,177] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 3 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-04-25 12:40:19,179] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-3 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-04-25 12:40:19,184] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-3 in 4 milliseconds for epoch 0, of which 1 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-04-25 12:40:19,184] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 18 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-04-25 12:40:19,184] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-18 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-04-25 12:40:19,184] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-18 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"c93c9c10-4bcb-4ba5-b3ea-1a9726df0e30","responseStatus":"SUCCESS","responseMessage":"Pdp update successful."},"messageName":"PDP_STATUS","requestId":"896f628b-2422-4a7a-9645-8164618b395e","timestampMs":1714048823411,"name":"apex-c1762bbf-462b-4754-b2e2-2796b5f05a40","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
policy-pap | [2024-04-25T12:40:23.626+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id c93c9c10-4bcb-4ba5-b3ea-1a9726df0e30
policy-pap | [2024-04-25T12:40:23.628+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] apex-c1762bbf-462b-4754-b2e2-2796b5f05a40 PdpUpdate successful
policy-pap | [2024-04-25T12:40:23.628+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] apex-c1762bbf-462b-4754-b2e2-2796b5f05a40 start publishing next request
policy-pap | [2024-04-25T12:40:23.628+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-c1762bbf-462b-4754-b2e2-2796b5f05a40 PdpStateChange starting
policy-pap | [2024-04-25T12:40:23.628+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-c1762bbf-462b-4754-b2e2-2796b5f05a40 PdpStateChange starting listener
policy-pap | [2024-04-25T12:40:23.628+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-c1762bbf-462b-4754-b2e2-2796b5f05a40 PdpStateChange starting timer
policy-pap | [2024-04-25T12:40:23.629+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] state-change timer registered Timer [name=9c94d082-5dc3-41dd-b822-97664ab4caac, expireMs=1714048853629]
policy-pap | [2024-04-25T12:40:23.629+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-c1762bbf-462b-4754-b2e2-2796b5f05a40 PdpStateChange starting enqueue
policy-pap | [2024-04-25T12:40:23.629+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-c1762bbf-462b-4754-b2e2-2796b5f05a40 PdpStateChange started
policy-pap | [2024-04-25T12:40:23.629+00:00|INFO|TimerManager|Thread-10] state-change timer waiting 30000ms Timer [name=9c94d082-5dc3-41dd-b822-97664ab4caac, expireMs=1714048853629]
policy-pap | [2024-04-25T12:40:23.629+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap]
policy-pap | {"source":"pap-480dd379-a703-49b2-b4a9-c44e36969f38","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"9c94d082-5dc3-41dd-b822-97664ab4caac","timestampMs":1714048823350,"name":"apex-c1762bbf-462b-4754-b2e2-2796b5f05a40","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
policy-pap | [2024-04-25T12:40:23.645+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat]
policy-pap | {"source":"pap-480dd379-a703-49b2-b4a9-c44e36969f38","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"9c94d082-5dc3-41dd-b822-97664ab4caac","timestampMs":1714048823350,"name":"apex-c1762bbf-462b-4754-b2e2-2796b5f05a40","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
policy-pap | [2024-04-25T12:40:23.645+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_STATE_CHANGE
policy-pap | [2024-04-25T12:40:23.653+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat]
policy-pap | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpStateChange","policies":[],"response":{"responseTo":"9c94d082-5dc3-41dd-b822-97664ab4caac","responseStatus":"SUCCESS","responseMessage":"State changed to active. No policies found."},"messageName":"PDP_STATUS","requestId":"bc2bc4b3-e57a-4454-b7dc-ad8eea338f0c","timestampMs":1714048823643,"name":"apex-c1762bbf-462b-4754-b2e2-2796b5f05a40","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
policy-pap | [2024-04-25T12:40:23.654+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id 9c94d082-5dc3-41dd-b822-97664ab4caac
policy-pap | [2024-04-25T12:40:23.677+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
policy-pap | {"source":"pap-480dd379-a703-49b2-b4a9-c44e36969f38","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"9c94d082-5dc3-41dd-b822-97664ab4caac","timestampMs":1714048823350,"name":"apex-c1762bbf-462b-4754-b2e2-2796b5f05a40","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
policy-pap | [2024-04-25T12:40:23.677+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATE_CHANGE
policy-pap | [2024-04-25T12:40:23.681+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
policy-pap | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpStateChange","policies":[],"response":{"responseTo":"9c94d082-5dc3-41dd-b822-97664ab4caac","responseStatus":"SUCCESS","responseMessage":"State changed to active. No policies found."},"messageName":"PDP_STATUS","requestId":"bc2bc4b3-e57a-4454-b7dc-ad8eea338f0c","timestampMs":1714048823643,"name":"apex-c1762bbf-462b-4754-b2e2-2796b5f05a40","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
policy-pap | [2024-04-25T12:40:23.681+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-c1762bbf-462b-4754-b2e2-2796b5f05a40 PdpStateChange stopping
policy-pap | [2024-04-25T12:40:23.681+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-c1762bbf-462b-4754-b2e2-2796b5f05a40 PdpStateChange stopping enqueue
policy-pap | [2024-04-25T12:40:23.681+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-c1762bbf-462b-4754-b2e2-2796b5f05a40 PdpStateChange stopping timer
policy-pap | [2024-04-25T12:40:23.681+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] state-change timer cancelled Timer [name=9c94d082-5dc3-41dd-b822-97664ab4caac, expireMs=1714048853629]
policy-pap | [2024-04-25T12:40:23.681+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-c1762bbf-462b-4754-b2e2-2796b5f05a40 PdpStateChange stopping listener
policy-pap | [2024-04-25T12:40:23.681+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-c1762bbf-462b-4754-b2e2-2796b5f05a40 PdpStateChange stopped
policy-pap | [2024-04-25T12:40:23.681+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] apex-c1762bbf-462b-4754-b2e2-2796b5f05a40 PdpStateChange successful
policy-pap | [2024-04-25T12:40:23.681+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] apex-c1762bbf-462b-4754-b2e2-2796b5f05a40 start publishing next request
policy-pap | [2024-04-25T12:40:23.681+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-c1762bbf-462b-4754-b2e2-2796b5f05a40 PdpUpdate starting
policy-pap | [2024-04-25T12:40:23.681+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-c1762bbf-462b-4754-b2e2-2796b5f05a40 PdpUpdate starting listener
policy-pap | [2024-04-25T12:40:23.681+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-c1762bbf-462b-4754-b2e2-2796b5f05a40 PdpUpdate starting timer
policy-pap | [2024-04-25T12:40:23.681+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer registered Timer [name=7c8ff35c-cd2f-465e-9c85-bcb76f083b98, expireMs=1714048853681]
policy-pap | [2024-04-25T12:40:23.681+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-c1762bbf-462b-4754-b2e2-2796b5f05a40 PdpUpdate starting enqueue
policy-pap | [2024-04-25T12:40:23.682+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap]
policy-pap | {"source":"pap-480dd379-a703-49b2-b4a9-c44e36969f38","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"7c8ff35c-cd2f-465e-9c85-bcb76f083b98","timestampMs":1714048823667,"name":"apex-c1762bbf-462b-4754-b2e2-2796b5f05a40","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
policy-pap | [2024-04-25T12:40:23.683+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-c1762bbf-462b-4754-b2e2-2796b5f05a40 PdpUpdate started
policy-pap | [2024-04-25T12:40:23.696+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
policy-pap | {"source":"pap-480dd379-a703-49b2-b4a9-c44e36969f38","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"7c8ff35c-cd2f-465e-9c85-bcb76f083b98","timestampMs":1714048823667,"name":"apex-c1762bbf-462b-4754-b2e2-2796b5f05a40","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
policy-pap | [2024-04-25T12:40:23.697+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_UPDATE
kafka | [2024-04-25 12:40:19,184] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 41 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-04-25 12:40:19,184] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-41 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-04-25 12:40:19,184] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-41 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-04-25 12:40:19,184] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 10 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-04-25 12:40:19,184] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-10 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-04-25 12:40:19,185] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-10 in 1 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-04-25 12:40:19,185] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 33 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-04-25 12:40:19,185] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-33 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-04-25 12:40:19,185] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-33 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-04-25 12:40:19,185] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 48 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-04-25 12:40:19,185] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-48 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-04-25 12:40:19,185] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-48 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-04-25 12:40:19,185] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 19 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-04-25 12:40:19,185] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-19 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-04-25 12:40:19,185] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-19 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-04-25 12:40:19,185] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 34 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-04-25 12:40:19,185] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-34 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-04-25 12:40:19,185] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-34 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-04-25 12:40:19,185] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 4 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-04-25 12:40:19,185] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-4 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-04-25 12:40:19,185] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-4 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-04-25 12:40:19,185] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 11 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-04-25 12:40:19,185] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-11 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-04-25 12:40:19,185] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-11 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-04-25 12:40:19,185] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 26 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-04-25 12:40:19,185] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-26 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-04-25 12:40:19,185] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-26 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-04-25 12:40:19,185] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 49 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-04-25 12:40:19,185] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-49 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-04-25 12:40:19,186] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-49 in 1 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-04-25 12:40:19,186] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 39 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-04-25 12:40:19,186] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-39 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-04-25 12:40:19,186] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-39 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-04-25 12:40:19,186] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 9 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-04-25 12:40:19,186] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-9 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-04-25 12:40:19,186] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-9 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-04-25 12:40:19,186] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 24 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-04-25 12:40:19,186] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-24 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-04-25 12:40:19,186] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-24 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-04-25 12:40:19,186] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 31 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-04-25 12:40:19,186] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-31 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-04-25 12:40:19,186] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-31 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-04-25 12:40:19,186] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 46 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-04-25 12:40:19,186] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-46 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-04-25 12:40:19,186] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-46 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-04-25 12:40:19,186] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 1 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-04-25 12:40:19,186] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-1 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-04-25 12:40:19,186] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-1 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-04-25 12:40:19,186] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 16 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-04-25 12:40:19,186] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-16 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-04-25 12:40:19,186] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-16 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-04-25 12:40:19,186] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 2 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-04-25 12:40:19,186] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-2 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-04-25 12:40:19,186] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-2 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-04-25 12:40:19,186] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 25 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-04-25 12:40:19,186] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-25 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-04-25 12:40:19,187] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-25 in 1 milliseconds for epoch 0, of which 1 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-04-25 12:40:19,187] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 40 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-04-25 12:40:19,187] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-40 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-04-25 12:40:19,187] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-40 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-04-25 12:40:19,187] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 47 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-04-25 12:40:19,187] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-47 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-04-25 12:40:19,187] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-47 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler.
(kafka.coordinator.group.GroupMetadataManager) kafka | [2024-04-25 12:40:19,187] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 17 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-04-25 12:40:19,187] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-17 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-04-25 12:40:19,187] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-17 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-04-25 12:40:19,187] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 32 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-04-25 12:40:19,187] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-32 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-04-25 12:40:19,187] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-32 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-04-25 12:40:19,187] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 37 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-04-25 12:40:19,187] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-37 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-04-25 12:40:19,187] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-37 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-04-25 12:40:19,187] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 7 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-04-25 12:40:19,187] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-7 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
policy-pap | [2024-04-25T12:40:23.698+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat]
policy-pap | {"source":"pap-480dd379-a703-49b2-b4a9-c44e36969f38","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"7c8ff35c-cd2f-465e-9c85-bcb76f083b98","timestampMs":1714048823667,"name":"apex-c1762bbf-462b-4754-b2e2-2796b5f05a40","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
policy-pap | [2024-04-25T12:40:23.698+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_UPDATE
policy-pap | [2024-04-25T12:40:23.707+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
policy-pap | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"7c8ff35c-cd2f-465e-9c85-bcb76f083b98","responseStatus":"SUCCESS","responseMessage":"Pdp already updated"},"messageName":"PDP_STATUS","requestId":"9d346855-4bc1-4b25-8a3b-1b11512efc29","timestampMs":1714048823692,"name":"apex-c1762bbf-462b-4754-b2e2-2796b5f05a40","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
policy-pap | [2024-04-25T12:40:23.707+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-c1762bbf-462b-4754-b2e2-2796b5f05a40 PdpUpdate stopping
policy-pap | [2024-04-25T12:40:23.707+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-c1762bbf-462b-4754-b2e2-2796b5f05a40 PdpUpdate stopping enqueue
policy-pap | [2024-04-25T12:40:23.707+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-c1762bbf-462b-4754-b2e2-2796b5f05a40 PdpUpdate stopping timer
policy-pap | [2024-04-25T12:40:23.707+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer cancelled Timer [name=7c8ff35c-cd2f-465e-9c85-bcb76f083b98, expireMs=1714048853681]
policy-pap | [2024-04-25T12:40:23.707+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-c1762bbf-462b-4754-b2e2-2796b5f05a40 PdpUpdate stopping listener
policy-pap | [2024-04-25T12:40:23.707+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-c1762bbf-462b-4754-b2e2-2796b5f05a40 PdpUpdate stopped
policy-pap | [2024-04-25T12:40:23.709+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat]
policy-pap | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"7c8ff35c-cd2f-465e-9c85-bcb76f083b98","responseStatus":"SUCCESS","responseMessage":"Pdp already updated"},"messageName":"PDP_STATUS","requestId":"9d346855-4bc1-4b25-8a3b-1b11512efc29","timestampMs":1714048823692,"name":"apex-c1762bbf-462b-4754-b2e2-2796b5f05a40","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
policy-pap | [2024-04-25T12:40:23.710+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id 7c8ff35c-cd2f-465e-9c85-bcb76f083b98
policy-pap | [2024-04-25T12:40:23.712+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] apex-c1762bbf-462b-4754-b2e2-2796b5f05a40 PdpUpdate successful
policy-pap | [2024-04-25T12:40:23.712+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] apex-c1762bbf-462b-4754-b2e2-2796b5f05a40 has no more requests
policy-pap | [2024-04-25T12:40:32.103+00:00|WARN|NonInjectionManager|pool-2-thread-1] Falling back to injection-less client.
policy-pap | [2024-04-25T12:40:32.150+00:00|INFO|GsonMessageBodyHandler|pool-2-thread-1] Using GSON for REST calls
policy-pap | [2024-04-25T12:40:32.160+00:00|INFO|GsonMessageBodyHandler|pool-2-thread-1] Using GSON for REST calls
policy-pap | [2024-04-25T12:40:32.161+00:00|INFO|GsonMessageBodyHandler|pool-2-thread-1] Using GSON for REST calls
policy-pap | [2024-04-25T12:40:32.594+00:00|INFO|SessionData|http-nio-6969-exec-6] unknown group testGroup
policy-pap | [2024-04-25T12:40:33.145+00:00|INFO|SessionData|http-nio-6969-exec-6] create cached group testGroup
policy-pap | [2024-04-25T12:40:33.147+00:00|INFO|SessionData|http-nio-6969-exec-6] creating DB group testGroup
policy-pap | [2024-04-25T12:40:33.661+00:00|INFO|SessionData|http-nio-6969-exec-9] cache group testGroup
policy-pap | [2024-04-25T12:40:33.910+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-9] Registering a deploy for policy onap.restart.tca 1.0.0
policy-pap | [2024-04-25T12:40:34.056+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-9] Registering a deploy for policy operational.apex.decisionMaker 1.0.0
policy-pap | [2024-04-25T12:40:34.057+00:00|INFO|SessionData|http-nio-6969-exec-9] update cached group testGroup
policy-pap | [2024-04-25T12:40:34.057+00:00|INFO|SessionData|http-nio-6969-exec-9] updating DB group testGroup
policy-pap | [2024-04-25T12:40:34.085+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-9] sending audit records to database: [PolicyAudit(auditId=null, pdpGroup=testGroup, pdpType=pdpTypeA, policy=onap.restart.tca 1.0.0, action=DEPLOYMENT, timestamp=2024-04-25T12:40:33Z, user=policyadmin), PolicyAudit(auditId=null, pdpGroup=testGroup, pdpType=pdpTypeC, policy=operational.apex.decisionMaker 1.0.0, action=DEPLOYMENT, timestamp=2024-04-25T12:40:34Z, user=policyadmin)]
policy-pap | [2024-04-25T12:40:34.785+00:00|INFO|SessionData|http-nio-6969-exec-4] cache group testGroup
policy-pap | [2024-04-25T12:40:34.786+00:00|INFO|PdpGroupDeleteProvider|http-nio-6969-exec-4] remove policy onap.restart.tca 1.0.0 from subgroup testGroup pdpTypeA count=0
policy-pap | [2024-04-25T12:40:34.786+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-4] Registering an undeploy for policy onap.restart.tca 1.0.0
policy-pap | [2024-04-25T12:40:34.786+00:00|INFO|SessionData|http-nio-6969-exec-4] update cached group testGroup
policy-pap | [2024-04-25T12:40:34.787+00:00|INFO|SessionData|http-nio-6969-exec-4] updating DB group testGroup
policy-pap | [2024-04-25T12:40:34.796+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-4] sending audit records to database: [PolicyAudit(auditId=null, pdpGroup=testGroup, pdpType=pdpTypeA, policy=onap.restart.tca 1.0.0, action=UNDEPLOYMENT, timestamp=2024-04-25T12:40:34Z, user=policyadmin)]
policy-pap | [2024-04-25T12:40:35.158+00:00|INFO|SessionData|http-nio-6969-exec-5] cache group defaultGroup
policy-pap | [2024-04-25T12:40:35.158+00:00|INFO|SessionData|http-nio-6969-exec-5] cache group testGroup
policy-pap | [2024-04-25T12:40:35.158+00:00|INFO|PdpGroupDeleteProvider|http-nio-6969-exec-5] remove policy operational.apex.decisionMaker 1.0.0 from subgroup testGroup pdpTypeC count=0
policy-pap | [2024-04-25T12:40:35.158+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-5] Registering an undeploy for policy operational.apex.decisionMaker 1.0.0
policy-pap | [2024-04-25T12:40:35.158+00:00|INFO|SessionData|http-nio-6969-exec-5] update cached group testGroup
policy-pap | [2024-04-25T12:40:35.158+00:00|INFO|SessionData|http-nio-6969-exec-5] updating DB group testGroup
policy-pap | [2024-04-25T12:40:35.194+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-5] sending audit records to database: [PolicyAudit(auditId=null, pdpGroup=testGroup, pdpType=pdpTypeC, policy=operational.apex.decisionMaker 1.0.0, action=UNDEPLOYMENT, timestamp=2024-04-25T12:40:35Z, user=policyadmin)]
policy-pap | [2024-04-25T12:40:53.368+00:00|INFO|TimerManager|Thread-9] update timer discarded (expired) Timer [name=c93c9c10-4bcb-4ba5-b3ea-1a9726df0e30, expireMs=1714048853367]
policy-pap | [2024-04-25T12:40:53.629+00:00|INFO|TimerManager|Thread-10] state-change timer discarded (expired) Timer [name=9c94d082-5dc3-41dd-b822-97664ab4caac, expireMs=1714048853629]
policy-pap | [2024-04-25T12:40:55.908+00:00|INFO|SessionData|http-nio-6969-exec-1] cache group testGroup
policy-pap | [2024-04-25T12:40:55.910+00:00|INFO|SessionData|http-nio-6969-exec-1] deleting DB group testGroup
policy-pap | [2024-04-25T12:42:01.116+00:00|INFO|PdpModifyRequestMap|pool-3-thread-1] check for PDP records older than 360000ms
kafka | [2024-04-25 12:40:19,187] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-7 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-04-25 12:40:19,187] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 22 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-04-25 12:40:19,187] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-22 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-04-25 12:40:19,187] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-22 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-04-25 12:40:19,187] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 29 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-04-25 12:40:19,187] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-29 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-04-25 12:40:19,187] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-29 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-04-25 12:40:19,188] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 44 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-04-25 12:40:19,188] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-44 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-04-25 12:40:19,188] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-44 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-04-25 12:40:19,188] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 14 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-04-25 12:40:19,188] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-14 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-04-25 12:40:19,188] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-14 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-04-25 12:40:19,188] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 23 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-04-25 12:40:19,188] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-23 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-04-25 12:40:19,188] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-23 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-04-25 12:40:19,188] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 38 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-04-25 12:40:19,188] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-38 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-04-25 12:40:19,188] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-38 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-04-25 12:40:19,188] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 8 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-04-25 12:40:19,188] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-8 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-04-25 12:40:19,188] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-8 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-04-25 12:40:19,188] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 45 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-04-25 12:40:19,188] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-45 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-04-25 12:40:19,188] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-45 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-04-25 12:40:19,188] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 15 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-04-25 12:40:19,188] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-15 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-04-25 12:40:19,188] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-15 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-04-25 12:40:19,188] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 30 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-04-25 12:40:19,188] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-30 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-04-25 12:40:19,188] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-30 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-04-25 12:40:19,188] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 0 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-04-25 12:40:19,188] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-0 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-04-25 12:40:19,189] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-0 in 1 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-04-25 12:40:19,189] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 35 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-04-25 12:40:19,189] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-35 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-04-25 12:40:19,189] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-35 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-04-25 12:40:19,189] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 5 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-04-25 12:40:19,189] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-5 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-04-25 12:40:19,189] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-5 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-04-25 12:40:19,189] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 20 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-04-25 12:40:19,189] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-20 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-04-25 12:40:19,189] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-20 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-04-25 12:40:19,189] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 27 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-04-25 12:40:19,189] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-27 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-04-25 12:40:19,189] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-27 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-04-25 12:40:19,189] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 42 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-04-25 12:40:19,189] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-42 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-04-25 12:40:19,189] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-42 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-04-25 12:40:19,189] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 12 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-04-25 12:40:19,189] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-12 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-04-25 12:40:19,189] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-12 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-04-25 12:40:19,189] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 21 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-04-25 12:40:19,189] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-21 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-04-25 12:40:19,189] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-21 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-04-25 12:40:19,189] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 36 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-04-25 12:40:19,189] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-36 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-04-25 12:40:19,190] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-36 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-04-25 12:40:19,190] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 6 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-04-25 12:40:19,190] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-6 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-04-25 12:40:19,190] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-6 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-04-25 12:40:19,190] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 43 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-04-25 12:40:19,190] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-43 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-04-25 12:40:19,190] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-43 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-04-25 12:40:19,190] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 13 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-04-25 12:40:19,190] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-13 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-04-25 12:40:19,190] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-13 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-04-25 12:40:19,190] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 28 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-04-25 12:40:19,190] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-28 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-04-25 12:40:19,190] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-28 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-04-25 12:40:19,192] INFO [Broker id=1] Finished LeaderAndIsr request in 15241ms correlationId 1 from controller 1 for 51 partitions (state.change.logger)
kafka | [2024-04-25 12:40:19,195] TRACE [Controller id=1 epoch=1] Received response LeaderAndIsrResponseData(errorCode=0, partitionErrors=[], topics=[LeaderAndIsrTopicError(topicId=hlyPC_3zQpGmePqsd4AOeA, partitionErrors=[LeaderAndIsrPartitionError(topicName='', partitionIndex=13, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=46, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=9, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=42, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=21, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=17, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=30, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=26, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=5, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=38, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=1, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=34, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=16, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=45, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=12, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=41, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=24, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=20, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=49, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=0, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=29, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=25, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=8, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=37, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=4, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=33, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=15, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=48, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=11, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=44, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=23, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=19, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=32, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=28, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=7, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=40, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=3, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=36, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=47, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=14, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=43, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=10, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=22, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=18, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=31, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=27, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=39, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=6, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=35, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=2, errorCode=0)]), LeaderAndIsrTopicError(topicId=HOyl9LomSW2VRWzaH4p5QQ, partitionErrors=[LeaderAndIsrPartitionError(topicName='', partitionIndex=0, errorCode=0)])]) for request LEADER_AND_ISR with correlation id 1 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger)
kafka | [2024-04-25 12:40:19,200] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='policy-pdp-pap', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition policy-pdp-pap-0 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2024-04-25 12:40:19,200] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=13, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-13 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2024-04-25 12:40:19,200] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=46, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-46 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2024-04-25 12:40:19,200] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=9, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-9 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2024-04-25 12:40:19,200] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=42, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-42 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2024-04-25 12:40:19,201] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=21, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-21 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2024-04-25 12:40:19,201] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=17, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-17 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2024-04-25 12:40:19,201] TRACE [Broker id=1] Cached leader info 
UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=30, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-30 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-04-25 12:40:19,201] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=26, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-26 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-04-25 12:40:19,201] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=5, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-5 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-04-25 12:40:19,201] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=38, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-38 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-04-25 12:40:19,201] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=1, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-1 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-04-25 12:40:19,201] TRACE [Broker id=1] Cached leader info 
UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=34, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-34 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-04-25 12:40:19,201] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=16, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-16 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-04-25 12:40:19,201] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=45, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-45 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-04-25 12:40:19,201] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=12, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-12 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-04-25 12:40:19,201] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=41, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-41 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-04-25 12:40:19,201] TRACE [Broker id=1] Cached leader info 
UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=24, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-24 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-04-25 12:40:19,201] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=20, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-20 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-04-25 12:40:19,201] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=49, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-49 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-04-25 12:40:19,201] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-0 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-04-25 12:40:19,201] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=29, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-29 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-04-25 12:40:19,202] TRACE [Broker id=1] Cached leader info 
UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=25, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-25 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-04-25 12:40:19,202] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=8, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-8 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-04-25 12:40:19,202] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=37, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-37 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-04-25 12:40:19,202] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=4, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-4 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-04-25 12:40:19,202] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=33, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-33 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-04-25 12:40:19,202] TRACE [Broker id=1] Cached leader info 
UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=15, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-15 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-04-25 12:40:19,202] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=48, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-48 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-04-25 12:40:19,202] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=11, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-11 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-04-25 12:40:19,202] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=44, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-44 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-04-25 12:40:19,202] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=23, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-23 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-04-25 12:40:19,202] TRACE [Broker id=1] Cached leader info 
UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=19, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-19 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-04-25 12:40:19,202] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=32, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-32 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-04-25 12:40:19,202] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=28, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-28 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-04-25 12:40:19,202] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=7, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-7 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-04-25 12:40:19,202] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=40, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-40 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-04-25 12:40:19,202] TRACE [Broker id=1] Cached leader info 
UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=3, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-3 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-04-25 12:40:19,202] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=36, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-36 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-04-25 12:40:19,203] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=47, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-47 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-04-25 12:40:19,203] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=14, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-14 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-04-25 12:40:19,203] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=43, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-43 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-04-25 12:40:19,203] TRACE [Broker id=1] Cached leader info 
UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=10, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-10 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-04-25 12:40:19,203] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=22, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-22 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-04-25 12:40:19,203] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=18, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-18 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-04-25 12:40:19,203] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=31, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-31 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-04-25 12:40:19,203] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=27, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-27 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-04-25 12:40:19,203] TRACE [Broker id=1] Cached leader info 
UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=39, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-39 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-04-25 12:40:19,203] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=6, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-6 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-04-25 12:40:19,203] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=35, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-35 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-04-25 12:40:19,203] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=2, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-2 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-04-25 12:40:19,204] INFO [Broker id=1] Add 51 partitions and deleted 0 partitions from metadata cache in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-04-25 12:40:19,205] TRACE [Controller id=1 epoch=1] Received response UpdateMetadataResponseData(errorCode=0) for request UPDATE_METADATA with correlation id 2 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger) kafka | 
[2024-04-25 12:40:19,292] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group 53d3b957-3026-4843-bc4f-55d426241089 in Empty state. Created a new member id consumer-53d3b957-3026-4843-bc4f-55d426241089-3-ecd690b6-cba2-4ec7-bd80-418107943836 and request the member to rejoin with this id. (kafka.coordinator.group.GroupCoordinator) kafka | [2024-04-25 12:40:19,292] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group 4b79aeb3-604a-4e33-80d9-cdeedf19ce63 in Empty state. Created a new member id consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2-52209b4a-6d81-4373-80ef-9ff30791323e and request the member to rejoin with this id. (kafka.coordinator.group.GroupCoordinator) kafka | [2024-04-25 12:40:19,292] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group policy-pap in Empty state. Created a new member id consumer-policy-pap-4-8984bd6d-ba2b-4123-8965-111129945dd5 and request the member to rejoin with this id. (kafka.coordinator.group.GroupCoordinator) kafka | [2024-04-25 12:40:19,308] INFO [GroupCoordinator 1]: Preparing to rebalance group policy-pap in state PreparingRebalance with old generation 0 (__consumer_offsets-24) (reason: Adding new member consumer-policy-pap-4-8984bd6d-ba2b-4123-8965-111129945dd5 with group instance id None; client reason: rebalance failed due to MemberIdRequiredException) (kafka.coordinator.group.GroupCoordinator) kafka | [2024-04-25 12:40:19,308] INFO [GroupCoordinator 1]: Preparing to rebalance group 4b79aeb3-604a-4e33-80d9-cdeedf19ce63 in state PreparingRebalance with old generation 0 (__consumer_offsets-30) (reason: Adding new member consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2-52209b4a-6d81-4373-80ef-9ff30791323e with group instance id None; client reason: rebalance failed due to MemberIdRequiredException) (kafka.coordinator.group.GroupCoordinator) kafka | [2024-04-25 12:40:19,308] INFO [GroupCoordinator 1]: Preparing to rebalance group 
53d3b957-3026-4843-bc4f-55d426241089 in state PreparingRebalance with old generation 0 (__consumer_offsets-1) (reason: Adding new member consumer-53d3b957-3026-4843-bc4f-55d426241089-3-ecd690b6-cba2-4ec7-bd80-418107943836 with group instance id None; client reason: rebalance failed due to MemberIdRequiredException) (kafka.coordinator.group.GroupCoordinator) kafka | [2024-04-25 12:40:22,321] INFO [GroupCoordinator 1]: Stabilized group policy-pap generation 1 (__consumer_offsets-24) with 1 members (kafka.coordinator.group.GroupCoordinator) kafka | [2024-04-25 12:40:22,324] INFO [GroupCoordinator 1]: Stabilized group 53d3b957-3026-4843-bc4f-55d426241089 generation 1 (__consumer_offsets-1) with 1 members (kafka.coordinator.group.GroupCoordinator) kafka | [2024-04-25 12:40:22,325] INFO [GroupCoordinator 1]: Stabilized group 4b79aeb3-604a-4e33-80d9-cdeedf19ce63 generation 1 (__consumer_offsets-30) with 1 members (kafka.coordinator.group.GroupCoordinator) kafka | [2024-04-25 12:40:22,347] INFO [GroupCoordinator 1]: Assignment received from leader consumer-policy-pap-4-8984bd6d-ba2b-4123-8965-111129945dd5 for group policy-pap for generation 1. The group has 1 members, 0 of which are static. (kafka.coordinator.group.GroupCoordinator) kafka | [2024-04-25 12:40:22,347] INFO [GroupCoordinator 1]: Assignment received from leader consumer-53d3b957-3026-4843-bc4f-55d426241089-3-ecd690b6-cba2-4ec7-bd80-418107943836 for group 53d3b957-3026-4843-bc4f-55d426241089 for generation 1. The group has 1 members, 0 of which are static. (kafka.coordinator.group.GroupCoordinator) kafka | [2024-04-25 12:40:22,347] INFO [GroupCoordinator 1]: Assignment received from leader consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2-52209b4a-6d81-4373-80ef-9ff30791323e for group 4b79aeb3-604a-4e33-80d9-cdeedf19ce63 for generation 1. The group has 1 members, 0 of which are static. (kafka.coordinator.group.GroupCoordinator) ++ echo 'Tearing down containers...' Tearing down containers... 
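The coordinator entries above place group policy-pap on __consumer_offsets-24. Kafka maps a group to an offsets partition as abs(groupId.hashCode()) % offsets.topic.num.partitions (50 here, matching the 50 __consumer_offsets partitions in this log). A small Python sketch of that mapping — the function names are ours, not Kafka's:

```python
def java_string_hashcode(s: str) -> int:
    """Replicate Java's String.hashCode(), including 32-bit signed overflow."""
    h = 0
    for ch in s:
        h = (31 * h + ord(ch)) & 0xFFFFFFFF
    # Reinterpret the 32-bit value as signed, as Java would.
    return h - 0x100000000 if h >= 0x80000000 else h

def offsets_partition(group_id: str, num_partitions: int = 50) -> int:
    # Kafka's Utils.abs() keeps the low 31 bits rather than using Math.abs().
    return (java_string_hashcode(group_id) & 0x7FFFFFFF) % num_partitions

print(offsets_partition("policy-pap"))  # → 24, matching __consumer_offsets-24 above
```

The same function applied to the two UUID-named groups reproduces their partitions (1 and 30) from the log.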
++ docker-compose down -v --remove-orphans
Stopping policy-apex-pdp ...
Stopping policy-pap ...
Stopping grafana ...
Stopping kafka ...
Stopping policy-api ...
Stopping simulator ...
Stopping mariadb ...
Stopping prometheus ...
Stopping zookeeper ...
Stopping grafana ... done
Stopping prometheus ... done
Stopping policy-apex-pdp ... done
Stopping simulator ... done
Stopping policy-pap ... done
Stopping mariadb ... done
Stopping kafka ... done
Stopping zookeeper ... done
Stopping policy-api ... done
Removing policy-apex-pdp ...
Removing policy-pap ...
Removing grafana ...
Removing kafka ...
Removing policy-api ...
Removing policy-db-migrator ...
Removing simulator ...
Removing mariadb ...
Removing prometheus ...
Removing zookeeper ...
Removing policy-db-migrator ... done
Removing policy-apex-pdp ... done
Removing mariadb ... done
Removing policy-api ... done
Removing grafana ... done
Removing policy-pap ... done
Removing simulator ... done
Removing kafka ... done
Removing prometheus ... done
Removing zookeeper ... done
Removing network compose_default
++ cd /w/workspace/policy-pap-master-project-csit-pap
+ load_set
+ _setopts=hxB
++ echo braceexpand:hashall:interactive-comments:xtrace
++ tr : ' '
+ for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
+ set +o braceexpand
+ for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
+ set +o hashall
+ for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
+ set +o interactive-comments
+ for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
+ set +o xtrace
++ echo hxB
++ sed 's/./& /g'
+ for i in $(echo "$_setopts" | sed 's/./& /g')
+ set +h
+ for i in $(echo "$_setopts" | sed 's/./& /g')
+ set +x
+ rsync /w/workspace/policy-pap-master-project-csit-pap/compose/docker_compose.log /w/workspace/policy-pap-master-project-csit-pap/csit/archives/pap
+ [[ -n /tmp/tmp.IfKGrR3aFZ ]]
+ rsync -av /tmp/tmp.IfKGrR3aFZ/ /w/workspace/policy-pap-master-project-csit-pap/csit/archives/pap
sending incremental file list
./
log.html
output.xml
report.html
testplan.txt
sent 918,707 bytes received 95 bytes 1,837,604.00 bytes/sec
total size is 918,161 speedup is 1.00
+ rm -rf /w/workspace/policy-pap-master-project-csit-pap/models
+ exit 1
Build step 'Execute shell' marked build as failure
$ ssh-agent -k
unset SSH_AUTH_SOCK;
unset SSH_AGENT_PID;
echo Agent pid 2080 killed;
[ssh-agent] Stopped.
Robot results publisher started...
INFO: Checking test criticality is deprecated and will be dropped in a future release!
-Parsing output xml: Done!
WARNING! Could not find file: **/log.html
WARNING! Could not find file: **/report.html
-Copying log files to build dir: Done!
-Assigning results to build: Done!
-Checking thresholds: Done!
Done publishing Robot results.
[PostBuildScript] - [INFO] Executing post build scripts.
[policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins2438396946621969292.sh
---> sysstat.sh
[policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins336890825422044275.sh
---> package-listing.sh
++ facter osfamily
++ tr '[:upper:]' '[:lower:]'
+ OS_FAMILY=debian
+ workspace=/w/workspace/policy-pap-master-project-csit-pap
+ START_PACKAGES=/tmp/packages_start.txt
+ END_PACKAGES=/tmp/packages_end.txt
+ DIFF_PACKAGES=/tmp/packages_diff.txt
+ PACKAGES=/tmp/packages_start.txt
+ '[' /w/workspace/policy-pap-master-project-csit-pap ']'
+ PACKAGES=/tmp/packages_end.txt
+ case "${OS_FAMILY}" in
+ dpkg -l
+ grep '^ii'
+ '[' -f /tmp/packages_start.txt ']'
+ '[' -f /tmp/packages_end.txt ']'
+ diff /tmp/packages_start.txt /tmp/packages_end.txt
+ '[' /w/workspace/policy-pap-master-project-csit-pap ']'
+ mkdir -p /w/workspace/policy-pap-master-project-csit-pap/archives/
+ cp -f /tmp/packages_diff.txt /tmp/packages_end.txt /tmp/packages_start.txt /w/workspace/policy-pap-master-project-csit-pap/archives/
[policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins13023940649407330641.sh
---> capture-instance-metadata.sh
Setup pyenv:
  system
  3.8.13
  3.9.13
* 3.10.6 (set by /w/workspace/policy-pap-master-project-csit-pap/.python-version)
lf-activate-venv(): INFO: Reuse venv:/tmp/venv-saub from file:/tmp/.os_lf_venv
lf-activate-venv(): INFO: Installing: lftools
lf-activate-venv(): INFO: Adding /tmp/venv-saub/bin to PATH
INFO: Running in OpenStack, capturing instance metadata
[policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins10993113376887587068.sh
provisioning config files...
copy managed file [jenkins-log-archives-settings] to file:/w/workspace/policy-pap-master-project-csit-pap@tmp/config16576222324122791650tmp
Regular expression run condition: Expression=[^.*logs-s3.*], Label=[]
Run condition [Regular expression match] preventing perform for step [Provide Configuration files]
[EnvInject] - Injecting environment variables from a build step.
[EnvInject] - Injecting as environment variables the properties content
SERVER_ID=logs
[EnvInject] - Variables injected successfully.
[policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins13937674948363116199.sh
---> create-netrc.sh
[policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins3155689500288050888.sh
---> python-tools-install.sh
Setup pyenv:
  system
  3.8.13
  3.9.13
* 3.10.6 (set by /w/workspace/policy-pap-master-project-csit-pap/.python-version)
lf-activate-venv(): INFO: Reuse venv:/tmp/venv-saub from file:/tmp/.os_lf_venv
lf-activate-venv(): INFO: Installing: lftools
lf-activate-venv(): INFO: Adding /tmp/venv-saub/bin to PATH
[policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins16385538406870033277.sh
---> sudo-logs.sh
Archiving 'sudo' log..
[policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins16264651673967317711.sh
---> job-cost.sh
Setup pyenv:
  system
  3.8.13
  3.9.13
* 3.10.6 (set by /w/workspace/policy-pap-master-project-csit-pap/.python-version)
lf-activate-venv(): INFO: Reuse venv:/tmp/venv-saub from file:/tmp/.os_lf_venv
lf-activate-venv(): INFO: Installing: zipp==1.1.0 python-openstackclient urllib3~=1.26.15
lf-activate-venv(): INFO: Adding /tmp/venv-saub/bin to PATH
INFO: No Stack...
INFO: Retrieving Pricing Info for: v3-standard-8
INFO: Archiving Costs
[policy-pap-master-project-csit-pap] $ /bin/bash -l /tmp/jenkins8743828596054953600.sh
---> logs-deploy.sh
Setup pyenv:
  system
  3.8.13
  3.9.13
* 3.10.6 (set by /w/workspace/policy-pap-master-project-csit-pap/.python-version)
lf-activate-venv(): INFO: Reuse venv:/tmp/venv-saub from file:/tmp/.os_lf_venv
lf-activate-venv(): INFO: Installing: lftools
lf-activate-venv(): INFO: Adding /tmp/venv-saub/bin to PATH
INFO: Nexus URL https://nexus.onap.org path production/vex-yul-ecomp-jenkins-1/policy-pap-master-project-csit-pap/1662
INFO: archiving workspace using pattern(s): -p **/target/surefire-reports/*-output.txt
Archives upload complete.
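The package-listing.sh trace above snapshots `dpkg -l` into /tmp/packages_start.txt and /tmp/packages_end.txt and diffs them so the archives record what the job installed. The same comparison can be sketched in Python with difflib — the package rows here are invented for illustration; the real files hold dpkg output:

```python
import difflib

# Illustrative stand-ins for packages_start.txt / packages_end.txt contents.
start = ["ii  curl  7.58.0\n", "ii  git   2.17.1\n"]
end = ["ii  curl  7.58.0\n", "ii  docker  24.0\n", "ii  git   2.17.1\n"]

# unified_diff marks lines present only in `end` with a leading '+',
# analogous to the `diff start end > packages_diff.txt` step in the job.
diff = list(difflib.unified_diff(start, end,
                                 fromfile="/tmp/packages_start.txt",
                                 tofile="/tmp/packages_end.txt"))
print("".join(diff))
```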
INFO: archiving logs to Nexus

---> uname -a:
Linux prd-ubuntu1804-docker-8c-8g-26122 4.15.0-192-generic #203-Ubuntu SMP Wed Aug 10 17:40:03 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux

---> lscpu:
Architecture:        x86_64
CPU op-mode(s):      32-bit, 64-bit
Byte Order:          Little Endian
CPU(s):              8
On-line CPU(s) list: 0-7
Thread(s) per core:  1
Core(s) per socket:  1
Socket(s):           8
NUMA node(s):        1
Vendor ID:           AuthenticAMD
CPU family:          23
Model:               49
Model name:          AMD EPYC-Rome Processor
Stepping:            0
CPU MHz:             2799.998
BogoMIPS:            5599.99
Virtualization:      AMD-V
Hypervisor vendor:   KVM
Virtualization type: full
L1d cache:           32K
L1i cache:           32K
L2 cache:            512K
L3 cache:            16384K
NUMA node0 CPU(s):   0-7
Flags:               fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm rep_good nopl xtopology cpuid extd_apicid tsc_known_freq pni pclmulqdq ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm cmp_legacy svm cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw topoext perfctr_core ssbd ibrs ibpb stibp vmmcall fsgsbase tsc_adjust bmi1 avx2 smep bmi2 rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves clzero xsaveerptr arat npt nrip_save umip rdpid arch_capabilities

---> nproc:
8

---> df -h:
Filesystem      Size  Used Avail Use% Mounted on
udev             16G     0   16G   0% /dev
tmpfs           3.2G  708K  3.2G   1% /run
/dev/vda1       155G   14G  142G   9% /
tmpfs            16G     0   16G   0% /dev/shm
tmpfs           5.0M     0  5.0M   0% /run/lock
tmpfs            16G     0   16G   0% /sys/fs/cgroup
/dev/vda15      105M  4.4M  100M   5% /boot/efi
tmpfs           3.2G     0  3.2G   0% /run/user/1001

---> free -m:
              total        used        free      shared  buff/cache   available
Mem:          32167         886       25335           0        5944       30824
Swap:          1023           0        1023

---> ip addr:
1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2:
ens3: mtu 1458 qdisc mq state UP group default qlen 1000
    link/ether fa:16:3e:5e:a0:f1 brd ff:ff:ff:ff:ff:ff
    inet 10.30.107.33/23 brd 10.30.107.255 scope global dynamic ens3
       valid_lft 85771sec preferred_lft 85771sec
    inet6 fe80::f816:3eff:fe5e:a0f1/64 scope link
       valid_lft forever preferred_lft forever
3: docker0: mtu 1500 qdisc noqueue state DOWN group default
    link/ether 02:42:66:91:ec:39 brd ff:ff:ff:ff:ff:ff
    inet 10.250.0.254/24 brd 10.250.0.255 scope global docker0
       valid_lft forever preferred_lft forever

---> sar -b -r -n DEV:
Linux 4.15.0-192-generic (prd-ubuntu1804-docker-8c-8g-26122)  04/25/24  _x86_64_  (8 CPU)

12:32:53     LINUX RESTART  (8 CPU)

12:33:01          tps      rtps      wtps   bread/s   bwrtn/s
12:34:03       116.20     70.05     46.14   5272.19  48403.13
12:35:01        84.19     18.17     66.02   1055.13  20018.20
12:36:01        86.84     14.08     72.76   1127.97  21832.42
12:37:01        76.68     10.00     66.68   1720.75  19406.09
12:38:01        82.21      0.05     82.16      5.60  45877.19
12:39:01       121.85      0.07    121.78      2.80  85055.82
12:40:01       319.78     11.56    308.22    761.04  33597.30
12:41:01        22.16      0.27     21.90     12.53  13334.96
12:42:01        11.26      0.02     11.25      2.93  13508.72
12:43:01        67.76      1.22     66.54    103.32  16506.55
Average:        98.94     12.53     86.41   1006.33  31791.76

12:33:01    kbmemfree   kbavail kbmemused  %memused kbbuffers  kbcached  kbcommit   %commit  kbactive   kbinact   kbdirty
12:34:03     30387324  31720188   2551896      7.75     45236   1611704   1444140      4.25    813924   1472748     47432
12:35:01     30136624  31710780   2802596      8.51     68908   1816184   1406212      4.14    858012   1652900    158200
12:36:01     29839608  31666088   3099612      9.41     82780   2042332   1494828      4.40    924852   1856908    140556
12:37:01     28938772  31646064   4000448     12.14     99356   2880888   1378328      4.06   1011048   2618020    769812
12:38:01     27212340  31638208   5726880     17.39    128524   4481992   1434916      4.22   1041392   4217196   1313300
12:39:01     26019896  31631780   6919324     21.01    139440   5608020   1504384      4.43   1058004   5342040    349144
12:40:01     23965816  29736608   8973404     27.24    154980   5733656   8498204     25.00   3115048   5264520       588
12:41:01     23803928  29580212   9135292     27.73    156164   5735880   8822544     25.96   3285380   5249112       308
12:42:01     23811812  29589616   9127408     27.71    156324   5737096   8852616     26.05   3276316   5249308       892
12:43:01     25973132  31592536   6966088     21.15    158000   5595660   1524084      4.48   1322696   5107484     29352
Average:     27008925  31051208   5930295     18.00    118971   4124341   3636026     10.70   1670667   3803024    280958

12:33:01        IFACE   rxpck/s   txpck/s    rxkB/s    txkB/s   rxcmp/s   txcmp/s  rxmcst/s   %ifutil
12:34:03         ens3    327.23    227.21    877.41     57.26      0.00      0.00      0.00      0.00
12:34:03           lo      1.07      1.07      0.10      0.10      0.00      0.00      0.00      0.00
12:34:03      docker0      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
12:35:01         ens3     46.68     32.65    743.14      7.03      0.00      0.00      0.00      0.00
12:35:01           lo      1.65      1.65      0.18      0.18      0.00      0.00      0.00      0.00
12:35:01      docker0      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
12:36:01         ens3     41.16     27.75    573.28      8.85      0.00      0.00      0.00      0.00
12:36:01           lo      0.53      0.53      0.06      0.06      0.00      0.00      0.00      0.00
12:36:01      docker0      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
12:37:01    br-2592e41f6506      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
12:37:01         ens3    147.11     96.85   3907.67     14.62      0.00      0.00      0.00      0.00
12:37:01           lo      5.53      5.53      0.52      0.52      0.00      0.00      0.00      0.00
12:37:01      docker0      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
12:38:01    br-2592e41f6506      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
12:38:01         ens3    641.18    269.47  13330.08     21.44      0.00      0.00      0.00      0.00
12:38:01           lo      3.33      3.33      0.35      0.35      0.00      0.00      0.00      0.00
12:38:01      docker0      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
12:39:01    br-2592e41f6506      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
12:39:01         ens3    405.73    186.95  12383.97     13.93      0.00      0.00      0.00      0.00
12:39:01           lo      4.27      4.27      0.40      0.40      0.00      0.00      0.00      0.00
12:39:01      docker0      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
12:40:01    vethd23a50d      1.38      2.00      0.16      0.19      0.00      0.00      0.00      0.00
12:40:01    br-2592e41f6506      0.82      0.65      0.06      0.30      0.00      0.00      0.00      0.00
12:40:01    veth74c81d5      0.92      1.13      0.06      0.06      0.00      0.00      0.00      0.00
12:40:01    veth2aed65e      0.00      0.35      0.00      0.02      0.00      0.00      0.00      0.00
12:41:01    vethd23a50d     36.64     39.96      4.47      4.62      0.00      0.00      0.00      0.00
12:41:01    br-2592e41f6506      2.05      2.43      1.82      1.74      0.00      0.00      0.00      0.00
12:41:01    veth74c81d5     18.80     11.90      2.34      1.60      0.00      0.00      0.00      0.00
12:41:01    veth2aed65e      0.00      0.03      0.00      0.00      0.00      0.00      0.00      0.00
12:42:01    vethd23a50d      0.15      0.33      0.01      0.02      0.00      0.00      0.00      0.00
12:42:01    br-2592e41f6506      1.38      1.60      0.11      0.15      0.00      0.00      0.00      0.00
12:42:01    veth74c81d5      3.18      4.67      0.66      0.36      0.00      0.00      0.00      0.00
12:42:01    veth2aed65e      0.00      0.03      0.00      0.00      0.00      0.00      0.00      0.00
12:43:01         ens3   1710.46    916.23  31885.58    163.82      0.00      0.00      0.00      0.00
12:43:01           lo     35.59     35.59      6.27      6.27      0.00      0.00      0.00      0.00
12:43:01      docker0      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
Average:         ens3    171.02     91.41   3198.06     16.38      0.00      0.00      0.00      0.00
Average:           lo      3.27      3.27      0.60      0.60      0.00      0.00      0.00      0.00
Average:      docker0      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00

---> sar -P ALL:
Linux 4.15.0-192-generic (prd-ubuntu1804-docker-8c-8g-26122)  04/25/24  _x86_64_  (8 CPU)

12:32:53     LINUX RESTART  (8 CPU)

12:33:01        CPU     %user     %nice   %system   %iowait    %steal     %idle
12:34:03        all      5.31      0.00      0.98      9.90      0.04     83.77
12:34:03          0      6.37      0.00      0.62      2.87      0.03     90.11
12:34:03          1      2.74      0.00      0.53      0.22      0.02     96.50
12:34:03          2      4.11      0.00      0.87      0.68      0.02     94.32
12:34:03          3      7.28      0.00      1.22      1.30      0.03     90.16
12:34:03          4      3.77      0.00      2.04     29.93      0.05     64.21
12:34:03          5      5.44      0.00      0.57     42.29      0.05     51.65
12:34:03          6      8.81      0.00      1.05      1.10      0.03     89.00
12:34:03          7      3.98      0.00      0.94      0.85      0.03     94.20
12:35:01        all      8.26      0.00      0.64      4.33      0.03     86.75
12:35:01          0      6.06      0.00      0.60      0.02      0.00     93.32
12:35:01          1      3.00      0.00      0.29      0.02      0.02     96.67
12:35:01          2      0.59      0.00      0.22      0.41      0.00     98.78
12:35:01          3      0.93      0.00      0.38      0.55      0.02     98.12
12:35:01          4      2.26      0.00      0.26      0.12      0.05     97.30
12:35:01          5     10.51      0.00      0.48     28.53      0.03     60.45
12:35:01          6     13.06      0.00      0.64      2.90      0.03     83.37
12:35:01          7     29.64      0.00      2.26      2.12      0.05     65.92
12:36:01        all      6.62      0.00      0.41      6.75      0.03     86.19
12:36:01          0      0.05      0.00      0.03      0.02      0.00     99.90
12:36:01          1      5.96      0.00      0.37      0.05      0.02     93.60
12:36:01          2     22.63      0.00      0.68      4.96      0.03     71.69
12:36:01          3      6.27      0.00      0.47      3.35      0.02     89.90
12:36:01          4      3.74      0.00      0.32      8.06      0.08     87.80
12:36:01          5      3.62      0.00      0.40      7.02      0.02     88.94
12:36:01          6      9.73      0.00      0.85     30.15      0.02     59.25
12:36:01          7      0.97      0.00      0.13      0.47      0.00     98.43
12:37:01        all      5.60      0.00      1.56      8.37      0.03     84.43
12:37:01          0      2.91      0.00      1.20      0.02      0.02     95.86
12:37:01          1      6.28      0.00      1.93      0.07      0.03     91.69
12:37:01          2      5.48      0.00      1.42      1.72      0.03     91.35
12:37:01          3      7.85      0.00      1.36     18.09      0.03     72.68
12:37:01          4      7.52      0.00      1.96     39.26      0.05     51.21
12:37:01          5      5.33      0.00      1.37      0.08      0.03     93.18
12:37:01          6      5.48      0.00      1.21      7.45      0.03     85.83
12:37:01          7      3.94      0.00      2.04      0.40      0.03     93.58
12:38:01        all      5.85      0.00      2.50     12.41      0.04     79.21
12:38:01          0      4.33      0.00      3.37      0.00      0.03     92.27
12:38:01          1      5.65      0.00      1.84      0.02      0.03     92.45
12:38:01          2      5.54      0.00      2.53      2.01      0.03     89.88
12:38:01          3      5.91      0.00      2.49     22.55      0.03     69.02
12:38:01          4      5.56      0.00      2.14     45.47      0.05     46.78
12:38:01          5      6.38      0.00      1.89      2.39      0.02     89.32
12:38:01          6      6.78      0.00      3.49     24.79      0.05     64.88
12:38:01          7      6.67      0.00      2.21      2.25      0.03     88.83
12:39:01        all      4.99      0.00      2.16     10.39      0.04     82.42
12:39:01          0      3.79      0.00      1.96      0.34      0.02     93.90
12:39:01          1      4.85      0.00      1.59      0.07      0.05     93.44
12:39:01          2      6.66      0.00      1.96      0.97      0.03     90.37
12:39:01          3      5.31      0.00      2.00      0.55      0.05     92.09
12:39:01          4      5.32      0.00      2.04      6.81      0.03     85.80
12:39:01          5      6.00      0.00      2.07      4.91      0.05     86.97
12:39:01          6      2.98      0.00      2.68     47.40      0.05     46.89
12:39:01          7      5.04      0.00      2.96     22.21      0.03     69.75
12:40:01        all     23.55      0.00      3.12      8.69      0.08     64.55
12:40:01          0     15.52      0.00      2.75      8.10      0.08     73.55
12:40:01          1     23.64      0.00      2.78      1.49      0.07     72.02
12:40:01          2     22.77      0.00      3.59      3.43      0.08     70.13
12:40:01          3     22.88      0.00      2.80      3.08      0.08     71.16
12:40:01          4     26.97      0.00      3.33     33.17      0.10     36.43
12:40:01          5     23.88      0.00      3.01     14.49      0.07     58.56
12:40:01          6     25.09      0.00      3.32      3.63      0.08     67.88
12:40:01          7     27.66      0.00      3.40      2.23      0.08     66.63
12:41:01        all     10.01      0.00      1.02      6.35      0.06     82.57
12:41:01          0     12.80      0.00      1.45      2.79      0.07     82.89
12:41:01          1      7.94      0.00      0.89     21.27      0.07     69.83
12:41:01          2      8.84      0.00      0.99      1.92      0.07     88.19
12:41:01          3      9.91      0.00      0.97      3.24      0.05     85.83
12:41:01          4     10.39      0.00      0.89      3.39      0.08     85.25
12:41:01          5     10.42      0.00      1.02      1.79      0.05     86.72
12:41:01          6      9.89      0.00      1.07     15.04      0.05     73.96
12:41:01          7      9.85      0.00      0.89      1.47      0.05     87.74
12:42:01        all      0.77      0.00      0.19      2.12      0.03     96.88
12:42:01          0      0.68      0.00      0.25      0.32      0.05     98.70
12:42:01          1      1.17      0.00      0.20     16.19      0.02     82.42
12:42:01          2      0.70      0.00      0.15      0.13      0.02     99.00
12:42:01          3      0.55      0.00      0.18      0.00      0.03     99.23
12:42:01          4      1.07      0.00      0.17      0.20      0.05     98.51
12:42:01          5      0.85      0.00      0.20      0.13      0.03     98.78
12:42:01          6      0.32      0.00      0.13      0.00      0.02     99.53
12:42:01          7      0.82      0.00      0.25      0.00      0.03     98.90
12:43:01        all      5.62      0.00      0.69      2.72      0.04     90.93
12:43:01          0      1.22      0.00      0.65      1.82      0.02     96.29
12:43:01          1      2.82      0.00      0.62     14.02      0.03     82.50
12:43:01          2      2.60      0.00      0.72      0.18      0.03     96.46
12:43:01          3      1.31      0.00      0.52      2.01      0.03     96.12
12:43:01          4     15.27      0.00      0.80      1.44      0.03     82.46
12:43:01          5     16.21      0.00      0.95      1.00      0.07     81.77
12:43:01          6      1.45      0.00      0.57      0.18      0.03     97.76
12:43:01          7      4.03      0.00      0.67      1.10      0.03     94.16
Average:        all      7.65      0.00      1.33      7.21      0.04     83.77
Average:          0      5.37      0.00      1.29      1.63      0.03     91.68
Average:          1      6.41      0.00      1.11      5.35      0.04     87.09
Average:          2      8.01      0.00      1.31      1.65      0.04     88.99
Average:          3      6.84      0.00      1.24      5.48      0.04     86.40
Average:          4      8.20      0.00      1.40     16.79      0.06     73.55
Average:          5      8.85      0.00      1.20     10.20      0.04     79.70
Average:          6      8.34      0.00      1.50     13.27      0.04     76.85
Average:          7      9.19      0.00      1.57      3.30      0.04     85.90