Started by upstream project "policy-pap-master-merge-java" build number 353 originally caused by: Triggered by Gerrit: https://gerrit.onap.org/r/c/policy/pap/+/137784 Running as SYSTEM [EnvInject] - Loading node environment variables. Building remotely on prd-ubuntu1804-docker-8c-8g-35298 (ubuntu1804-docker-8c-8g) in workspace /w/workspace/policy-pap-master-project-csit-pap [ssh-agent] Looking for ssh-agent implementation... [ssh-agent] Exec ssh-agent (binary ssh-agent on a remote machine) $ ssh-agent SSH_AUTH_SOCK=/tmp/ssh-E7wcoAdLKSJh/agent.2108 SSH_AGENT_PID=2109 [ssh-agent] Started. Running ssh-add (command line suppressed) Identity added: /w/workspace/policy-pap-master-project-csit-pap@tmp/private_key_2548396389913147912.key (/w/workspace/policy-pap-master-project-csit-pap@tmp/private_key_2548396389913147912.key) [ssh-agent] Using credentials onap-jobbuiler (Gerrit user) The recommended git tool is: NONE using credential onap-jenkins-ssh Wiping out workspace first. Cloning the remote Git repository Cloning repository git://cloud.onap.org/mirror/policy/docker.git > git init /w/workspace/policy-pap-master-project-csit-pap # timeout=10 Fetching upstream changes from git://cloud.onap.org/mirror/policy/docker.git > git --version # timeout=10 > git --version # 'git version 2.17.1' using GIT_SSH to set credentials Gerrit user Verifying host key using manually-configured host key entries > git fetch --tags --progress -- git://cloud.onap.org/mirror/policy/docker.git +refs/heads/*:refs/remotes/origin/* # timeout=30 > git config remote.origin.url git://cloud.onap.org/mirror/policy/docker.git # timeout=10 > git config --add remote.origin.fetch +refs/heads/*:refs/remotes/origin/* # timeout=10 Avoid second fetch > git rev-parse refs/remotes/origin/master^{commit} # timeout=10 Checking out Revision 0d7c8284756c9a15d526c2d282cfc1dfd1595ffb (refs/remotes/origin/master) > git config core.sparsecheckout # timeout=10 > git checkout -f 0d7c8284756c9a15d526c2d282cfc1dfd1595ffb # 
timeout=30 Commit message: "Update snapshot and/or references of policy/docker to latest snapshots" > git rev-list --no-walk 0d7c8284756c9a15d526c2d282cfc1dfd1595ffb # timeout=10 provisioning config files... copy managed file [npmrc] to file:/home/jenkins/.npmrc copy managed file [pipconf] to file:/home/jenkins/.config/pip/pip.conf [policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins6895481086492396429.sh ---> python-tools-install.sh Setup pyenv: * system (set by /opt/pyenv/version) * 3.8.13 (set by /opt/pyenv/version) * 3.9.13 (set by /opt/pyenv/version) * 3.10.6 (set by /opt/pyenv/version) lf-activate-venv(): INFO: Creating python3 venv at /tmp/venv-yR5Q lf-activate-venv(): INFO: Save venv in file: /tmp/.os_lf_venv lf-activate-venv(): INFO: Installing: lftools lf-activate-venv(): INFO: Adding /tmp/venv-yR5Q/bin to PATH Generating Requirements File Python 3.10.6 pip 24.0 from /tmp/venv-yR5Q/lib/python3.10/site-packages/pip (python 3.10) appdirs==1.4.4 argcomplete==3.3.0 aspy.yaml==1.3.0 attrs==23.2.0 autopage==0.5.2 beautifulsoup4==4.12.3 boto3==1.34.92 botocore==1.34.92 bs4==0.0.2 cachetools==5.3.3 certifi==2024.2.2 cffi==1.16.0 cfgv==3.4.0 chardet==5.2.0 charset-normalizer==3.3.2 click==8.1.7 cliff==4.6.0 cmd2==2.4.3 cryptography==3.3.2 debtcollector==3.0.0 decorator==5.1.1 defusedxml==0.7.1 Deprecated==1.2.14 distlib==0.3.8 dnspython==2.6.1 docker==4.2.2 dogpile.cache==1.3.2 email_validator==2.1.1 filelock==3.13.4 future==1.0.0 gitdb==4.0.11 GitPython==3.1.43 google-auth==2.29.0 httplib2==0.22.0 identify==2.5.36 idna==3.7 importlib-resources==1.5.0 iso8601==2.1.0 Jinja2==3.1.3 jmespath==1.0.1 jsonpatch==1.33 jsonpointer==2.4 jsonschema==4.21.1 jsonschema-specifications==2023.12.1 keystoneauth1==5.6.0 kubernetes==29.0.0 lftools==0.37.10 lxml==5.2.1 MarkupSafe==2.1.5 msgpack==1.0.8 multi_key_dict==2.0.3 munch==4.0.0 netaddr==1.2.1 netifaces==0.11.0 niet==1.4.2 nodeenv==1.8.0 oauth2client==4.1.3 oauthlib==3.2.2 openstacksdk==3.1.0 os-client-config==2.1.0 
os-service-types==1.7.0 osc-lib==3.0.1 oslo.config==9.4.0 oslo.context==5.5.0 oslo.i18n==6.3.0 oslo.log==5.5.1 oslo.serialization==5.4.0 oslo.utils==7.1.0 packaging==24.0 pbr==6.0.0 platformdirs==4.2.1 prettytable==3.10.0 pyasn1==0.6.0 pyasn1_modules==0.4.0 pycparser==2.22 pygerrit2==2.0.15 PyGithub==2.3.0 pyinotify==0.9.6 PyJWT==2.8.0 PyNaCl==1.5.0 pyparsing==2.4.7 pyperclip==1.8.2 pyrsistent==0.20.0 python-cinderclient==9.5.0 python-dateutil==2.9.0.post0 python-heatclient==3.5.0 python-jenkins==1.8.2 python-keystoneclient==5.4.0 python-magnumclient==4.4.0 python-novaclient==18.6.0 python-openstackclient==6.6.0 python-swiftclient==4.5.0 PyYAML==6.0.1 referencing==0.35.0 requests==2.31.0 requests-oauthlib==2.0.0 requestsexceptions==1.4.0 rfc3986==2.0.0 rpds-py==0.18.0 rsa==4.9 ruamel.yaml==0.18.6 ruamel.yaml.clib==0.2.8 s3transfer==0.10.1 simplejson==3.19.2 six==1.16.0 smmap==5.0.1 soupsieve==2.5 stevedore==5.2.0 tabulate==0.9.0 toml==0.10.2 tomlkit==0.12.4 tqdm==4.66.2 typing_extensions==4.11.0 tzdata==2024.1 urllib3==1.26.18 virtualenv==20.26.0 wcwidth==0.2.13 websocket-client==1.8.0 wrapt==1.16.0 xdg==6.0.0 xmltodict==0.13.0 yq==3.4.1 [EnvInject] - Injecting environment variables from a build step. [EnvInject] - Injecting as environment variables the properties content SET_JDK_VERSION=openjdk17 GIT_URL="git://cloud.onap.org/mirror" [EnvInject] - Variables injected successfully. 
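The python-tools-install step above (`lf-activate-venv`) creates a throwaway virtualenv under /tmp, installs lftools into it, and prepends its bin/ directory to PATH. A minimal offline sketch of that pattern follows; the venv path is illustrative and `--without-pip` is an assumption made here to keep the sketch network-free (the real venv carries pip so lftools can be installed):

```shell
# Hedged sketch of the lf-activate-venv pattern seen above: disposable venv
# plus PATH prepend. NOT the actual lf-activate-venv implementation.
venv_dir=$(mktemp -d /tmp/venv-XXXXXX)
python3 -m venv --without-pip --clear "$venv_dir"
export PATH="$venv_dir/bin:$PATH"   # mirrors "Adding /tmp/venv-.../bin to PATH"
python -c 'import sys; print(sys.prefix)'   # now resolves inside the venv
```

In the real job, `python3 -m pip install lftools` would follow, and the venv path is saved to /tmp/.os_lf_venv for reuse by later build steps.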
[policy-pap-master-project-csit-pap] $ /bin/sh /tmp/jenkins4512169905068633175.sh ---> update-java-alternatives.sh ---> Updating Java version ---> Ubuntu/Debian system detected update-alternatives: using /usr/lib/jvm/java-17-openjdk-amd64/bin/java to provide /usr/bin/java (java) in manual mode update-alternatives: using /usr/lib/jvm/java-17-openjdk-amd64/bin/javac to provide /usr/bin/javac (javac) in manual mode update-alternatives: using /usr/lib/jvm/java-17-openjdk-amd64 to provide /usr/lib/jvm/java-openjdk (java_sdk_openjdk) in manual mode openjdk version "17.0.4" 2022-07-19 OpenJDK Runtime Environment (build 17.0.4+8-Ubuntu-118.04) OpenJDK 64-Bit Server VM (build 17.0.4+8-Ubuntu-118.04, mixed mode, sharing) JAVA_HOME=/usr/lib/jvm/java-17-openjdk-amd64 [EnvInject] - Injecting environment variables from a build step. [EnvInject] - Injecting as environment variables the properties file path '/tmp/java.env' [EnvInject] - Variables injected successfully. [policy-pap-master-project-csit-pap] $ /bin/sh -xe /tmp/jenkins6625909559559670169.sh + /w/workspace/policy-pap-master-project-csit-pap/csit/run-project-csit.sh pap + set +u + save_set + RUN_CSIT_SAVE_SET=ehxB + RUN_CSIT_SHELLOPTS=braceexpand:errexit:hashall:interactive-comments:pipefail:xtrace + '[' 1 -eq 0 ']' + '[' -z /w/workspace/policy-pap-master-project-csit-pap ']' + export PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin + export SCRIPTS=/w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts + SCRIPTS=/w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts + export ROBOT_VARIABLES= + 
ROBOT_VARIABLES= + export PROJECT=pap + PROJECT=pap + cd /w/workspace/policy-pap-master-project-csit-pap + rm -rf /w/workspace/policy-pap-master-project-csit-pap/csit/archives/pap + mkdir -p /w/workspace/policy-pap-master-project-csit-pap/csit/archives/pap + source_safely /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/prepare-robot-env.sh + '[' -z /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/prepare-robot-env.sh ']' + relax_set + set +e + set +o pipefail + . /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/prepare-robot-env.sh ++ '[' -z /w/workspace/policy-pap-master-project-csit-pap ']' +++ mktemp -d ++ ROBOT_VENV=/tmp/tmp.y8x6NHPmxk ++ echo ROBOT_VENV=/tmp/tmp.y8x6NHPmxk +++ python3 --version ++ echo 'Python version is: Python 3.6.9' Python version is: Python 3.6.9 ++ python3 -m venv --clear /tmp/tmp.y8x6NHPmxk ++ source /tmp/tmp.y8x6NHPmxk/bin/activate +++ deactivate nondestructive +++ '[' -n '' ']' +++ '[' -n '' ']' +++ '[' -n /bin/bash -o -n '' ']' +++ hash -r +++ '[' -n '' ']' +++ unset VIRTUAL_ENV +++ '[' '!' 
nondestructive = nondestructive ']' +++ VIRTUAL_ENV=/tmp/tmp.y8x6NHPmxk +++ export VIRTUAL_ENV +++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin +++ PATH=/tmp/tmp.y8x6NHPmxk/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin +++ export PATH +++ '[' -n '' ']' +++ '[' -z '' ']' +++ _OLD_VIRTUAL_PS1= +++ '[' 'x(tmp.y8x6NHPmxk) ' '!=' x ']' +++ PS1='(tmp.y8x6NHPmxk) ' +++ export PS1 +++ '[' -n /bin/bash -o -n '' ']' +++ hash -r ++ set -exu ++ python3 -m pip install -qq --upgrade 'pip<=23.0' 'setuptools<=66.1.1' ++ echo 'Installing Python Requirements' Installing Python Requirements ++ python3 -m pip install -qq -r /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/pylibs.txt ++ python3 -m pip -qq freeze bcrypt==4.0.1 beautifulsoup4==4.12.3 bitarray==2.9.2 certifi==2024.2.2 cffi==1.15.1 charset-normalizer==2.0.12 cryptography==40.0.2 decorator==5.1.1 elasticsearch==7.17.9 elasticsearch-dsl==7.4.1 enum34==1.1.10 idna==3.7 importlib-resources==5.4.0 ipaddr==2.2.0 isodate==0.6.1 jmespath==0.10.0 jsonpatch==1.32 jsonpath-rw==1.4.0 jsonpointer==2.3 lxml==5.2.1 netaddr==0.8.0 netifaces==0.11.0 odltools==0.1.28 paramiko==3.4.0 pkg_resources==0.0.0 ply==3.11 pyang==2.6.0 pyangbind==0.8.1 pycparser==2.21 pyhocon==0.3.60 PyNaCl==1.5.0 pyparsing==3.1.2 python-dateutil==2.9.0.post0 regex==2023.8.8 requests==2.27.1 robotframework==6.1.1 robotframework-httplibrary==0.4.2 robotframework-pythonlibcore==3.0.0 robotframework-requests==0.9.4 robotframework-selenium2library==3.0.0 robotframework-seleniumlibrary==5.1.3 robotframework-sshlibrary==3.8.0 scapy==2.5.0 scp==0.14.5 selenium==3.141.0 six==1.16.0 
soupsieve==2.3.2.post1 urllib3==1.26.18 waitress==2.0.0 WebOb==1.8.7 WebTest==3.0.0 zipp==3.6.0 ++ mkdir -p /tmp/tmp.y8x6NHPmxk/src/onap ++ rm -rf /tmp/tmp.y8x6NHPmxk/src/onap/testsuite ++ python3 -m pip install -qq --upgrade --extra-index-url=https://nexus3.onap.org/repository/PyPi.staging/simple 'robotframework-onap==0.6.0.*' --pre ++ echo 'Installing python confluent-kafka library' Installing python confluent-kafka library ++ python3 -m pip install -qq confluent-kafka ++ echo 'Uninstall docker-py and reinstall docker.' Uninstall docker-py and reinstall docker. ++ python3 -m pip uninstall -y -qq docker ++ python3 -m pip install -U -qq docker ++ python3 -m pip -qq freeze bcrypt==4.0.1 beautifulsoup4==4.12.3 bitarray==2.9.2 certifi==2024.2.2 cffi==1.15.1 charset-normalizer==2.0.12 confluent-kafka==2.3.0 cryptography==40.0.2 decorator==5.1.1 deepdiff==5.7.0 dnspython==2.2.1 docker==5.0.3 elasticsearch==7.17.9 elasticsearch-dsl==7.4.1 enum34==1.1.10 future==1.0.0 idna==3.7 importlib-resources==5.4.0 ipaddr==2.2.0 isodate==0.6.1 Jinja2==3.0.3 jmespath==0.10.0 jsonpatch==1.32 jsonpath-rw==1.4.0 jsonpointer==2.3 kafka-python==2.0.2 lxml==5.2.1 MarkupSafe==2.0.1 more-itertools==5.0.0 netaddr==0.8.0 netifaces==0.11.0 odltools==0.1.28 ordered-set==4.0.2 paramiko==3.4.0 pbr==6.0.0 pkg_resources==0.0.0 ply==3.11 protobuf==3.19.6 pyang==2.6.0 pyangbind==0.8.1 pycparser==2.21 pyhocon==0.3.60 PyNaCl==1.5.0 pyparsing==3.1.2 python-dateutil==2.9.0.post0 PyYAML==6.0.1 regex==2023.8.8 requests==2.27.1 robotframework==6.1.1 robotframework-httplibrary==0.4.2 robotframework-onap==0.6.0.dev105 robotframework-pythonlibcore==3.0.0 robotframework-requests==0.9.4 robotframework-selenium2library==3.0.0 robotframework-seleniumlibrary==5.1.3 robotframework-sshlibrary==3.8.0 robotlibcore-temp==1.0.2 scapy==2.5.0 scp==0.14.5 selenium==3.141.0 six==1.16.0 soupsieve==2.3.2.post1 urllib3==1.26.18 waitress==2.0.0 WebOb==1.8.7 websocket-client==1.3.1 WebTest==3.0.0 zipp==3.6.0 ++ uname ++ grep -q 
Linux ++ sudo apt-get -y -qq install libxml2-utils + load_set + _setopts=ehuxB ++ echo braceexpand:hashall:interactive-comments:nounset:xtrace ++ tr : ' ' + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') + set +o braceexpand + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') + set +o hashall + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') + set +o interactive-comments + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') + set +o nounset + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') + set +o xtrace ++ echo ehuxB ++ sed 's/./& /g' + for i in $(echo "$_setopts" | sed 's/./& /g') + set +e + for i in $(echo "$_setopts" | sed 's/./& /g') + set +h + for i in $(echo "$_setopts" | sed 's/./& /g') + set +u + for i in $(echo "$_setopts" | sed 's/./& /g') + set +x + source_safely /tmp/tmp.y8x6NHPmxk/bin/activate + '[' -z /tmp/tmp.y8x6NHPmxk/bin/activate ']' + relax_set + set +e + set +o pipefail + . /tmp/tmp.y8x6NHPmxk/bin/activate ++ deactivate nondestructive ++ '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin ']' ++ PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin ++ export PATH ++ unset _OLD_VIRTUAL_PATH ++ '[' -n '' ']' ++ '[' -n /bin/bash -o -n '' ']' ++ hash -r ++ '[' -n '' ']' ++ unset VIRTUAL_ENV ++ '[' '!' 
nondestructive = nondestructive ']' ++ VIRTUAL_ENV=/tmp/tmp.y8x6NHPmxk ++ export VIRTUAL_ENV ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin ++ PATH=/tmp/tmp.y8x6NHPmxk/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin ++ export PATH ++ '[' -n '' ']' ++ '[' -z '' ']' ++ _OLD_VIRTUAL_PS1='(tmp.y8x6NHPmxk) ' ++ '[' 'x(tmp.y8x6NHPmxk) ' '!=' x ']' ++ PS1='(tmp.y8x6NHPmxk) (tmp.y8x6NHPmxk) ' ++ export PS1 ++ '[' -n /bin/bash -o -n '' ']' ++ hash -r + load_set + _setopts=hxB ++ echo braceexpand:hashall:interactive-comments:xtrace ++ tr : ' ' + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') + set +o braceexpand + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') + set +o hashall + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') + set +o interactive-comments + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') + set +o xtrace ++ echo hxB ++ sed 's/./& /g' + for i in $(echo "$_setopts" | sed 's/./& /g') + set +h + for i in $(echo "$_setopts" | sed 's/./& /g') + set +x + export TEST_PLAN_DIR=/w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests + TEST_PLAN_DIR=/w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests + export TEST_OPTIONS= + TEST_OPTIONS= ++ mktemp -d + WORKDIR=/tmp/tmp.L7L2qXgREO + cd /tmp/tmp.L7L2qXgREO + docker login -u docker -p docker nexus3.onap.org:10001 WARNING! Using --password via the CLI is insecure. Use --password-stdin. WARNING! Your password will be stored unencrypted in /home/jenkins/.docker/config.json. Configure a credential helper to remove this warning. 
See https://docs.docker.com/engine/reference/commandline/login/#credentials-store Login Succeeded + SETUP=/w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/setup-pap.sh + '[' -f /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/setup-pap.sh ']' + echo 'Running setup script /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/setup-pap.sh' Running setup script /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/setup-pap.sh + source_safely /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/setup-pap.sh + '[' -z /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/setup-pap.sh ']' + relax_set + set +e + set +o pipefail + . /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/setup-pap.sh ++ source /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/node-templates.sh +++ '[' -z /w/workspace/policy-pap-master-project-csit-pap ']' ++++ awk -F= '$1 == "defaultbranch" { print $2 }' /w/workspace/policy-pap-master-project-csit-pap/.gitreview +++ GERRIT_BRANCH=master +++ echo GERRIT_BRANCH=master GERRIT_BRANCH=master +++ rm -rf /w/workspace/policy-pap-master-project-csit-pap/models +++ mkdir /w/workspace/policy-pap-master-project-csit-pap/models +++ git clone -b master --single-branch https://github.com/onap/policy-models.git /w/workspace/policy-pap-master-project-csit-pap/models Cloning into '/w/workspace/policy-pap-master-project-csit-pap/models'... 
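The setup script above derives GERRIT_BRANCH by scraping the repository's .gitreview file with awk. The awk expression is taken from the log; the .gitreview contents below are a stand-in reconstructed for illustration:

```shell
# Reproduce the GERRIT_BRANCH extraction with a stand-in .gitreview.
tmp=$(mktemp -d); cd "$tmp"
cat > .gitreview <<'EOF'
[gerrit]
host=gerrit.onap.org
port=29418
project=policy/docker.git
defaultbranch=master
EOF
# -F= splits on '='; print the value when the key is "defaultbranch".
GERRIT_BRANCH=$(awk -F= '$1 == "defaultbranch" { print $2 }' .gitreview)
echo "GERRIT_BRANCH=$GERRIT_BRANCH"
```

This keeps the CSIT job branch-agnostic: the same script clones policy-models on whatever branch the docker repo itself tracks.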
+++ export DATA=/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies +++ DATA=/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies +++ export NODETEMPLATES=/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/nodetemplates +++ NODETEMPLATES=/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/nodetemplates +++ sed -e 's!Measurement_vGMUX!ADifferentValue!' /w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies/vCPE.policy.monitoring.input.tosca.json +++ sed -e 's!"version": "1.0.0"!"version": "2.0.0"!' -e 's!"policy-version": 1!"policy-version": 2!' /w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies/vCPE.policy.monitoring.input.tosca.json ++ source /w/workspace/policy-pap-master-project-csit-pap/compose/start-compose.sh apex-pdp --grafana +++ '[' -z /w/workspace/policy-pap-master-project-csit-pap ']' +++ COMPOSE_FOLDER=/w/workspace/policy-pap-master-project-csit-pap/compose +++ grafana=false +++ gui=false +++ [[ 2 -gt 0 ]] +++ key=apex-pdp +++ case $key in +++ echo apex-pdp apex-pdp +++ component=apex-pdp +++ shift +++ [[ 1 -gt 0 ]] +++ key=--grafana +++ case $key in +++ grafana=true +++ shift +++ [[ 0 -gt 0 ]] +++ cd /w/workspace/policy-pap-master-project-csit-pap/compose +++ echo 'Configuring docker compose...' Configuring docker compose... +++ source export-ports.sh +++ source get-versions.sh +++ '[' -z pap ']' +++ '[' -n apex-pdp ']' +++ '[' apex-pdp == logs ']' +++ '[' true = true ']' +++ echo 'Starting apex-pdp application with Grafana' Starting apex-pdp application with Grafana +++ docker-compose up -d apex-pdp grafana Creating network "compose_default" with the default driver Pulling prometheus (nexus3.onap.org:10001/prom/prometheus:latest)... 
latest: Pulling from prom/prometheus Digest: sha256:4f6c47e39a9064028766e8c95890ed15690c30f00c4ba14e7ce6ae1ded0295b1 Status: Downloaded newer image for nexus3.onap.org:10001/prom/prometheus:latest Pulling grafana (nexus3.onap.org:10001/grafana/grafana:latest)... latest: Pulling from grafana/grafana Digest: sha256:7d5faae481a4c6f436c99e98af11534f7fd5e8d3e35213552dd1dd02bc393d2e Status: Downloaded newer image for nexus3.onap.org:10001/grafana/grafana:latest Pulling mariadb (nexus3.onap.org:10001/mariadb:10.10.2)... 10.10.2: Pulling from mariadb Digest: sha256:bfc25a68e113de43d0d112f5a7126df8e278579c3224e3923359e1c1d8d5ce6e Status: Downloaded newer image for nexus3.onap.org:10001/mariadb:10.10.2 Pulling simulator (nexus3.onap.org:10001/onap/policy-models-simulator:3.1.2-SNAPSHOT)... 3.1.2-SNAPSHOT: Pulling from onap/policy-models-simulator Digest: sha256:8c393534de923b51cd2c2937210a65f4f06f457c0dff40569dd547e5429385c8 Status: Downloaded newer image for nexus3.onap.org:10001/onap/policy-models-simulator:3.1.2-SNAPSHOT Pulling zookeeper (confluentinc/cp-zookeeper:latest)... latest: Pulling from confluentinc/cp-zookeeper Digest: sha256:4dc780642bfc5ec3a2d4901e2ff1f9ddef7f7c5c0b793e1e2911cbfb4e3a3214 Status: Downloaded newer image for confluentinc/cp-zookeeper:latest Pulling kafka (confluentinc/cp-kafka:latest)... latest: Pulling from confluentinc/cp-kafka Digest: sha256:620734d9fc0bb1f9886932e5baf33806074469f40e3fe246a3fdbb59309535fa Status: Downloaded newer image for confluentinc/cp-kafka:latest Pulling policy-db-migrator (nexus3.onap.org:10001/onap/policy-db-migrator:3.1.2-SNAPSHOT)... 3.1.2-SNAPSHOT: Pulling from onap/policy-db-migrator Digest: sha256:6c43c624b12507ad4db9e9629273366fa843a4406dbb129d263c111145911791 Status: Downloaded newer image for nexus3.onap.org:10001/onap/policy-db-migrator:3.1.2-SNAPSHOT Pulling api (nexus3.onap.org:10001/onap/policy-api:3.1.2-SNAPSHOT)... 
3.1.2-SNAPSHOT: Pulling from onap/policy-api Digest: sha256:1dd97a95f6bcae15ec35d9d2c6a96d034d97ff5ce2273cf42b1c2549092a92a2 Status: Downloaded newer image for nexus3.onap.org:10001/onap/policy-api:3.1.2-SNAPSHOT Pulling pap (nexus3.onap.org:10001/onap/policy-pap:3.1.2-SNAPSHOT)... 3.1.2-SNAPSHOT: Pulling from onap/policy-pap Digest: sha256:eb3daea3b81a46c89d44f314f21edba0e1d1b0915fd599185530e673a4f3e30f Status: Downloaded newer image for nexus3.onap.org:10001/onap/policy-pap:3.1.2-SNAPSHOT Pulling apex-pdp (nexus3.onap.org:10001/onap/policy-apex-pdp:3.1.2-SNAPSHOT)... 3.1.2-SNAPSHOT: Pulling from onap/policy-apex-pdp Digest: sha256:2982103b8b97bcecc18fa7674e9a0a7ea287a248a53ed6e0d2081bc012bf4324 Status: Downloaded newer image for nexus3.onap.org:10001/onap/policy-apex-pdp:3.1.2-SNAPSHOT Creating zookeeper ... Creating prometheus ... Creating simulator ... Creating mariadb ... Creating zookeeper ... done Creating kafka ... Creating prometheus ... done Creating grafana ... Creating grafana ... done Creating kafka ... done Creating mariadb ... done Creating policy-db-migrator ... Creating simulator ... done Creating policy-db-migrator ... done Creating policy-api ... Creating policy-api ... done Creating policy-pap ... Creating policy-pap ... done Creating policy-apex-pdp ... Creating policy-apex-pdp ... done +++ echo 'Prometheus server: http://localhost:30259' Prometheus server: http://localhost:30259 +++ echo 'Grafana server: http://localhost:30269' Grafana server: http://localhost:30269 +++ cd /w/workspace/policy-pap-master-project-csit-pap ++ sleep 10 ++ unset http_proxy https_proxy ++ bash /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/wait_for_rest.sh localhost 30003 Waiting for REST to come up on localhost port 30003... 
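The wait_for_rest.sh script invoked above is not shown in the log; the loop below is a hypothetical reconstruction of its "wait for REST on localhost port 30003" behaviour using bash's /dev/tcp redirection. The helper name, retry count, and probe mechanism are all assumptions, not the script's actual contents:

```shell
# Hypothetical port-wait loop; NOT the real wait_for_rest.sh implementation.
wait_for_port() {
  local host=$1 port=$2 tries=${3:-30}
  for _ in $(seq "$tries"); do
    # bash treats /dev/tcp/HOST/PORT as a TCP connect; it succeeds once
    # something is listening. The subshell closes the fd on exit.
    if (exec 3<>"/dev/tcp/$host/$port") 2>/dev/null; then
      return 0
    fi
    sleep 1
  done
  return 1
}
# Usage matching the log: wait_for_port localhost 30003 && echo "REST is up"
```

Polling the mapped pap port rather than sleeping a fixed interval lets the job proceed as soon as the container's REST endpoint binds.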
NAMES STATUS policy-apex-pdp Up 10 seconds policy-pap Up 11 seconds policy-api Up 12 seconds policy-db-migrator Up 13 seconds grafana Up 17 seconds kafka Up 16 seconds simulator Up 14 seconds mariadb Up 15 seconds prometheus Up 18 seconds zookeeper Up 19 seconds NAMES STATUS policy-apex-pdp Up 15 seconds policy-pap Up 16 seconds policy-api Up 17 seconds grafana Up 23 seconds kafka Up 21 seconds simulator Up 19 seconds mariadb Up 20 seconds prometheus Up 23 seconds zookeeper Up 24 seconds NAMES STATUS policy-apex-pdp Up 20 seconds policy-pap Up 21 seconds policy-api Up 22 seconds grafana Up 28 seconds kafka Up 26 seconds simulator Up 24 seconds mariadb Up 25 seconds prometheus Up 28 seconds zookeeper Up 29 seconds NAMES STATUS policy-apex-pdp Up 25 seconds policy-pap Up 26 seconds policy-api Up 27 seconds grafana Up 33 seconds kafka Up 31 seconds simulator Up 29 seconds mariadb Up 30 seconds prometheus Up 33 seconds zookeeper Up 34 seconds NAMES STATUS policy-apex-pdp Up 30 seconds policy-pap Up 31 seconds policy-api Up 32 seconds grafana Up 38 seconds kafka Up 37 seconds simulator Up 34 seconds mariadb Up 35 seconds prometheus Up 39 seconds zookeeper Up 40 seconds NAMES STATUS policy-apex-pdp Up 35 seconds policy-pap Up 36 seconds policy-api Up 37 seconds grafana Up 43 seconds kafka Up 42 seconds simulator Up 39 seconds mariadb Up 40 seconds prometheus Up 44 seconds zookeeper Up 45 seconds ++ export 'SUITES=pap-test.robot pap-slas.robot' ++ SUITES='pap-test.robot pap-slas.robot' ++ ROBOT_VARIABLES='-v POLICY_PAP_IP:localhost:30003 -v POLICY_API_IP:localhost:30002 -v PROMETHEUS_IP:localhost:30259 -v DATA:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies -v NODETEMPLATES:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/nodetemplates' + load_set + _setopts=hxB ++ echo braceexpand:hashall:interactive-comments:xtrace ++ tr : ' ' + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') + set 
+o braceexpand + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') + set +o hashall + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') + set +o interactive-comments + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') + set +o xtrace ++ sed 's/./& /g' ++ echo hxB + for i in $(echo "$_setopts" | sed 's/./& /g') + set +h + for i in $(echo "$_setopts" | sed 's/./& /g') + set +x + docker_stats + tee /w/workspace/policy-pap-master-project-csit-pap/csit/archives/pap/_sysinfo-1-after-setup.txt ++ uname -s + '[' Linux == Darwin ']' + sh -c 'top -bn1 | head -3' top - 08:53:35 up 5 min, 0 users, load average: 3.89, 1.77, 0.73 Tasks: 203 total, 1 running, 131 sleeping, 0 stopped, 0 zombie %Cpu(s): 11.1 us, 2.2 sy, 0.0 ni, 80.5 id, 6.1 wa, 0.0 hi, 0.0 si, 0.1 st + echo + sh -c 'free -h' total used free shared buff/cache available Mem: 31G 2.8G 22G 1.3M 5.9G 28G Swap: 1.0G 0B 1.0G + echo + docker ps --format 'table {{ .Names }}\t{{ .Status }}' NAMES STATUS policy-apex-pdp Up 35 seconds policy-pap Up 36 seconds policy-api Up 37 seconds grafana Up 43 seconds kafka Up 42 seconds simulator Up 40 seconds mariadb Up 41 seconds prometheus Up 44 seconds zookeeper Up 45 seconds + echo + docker stats --no-stream CONTAINER ID NAME CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O PIDS f1c713e6cd87 policy-apex-pdp 159.30% 188.9MiB / 31.41GiB 0.59% 7.3kB / 7.12kB 0B / 0B 48 eedb9c57657f policy-pap 2.00% 673.1MiB / 31.41GiB 2.09% 32.3kB / 34.2kB 0B / 149MB 62 72dbdb2cd170 policy-api 0.10% 525MiB / 31.41GiB 1.63% 988kB / 647kB 0B / 0B 53 a237cb4b03ea grafana 0.06% 56.56MiB / 31.41GiB 0.18% 18.8kB / 3.66kB 0B / 24.8MB 18 324f31b114cb kafka 9.01% 372MiB / 31.41GiB 1.16% 71.7kB / 75.4kB 0B / 475kB 83 68165d805cf0 simulator 0.07% 121MiB / 31.41GiB 0.38% 1.23kB / 0B 0B / 0B 76 b28125d6af20 mariadb 0.02% 102MiB / 31.41GiB 0.32% 936kB / 1.18MB 11MB / 50.8MB 37 4c6d2a6b84ae prometheus 0.00% 18.11MiB / 31.41GiB 0.06% 1.32kB / 0B 0B / 0B 13 bd3eb536d2b4 zookeeper 4.61% 98.75MiB / 31.41GiB 0.31% 58.4kB / 51kB 229kB / 
385kB 60 + echo + cd /tmp/tmp.L7L2qXgREO + echo 'Reading the testplan:' Reading the testplan: + echo 'pap-test.robot pap-slas.robot' + egrep -v '(^[[:space:]]*#|^[[:space:]]*$)' + sed 's|^|/w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/|' + cat testplan.txt /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-test.robot /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-slas.robot ++ xargs + SUITES='/w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-test.robot /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-slas.robot' + echo 'ROBOT_VARIABLES=-v POLICY_PAP_IP:localhost:30003 -v POLICY_API_IP:localhost:30002 -v PROMETHEUS_IP:localhost:30259 -v DATA:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies -v NODETEMPLATES:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/nodetemplates' ROBOT_VARIABLES=-v POLICY_PAP_IP:localhost:30003 -v POLICY_API_IP:localhost:30002 -v PROMETHEUS_IP:localhost:30259 -v DATA:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies -v NODETEMPLATES:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/nodetemplates + echo 'Starting Robot test suites /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-test.robot /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-slas.robot ...' Starting Robot test suites /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-test.robot /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-slas.robot ... 
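The testplan expansion above filters comment and blank lines with egrep, prefixes each remaining suite name with the tests directory via sed, and xargs-joins the result into the one-line SUITES value. It can be reproduced standalone; the suite names and tests directory are taken from the log, while the comment line in the stand-in testplan.txt is illustrative:

```shell
# Reproduce the testplan -> SUITES pipeline from the log.
tests_dir=/w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests
tmp=$(mktemp -d); cd "$tmp"
printf '%s\n' '# comments and blank lines are dropped' '' \
  'pap-test.robot' 'pap-slas.robot' > testplan.txt
# Drop comments/blanks, prefix each suite with the tests directory.
egrep -v '(^[[:space:]]*#|^[[:space:]]*$)' testplan.txt \
  | sed "s|^|$tests_dir/|" > expanded.txt
SUITES=$(xargs < expanded.txt)   # xargs with no command echoes, joining lines
echo "$SUITES"
```

The joined form is what `robot.run` receives as its positional suite arguments in the next step of the log.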
+ relax_set
+ set +e
+ set +o pipefail
+ python3 -m robot.run -N pap -v WORKSPACE:/tmp -v POLICY_PAP_IP:localhost:30003 -v POLICY_API_IP:localhost:30002 -v PROMETHEUS_IP:localhost:30259 -v DATA:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies -v NODETEMPLATES:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/nodetemplates /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-test.robot /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-slas.robot
==============================================================================
pap
==============================================================================
pap.Pap-Test
==============================================================================
LoadPolicy :: Create a policy named 'onap.restart.tca' and version... | PASS |
------------------------------------------------------------------------------
LoadPolicyWithMetadataSet :: Create a policy named 'operational.ap... | PASS |
------------------------------------------------------------------------------
LoadNodeTemplates :: Create node templates in database using speci... | PASS |
------------------------------------------------------------------------------
Healthcheck :: Verify policy pap health check | PASS |
------------------------------------------------------------------------------
Consolidated Healthcheck :: Verify policy consolidated health check | PASS |
------------------------------------------------------------------------------
Metrics :: Verify policy pap is exporting prometheus metrics | PASS |
------------------------------------------------------------------------------
AddPdpGroup :: Add a new PdpGroup named 'testGroup' in the policy ... | PASS |
------------------------------------------------------------------------------
QueryPdpGroupsBeforeActivation :: Verify PdpGroups before activation | PASS |
------------------------------------------------------------------------------
ActivatePdpGroup :: Change the state of PdpGroup named 'testGroup'... | PASS |
------------------------------------------------------------------------------
QueryPdpGroupsAfterActivation :: Verify PdpGroups after activation | PASS |
------------------------------------------------------------------------------
DeployPdpGroups :: Deploy policies in PdpGroups | PASS |
------------------------------------------------------------------------------
QueryPdpGroupsAfterDeploy :: Verify PdpGroups after undeploy | PASS |
------------------------------------------------------------------------------
QueryPolicyAuditAfterDeploy :: Verify policy audit record after de... | PASS |
------------------------------------------------------------------------------
QueryPolicyAuditWithMetadataSetAfterDeploy :: Verify policy audit ... | PASS |
------------------------------------------------------------------------------
UndeployPolicy :: Undeploy a policy named 'onap.restart.tca' from ... | PASS |
------------------------------------------------------------------------------
UndeployPolicyWithMetadataSet :: Undeploy a policy named 'operatio... | PASS |
------------------------------------------------------------------------------
QueryPdpGroupsAfterUndeploy :: Verify PdpGroups after undeploy | PASS |
------------------------------------------------------------------------------
QueryPolicyAuditAfterUnDeploy :: Verify policy audit record after ... | PASS |
------------------------------------------------------------------------------
QueryPolicyAuditWithMetadataSetAfterUnDeploy :: Verify policy audi... | PASS |
------------------------------------------------------------------------------
DeactivatePdpGroup :: Change the state of PdpGroup named 'testGrou... | PASS |
------------------------------------------------------------------------------
DeletePdpGroups :: Delete the PdpGroup named 'testGroup' from poli... | PASS |
------------------------------------------------------------------------------
QueryPdpGroupsAfterDelete :: Verify PdpGroups after delete | PASS |
------------------------------------------------------------------------------
pap.Pap-Test | PASS |
22 tests, 22 passed, 0 failed
==============================================================================
pap.Pap-Slas
==============================================================================
WaitForPrometheusServer :: Wait for Prometheus server to gather al... | PASS |
------------------------------------------------------------------------------
ValidateResponseTimeForHealthcheck :: Validate component healthche... | PASS |
------------------------------------------------------------------------------
ValidateResponseTimeForSystemHealthcheck :: Validate if system hea... | PASS |
------------------------------------------------------------------------------
ValidateResponseTimeQueryPolicyAudit :: Validate query audits resp... | PASS |
------------------------------------------------------------------------------
ValidateResponseTimeUpdateGroup :: Validate pdps/group response time | PASS |
------------------------------------------------------------------------------
ValidatePolicyDeploymentTime :: Check if deployment of policy is u... | PASS |
------------------------------------------------------------------------------
ValidateResponseTimeDeletePolicy :: Check if undeployment of polic... | PASS |
------------------------------------------------------------------------------
ValidateResponseTimeDeleteGroup :: Validate delete group response ... | PASS |
------------------------------------------------------------------------------
pap.Pap-Slas | PASS |
8 tests, 8 passed, 0 failed
==============================================================================
pap | PASS |
30 tests, 30 passed, 0 failed
==============================================================================
Output: /tmp/tmp.L7L2qXgREO/output.xml
Log: /tmp/tmp.L7L2qXgREO/log.html
Report: /tmp/tmp.L7L2qXgREO/report.html
+ RESULT=0
+ load_set
+ _setopts=hxB
++ tr : ' '
++ echo braceexpand:hashall:interactive-comments:xtrace
+ for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
+ set +o braceexpand
+ for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
+ set +o hashall
+ for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
+ set +o interactive-comments
+ for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
+ set +o xtrace
++ echo hxB
++ sed 's/./& /g'
+ for i in $(echo "$_setopts" | sed 's/./& /g')
+ set +h
+ for i in $(echo "$_setopts" | sed 's/./& /g')
+ set +x
+ echo 'RESULT: 0'
RESULT: 0
+ exit 0
+ on_exit
+ rc=0
+ [[ -n /w/workspace/policy-pap-master-project-csit-pap ]]
+ docker ps --format 'table {{ .Names }}\t{{ .Status }}'
NAMES              STATUS
policy-apex-pdp    Up 2 minutes
policy-pap         Up 2 minutes
policy-api         Up 2 minutes
grafana            Up 2 minutes
kafka              Up 2 minutes
simulator          Up 2 minutes
mariadb            Up 2 minutes
prometheus         Up 2 minutes
zookeeper          Up 2 minutes
+ docker_stats
++ uname -s
+ '[' Linux == Darwin ']'
+ sh -c 'top -bn1 | head -3'
top - 08:55:24 up 7 min, 0 users, load average: 0.79, 1.33, 0.68
Tasks: 196 total, 1 running, 129 sleeping, 0 stopped, 0 zombie
%Cpu(s): 9.5 us, 1.8 sy, 0.0 ni, 83.9 id, 4.7 wa, 0.0 hi, 0.0 si, 0.1 st
+ echo
+ sh -c 'free -h'
        total   used   free   shared   buff/cache   available
Mem:      31G   2.8G    22G     1.3M         6.0G         28G
Swap:    1.0G     0B   1.0G
+ echo
+ docker ps --format 'table {{ .Names }}\t{{ .Status }}'
NAMES              STATUS
policy-apex-pdp    Up 2 minutes
policy-pap         Up 2 minutes
policy-api         Up 2 minutes
grafana            Up 2 minutes
kafka              Up 2 minutes
simulator          Up 2 minutes
mariadb            Up 2 minutes
prometheus         Up 2 minutes
zookeeper          Up 2 minutes
+ echo
+ docker stats --no-stream
CONTAINER ID   NAME              CPU %   MEM USAGE / LIMIT     MEM %   NET I/O           BLOCK I/O       PIDS
f1c713e6cd87   policy-apex-pdp   1.32%   180.2MiB / 31.41GiB   0.56%   56.5kB / 90.9kB   0B / 0B         52
eedb9c57657f   policy-pap        0.73%   543.9MiB / 31.41GiB   1.69%   2.47MB / 1.04MB   0B / 149MB      66
72dbdb2cd170   policy-api        0.09%   592.9MiB / 31.41GiB   1.84%   2.45MB / 1.1MB    0B / 0B         56
a237cb4b03ea   grafana           0.03%   55.24MiB / 31.41GiB   0.17%   19.7kB / 4.57kB   0B / 24.8MB     18
324f31b114cb   kafka             1.09%   401MiB / 31.41GiB     1.25%   241kB / 217kB     0B / 573kB      85
68165d805cf0   simulator         0.07%   121.1MiB / 31.41GiB   0.38%   1.58kB / 0B       0B / 0B         78
b28125d6af20   mariadb           0.01%   103.2MiB / 31.41GiB   0.32%   2.02MB / 4.87MB   11MB / 51MB     28
4c6d2a6b84ae   prometheus        0.08%   23.95MiB / 31.41GiB   0.07%   181kB / 10.8kB    0B / 0B         13
bd3eb536d2b4   zookeeper         0.08%   98.5MiB / 31.41GiB    0.31%   61.4kB / 52.6kB   229kB / 385kB   60
+ echo
+ source_safely /w/workspace/policy-pap-master-project-csit-pap/compose/stop-compose.sh
+ '[' -z /w/workspace/policy-pap-master-project-csit-pap/compose/stop-compose.sh ']'
+ relax_set
+ set +e
+ set +o pipefail
+ . /w/workspace/policy-pap-master-project-csit-pap/compose/stop-compose.sh
++ echo 'Shut down started!'
Shut down started!
++ '[' -z /w/workspace/policy-pap-master-project-csit-pap ']'
++ COMPOSE_FOLDER=/w/workspace/policy-pap-master-project-csit-pap/compose
++ cd /w/workspace/policy-pap-master-project-csit-pap/compose
++ source export-ports.sh
++ source get-versions.sh
++ echo 'Collecting logs from docker compose containers...'
Collecting logs from docker compose containers...
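The `relax_set` / `load_set` pairs in the trace above bracket each risky step: error handling is loosened so a failing Robot suite is captured in `RESULT` instead of killing the job, then the original options are replayed from a snapshot of `$-`. A minimal reconstruction of the idiom (an assumption, since the actual CI helper scripts are not shown in this log):

```shell
# Reconstruction (assumed) of the relax_set/load_set idiom from the trace:
# relax_set snapshots the active single-letter shell options from $- and
# loosens error handling; load_set replays the snapshot one letter at a time.
relax_set() {
  _setopts=$-                          # e.g. "ehB" while errexit is active
  set +e                               # tolerate failing test commands
  set +o pipefail 2>/dev/null || true  # guarded: not every sh has pipefail
}

load_set() {
  # re-enable every option letter that was active when relax_set ran
  for i in $(echo "$_setopts" | sed 's/./& /g'); do
    set "-$i"
  done
}

set -e
relax_set
false                                  # harmless here; fatal under errexit
case $- in *e*) during=on ;; *) during=off ;; esac
load_set
case $- in *e*) after=on ;; *) after=off ;; esac
echo "errexit during=$during after=$after"   # prints: errexit during=off after=on
```

This is why the log can show `false`-style test failures being tolerated mid-run while `+ exit 0` at the end still reflects the collected `RESULT`.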
++ docker-compose logs ++ cat docker_compose.log Attaching to policy-apex-pdp, policy-pap, policy-api, policy-db-migrator, grafana, kafka, simulator, mariadb, prometheus, zookeeper grafana | logger=settings t=2024-04-26T08:52:52.033420224Z level=info msg="Starting Grafana" version=10.4.2 commit=701c851be7a930e04fbc6ebb1cd4254da80edd4c branch=v10.4.x compiled=2024-04-26T08:52:52Z grafana | logger=settings t=2024-04-26T08:52:52.033648315Z level=info msg="Config loaded from" file=/usr/share/grafana/conf/defaults.ini grafana | logger=settings t=2024-04-26T08:52:52.033657795Z level=info msg="Config loaded from" file=/etc/grafana/grafana.ini grafana | logger=settings t=2024-04-26T08:52:52.033663655Z level=info msg="Config overridden from command line" arg="default.paths.data=/var/lib/grafana" grafana | logger=settings t=2024-04-26T08:52:52.033667005Z level=info msg="Config overridden from command line" arg="default.paths.logs=/var/log/grafana" grafana | logger=settings t=2024-04-26T08:52:52.033671916Z level=info msg="Config overridden from command line" arg="default.paths.plugins=/var/lib/grafana/plugins" grafana | logger=settings t=2024-04-26T08:52:52.033674816Z level=info msg="Config overridden from command line" arg="default.paths.provisioning=/etc/grafana/provisioning" grafana | logger=settings t=2024-04-26T08:52:52.033678966Z level=info msg="Config overridden from command line" arg="default.log.mode=console" grafana | logger=settings t=2024-04-26T08:52:52.033682586Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_DATA=/var/lib/grafana" grafana | logger=settings t=2024-04-26T08:52:52.033686216Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_LOGS=/var/log/grafana" grafana | logger=settings t=2024-04-26T08:52:52.033689537Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_PLUGINS=/var/lib/grafana/plugins" grafana | logger=settings t=2024-04-26T08:52:52.033695117Z level=info msg="Config 
overridden from Environment variable" var="GF_PATHS_PROVISIONING=/etc/grafana/provisioning" grafana | logger=settings t=2024-04-26T08:52:52.033698987Z level=info msg=Target target=[all] grafana | logger=settings t=2024-04-26T08:52:52.033704517Z level=info msg="Path Home" path=/usr/share/grafana grafana | logger=settings t=2024-04-26T08:52:52.033707847Z level=info msg="Path Data" path=/var/lib/grafana grafana | logger=settings t=2024-04-26T08:52:52.033711418Z level=info msg="Path Logs" path=/var/log/grafana grafana | logger=settings t=2024-04-26T08:52:52.033714568Z level=info msg="Path Plugins" path=/var/lib/grafana/plugins grafana | logger=settings t=2024-04-26T08:52:52.033718128Z level=info msg="Path Provisioning" path=/etc/grafana/provisioning grafana | logger=settings t=2024-04-26T08:52:52.033721398Z level=info msg="App mode production" grafana | logger=sqlstore t=2024-04-26T08:52:52.034016602Z level=info msg="Connecting to DB" dbtype=sqlite3 grafana | logger=sqlstore t=2024-04-26T08:52:52.034038103Z level=info msg="Creating SQLite database file" path=/var/lib/grafana/grafana.db grafana | logger=migrator t=2024-04-26T08:52:52.034737746Z level=info msg="Starting DB migrations" grafana | logger=migrator t=2024-04-26T08:52:52.035847289Z level=info msg="Executing migration" id="create migration_log table" grafana | logger=migrator t=2024-04-26T08:52:52.036741212Z level=info msg="Migration successfully executed" id="create migration_log table" duration=893.953µs grafana | logger=migrator t=2024-04-26T08:52:52.040481169Z level=info msg="Executing migration" id="create user table" grafana | logger=migrator t=2024-04-26T08:52:52.041050337Z level=info msg="Migration successfully executed" id="create user table" duration=566.148µs grafana | logger=migrator t=2024-04-26T08:52:52.047126336Z level=info msg="Executing migration" id="add unique index user.login" grafana | logger=migrator t=2024-04-26T08:52:52.048370275Z level=info msg="Migration successfully executed" id="add 
unique index user.login" duration=1.243908ms grafana | logger=migrator t=2024-04-26T08:52:52.052157435Z level=info msg="Executing migration" id="add unique index user.email" grafana | logger=migrator t=2024-04-26T08:52:52.053413995Z level=info msg="Migration successfully executed" id="add unique index user.email" duration=1.256709ms grafana | logger=migrator t=2024-04-26T08:52:52.057249867Z level=info msg="Executing migration" id="drop index UQE_user_login - v1" grafana | logger=migrator t=2024-04-26T08:52:52.057920179Z level=info msg="Migration successfully executed" id="drop index UQE_user_login - v1" duration=670.512µs grafana | logger=migrator t=2024-04-26T08:52:52.064827197Z level=info msg="Executing migration" id="drop index UQE_user_email - v1" grafana | logger=migrator t=2024-04-26T08:52:52.065904319Z level=info msg="Migration successfully executed" id="drop index UQE_user_email - v1" duration=1.082662ms grafana | logger=migrator t=2024-04-26T08:52:52.069947951Z level=info msg="Executing migration" id="Rename table user to user_v1 - v1" grafana | logger=migrator t=2024-04-26T08:52:52.074064426Z level=info msg="Migration successfully executed" id="Rename table user to user_v1 - v1" duration=4.113185ms grafana | logger=migrator t=2024-04-26T08:52:52.077816116Z level=info msg="Executing migration" id="create user table v2" grafana | logger=migrator t=2024-04-26T08:52:52.078649775Z level=info msg="Migration successfully executed" id="create user table v2" duration=822.759µs grafana | logger=migrator t=2024-04-26T08:52:52.084625189Z level=info msg="Executing migration" id="create index UQE_user_login - v2" grafana | logger=migrator t=2024-04-26T08:52:52.085714821Z level=info msg="Migration successfully executed" id="create index UQE_user_login - v2" duration=1.089202ms grafana | logger=migrator t=2024-04-26T08:52:52.089558424Z level=info msg="Executing migration" id="create index UQE_user_email - v2" grafana | logger=migrator t=2024-04-26T08:52:52.090596013Z 
level=info msg="Migration successfully executed" id="create index UQE_user_email - v2" duration=1.037289ms grafana | logger=migrator t=2024-04-26T08:52:52.094272358Z level=info msg="Executing migration" id="copy data_source v1 to v2" grafana | logger=migrator t=2024-04-26T08:52:52.094638016Z level=info msg="Migration successfully executed" id="copy data_source v1 to v2" duration=365.508µs grafana | logger=migrator t=2024-04-26T08:52:52.098294949Z level=info msg="Executing migration" id="Drop old table user_v1" grafana | logger=migrator t=2024-04-26T08:52:52.098776422Z level=info msg="Migration successfully executed" id="Drop old table user_v1" duration=481.673µs grafana | logger=migrator t=2024-04-26T08:52:52.107681345Z level=info msg="Executing migration" id="Add column help_flags1 to user table" grafana | logger=migrator t=2024-04-26T08:52:52.109386367Z level=info msg="Migration successfully executed" id="Add column help_flags1 to user table" duration=1.704442ms grafana | logger=migrator t=2024-04-26T08:52:52.112703574Z level=info msg="Executing migration" id="Update user table charset" grafana | logger=migrator t=2024-04-26T08:52:52.112740286Z level=info msg="Migration successfully executed" id="Update user table charset" duration=37.612µs grafana | logger=migrator t=2024-04-26T08:52:52.11619128Z level=info msg="Executing migration" id="Add last_seen_at column to user" grafana | logger=migrator t=2024-04-26T08:52:52.117973535Z level=info msg="Migration successfully executed" id="Add last_seen_at column to user" duration=1.785775ms grafana | logger=migrator t=2024-04-26T08:52:52.121423089Z level=info msg="Executing migration" id="Add missing user data" grafana | logger=migrator t=2024-04-26T08:52:52.121694862Z level=info msg="Migration successfully executed" id="Add missing user data" duration=271.933µs grafana | logger=migrator t=2024-04-26T08:52:52.12879468Z level=info msg="Executing migration" id="Add is_disabled column to user" grafana | logger=migrator 
t=2024-04-26T08:52:52.13046451Z level=info msg="Migration successfully executed" id="Add is_disabled column to user" duration=1.67404ms grafana | logger=migrator t=2024-04-26T08:52:52.133924724Z level=info msg="Executing migration" id="Add index user.login/user.email" grafana | logger=migrator t=2024-04-26T08:52:52.134985634Z level=info msg="Migration successfully executed" id="Add index user.login/user.email" duration=1.0602ms grafana | logger=migrator t=2024-04-26T08:52:52.138327343Z level=info msg="Executing migration" id="Add is_service_account column to user" grafana | logger=migrator t=2024-04-26T08:52:52.139445007Z level=info msg="Migration successfully executed" id="Add is_service_account column to user" duration=1.117154ms grafana | logger=migrator t=2024-04-26T08:52:52.144947158Z level=info msg="Executing migration" id="Update is_service_account column to nullable" grafana | logger=migrator t=2024-04-26T08:52:52.155581494Z level=info msg="Migration successfully executed" id="Update is_service_account column to nullable" duration=10.637346ms grafana | logger=migrator t=2024-04-26T08:52:52.158997097Z level=info msg="Executing migration" id="Add uid column to user" grafana | logger=migrator t=2024-04-26T08:52:52.159847877Z level=info msg="Migration successfully executed" id="Add uid column to user" duration=853.341µs grafana | logger=migrator t=2024-04-26T08:52:52.1634868Z level=info msg="Executing migration" id="Update uid column values for users" grafana | logger=migrator t=2024-04-26T08:52:52.163684599Z level=info msg="Migration successfully executed" id="Update uid column values for users" duration=202.71µs grafana | logger=migrator t=2024-04-26T08:52:52.166811739Z level=info msg="Executing migration" id="Add unique index user_uid" grafana | logger=migrator t=2024-04-26T08:52:52.167522832Z level=info msg="Migration successfully executed" id="Add unique index user_uid" duration=710.334µs grafana | logger=migrator t=2024-04-26T08:52:52.173657774Z 
level=info msg="Executing migration" id="update login field with orgid to allow for multiple service accounts with same name across orgs" grafana | logger=migrator t=2024-04-26T08:52:52.174139766Z level=info msg="Migration successfully executed" id="update login field with orgid to allow for multiple service accounts with same name across orgs" duration=482.202µs grafana | logger=migrator t=2024-04-26T08:52:52.177888655Z level=info msg="Executing migration" id="create temp user table v1-7" grafana | logger=migrator t=2024-04-26T08:52:52.179182657Z level=info msg="Migration successfully executed" id="create temp user table v1-7" duration=1.293761ms grafana | logger=migrator t=2024-04-26T08:52:52.183160145Z level=info msg="Executing migration" id="create index IDX_temp_user_email - v1-7" grafana | logger=migrator t=2024-04-26T08:52:52.183862059Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_email - v1-7" duration=701.684µs grafana | logger=migrator t=2024-04-26T08:52:52.190023282Z level=info msg="Executing migration" id="create index IDX_temp_user_org_id - v1-7" grafana | logger=migrator t=2024-04-26T08:52:52.191142895Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_org_id - v1-7" duration=1.119683ms grafana | logger=migrator t=2024-04-26T08:52:52.194761348Z level=info msg="Executing migration" id="create index IDX_temp_user_code - v1-7" grafana | logger=migrator t=2024-04-26T08:52:52.195893511Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_code - v1-7" duration=1.131293ms grafana | logger=migrator t=2024-04-26T08:52:52.199140426Z level=info msg="Executing migration" id="create index IDX_temp_user_status - v1-7" grafana | logger=migrator t=2024-04-26T08:52:52.199845419Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_status - v1-7" duration=704.583µs grafana | logger=migrator t=2024-04-26T08:52:52.205467296Z level=info msg="Executing 
migration" id="Update temp_user table charset" grafana | logger=migrator t=2024-04-26T08:52:52.205502448Z level=info msg="Migration successfully executed" id="Update temp_user table charset" duration=36.582µs grafana | logger=migrator t=2024-04-26T08:52:52.210059015Z level=info msg="Executing migration" id="drop index IDX_temp_user_email - v1" grafana | logger=migrator t=2024-04-26T08:52:52.211057402Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_email - v1" duration=998.087µs grafana | logger=migrator t=2024-04-26T08:52:52.214494696Z level=info msg="Executing migration" id="drop index IDX_temp_user_org_id - v1" grafana | logger=migrator t=2024-04-26T08:52:52.215184919Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_org_id - v1" duration=689.863µs grafana | logger=migrator t=2024-04-26T08:52:52.218251005Z level=info msg="Executing migration" id="drop index IDX_temp_user_code - v1" grafana | logger=migrator t=2024-04-26T08:52:52.218879664Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_code - v1" duration=626.58µs grafana | logger=migrator t=2024-04-26T08:52:52.224097513Z level=info msg="Executing migration" id="drop index IDX_temp_user_status - v1" grafana | logger=migrator t=2024-04-26T08:52:52.225065078Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_status - v1" duration=975.546µs grafana | logger=migrator t=2024-04-26T08:52:52.228750544Z level=info msg="Executing migration" id="Rename table temp_user to temp_user_tmp_qwerty - v1" grafana | logger=migrator t=2024-04-26T08:52:52.233580253Z level=info msg="Migration successfully executed" id="Rename table temp_user to temp_user_tmp_qwerty - v1" duration=4.828309ms grafana | logger=migrator t=2024-04-26T08:52:52.237339493Z level=info msg="Executing migration" id="create temp_user v2" grafana | logger=migrator t=2024-04-26T08:52:52.238156741Z level=info msg="Migration successfully executed" 
id="create temp_user v2" duration=816.998µs grafana | logger=migrator t=2024-04-26T08:52:52.243580529Z level=info msg="Executing migration" id="create index IDX_temp_user_email - v2" grafana | logger=migrator t=2024-04-26T08:52:52.244700702Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_email - v2" duration=1.120023ms grafana | logger=migrator t=2024-04-26T08:52:52.248343146Z level=info msg="Executing migration" id="create index IDX_temp_user_org_id - v2" grafana | logger=migrator t=2024-04-26T08:52:52.249468749Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_org_id - v2" duration=1.125353ms grafana | logger=migrator t=2024-04-26T08:52:52.252903222Z level=info msg="Executing migration" id="create index IDX_temp_user_code - v2" grafana | logger=migrator t=2024-04-26T08:52:52.253624187Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_code - v2" duration=720.725µs grafana | logger=migrator t=2024-04-26T08:52:52.259021063Z level=info msg="Executing migration" id="create index IDX_temp_user_status - v2" grafana | logger=migrator t=2024-04-26T08:52:52.260142507Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_status - v2" duration=1.117694ms grafana | logger=migrator t=2024-04-26T08:52:52.264087334Z level=info msg="Executing migration" id="copy temp_user v1 to v2" grafana | logger=migrator t=2024-04-26T08:52:52.264664442Z level=info msg="Migration successfully executed" id="copy temp_user v1 to v2" duration=576.808µs grafana | logger=migrator t=2024-04-26T08:52:52.26839629Z level=info msg="Executing migration" id="drop temp_user_tmp_qwerty" grafana | logger=migrator t=2024-04-26T08:52:52.268888473Z level=info msg="Migration successfully executed" id="drop temp_user_tmp_qwerty" duration=492.193µs grafana | logger=migrator t=2024-04-26T08:52:52.27428945Z level=info msg="Executing migration" id="Set created for temp users that will otherwise 
prematurely expire"
mariadb | 2024-04-26 08:52:54+00:00 [Note] [Entrypoint]: Entrypoint script for MariaDB Server 1:10.10.2+maria~ubu2204 started.
mariadb | 2024-04-26 08:52:54+00:00 [Note] [Entrypoint]: Switching to dedicated user 'mysql'
mariadb | 2024-04-26 08:52:54+00:00 [Note] [Entrypoint]: Entrypoint script for MariaDB Server 1:10.10.2+maria~ubu2204 started.
mariadb | 2024-04-26 08:52:54+00:00 [Note] [Entrypoint]: Initializing database files
mariadb | 2024-04-26 8:52:54 0 [Warning] mariadbd: io_uring_queue_init() failed with ENOSYS: check seccomp filters, and the kernel version (newer than 5.1 required)
mariadb | 2024-04-26 8:52:54 0 [Warning] InnoDB: liburing disabled: falling back to innodb_use_native_aio=OFF
mariadb | 2024-04-26 8:52:54 0 [Warning] 'default-authentication-plugin' is MySQL 5.6 / 5.7 compatible option. To be implemented in later versions.
mariadb |
mariadb |
mariadb | PLEASE REMEMBER TO SET A PASSWORD FOR THE MariaDB root USER !
mariadb | To do so, start the server, then issue the following command:
mariadb |
mariadb | '/usr/bin/mysql_secure_installation'
mariadb |
mariadb | which will also give you the option of removing the test
mariadb | databases and anonymous user created by default. This is
mariadb | strongly recommended for production servers.
mariadb |
mariadb | See the MariaDB Knowledgebase at https://mariadb.com/kb
mariadb |
mariadb | Please report any problems at https://mariadb.org/jira
mariadb |
mariadb | The latest information about MariaDB is available at https://mariadb.org/.
mariadb |
mariadb | Consider joining MariaDB's strong and vibrant community:
mariadb | https://mariadb.org/get-involved/
mariadb |
mariadb | 2024-04-26 08:52:56+00:00 [Note] [Entrypoint]: Database files initialized
mariadb | 2024-04-26 08:52:56+00:00 [Note] [Entrypoint]: Starting temporary server
mariadb | 2024-04-26 08:52:56+00:00 [Note] [Entrypoint]: Waiting for server startup
mariadb | 2024-04-26 8:52:56 0 [Note] mariadbd (server 10.10.2-MariaDB-1:10.10.2+maria~ubu2204) starting as process 95 ...
mariadb | 2024-04-26 8:52:56 0 [Note] InnoDB: Compressed tables use zlib 1.2.11
mariadb | 2024-04-26 8:52:56 0 [Note] InnoDB: Number of transaction pools: 1
mariadb | 2024-04-26 8:52:56 0 [Note] InnoDB: Using crc32 + pclmulqdq instructions
mariadb | 2024-04-26 8:52:56 0 [Note] mariadbd: O_TMPFILE is not supported on /tmp (disabling future attempts)
mariadb | 2024-04-26 8:52:56 0 [Warning] mariadbd: io_uring_queue_init() failed with ENOSYS: check seccomp filters, and the kernel version (newer than 5.1 required)
mariadb | 2024-04-26 8:52:56 0 [Warning] InnoDB: liburing disabled: falling back to innodb_use_native_aio=OFF
mariadb | 2024-04-26 8:52:56 0 [Note] InnoDB: Initializing buffer pool, total size = 128.000MiB, chunk size = 2.000MiB
mariadb | 2024-04-26 8:52:56 0 [Note] InnoDB: Completed initialization of buffer pool
mariadb | 2024-04-26 8:52:56 0 [Note] InnoDB: File system buffers for log disabled (block size=512 bytes)
mariadb | 2024-04-26 8:52:56 0 [Note] InnoDB: 128 rollback segments are active.
mariadb | 2024-04-26 8:52:56 0 [Note] InnoDB: Setting file './ibtmp1' size to 12.000MiB. Physically writing the file full; Please wait ...
mariadb | 2024-04-26 8:52:56 0 [Note] InnoDB: File './ibtmp1' size is now 12.000MiB.
mariadb | 2024-04-26 8:52:56 0 [Note] InnoDB: log sequence number 46590; transaction id 14
mariadb | 2024-04-26 8:52:56 0 [Note] Plugin 'FEEDBACK' is disabled.
mariadb | 2024-04-26 8:52:56 0 [Warning] 'default-authentication-plugin' is MySQL 5.6 / 5.7 compatible option. To be implemented in later versions.
mariadb | 2024-04-26 8:52:56 0 [Warning] 'user' entry 'root@mariadb' ignored in --skip-name-resolve mode.
mariadb | 2024-04-26 8:52:56 0 [Warning] 'proxies_priv' entry '@% root@mariadb' ignored in --skip-name-resolve mode.
mariadb | 2024-04-26 8:52:57 0 [Note] mariadbd: ready for connections.
mariadb | Version: '10.10.2-MariaDB-1:10.10.2+maria~ubu2204' socket: '/run/mysqld/mysqld.sock' port: 0 mariadb.org binary distribution
mariadb | 2024-04-26 08:52:57+00:00 [Note] [Entrypoint]: Temporary server started.
mariadb | 2024-04-26 08:52:59+00:00 [Note] [Entrypoint]: Creating user policy_user
mariadb | 2024-04-26 08:52:59+00:00 [Note] [Entrypoint]: Securing system users (equivalent to running mysql_secure_installation)
mariadb |
mariadb | 2024-04-26 08:52:59+00:00 [Warn] [Entrypoint]: /usr/local/bin/docker-entrypoint.sh: ignoring /docker-entrypoint-initdb.d/db.conf
mariadb |
mariadb | 2024-04-26 08:52:59+00:00 [Note] [Entrypoint]: /usr/local/bin/docker-entrypoint.sh: running /docker-entrypoint-initdb.d/db.sh
mariadb | #!/bin/bash -xv
mariadb | # Copyright 2019,2021 AT&T Intellectual Property. All rights reserved
mariadb | # Modifications Copyright (c) 2022 Nordix Foundation.
mariadb | #
mariadb | # Licensed under the Apache License, Version 2.0 (the "License");
mariadb | # you may not use this file except in compliance with the License.
mariadb | # You may obtain a copy of the License at
mariadb | #
mariadb | # http://www.apache.org/licenses/LICENSE-2.0
mariadb | #
mariadb | # Unless required by applicable law or agreed to in writing, software
mariadb | # distributed under the License is distributed on an "AS IS" BASIS,
mariadb | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
mariadb | # See the License for the specific language governing permissions and
mariadb | # limitations under the License.
mariadb |
mariadb | for db in migration pooling policyadmin operationshistory clampacm policyclamp
mariadb | do
mariadb | mysql -uroot -p"${MYSQL_ROOT_PASSWORD}" --execute "CREATE DATABASE IF NOT EXISTS ${db};"
mariadb | mysql -uroot -p"${MYSQL_ROOT_PASSWORD}" --execute "GRANT ALL PRIVILEGES ON \`${db}\`.* TO '${MYSQL_USER}'@'%' ;"
mariadb | done
mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp
mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS migration;'
mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `migration`.* TO '\''policy_user'\''@'\''%'\'' ;'
mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp
mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS pooling;'
mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `pooling`.* TO '\''policy_user'\''@'\''%'\'' ;'
mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp
mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS policyadmin;'
mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `policyadmin`.* TO '\''policy_user'\''@'\''%'\'' ;'
mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp
mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS operationshistory;'
mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `operationshistory`.* TO '\''policy_user'\''@'\''%'\'' ;'
mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp
mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS clampacm;'
mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `clampacm`.* TO '\''policy_user'\''@'\''%'\'' ;'
mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp
mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE
IF NOT EXISTS policyclamp;' mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `policyclamp`.* TO '\''policy_user'\''@'\''%'\'' ;' mariadb | grafana | logger=migrator t=2024-04-26T08:52:52.274672338Z level=info msg="Migration successfully executed" id="Set created for temp users that will otherwise prematurely expire" duration=385.788µs grafana | logger=migrator t=2024-04-26T08:52:52.277827428Z level=info msg="Executing migration" id="create star table" grafana | logger=migrator t=2024-04-26T08:52:52.278782314Z level=info msg="Migration successfully executed" id="create star table" duration=954.676µs grafana | logger=migrator t=2024-04-26T08:52:52.282644157Z level=info msg="Executing migration" id="add unique index star.user_id_dashboard_id" grafana | logger=migrator t=2024-04-26T08:52:52.283794722Z level=info msg="Migration successfully executed" id="add unique index star.user_id_dashboard_id" duration=1.149935ms grafana | logger=migrator t=2024-04-26T08:52:52.287572172Z level=info msg="Executing migration" id="create org table v1" grafana | logger=migrator t=2024-04-26T08:52:52.288294006Z level=info msg="Migration successfully executed" id="create org table v1" duration=721.594µs grafana | logger=migrator t=2024-04-26T08:52:52.29679569Z level=info msg="Executing migration" id="create index UQE_org_name - v1" grafana | logger=migrator t=2024-04-26T08:52:52.29805119Z level=info msg="Migration successfully executed" id="create index UQE_org_name - v1" duration=1.254739ms grafana | logger=migrator t=2024-04-26T08:52:52.301579098Z level=info msg="Executing migration" id="create org_user table v1" grafana | logger=migrator t=2024-04-26T08:52:52.302655959Z level=info msg="Migration successfully executed" id="create org_user table v1" duration=1.076661ms grafana | logger=migrator t=2024-04-26T08:52:52.306663299Z level=info msg="Executing migration" id="create index IDX_org_user_org_id - v1" grafana | logger=migrator t=2024-04-26T08:52:52.307395895Z 
level=info msg="Migration successfully executed" id="create index IDX_org_user_org_id - v1" duration=732.586µs
grafana | logger=migrator t=2024-04-26T08:52:52.310632309Z level=info msg="Executing migration" id="create index UQE_org_user_org_id_user_id - v1"
grafana | logger=migrator t=2024-04-26T08:52:52.311380364Z level=info msg="Migration successfully executed" id="create index UQE_org_user_org_id_user_id - v1" duration=747.766µs
grafana | logger=migrator t=2024-04-26T08:52:52.317110586Z level=info msg="Executing migration" id="create index IDX_org_user_user_id - v1"
grafana | logger=migrator t=2024-04-26T08:52:52.317891623Z level=info msg="Migration successfully executed" id="create index IDX_org_user_user_id - v1" duration=780.167µs
grafana | logger=migrator t=2024-04-26T08:52:52.321081416Z level=info msg="Executing migration" id="Update org table charset"
grafana | logger=migrator t=2024-04-26T08:52:52.321128258Z level=info msg="Migration successfully executed" id="Update org table charset" duration=47.912µs
grafana | logger=migrator t=2024-04-26T08:52:52.323835307Z level=info msg="Executing migration" id="Update org_user table charset"
grafana | logger=migrator t=2024-04-26T08:52:52.323870748Z level=info msg="Migration successfully executed" id="Update org_user table charset" duration=40.512µs
grafana | logger=migrator t=2024-04-26T08:52:52.32727392Z level=info msg="Executing migration" id="Migrate all Read Only Viewers to Viewers"
grafana | logger=migrator t=2024-04-26T08:52:52.327552383Z level=info msg="Migration successfully executed" id="Migrate all Read Only Viewers to Viewers" duration=278.323µs
grafana | logger=migrator t=2024-04-26T08:52:52.332894537Z level=info msg="Executing migration" id="create dashboard table"
grafana | logger=migrator t=2024-04-26T08:52:52.333649663Z level=info msg="Migration successfully executed" id="create dashboard table" duration=753.506µs
grafana | logger=migrator t=2024-04-26T08:52:52.336974621Z level=info msg="Executing migration" id="add index dashboard.account_id"
grafana | logger=migrator t=2024-04-26T08:52:52.33778778Z level=info msg="Migration successfully executed" id="add index dashboard.account_id" duration=812.769µs
grafana | logger=migrator t=2024-04-26T08:52:52.341070146Z level=info msg="Executing migration" id="add unique index dashboard_account_id_slug"
grafana | logger=migrator t=2024-04-26T08:52:52.342325966Z level=info msg="Migration successfully executed" id="add unique index dashboard_account_id_slug" duration=1.255239ms
grafana | logger=migrator t=2024-04-26T08:52:52.34597115Z level=info msg="Executing migration" id="create dashboard_tag table"
grafana | logger=migrator t=2024-04-26T08:52:52.346644841Z level=info msg="Migration successfully executed" id="create dashboard_tag table" duration=674.141µs
grafana | logger=migrator t=2024-04-26T08:52:52.352528711Z level=info msg="Executing migration" id="add unique index dashboard_tag.dasboard_id_term"
grafana | logger=migrator t=2024-04-26T08:52:52.353334479Z level=info msg="Migration successfully executed" id="add unique index dashboard_tag.dasboard_id_term" duration=800.388µs
grafana | logger=migrator t=2024-04-26T08:52:52.356466308Z level=info msg="Executing migration" id="drop index UQE_dashboard_tag_dashboard_id_term - v1"
grafana | logger=migrator t=2024-04-26T08:52:52.357148401Z level=info msg="Migration successfully executed" id="drop index UQE_dashboard_tag_dashboard_id_term - v1" duration=681.283µs
grafana | logger=migrator t=2024-04-26T08:52:52.360117602Z level=info msg="Executing migration" id="Rename table dashboard to dashboard_v1 - v1"
grafana | logger=migrator t=2024-04-26T08:52:52.365181933Z level=info msg="Migration successfully executed" id="Rename table dashboard to dashboard_v1 - v1" duration=5.06393ms
grafana | logger=migrator t=2024-04-26T08:52:52.371951525Z level=info msg="Executing migration" id="create dashboard v2"
grafana | logger=migrator t=2024-04-26T08:52:52.372771384Z level=info msg="Migration successfully executed" id="create dashboard v2" duration=819.519µs
grafana | logger=migrator t=2024-04-26T08:52:52.376390346Z level=info msg="Executing migration" id="create index IDX_dashboard_org_id - v2"
grafana | logger=migrator t=2024-04-26T08:52:52.377556392Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_org_id - v2" duration=1.168097ms
grafana | logger=migrator t=2024-04-26T08:52:52.381501499Z level=info msg="Executing migration" id="create index UQE_dashboard_org_id_slug - v2"
grafana | logger=migrator t=2024-04-26T08:52:52.382497537Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_org_id_slug - v2" duration=996.438µs
grafana | logger=migrator t=2024-04-26T08:52:52.38803921Z level=info msg="Executing migration" id="copy dashboard v1 to v2"
grafana | logger=migrator t=2024-04-26T08:52:52.388367576Z level=info msg="Migration successfully executed" id="copy dashboard v1 to v2" duration=328.326µs
grafana | logger=migrator t=2024-04-26T08:52:52.391559838Z level=info msg="Executing migration" id="drop table dashboard_v1"
mariadb | mysql -uroot -p"${MYSQL_ROOT_PASSWORD}" --execute "FLUSH PRIVILEGES;"
grafana | logger=migrator t=2024-04-26T08:52:52.392361676Z level=info msg="Migration successfully executed" id="drop table dashboard_v1" duration=802.838µs
mariadb | + mysql -uroot -psecret --execute 'FLUSH PRIVILEGES;'
policy-apex-pdp | Waiting for mariadb port 3306...
grafana | logger=migrator t=2024-04-26T08:52:52.399735517Z level=info msg="Executing migration" id="alter dashboard.data to mediumtext v1"
kafka | ===> User
kafka | uid=1000(appuser) gid=1000(appuser) groups=1000(appuser)
mariadb | mysql -u"${MYSQL_USER}" -p"${MYSQL_PASSWORD}" -f policyclamp < /tmp/policy-clamp-create-tables.sql
policy-api | Waiting for mariadb port 3306...
policy-apex-pdp | mariadb (172.17.0.4:3306) open
policy-apex-pdp | Waiting for kafka port 9092...
grafana | logger=migrator t=2024-04-26T08:52:52.399832311Z level=info msg="Migration successfully executed" id="alter dashboard.data to mediumtext v1" duration=97.435µs
grafana | logger=migrator t=2024-04-26T08:52:52.403567589Z level=info msg="Executing migration" id="Add column updated_by in dashboard - v2"
policy-db-migrator | Waiting for mariadb port 3306...
kafka | ===> Configuring ...
mariadb | + mysql -upolicy_user -ppolicy_user -f policyclamp
policy-api | mariadb (172.17.0.4:3306) open
prometheus | ts=2024-04-26T08:52:51.098Z caller=main.go:573 level=info msg="No time or size retention was set so using the default time retention" duration=15d
policy-apex-pdp | kafka (172.17.0.6:9092) open
policy-pap | Waiting for mariadb port 3306...
simulator | Policy simulator config file: /opt/app/policy/simulators/etc/mounted/simParameters.json
grafana | logger=migrator t=2024-04-26T08:52:52.406862415Z level=info msg="Migration successfully executed" id="Add column updated_by in dashboard - v2" duration=3.293957ms
zookeeper | ===> User
policy-db-migrator | nc: connect to mariadb (172.17.0.4) port 3306 (tcp) failed: Connection refused
kafka | Running in Zookeeper mode...
mariadb |
policy-api | Waiting for policy-db-migrator port 6824...
prometheus | ts=2024-04-26T08:52:51.098Z caller=main.go:617 level=info msg="Starting Prometheus Server" mode=server version="(version=2.51.2, branch=HEAD, revision=b4c0ab52c3e9b940ab803581ddae9b3d9a452337)"
policy-apex-pdp | Waiting for pap port 6969...
policy-pap | mariadb (172.17.0.4:3306) open
simulator | overriding logback.xml
grafana | logger=migrator t=2024-04-26T08:52:52.410291808Z level=info msg="Executing migration" id="Add column created_by in dashboard - v2"
zookeeper | uid=1000(appuser) gid=1000(appuser) groups=1000(appuser)
policy-db-migrator | nc: connect to mariadb (172.17.0.4) port 3306 (tcp) failed: Connection refused
kafka | ===> Running preflight checks ...
mariadb | 2024-04-26 08:53:00+00:00 [Note] [Entrypoint]: Stopping temporary server
policy-api | policy-db-migrator (172.17.0.8:6824) open
prometheus | ts=2024-04-26T08:52:51.098Z caller=main.go:622 level=info build_context="(go=go1.22.2, platform=linux/amd64, user=root@b63f02a423d9, date=20240410-14:05:54, tags=netgo,builtinassets,stringlabels)"
policy-apex-pdp | pap (172.17.0.10:6969) open
policy-pap | Waiting for kafka port 9092...
simulator | 2024-04-26 08:52:55,719 INFO replacing 'HOST_NAME' with simulator in /opt/app/policy/simulators/etc/mounted/simParameters.json
grafana | logger=migrator t=2024-04-26T08:52:52.412029151Z level=info msg="Migration successfully executed" id="Add column created_by in dashboard - v2" duration=1.737163ms
zookeeper | ===> Configuring ...
policy-db-migrator | nc: connect to mariadb (172.17.0.4) port 3306 (tcp) failed: Connection refused
kafka | ===> Check if /var/lib/kafka/data is writable ...
mariadb | 2024-04-26 8:53:00 0 [Note] mariadbd (initiated by: unknown): Normal shutdown
policy-api | Policy api config file: /opt/app/policy/api/etc/apiParameters.yaml
prometheus | ts=2024-04-26T08:52:51.098Z caller=main.go:623 level=info host_details="(Linux 4.15.0-192-generic #203-Ubuntu SMP Wed Aug 10 17:40:03 UTC 2022 x86_64 prometheus (none))"
policy-apex-pdp | apexApps.sh: running application 'onappf' with command 'java -Dlogback.configurationFile=/opt/app/policy/apex-pdp/etc/logback.xml -cp /opt/app/policy/apex-pdp/etc:/opt/app/policy/apex-pdp/etc/hazelcast:/opt/app/policy/apex-pdp/etc/infinispan:/opt/app/policy/apex-pdp/lib/* -Djavax.net.ssl.keyStore=/opt/app/policy/apex-pdp/etc/ssl/policy-keystore -Djavax.net.ssl.keyStorePassword=Pol1cy_0nap -Djavax.net.ssl.trustStore=/opt/app/policy/apex-pdp/etc/ssl/policy-truststore -Djavax.net.ssl.trustStorePassword=Pol1cy_0nap -Dlogback.configurationFile=/opt/app/policy/apex-pdp/etc/logback.xml -Dhazelcast.config=/opt/app/policy/apex-pdp/etc/hazelcast.xml -Dhazelcast.mancenter.enabled=false org.onap.policy.apex.services.onappf.ApexStarterMain -c /opt/app/policy/apex-pdp/etc/onappf/config/OnapPfConfig.json'
policy-pap | kafka (172.17.0.6:9092) open
simulator | 2024-04-26 08:52:55,796 INFO org.onap.policy.models.simulators starting
zookeeper | ===> Running preflight checks ...
policy-db-migrator | nc: connect to mariadb (172.17.0.4) port 3306 (tcp) failed: Connection refused
grafana | logger=migrator t=2024-04-26T08:52:52.41558915Z level=info msg="Executing migration" id="Add column gnetId in dashboard"
kafka | ===> Check if Zookeeper is healthy ...
mariadb | 2024-04-26 8:53:00 0 [Note] InnoDB: FTS optimize thread exiting.
policy-api |
prometheus | ts=2024-04-26T08:52:51.098Z caller=main.go:624 level=info fd_limits="(soft=1048576, hard=1048576)"
policy-apex-pdp | [2024-04-26T08:53:35.630+00:00|INFO|ApexStarterMain|main] In ApexStarter with parameters [-c, /opt/app/policy/apex-pdp/etc/onappf/config/OnapPfConfig.json]
policy-pap | Waiting for api port 6969...
simulator | 2024-04-26 08:52:55,796 INFO org.onap.policy.models.simulators starting CDS gRPC Server Properties
zookeeper | ===> Check if /var/lib/zookeeper/data is writable ...
policy-db-migrator | nc: connect to mariadb (172.17.0.4) port 3306 (tcp) failed: Connection refused
grafana | logger=migrator t=2024-04-26T08:52:52.417337894Z level=info msg="Migration successfully executed" id="Add column gnetId in dashboard" duration=1.746053ms
kafka | [2024-04-26 08:52:57,485] INFO Client environment:zookeeper.version=3.8.3-6ad6d364c7c0bcf0de452d54ebefa3058098ab56, built on 2023-10-05 10:34 UTC (org.apache.zookeeper.ZooKeeper)
mariadb | 2024-04-26 8:53:00 0 [Note] InnoDB: Starting shutdown...
policy-api | . ____ _ __ _ _
prometheus | ts=2024-04-26T08:52:51.098Z caller=main.go:625 level=info vm_limits="(soft=unlimited, hard=unlimited)"
policy-apex-pdp | [2024-04-26T08:53:35.821+00:00|INFO|ConsumerConfig|main] ConsumerConfig values:
policy-pap | api (172.17.0.9:6969) open
simulator | 2024-04-26 08:52:55,996 INFO org.onap.policy.models.simulators starting org.onap.policy.simulators.AaiSimulatorJaxRs_RESOURCE_LOCATION
zookeeper | ===> Check if /var/lib/zookeeper/log is writable ...
policy-db-migrator | Connection to mariadb (172.17.0.4) 3306 port [tcp/mysql] succeeded!
grafana | logger=migrator t=2024-04-26T08:52:52.423040965Z level=info msg="Executing migration" id="Add index for gnetId in dashboard"
kafka | [2024-04-26 08:52:57,485] INFO Client environment:host.name=324f31b114cb (org.apache.zookeeper.ZooKeeper)
mariadb | 2024-04-26 8:53:00 0 [Note] InnoDB: Dumping buffer pool(s) to /var/lib/mysql/ib_buffer_pool
policy-api | /\\ / ___'_ __ _ _(_)_ __ __ _ \ \ \ \
prometheus | ts=2024-04-26T08:52:51.102Z caller=web.go:568 level=info component=web msg="Start listening for connections" address=0.0.0.0:9090
policy-apex-pdp | allow.auto.create.topics = true
policy-pap | Policy pap config file: /opt/app/policy/pap/etc/papParameters.yaml
simulator | 2024-04-26 08:52:55,997 INFO org.onap.policy.models.simulators starting A&AI simulator
zookeeper | ===> Launching ...
policy-db-migrator | 321 blocks
grafana | logger=migrator t=2024-04-26T08:52:52.423807071Z level=info msg="Migration successfully executed" id="Add index for gnetId in dashboard" duration=766.016µs
kafka | [2024-04-26 08:52:57,485] INFO Client environment:java.version=11.0.22 (org.apache.zookeeper.ZooKeeper)
mariadb | 2024-04-26 8:53:00 0 [Note] InnoDB: Buffer pool(s) dump completed at 240426 8:53:00
policy-api | ( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \
prometheus | ts=2024-04-26T08:52:51.103Z caller=main.go:1129 level=info msg="Starting TSDB ..."
policy-apex-pdp | auto.commit.interval.ms = 5000
policy-pap | PDP group configuration file: /opt/app/policy/pap/etc/mounted/groups.json
simulator | 2024-04-26 08:52:56,120 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-2e61d218==org.glassfish.jersey.servlet.ServletContainer@60d118f1{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=A&AI simulator, host=0.0.0.0, port=6666, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@64a8c844{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@3f6db3fb{/,null,STOPPED}, connector=A&AI simulator@6f152006{HTTP/1.1, (http/1.1)}{0.0.0.0:6666}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-2e61d218==org.glassfish.jersey.servlet.ServletContainer@60d118f1{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: WAITED-START
zookeeper | ===> Launching zookeeper ...
policy-db-migrator | Preparing upgrade release version: 0800
grafana | logger=migrator t=2024-04-26T08:52:52.431928107Z level=info msg="Executing migration" id="Add column plugin_id in dashboard"
kafka | [2024-04-26 08:52:57,485] INFO Client environment:java.vendor=Azul Systems, Inc. (org.apache.zookeeper.ZooKeeper)
mariadb | 2024-04-26 8:53:01 0 [Note] InnoDB: Removed temporary tablespace data file: "./ibtmp1"
policy-api | \\/ ___)| |_)| | | | | || (_| | ) ) ) )
prometheus | ts=2024-04-26T08:52:51.106Z caller=tls_config.go:313 level=info component=web msg="Listening on" address=[::]:9090
policy-apex-pdp | auto.include.jmx.reporter = true
policy-pap |
simulator | 2024-04-26 08:52:56,130 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-2e61d218==org.glassfish.jersey.servlet.ServletContainer@60d118f1{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=A&AI simulator, host=0.0.0.0, port=6666, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@64a8c844{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@3f6db3fb{/,null,STOPPED}, connector=A&AI simulator@6f152006{HTTP/1.1, (http/1.1)}{0.0.0.0:6666}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-2e61d218==org.glassfish.jersey.servlet.ServletContainer@60d118f1{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING
zookeeper | [2024-04-26 08:52:54,025] INFO Reading configuration from: /etc/kafka/zookeeper.properties (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
policy-db-migrator | Preparing upgrade release version: 0900
grafana | logger=migrator t=2024-04-26T08:52:52.43493666Z level=info msg="Migration successfully executed" id="Add column plugin_id in dashboard" duration=3.005663ms
kafka | [2024-04-26 08:52:57,485] INFO Client environment:java.home=/usr/lib/jvm/java-11-zulu-openjdk-ca (org.apache.zookeeper.ZooKeeper)
mariadb | 2024-04-26 8:53:01 0 [Note] InnoDB: Shutdown completed; log sequence number 332532; transaction id 298
policy-api | ' |____| .__|_| |_|_| |_\__, | / / / /
prometheus | ts=2024-04-26T08:52:51.106Z caller=tls_config.go:316 level=info component=web msg="TLS is disabled." http2=false address=[::]:9090
policy-apex-pdp | auto.offset.reset = latest
policy-pap | . ____ _ __ _ _
simulator | 2024-04-26 08:52:56,134 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-2e61d218==org.glassfish.jersey.servlet.ServletContainer@60d118f1{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=A&AI simulator, host=0.0.0.0, port=6666, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@64a8c844{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@3f6db3fb{/,null,STOPPED}, connector=A&AI simulator@6f152006{HTTP/1.1, (http/1.1)}{0.0.0.0:6666}, jettyThread=Thread[A&AI simulator-6666,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-2e61d218==org.glassfish.jersey.servlet.ServletContainer@60d118f1{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING
zookeeper | [2024-04-26 08:52:54,032] INFO clientPortAddress is 0.0.0.0:2181 (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
policy-db-migrator | Preparing upgrade release version: 1000
grafana | logger=migrator t=2024-04-26T08:52:52.437922332Z level=info msg="Executing migration" id="Add index for plugin_id in dashboard"
kafka | [2024-04-26 08:52:57,485] INFO Client environment:java.class.path=/usr/share/java/cp-base-new/scala-logging_2.13-3.9.4.jar:/usr/share/java/cp-base-new/jackson-datatype-jdk8-2.14.2.jar:/usr/share/java/cp-base-new/kafka-storage-api-7.6.1-ccs.jar:/usr/share/java/cp-base-new/commons-validator-1.7.jar:/usr/share/java/cp-base-new/logredactor-1.0.12.jar:/usr/share/java/cp-base-new/jolokia-core-1.7.1.jar:/usr/share/java/cp-base-new/re2j-1.6.jar:/usr/share/java/cp-base-new/scala-library-2.13.10.jar:/usr/share/java/cp-base-new/commons-io-2.11.0.jar:/usr/share/java/cp-base-new/kafka-clients-7.6.1-ccs.jar:/usr/share/java/cp-base-new/utility-belt-7.6.1.jar:/usr/share/java/cp-base-new/commons-cli-1.4.jar:/usr/share/java/cp-base-new/slf4j-reload4j-1.7.36.jar:/usr/share/java/cp-base-new/kafka-server-common-7.6.1-ccs.jar:/usr/share/java/cp-base-new/kafka-group-coordinator-7.6.1-ccs.jar:/usr/share/java/cp-base-new/common-utils-7.6.1.jar:/usr/share/java/cp-base-new/jackson-annotations-2.14.2.jar:/usr/share/java/cp-base-new/json-simple-1.1.1.jar:/usr/share/java/cp-base-new/jackson-module-scala_2.13-2.14.2.jar:/usr/share/java/cp-base-new/scala-java8-compat_2.13-1.0.2.jar:/usr/share/java/cp-base-new/minimal-json-0.9.5.jar:/usr/share/java/cp-base-new/kafka-storage-7.6.1-ccs.jar:/usr/share/java/cp-base-new/zstd-jni-1.5.5-1.jar:/usr/share/java/cp-base-new/jackson-dataformat-yaml-2.14.2.jar:/usr/share/java/cp-base-new/slf4j-api-1.7.36.jar:/usr/share/java/cp-base-new/paranamer-2.8.jar:/usr/share/java/cp-base-new/commons-beanutils-1.9.4.jar:/usr/share/java/cp-base-new/jmx_prometheus_javaagent-0.18.0.jar:/usr/share/java/cp-base-new/reload4j-1.2.25.jar:/usr/share/java/cp-base-new/jackson-core-2.14.2.jar:/usr/share/java/cp-base-new/disk-usage-agent-7.6.1.jar:/usr/share/java/cp-base-new/commons-digester-2.1.jar:/usr/share/java/cp-base-new/argparse4j-0.7.0.jar:/usr/share/java/cp-base-new/audience-annotations-0.12.0.jar:/usr/share/java/cp-base-new/kafka-raft-7.6.1-ccs.jar:/usr/share/java/cp-base-new/gson-2.9.0.jar:/usr/share/java/cp-base-new/snakeyaml-2.0.jar:/usr/share/java/cp-base-new/jopt-simple-5.0.4.jar:/usr/share/java/cp-base-new/error_prone_annotations-2.10.0.jar:/usr/share/java/cp-base-new/lz4-java-1.8.0.jar:/usr/share/java/cp-base-new/logredactor-metrics-1.0.12.jar:/usr/share/java/cp-base-new/checker-qual-3.19.0.jar:/usr/share/java/cp-base-new/kafka-metadata-7.6.1-ccs.jar:/usr/share/java/cp-base-new/pcollections-4.0.1.jar:/usr/share/java/cp-base-new/commons-logging-1.2.jar:/usr/share/java/cp-base-new/scala-reflect-2.13.10.jar:/usr/share/java/cp-base-new/scala-collection-compat_2.13-2.10.0.jar:/usr/share/java/cp-base-new/metrics-core-2.2.0.jar:/usr/share/java/cp-base-new/zookeeper-jute-3.8.3.jar:/usr/share/java/cp-base-new/jackson-dataformat-csv-2.14.2.jar:/usr/share/java/cp-base-new/commons-collections-3.2.2.jar:/usr/share/java/cp-base-new/caffeine-2.9.3.jar:/usr/share/java/cp-base-new/jolokia-jvm-1.7.1.jar:/usr/share/java/cp-base-new/kafka-tools-api-7.6.1-ccs.jar:/usr/share/java/cp-base-new/zookeeper-3.8.3.jar:/usr/share/java/cp-base-new/metrics-core-4.1.12.1.jar:/usr/share/java/cp-base-new/kafka_2.13-7.6.1-ccs.jar:/usr/share/java/cp-base-new/jackson-databind-2.14.2.jar:/usr/share/java/cp-base-new/snappy-java-1.1.10.5.jar:/usr/share/java/cp-base-new/jose4j-0.9.5.jar (org.apache.zookeeper.ZooKeeper)
mariadb | 2024-04-26 8:53:01 0 [Note] mariadbd: Shutdown complete
policy-api | =========|_|==============|___/=/_/_/_/
prometheus | ts=2024-04-26T08:52:51.110Z caller=head.go:616 level=info component=tsdb msg="Replaying on-disk memory mappable chunks if any"
policy-apex-pdp | bootstrap.servers = [kafka:9092]
policy-pap | /\\ / ___'_ __ _ _(_)_ __ __ _ \ \ \ \
simulator | 2024-04-26 08:52:56,141 INFO jetty-11.0.20; built: 2024-01-29T21:04:22.394Z; git: 922f8dc188f7011e60d0361de585fd4ac4d63064; jvm 17.0.11+9-alpine-r0
zookeeper | [2024-04-26 08:52:54,032] INFO secureClientPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
policy-db-migrator | Preparing upgrade release version: 1100
grafana | logger=migrator t=2024-04-26T08:52:52.439168952Z level=info msg="Migration successfully executed" id="Add index for plugin_id in dashboard" duration=1.24607ms
kafka | [2024-04-26 08:52:57,485] INFO Client environment:java.library.path=/usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.ZooKeeper)
mariadb |
policy-api | :: Spring Boot :: (v3.1.10)
prometheus | ts=2024-04-26T08:52:51.110Z caller=head.go:698 level=info component=tsdb msg="On-disk memory mappable chunks replay completed" duration=3.24µs
policy-apex-pdp | check.crcs = true
policy-pap | ( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \
simulator | 2024-04-26 08:52:56,201 INFO Session workerName=node0
zookeeper | [2024-04-26 08:52:54,033] INFO observerMasterPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
policy-db-migrator | Preparing upgrade release version: 1200
grafana | logger=migrator t=2024-04-26T08:52:52.545792843Z level=info msg="Executing migration" id="Add index for dashboard_id in dashboard_tag"
kafka | [2024-04-26 08:52:57,485] INFO Client environment:java.io.tmpdir=/tmp (org.apache.zookeeper.ZooKeeper)
mariadb | 2024-04-26 08:53:01+00:00 [Note] [Entrypoint]: Temporary server stopped
policy-api |
prometheus | ts=2024-04-26T08:52:51.110Z caller=head.go:706 level=info component=tsdb msg="Replaying WAL, this may take a while"
policy-pap | \\/ ___)| |_)| | | | | || (_| | ) ) ) )
zookeeper | [2024-04-26 08:52:54,033] INFO metricsProvider.className is org.apache.zookeeper.metrics.impl.DefaultMetricsProvider (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
policy-db-migrator | Preparing upgrade release version: 1300
grafana | logger=migrator t=2024-04-26T08:52:52.546921516Z level=info msg="Migration successfully executed" id="Add index for dashboard_id in dashboard_tag" duration=1.129413ms
kafka | [2024-04-26 08:52:57,485] INFO Client environment:java.compiler= (org.apache.zookeeper.ZooKeeper)
mariadb |
policy-api | [2024-04-26T08:53:10.944+00:00|INFO|Version|background-preinit] HV000001: Hibernate Validator 8.0.1.Final
prometheus | ts=2024-04-26T08:52:51.111Z caller=head.go:778 level=info component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0
policy-apex-pdp | client.dns.lookup = use_all_dns_ips
policy-pap | ' |____| .__|_| |_|_| |_\__, | / / / /
zookeeper | [2024-04-26 08:52:54,034] INFO autopurge.snapRetainCount set to 3 (org.apache.zookeeper.server.DatadirCleanupManager)
policy-db-migrator | Done
grafana | logger=migrator t=2024-04-26T08:52:52.550501737Z level=info msg="Executing migration" id="Update dashboard table charset"
kafka | [2024-04-26 08:52:57,486] INFO Client environment:os.name=Linux (org.apache.zookeeper.ZooKeeper)
mariadb | 2024-04-26 08:53:01+00:00 [Note] [Entrypoint]: MariaDB init process done. Ready for start up.
policy-api | [2024-04-26T08:53:11.011+00:00|INFO|PolicyApiApplication|main] Starting PolicyApiApplication using Java 17.0.11 with PID 22 (/app/api.jar started by policy in /opt/app/policy/api/bin)
prometheus | ts=2024-04-26T08:52:51.111Z caller=head.go:815 level=info component=tsdb msg="WAL replay completed" checkpoint_replay_duration=43.022µs wal_replay_duration=514.024µs wbl_replay_duration=310ns total_replay_duration=616.829µs
policy-apex-pdp | client.id = consumer-47b1e3a1-a4a9-4bf2-95ae-f10384287681-1
policy-apex-pdp | client.rack =
policy-pap | =========|_|==============|___/=/_/_/_/
zookeeper | [2024-04-26 08:52:54,034] INFO autopurge.purgeInterval set to 0 (org.apache.zookeeper.server.DatadirCleanupManager)
policy-db-migrator | name version
grafana | logger=migrator t=2024-04-26T08:52:52.55054931Z level=info msg="Migration successfully executed" id="Update dashboard table charset" duration=48.563µs
kafka | [2024-04-26 08:52:57,486] INFO Client environment:os.arch=amd64 (org.apache.zookeeper.ZooKeeper)
mariadb |
policy-api | [2024-04-26T08:53:11.013+00:00|INFO|PolicyApiApplication|main] No active profile set, falling back to 1 default profile: "default"
prometheus | ts=2024-04-26T08:52:51.114Z caller=main.go:1150 level=info fs_type=EXT4_SUPER_MAGIC
simulator | 2024-04-26 08:52:56,781 INFO Using GSON for REST calls
policy-apex-pdp | connections.max.idle.ms = 540000
policy-apex-pdp | default.api.timeout.ms = 60000
zookeeper | [2024-04-26 08:52:54,034] INFO Purge task is not scheduled. (org.apache.zookeeper.server.DatadirCleanupManager)
policy-db-migrator | policyadmin 0
grafana | logger=migrator t=2024-04-26T08:52:52.554268606Z level=info msg="Executing migration" id="Update dashboard_tag table charset"
kafka | [2024-04-26 08:52:57,486] INFO Client environment:os.version=4.15.0-192-generic (org.apache.zookeeper.ZooKeeper)
mariadb | 2024-04-26 8:53:01 0 [Note] mariadbd (server 10.10.2-MariaDB-1:10.10.2+maria~ubu2204) starting as process 1 ...
policy-api | [2024-04-26T08:53:13.018+00:00|INFO|RepositoryConfigurationDelegate|main] Bootstrapping Spring Data JPA repositories in DEFAULT mode.
prometheus | ts=2024-04-26T08:52:51.114Z caller=main.go:1153 level=info msg="TSDB started"
simulator | 2024-04-26 08:52:56,886 INFO Started o.e.j.s.ServletContextHandler@3f6db3fb{/,null,AVAILABLE}
policy-pap | :: Spring Boot :: (v3.1.10)
policy-apex-pdp | enable.auto.commit = true
zookeeper | [2024-04-26 08:52:54,034] WARN Either no config or no quorum defined in config, running in standalone mode (org.apache.zookeeper.server.quorum.QuorumPeerMain)
policy-db-migrator | policyadmin: upgrade available: 0 -> 1300
grafana | logger=migrator t=2024-04-26T08:52:52.554308578Z level=info msg="Migration successfully executed" id="Update dashboard_tag table charset" duration=36.102µs
kafka | [2024-04-26 08:52:57,486] INFO Client environment:user.name=appuser (org.apache.zookeeper.ZooKeeper)
mariadb | 2024-04-26 8:53:01 0 [Note] InnoDB: Compressed tables use zlib 1.2.11
policy-api | [2024-04-26T08:53:13.102+00:00|INFO|RepositoryConfigurationDelegate|main] Finished Spring Data repository scanning in 75 ms. Found 6 JPA repository interfaces.
prometheus | ts=2024-04-26T08:52:51.114Z caller=main.go:1335 level=info msg="Loading configuration file" filename=/etc/prometheus/prometheus.yml
simulator | 2024-04-26 08:52:56,899 INFO Started A&AI simulator@6f152006{HTTP/1.1, (http/1.1)}{0.0.0.0:6666}
policy-pap |
policy-apex-pdp | exclude.internal.topics = true
zookeeper | [2024-04-26 08:52:54,035] INFO Log4j 1.2 jmx support not found; jmx disabled. (org.apache.zookeeper.jmx.ManagedUtil)
policy-db-migrator | upgrade: 0 -> 1300
grafana | logger=migrator t=2024-04-26T08:52:52.56044045Z level=info msg="Executing migration" id="Add column folder_id in dashboard"
kafka | [2024-04-26 08:52:57,486] INFO Client environment:user.home=/home/appuser (org.apache.zookeeper.ZooKeeper)
mariadb | 2024-04-26 8:53:01 0 [Note] InnoDB: Number of transaction pools: 1
policy-api | [2024-04-26T08:53:13.593+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.api.main.exception.ServiceExceptionHandler
prometheus | ts=2024-04-26T08:52:51.115Z caller=main.go:1372 level=info msg="Completed loading of configuration file" filename=/etc/prometheus/prometheus.yml totalDuration=1.373614ms db_storage=1.59µs remote_storage=2.73µs web_handler=490ns query_engine=1.08µs scrape=312.495µs scrape_sd=159.787µs notify=118.326µs notify_sd=14.13µs rules=2.19µs tracing=30.492µs
simulator | 2024-04-26 08:52:56,913 INFO Started Server@64a8c844{STARTING}[11.0.20,sto=0] @1660ms
policy-pap | [2024-04-26T08:53:25.138+00:00|INFO|Version|background-preinit] HV000001: Hibernate Validator 8.0.1.Final
policy-apex-pdp | fetch.max.bytes = 52428800
zookeeper | [2024-04-26 08:52:54,036] INFO Reading configuration from: /etc/kafka/zookeeper.properties (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
policy-db-migrator |
grafana | logger=migrator t=2024-04-26T08:52:52.563427532Z level=info msg="Migration successfully executed" id="Add column folder_id in dashboard" duration=2.986472ms
kafka | [2024-04-26 08:52:57,486] INFO Client environment:user.dir=/home/appuser (org.apache.zookeeper.ZooKeeper)
mariadb | 2024-04-26 8:53:01 0 [Note] InnoDB: Using crc32 + pclmulqdq instructions
policy-api | [2024-04-26T08:53:13.595+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.api.main.exception.ServiceExceptionHandler
prometheus | ts=2024-04-26T08:52:51.115Z caller=main.go:1114 level=info msg="Server is ready to receive web requests."
simulator | 2024-04-26 08:52:56,915 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-2e61d218==org.glassfish.jersey.servlet.ServletContainer@60d118f1{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}}, swaggerId=null, toString()=JettyServletServer(name=A&AI simulator, host=0.0.0.0, port=6666, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@64a8c844{STARTED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@3f6db3fb{/,null,AVAILABLE}, connector=A&AI simulator@6f152006{HTTP/1.1, (http/1.1)}{0.0.0.0:6666}, jettyThread=Thread[A&AI simulator-6666,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-2e61d218==org.glassfish.jersey.servlet.ServletContainer@60d118f1{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}})]: pending time is 4219 ms.
policy-pap | [2024-04-26T08:53:25.196+00:00|INFO|PolicyPapApplication|main] Starting PolicyPapApplication using Java 17.0.11 with PID 36 (/app/pap.jar started by policy in /opt/app/policy/pap/bin)
policy-apex-pdp | fetch.max.wait.ms = 500
zookeeper | [2024-04-26 08:52:54,036] INFO clientPortAddress is 0.0.0.0:2181 (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
policy-db-migrator | > upgrade 0100-jpapdpgroup_properties.sql
grafana | logger=migrator t=2024-04-26T08:52:52.566878266Z level=info msg="Executing migration" id="Add column isFolder in dashboard"
kafka | [2024-04-26 08:52:57,486] INFO Client environment:os.memory.free=493MB (org.apache.zookeeper.ZooKeeper)
mariadb | 2024-04-26 8:53:01 0 [Note] mariadbd: O_TMPFILE is not supported on /tmp (disabling future attempts)
policy-api | [2024-04-26T08:53:14.336+00:00|INFO|TomcatWebServer|main] Tomcat initialized with port(s): 6969 (http)
prometheus | ts=2024-04-26T08:52:51.115Z caller=manager.go:163 level=info component="rule manager" msg="Starting rule manager..."
simulator | 2024-04-26 08:52:56,922 INFO org.onap.policy.models.simulators starting SDNC simulator
policy-pap | [2024-04-26T08:53:25.197+00:00|INFO|PolicyPapApplication|main] No active profile set, falling back to 1 default profile: "default"
policy-apex-pdp | fetch.min.bytes = 1
zookeeper | [2024-04-26 08:52:54,036] INFO secureClientPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
policy-db-migrator | --------------
grafana | logger=migrator t=2024-04-26T08:52:52.568922013Z level=info msg="Migration successfully executed" id="Add column isFolder in dashboard" duration=2.043428ms
kafka | [2024-04-26 08:52:57,486] INFO Client environment:os.memory.max=8042MB (org.apache.zookeeper.ZooKeeper)
mariadb | 2024-04-26 8:53:01 0 [Warning] mariadbd: io_uring_queue_init() failed with ENOSYS: check seccomp filters, and the kernel version (newer than 5.1 required)
policy-api | [2024-04-26T08:53:14.352+00:00|INFO|Http11NioProtocol|main] Initializing ProtocolHandler ["http-nio-6969"]
simulator | 2024-04-26 08:52:56,925 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-bf1ec20==org.glassfish.jersey.servlet.ServletContainer@636d8afb{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SDNC simulator, host=0.0.0.0, port=6668, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@70efb718{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@b70da4c{/,null,STOPPED}, connector=SDNC simulator@c5ee75e{HTTP/1.1, (http/1.1)}{0.0.0.0:6668}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-bf1ec20==org.glassfish.jersey.servlet.ServletContainer@636d8afb{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: WAITED-START
policy-apex-pdp | group.id = 47b1e3a1-a4a9-4bf2-95ae-f10384287681
zookeeper | [2024-04-26 08:52:54,036] INFO observerMasterPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
policy-pap | [2024-04-26T08:53:27.296+00:00|INFO|RepositoryConfigurationDelegate|main] Bootstrapping Spring Data JPA repositories in DEFAULT mode.
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpgroup_properties (name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, PROPERTIES VARCHAR(255) DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) DEFAULT NULL)
grafana | logger=migrator t=2024-04-26T08:52:52.572252381Z level=info msg="Executing migration" id="Add column has_acl in dashboard"
kafka | [2024-04-26 08:52:57,486] INFO Client environment:os.memory.total=504MB (org.apache.zookeeper.ZooKeeper)
mariadb | 2024-04-26 8:53:01 0 [Warning] InnoDB: liburing disabled: falling back to innodb_use_native_aio=OFF
policy-api | [2024-04-26T08:53:14.355+00:00|INFO|StandardService|main] Starting service [Tomcat]
policy-api | [2024-04-26T08:53:14.355+00:00|INFO|StandardEngine|main] Starting Servlet engine: [Apache Tomcat/10.1.19]
policy-apex-pdp | group.instance.id = null
zookeeper | [2024-04-26 08:52:54,036] INFO metricsProvider.className is org.apache.zookeeper.metrics.impl.DefaultMetricsProvider (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
policy-pap | [2024-04-26T08:53:27.392+00:00|INFO|RepositoryConfigurationDelegate|main] Finished Spring Data repository scanning in 86 ms. Found 7 JPA repository interfaces.
policy-db-migrator | --------------
grafana | logger=migrator t=2024-04-26T08:52:52.574282048Z level=info msg="Migration successfully executed" id="Add column has_acl in dashboard" duration=2.029047ms
kafka | [2024-04-26 08:52:57,489] INFO Initiating client connection, connectString=zookeeper:2181 sessionTimeout=40000 watcher=io.confluent.admin.utils.ZookeeperConnectionWatcher@b7f23d9 (org.apache.zookeeper.ZooKeeper)
mariadb | 2024-04-26 8:53:01 0 [Note] InnoDB: Initializing buffer pool, total size = 128.000MiB, chunk size = 2.000MiB
simulator | 2024-04-26 08:52:56,925 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-bf1ec20==org.glassfish.jersey.servlet.ServletContainer@636d8afb{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SDNC simulator, host=0.0.0.0, port=6668, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@70efb718{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@b70da4c{/,null,STOPPED}, connector=SDNC simulator@c5ee75e{HTTP/1.1, (http/1.1)}{0.0.0.0:6668}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-bf1ec20==org.glassfish.jersey.servlet.ServletContainer@636d8afb{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING
policy-api | [2024-04-26T08:53:14.470+00:00|INFO|[/policy/api/v1]|main] Initializing Spring embedded WebApplicationContext
policy-apex-pdp | heartbeat.interval.ms = 3000
zookeeper | [2024-04-26 08:52:54,036] INFO Starting server (org.apache.zookeeper.server.ZooKeeperServerMain)
policy-pap | [2024-04-26T08:53:27.911+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.exception.ServiceExceptionHandler
policy-db-migrator |
grafana | logger=migrator t=2024-04-26T08:52:52.57999614Z level=info msg="Executing migration" id="Add column uid in dashboard"
kafka | [2024-04-26 08:52:57,493] INFO Setting -D jdk.tls.rejectClientInitiatedRenegotiation=true to disable client-initiated TLS renegotiation (org.apache.zookeeper.common.X509Util)
mariadb | 2024-04-26 8:53:01 0 [Note] InnoDB: Completed initialization of buffer pool
simulator | 2024-04-26 08:52:56,931 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-bf1ec20==org.glassfish.jersey.servlet.ServletContainer@636d8afb{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SDNC simulator, host=0.0.0.0, port=6668, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@70efb718{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@b70da4c{/,null,STOPPED}, connector=SDNC simulator@c5ee75e{HTTP/1.1, (http/1.1)}{0.0.0.0:6668}, jettyThread=Thread[SDNC simulator-6668,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-bf1ec20==org.glassfish.jersey.servlet.ServletContainer@636d8afb{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING
policy-api | [2024-04-26T08:53:14.470+00:00|INFO|ServletWebServerApplicationContext|main] Root WebApplicationContext: initialization completed in 3387 ms
policy-apex-pdp | interceptor.classes = []
zookeeper | [2024-04-26 08:52:54,048] INFO ServerMetrics initialized with provider org.apache.zookeeper.metrics.impl.DefaultMetricsProvider@77eca502 (org.apache.zookeeper.server.ServerMetrics)
policy-pap | [2024-04-26T08:53:27.912+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.exception.ServiceExceptionHandler
policy-db-migrator |
grafana | logger=migrator t=2024-04-26T08:52:52.582025697Z level=info msg="Migration successfully executed" id="Add column uid in dashboard" duration=2.026136ms
kafka | [2024-04-26 08:52:57,498] INFO jute.maxbuffer value is 1048575 Bytes (org.apache.zookeeper.ClientCnxnSocket)
mariadb | 2024-04-26 8:53:01 0 [Note] InnoDB: File system buffers for log disabled (block size=512 bytes)
simulator | 2024-04-26 08:52:56,932 INFO jetty-11.0.20; built: 2024-01-29T21:04:22.394Z; git: 922f8dc188f7011e60d0361de585fd4ac4d63064; jvm 17.0.11+9-alpine-r0
policy-api | [2024-04-26T08:53:14.944+00:00|INFO|LogHelper|main] HHH000204: Processing PersistenceUnitInfo [name: default]
policy-apex-pdp | internal.leave.group.on.close = true
zookeeper | [2024-04-26 08:52:54,050] INFO ACL digest algorithm is: SHA1 (org.apache.zookeeper.server.auth.DigestAuthenticationProvider)
policy-pap | [2024-04-26T08:53:28.601+00:00|INFO|TomcatWebServer|main] Tomcat initialized with port(s): 6969 (http)
policy-db-migrator | > upgrade 0110-jpapdpstatistics_enginestats.sql
grafana | logger=migrator t=2024-04-26T08:52:52.585006778Z level=info msg="Executing migration" id="Update uid column values in dashboard"
kafka | [2024-04-26 08:52:57,505] INFO zookeeper.request.timeout value is 0. feature enabled=false (org.apache.zookeeper.ClientCnxn)
mariadb | 2024-04-26 8:53:01 0 [Note] InnoDB: 128 rollback segments are active.
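The policy-pap WARN lines about LocalVariableTableParameterNameDiscoverer mean those classes were compiled without javac's `-parameters` flag, so Spring falls back to reading debug info to recover method parameter names. A minimal, self-contained sketch of the difference (the `ParamNameDemo` class and `deploy` method are hypothetical, purely for illustration):

```java
import java.lang.reflect.Method;

// Hypothetical demo class; not part of policy-pap itself.
public class ParamNameDemo {
    public void deploy(String policyName) { }

    public static void main(String[] args) throws Exception {
        Method m = ParamNameDemo.class.getMethod("deploy", String.class);
        // Compiled with "javac -parameters": prints the real name "policyName".
        // Compiled without it: prints a synthetic name like "arg0",
        // which is what triggers the fallback warning seen in the log.
        System.out.println(m.getParameters()[0].getName());
    }
}
```

In a Maven build (ONAP policy projects build with Maven), the flag is typically enabled via the compiler plugin's `<parameters>true</parameters>` setting, as the warning itself suggests.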
simulator | 2024-04-26 08:52:56,951 INFO Session workerName=node0
policy-api | [2024-04-26T08:53:15.006+00:00|INFO|Version|main] HHH000412: Hibernate ORM core version 6.3.2.Final
policy-apex-pdp | internal.throw.on.fetch.stable.offset.unsupported = false
zookeeper | [2024-04-26 08:52:54,051] INFO zookeeper.DigestAuthenticationProvider.enabled = true (org.apache.zookeeper.server.auth.DigestAuthenticationProvider)
policy-pap | [2024-04-26T08:53:28.611+00:00|INFO|Http11NioProtocol|main] Initializing ProtocolHandler ["http-nio-6969"]
policy-db-migrator | --------------
grafana | logger=migrator t=2024-04-26T08:52:52.585240149Z level=info msg="Migration successfully executed" id="Update uid column values in dashboard" duration=232.921µs
kafka | [2024-04-26 08:52:57,533] INFO Opening socket connection to server zookeeper/172.17.0.2:2181. (org.apache.zookeeper.ClientCnxn)
mariadb | 2024-04-26 8:53:01 0 [Note] InnoDB: Setting file './ibtmp1' size to 12.000MiB. Physically writing the file full; Please wait ...
simulator | 2024-04-26 08:52:57,009 INFO Using GSON for REST calls
policy-api | [2024-04-26T08:53:15.048+00:00|INFO|RegionFactoryInitiator|main] HHH000026: Second-level cache disabled
policy-apex-pdp | isolation.level = read_uncommitted
zookeeper | [2024-04-26 08:52:54,053] INFO zookeeper.snapshot.trust.empty : false (org.apache.zookeeper.server.persistence.FileTxnSnapLog)
policy-pap | [2024-04-26T08:53:28.614+00:00|INFO|StandardService|main] Starting service [Tomcat]
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpstatistics_enginestats (AVERAGEEXECUTIONTIME DOUBLE DEFAULT NULL, ENGINEID VARCHAR(255) DEFAULT NULL, ENGINETIMESTAMP BIGINT DEFAULT NULL, ENGINEWORKERSTATE INT DEFAULT NULL, EVENTCOUNT BIGINT DEFAULT NULL, LASTENTERTIME BIGINT DEFAULT NULL, LASTEXECUTIONTIME BIGINT DEFAULT NULL, LASTSTART BIGINT DEFAULT NULL, UPTIME BIGINT DEFAULT NULL, timeStamp datetime DEFAULT NULL, name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL)
grafana | logger=migrator t=2024-04-26T08:52:52.587707977Z level=info msg="Executing migration" id="Add unique index dashboard_org_id_uid"
kafka | [2024-04-26 08:52:57,533] INFO SASL config status: Will not attempt to authenticate using SASL (unknown error) (org.apache.zookeeper.ClientCnxn)
mariadb | 2024-04-26 8:53:01 0 [Note] InnoDB: File './ibtmp1' size is now 12.000MiB.
simulator | 2024-04-26 08:52:57,028 INFO Started o.e.j.s.ServletContextHandler@b70da4c{/,null,AVAILABLE}
policy-api | [2024-04-26T08:53:15.333+00:00|INFO|SpringPersistenceUnitInfo|main] No LoadTimeWeaver setup: ignoring JPA class transformer
policy-apex-pdp | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
zookeeper | [2024-04-26 08:52:54,063] INFO (org.apache.zookeeper.server.ZooKeeperServer)
policy-pap | [2024-04-26T08:53:28.614+00:00|INFO|StandardEngine|main] Starting Servlet engine: [Apache Tomcat/10.1.19]
policy-db-migrator | --------------
grafana | logger=migrator t=2024-04-26T08:52:52.588469413Z level=info msg="Migration successfully executed" id="Add unique index dashboard_org_id_uid" duration=761.056µs
kafka | [2024-04-26 08:52:57,544] INFO Socket connection established, initiating session, client: /172.17.0.6:47466, server: zookeeper/172.17.0.2:2181 (org.apache.zookeeper.ClientCnxn)
mariadb | 2024-04-26 8:53:01 0 [Note] InnoDB: log sequence number 332532; transaction id 299
simulator | 2024-04-26 08:52:57,030 INFO Started SDNC simulator@c5ee75e{HTTP/1.1, (http/1.1)}{0.0.0.0:6668}
policy-api | [2024-04-26T08:53:15.363+00:00|INFO|HikariDataSource|main] HikariPool-1 - Starting...
policy-apex-pdp | max.partition.fetch.bytes = 1048576
zookeeper | [2024-04-26 08:52:54,063] INFO ______ _ (org.apache.zookeeper.server.ZooKeeperServer)
policy-pap | [2024-04-26T08:53:28.714+00:00|INFO|[/policy/pap/v1]|main] Initializing Spring embedded WebApplicationContext
policy-db-migrator |
grafana | logger=migrator t=2024-04-26T08:52:52.592954766Z level=info msg="Executing migration" id="Remove unique index org_id_slug"
kafka | [2024-04-26 08:52:57,589] INFO Session establishment complete on server zookeeper/172.17.0.2:2181, session id = 0x10000046b1e0000, negotiated timeout = 40000 (org.apache.zookeeper.ClientCnxn)
mariadb | 2024-04-26 8:53:01 0 [Note] Plugin 'FEEDBACK' is disabled.
simulator | 2024-04-26 08:52:57,030 INFO Started Server@70efb718{STARTING}[11.0.20,sto=0] @1778ms
policy-api | [2024-04-26T08:53:15.455+00:00|INFO|HikariPool|main] HikariPool-1 - Added connection org.mariadb.jdbc.Connection@312b34e3
policy-apex-pdp | max.poll.interval.ms = 300000
zookeeper | [2024-04-26 08:52:54,063] INFO |___ / | | (org.apache.zookeeper.server.ZooKeeperServer)
policy-pap | [2024-04-26T08:53:28.714+00:00|INFO|ServletWebServerApplicationContext|main] Root WebApplicationContext: initialization completed in 3447 ms
policy-db-migrator |
grafana | logger=migrator t=2024-04-26T08:52:52.593708522Z level=info msg="Migration successfully executed" id="Remove unique index org_id_slug" duration=753.626µs
kafka | [2024-04-26 08:52:57,727] INFO Session: 0x10000046b1e0000 closed (org.apache.zookeeper.ZooKeeper)
mariadb | 2024-04-26 8:53:01 0 [Note] InnoDB: Loading buffer pool(s) from /var/lib/mysql/ib_buffer_pool
policy-api | [2024-04-26T08:53:15.458+00:00|INFO|HikariDataSource|main] HikariPool-1 - Start completed.
simulator | 2024-04-26 08:52:57,031 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-bf1ec20==org.glassfish.jersey.servlet.ServletContainer@636d8afb{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}}, swaggerId=null, toString()=JettyServletServer(name=SDNC simulator, host=0.0.0.0, port=6668, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@70efb718{STARTED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@b70da4c{/,null,AVAILABLE}, connector=SDNC simulator@c5ee75e{HTTP/1.1, (http/1.1)}{0.0.0.0:6668}, jettyThread=Thread[SDNC simulator-6668,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-bf1ec20==org.glassfish.jersey.servlet.ServletContainer@636d8afb{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}})]: pending time is 4896 ms.
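The policy-apex-pdp lines scattered through this log (fetch.max.wait.ms, group.id, isolation.level, max.poll.interval.ms, ...) are a Kafka ConsumerConfig dump printed when the component builds its consumer. A minimal sketch of how such values are supplied, using only java.util.Properties so the snippet stays dependency-free; the values are copied from the dump above, and the actual `new KafkaConsumer<>(props)` call (which needs the kafka-clients library) is deliberately omitted:

```java
import java.util.Properties;

public class ApexConsumerProps {
    public static Properties consumerProps() {
        Properties p = new Properties();
        // Values mirror the apex-pdp ConsumerConfig dump in this log
        p.setProperty("group.id", "47b1e3a1-a4a9-4bf2-95ae-f10384287681");
        p.setProperty("fetch.max.wait.ms", "500");
        p.setProperty("fetch.min.bytes", "1");
        p.setProperty("heartbeat.interval.ms", "3000");
        p.setProperty("isolation.level", "read_uncommitted");
        p.setProperty("max.poll.interval.ms", "300000");
        // A real consumer would be created as: new KafkaConsumer<>(p)
        return p;
    }
}
```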
simulator | 2024-04-26 08:52:57,032 INFO org.onap.policy.models.simulators starting SO simulator
zookeeper | [2024-04-26 08:52:54,063] INFO / / ___ ___ | | __ ___ ___ _ __ ___ _ __ (org.apache.zookeeper.server.ZooKeeperServer)
policy-pap | [2024-04-26T08:53:29.160+00:00|INFO|LogHelper|main] HHH000204: Processing PersistenceUnitInfo [name: default]
policy-db-migrator | > upgrade 0120-jpapdpsubgroup_policies.sql
grafana | logger=migrator t=2024-04-26T08:52:52.599025565Z level=info msg="Executing migration" id="Update dashboard title length"
kafka | [2024-04-26 08:52:57,728] INFO EventThread shut down for session: 0x10000046b1e0000 (org.apache.zookeeper.ClientCnxn)
mariadb | 2024-04-26 8:53:01 0 [Warning] 'default-authentication-plugin' is MySQL 5.6 / 5.7 compatible option. To be implemented in later versions.
policy-api | [2024-04-26T08:53:17.713+00:00|INFO|JtaPlatformInitiator|main] HHH000489: No JTA platform available (set 'hibernate.transaction.jta.platform' to enable JTA platform integration)
policy-apex-pdp | max.poll.records = 500
simulator | 2024-04-26 08:52:57,036 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-4e70a728==org.glassfish.jersey.servlet.ServletContainer@3238e994{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SO simulator, host=0.0.0.0, port=6669, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@b7838a9{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@696f0212{/,null,STOPPED}, connector=SO simulator@4e858e0a{HTTP/1.1, (http/1.1)}{0.0.0.0:6669}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-4e70a728==org.glassfish.jersey.servlet.ServletContainer@3238e994{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: WAITED-START
zookeeper | [2024-04-26 08:52:54,063] INFO / / / _ \ / _ \ | |/ / / _ \ / _ \ | '_ \ / _ \ | '__| (org.apache.zookeeper.server.ZooKeeperServer)
policy-pap | [2024-04-26T08:53:29.221+00:00|INFO|Version|main] HHH000412: Hibernate ORM core version 5.6.15.Final
policy-db-migrator | --------------
grafana | logger=migrator t=2024-04-26T08:52:52.599052176Z level=info msg="Migration successfully executed" id="Update dashboard title length" duration=27.071µs
kafka | Using log4j config /etc/kafka/log4j.properties
mariadb | 2024-04-26 8:53:01 0 [Warning] You need to use --log-bin to make --expire-logs-days or --binlog-expire-logs-seconds work.
policy-api | [2024-04-26T08:53:17.717+00:00|INFO|LocalContainerEntityManagerFactoryBean|main] Initialized JPA EntityManagerFactory for persistence unit 'default'
policy-apex-pdp | metadata.max.age.ms = 300000
simulator | 2024-04-26 08:52:57,037 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-4e70a728==org.glassfish.jersey.servlet.ServletContainer@3238e994{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SO simulator, host=0.0.0.0, port=6669, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@b7838a9{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@696f0212{/,null,STOPPED}, connector=SO simulator@4e858e0a{HTTP/1.1, (http/1.1)}{0.0.0.0:6669}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-4e70a728==org.glassfish.jersey.servlet.ServletContainer@3238e994{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING
zookeeper | [2024-04-26 08:52:54,063] INFO / /__ | (_) | | (_) | | < | __/ | __/ | |_) | | __/ | | (org.apache.zookeeper.server.ZooKeeperServer)
policy-pap | [2024-04-26T08:53:29.543+00:00|INFO|HikariDataSource|main] HikariPool-1 - Starting...
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpsubgroup_policies (name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, parentLocalName VARCHAR(120) DEFAULT NULL, localName VARCHAR(120) DEFAULT NULL, parentKeyVersion VARCHAR(15) DEFAULT NULL, parentKeyName VARCHAR(120) DEFAULT NULL)
grafana | logger=migrator t=2024-04-26T08:52:52.603209234Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_title_folder_id"
kafka | ===> Launching ...
mariadb | 2024-04-26 8:53:01 0 [Note] Server socket created on IP: '0.0.0.0'.
policy-api | [2024-04-26T08:53:18.818+00:00|WARN|ApiDatabaseInitializer|main] Detected multi-versioned type: policytypes/onap.policies.monitoring.tcagen2.v2.yaml
policy-apex-pdp | metric.reporters = []
simulator | 2024-04-26 08:52:57,038 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-4e70a728==org.glassfish.jersey.servlet.ServletContainer@3238e994{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SO simulator, host=0.0.0.0, port=6669, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@b7838a9{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@696f0212{/,null,STOPPED}, connector=SO simulator@4e858e0a{HTTP/1.1, (http/1.1)}{0.0.0.0:6669}, jettyThread=Thread[SO simulator-6669,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-4e70a728==org.glassfish.jersey.servlet.ServletContainer@3238e994{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING
zookeeper | [2024-04-26 08:52:54,063] INFO /_____| \___/ \___/ |_|\_\ \___| \___| | .__/ \___| |_| (org.apache.zookeeper.server.ZooKeeperServer)
policy-pap | [2024-04-26T08:53:29.639+00:00|INFO|HikariPool|main] HikariPool-1 - Added connection org.mariadb.jdbc.Connection@4ee5b2d9
policy-db-migrator | --------------
grafana | logger=migrator t=2024-04-26T08:52:52.60438471Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_title_folder_id" duration=1.179867ms
kafka | ===> Launching kafka ...
mariadb | 2024-04-26 8:53:01 0 [Note] Server socket created on IP: '::'.
policy-api | [2024-04-26T08:53:19.724+00:00|INFO|ApiDatabaseInitializer|main] Multi-versioned Service Template [onap.policies.Monitoring, onap.policies.monitoring.tcagen2]
policy-apex-pdp | metrics.num.samples = 2
simulator | 2024-04-26 08:52:57,040 INFO jetty-11.0.20; built: 2024-01-29T21:04:22.394Z; git: 922f8dc188f7011e60d0361de585fd4ac4d63064; jvm 17.0.11+9-alpine-r0
zookeeper | [2024-04-26 08:52:54,063] INFO | | (org.apache.zookeeper.server.ZooKeeperServer)
policy-pap | [2024-04-26T08:53:29.641+00:00|INFO|HikariDataSource|main] HikariPool-1 - Start completed.
policy-db-migrator |
grafana | logger=migrator t=2024-04-26T08:52:52.60838426Z level=info msg="Executing migration" id="create dashboard_provisioning"
kafka | [2024-04-26 08:52:58,475] INFO Registered kafka:type=kafka.Log4jController MBean (kafka.utils.Log4jControllerRegistration$)
mariadb | 2024-04-26 8:53:01 0 [Note] mariadbd: ready for connections.
policy-api | [2024-04-26T08:53:20.929+00:00|WARN|JpaBaseConfiguration$JpaWebConfiguration|main] spring.jpa.open-in-view is enabled by default. Therefore, database queries may be performed during view rendering. Explicitly configure spring.jpa.open-in-view to disable this warning
policy-apex-pdp | metrics.recording.level = INFO
simulator | 2024-04-26 08:52:57,043 INFO Session workerName=node0
zookeeper | [2024-04-26 08:52:54,063] INFO |_| (org.apache.zookeeper.server.ZooKeeperServer)
policy-pap | [2024-04-26T08:53:29.676+00:00|INFO|Dialect|main] HHH000400: Using dialect: org.hibernate.dialect.MariaDB106Dialect
policy-db-migrator |
grafana | logger=migrator t=2024-04-26T08:52:52.609110404Z level=info msg="Migration successfully executed" id="create dashboard_provisioning" duration=725.984µs
kafka | [2024-04-26 08:52:58,954] INFO Setting -D jdk.tls.rejectClientInitiatedRenegotiation=true to disable client-initiated TLS renegotiation (org.apache.zookeeper.common.X509Util)
mariadb | Version: '10.10.2-MariaDB-1:10.10.2+maria~ubu2204' socket: '/run/mysqld/mysqld.sock' port: 3306 mariadb.org binary distribution
policy-api | [2024-04-26T08:53:21.169+00:00|INFO|DefaultSecurityFilterChain|main] Will secure any request with [org.springframework.security.web.session.DisableEncodeUrlFilter@433e9108, org.springframework.security.web.context.request.async.WebAsyncManagerIntegrationFilter@70ac3a87, org.springframework.security.web.context.SecurityContextHolderFilter@1604ad0f, org.springframework.security.web.header.HeaderWriterFilter@519d1224, org.springframework.security.web.authentication.logout.LogoutFilter@28062dc2, org.springframework.security.web.authentication.www.BasicAuthenticationFilter@605049be, org.springframework.security.web.savedrequest.RequestCacheAwareFilter@1d93bd2a, org.springframework.security.web.servletapi.SecurityContextHolderAwareRequestFilter@6f54a7be, org.springframework.security.web.authentication.AnonymousAuthenticationFilter@45bf64f7, org.springframework.security.web.access.ExceptionTranslationFilter@4ce824a7, org.springframework.security.web.access.intercept.AuthorizationFilter@6d67e03]
policy-apex-pdp | metrics.sample.window.ms = 30000
simulator | 2024-04-26 08:52:57,103 INFO Using GSON for REST calls
zookeeper | [2024-04-26 08:52:54,063] INFO (org.apache.zookeeper.server.ZooKeeperServer)
policy-pap | [2024-04-26T08:53:31.194+00:00|INFO|JtaPlatformInitiator|main] HHH000490: Using JtaPlatform implementation: [org.hibernate.engine.transaction.jta.platform.internal.NoJtaPlatform]
policy-db-migrator | > upgrade 0130-jpapdpsubgroup_properties.sql
grafana | logger=migrator t=2024-04-26T08:52:52.613840569Z level=info msg="Executing migration" id="Rename table dashboard_provisioning to dashboard_provisioning_tmp_qwerty - v1"
kafka | [2024-04-26 08:52:59,035] INFO Registered signal handlers for TERM, INT, HUP (org.apache.kafka.common.utils.LoggingSignalHandler)
mariadb | 2024-04-26 8:53:01 0 [Note] InnoDB: Buffer pool(s) load completed at 240426 8:53:01
policy-api | [2024-04-26T08:53:22.028+00:00|INFO|EndpointLinksResolver|main] Exposing 3 endpoint(s) beneath base path ''
policy-apex-pdp | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor]
simulator | 2024-04-26 08:52:57,117 INFO Started o.e.j.s.ServletContextHandler@696f0212{/,null,AVAILABLE}
zookeeper | [2024-04-26 08:52:54,064] INFO Server environment:zookeeper.version=3.8.4-9316c2a7a97e1666d8f4593f34dd6fc36ecc436c, built on 2024-02-12 22:16 UTC (org.apache.zookeeper.server.ZooKeeperServer)
policy-pap | [2024-04-26T08:53:31.206+00:00|INFO|LocalContainerEntityManagerFactoryBean|main] Initialized JPA EntityManagerFactory for persistence unit 'default'
policy-db-migrator | --------------
grafana | logger=migrator t=2024-04-26T08:52:52.619432785Z level=info msg="Migration successfully executed" id="Rename table dashboard_provisioning to dashboard_provisioning_tmp_qwerty - v1" duration=5.594596ms
kafka | [2024-04-26 08:52:59,037] INFO starting (kafka.server.KafkaServer)
mariadb | 2024-04-26 8:53:01 3 [Warning] Aborted connection 3 to db: 'unconnected' user: 'unauthenticated' host: '172.17.0.8' (This connection closed normally without authentication)
policy-api | [2024-04-26T08:53:22.124+00:00|INFO|Http11NioProtocol|main] Starting ProtocolHandler ["http-nio-6969"]
policy-apex-pdp | receive.buffer.bytes = 65536
simulator | 2024-04-26 08:52:57,119 INFO Started SO simulator@4e858e0a{HTTP/1.1, (http/1.1)}{0.0.0.0:6669}
zookeeper | [2024-04-26 08:52:54,064] INFO Server environment:host.name=bd3eb536d2b4 (org.apache.zookeeper.server.ZooKeeperServer)
policy-pap | [2024-04-26T08:53:31.711+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.repository.PdpGroupRepository
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpsubgroup_properties (parentLocalName VARCHAR(120) DEFAULT NULL, localName VARCHAR(120) DEFAULT NULL, parentKeyVersion VARCHAR(15) DEFAULT NULL, parentKeyName VARCHAR(120) DEFAULT NULL, PROPERTIES VARCHAR(255) DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) DEFAULT NULL)
grafana | logger=migrator t=2024-04-26T08:52:52.623240486Z level=info msg="Executing migration" id="create dashboard_provisioning v2"
mariadb | 2024-04-26 8:53:01 8 [Warning] Aborted connection 8 to db: 'unconnected' user: 'unauthenticated' host: '172.17.0.9' (This connection closed normally without authentication)
kafka | [2024-04-26 08:52:59,037] INFO Connecting to zookeeper on zookeeper:2181 (kafka.server.KafkaServer)
policy-api | [2024-04-26T08:53:22.150+00:00|INFO|TomcatWebServer|main] Tomcat started on port(s): 6969 (http) with context path '/policy/api/v1'
policy-apex-pdp | reconnect.backoff.max.ms = 1000
simulator | 2024-04-26 08:52:57,119 INFO Started Server@b7838a9{STARTING}[11.0.20,sto=0] @1867ms
zookeeper | [2024-04-26 08:52:54,064] INFO Server environment:java.version=11.0.22 (org.apache.zookeeper.server.ZooKeeperServer)
policy-pap | [2024-04-26T08:53:32.091+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.repository.PolicyStatusRepository
policy-db-migrator | --------------
grafana | logger=migrator t=2024-04-26T08:52:52.624016984Z level=info msg="Migration successfully executed" id="create dashboard_provisioning v2" duration=777.858µs
mariadb | 2024-04-26 8:53:01 9 [Warning] Aborted connection 9 to db: 'unconnected' user: 'unauthenticated' host: '172.17.0.10' (This connection closed normally without authentication)
mariadb | 2024-04-26 8:53:01 19 [Warning] Aborted connection 19 to db: 'unconnected' user: 'unauthenticated' host: '172.17.0.11' (This connection closed normally without authentication)
policy-api | [2024-04-26T08:53:22.169+00:00|INFO|PolicyApiApplication|main] Started PolicyApiApplication in 11.907 seconds (process running for 12.554)
policy-apex-pdp | reconnect.backoff.ms = 50
policy-apex-pdp | request.timeout.ms = 30000
policy-apex-pdp | retry.backoff.ms = 100
policy-pap | [2024-04-26T08:53:32.208+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.repository.PolicyAuditRepository
simulator | 2024-04-26 08:52:57,119 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-4e70a728==org.glassfish.jersey.servlet.ServletContainer@3238e994{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}}, swaggerId=null, toString()=JettyServletServer(name=SO simulator, host=0.0.0.0, port=6669, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@b7838a9{STARTED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@696f0212{/,null,AVAILABLE}, connector=SO simulator@4e858e0a{HTTP/1.1, (http/1.1)}{0.0.0.0:6669}, jettyThread=Thread[SO simulator-6669,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-4e70a728==org.glassfish.jersey.servlet.ServletContainer@3238e994{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}})]: pending time is 4919 ms.
grafana | logger=migrator t=2024-04-26T08:52:52.627969861Z level=info msg="Executing migration" id="create index IDX_dashboard_provisioning_dashboard_id - v2"
kafka | [2024-04-26 08:52:59,058] INFO [ZooKeeperClient Kafka server] Initializing a new session to zookeeper:2181. (kafka.zookeeper.ZooKeeperClient)
policy-api | [2024-04-26T08:53:38.415+00:00|INFO|[/policy/api/v1]|http-nio-6969-exec-2] Initializing Spring DispatcherServlet 'dispatcherServlet'
policy-apex-pdp | sasl.client.callback.handler.class = null
policy-apex-pdp | sasl.jaas.config = null
policy-db-migrator |
policy-pap | [2024-04-26T08:53:32.478+00:00|INFO|ConsumerConfig|main] ConsumerConfig values:
simulator | 2024-04-26 08:52:57,120 INFO org.onap.policy.models.simulators starting VFC simulator
grafana | logger=migrator t=2024-04-26T08:52:52.628770629Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_provisioning_dashboard_id - v2" duration=800.038µs
kafka | [2024-04-26 08:52:59,064] INFO Client environment:zookeeper.version=3.8.4-9316c2a7a97e1666d8f4593f34dd6fc36ecc436c, built on 2024-02-12 22:16 UTC (org.apache.zookeeper.ZooKeeper)
policy-api | [2024-04-26T08:53:38.415+00:00|INFO|DispatcherServlet|http-nio-6969-exec-2] Initializing Servlet 'dispatcherServlet'
policy-apex-pdp | sasl.kerberos.kinit.cmd = /usr/bin/kinit
policy-apex-pdp | sasl.kerberos.min.time.before.relogin = 60000
policy-db-migrator |
policy-pap | allow.auto.create.topics = true
simulator | 2024-04-26 08:52:57,122 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-47a86fbb==org.glassfish.jersey.servlet.ServletContainer@f8e3b478{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=VFC simulator, host=0.0.0.0, port=6670, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@f478a81{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@19553973{/,null,STOPPED}, connector=VFC simulator@1c3146bc{HTTP/1.1, (http/1.1)}{0.0.0.0:6670}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-47a86fbb==org.glassfish.jersey.servlet.ServletContainer@f8e3b478{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: WAITED-START
grafana | logger=migrator t=2024-04-26T08:52:52.633486494Z level=info msg="Executing migration" id="create index IDX_dashboard_provisioning_dashboard_id_name - v2"
kafka | [2024-04-26 08:52:59,064] INFO Client environment:host.name=324f31b114cb (org.apache.zookeeper.ZooKeeper)
policy-api | [2024-04-26T08:53:38.417+00:00|INFO|DispatcherServlet|http-nio-6969-exec-2] Completed initialization in 1 ms
policy-apex-pdp | sasl.kerberos.service.name = null
policy-apex-pdp | sasl.kerberos.ticket.renew.jitter = 0.05
policy-db-migrator | > upgrade 0140-jpapdpsubgroup_supportedpolicytypes.sql
policy-pap | auto.commit.interval.ms = 5000
simulator | 2024-04-26 08:52:57,123 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-47a86fbb==org.glassfish.jersey.servlet.ServletContainer@f8e3b478{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=VFC simulator, host=0.0.0.0, port=6670, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@f478a81{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@19553973{/,null,STOPPED}, connector=VFC simulator@1c3146bc{HTTP/1.1, (http/1.1)}{0.0.0.0:6670}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-47a86fbb==org.glassfish.jersey.servlet.ServletContainer@f8e3b478{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING
grafana | logger=migrator t=2024-04-26T08:52:52.63426011Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_provisioning_dashboard_id_name - v2" duration=773.076µs
kafka | [2024-04-26 08:52:59,064] INFO Client environment:java.version=11.0.22 (org.apache.zookeeper.ZooKeeper)
policy-api | [2024-04-26T08:53:38.740+00:00|INFO|OrderedServiceImpl|http-nio-6969-exec-2] ***** OrderedServiceImpl implementers:
zookeeper | [2024-04-26 08:52:54,064] INFO Server environment:java.vendor=Azul Systems, Inc. (org.apache.zookeeper.server.ZooKeeperServer)
policy-apex-pdp | sasl.kerberos.ticket.renew.window.factor = 0.8
policy-db-migrator | --------------
policy-pap | auto.include.jmx.reporter = true
simulator | 2024-04-26 08:52:57,127 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-47a86fbb==org.glassfish.jersey.servlet.ServletContainer@f8e3b478{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=VFC simulator, host=0.0.0.0, port=6670, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@f478a81{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@19553973{/,null,STOPPED}, connector=VFC simulator@1c3146bc{HTTP/1.1, (http/1.1)}{0.0.0.0:6670}, jettyThread=Thread[VFC simulator-6670,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-47a86fbb==org.glassfish.jersey.servlet.ServletContainer@f8e3b478{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING
grafana | logger=migrator t=2024-04-26T08:52:52.637923835Z level=info msg="Executing migration" id="copy dashboard_provisioning v1 to v2"
kafka | [2024-04-26 08:52:59,064] INFO Client environment:java.vendor=Azul Systems, Inc. (org.apache.zookeeper.ZooKeeper)
policy-api | []
zookeeper | [2024-04-26 08:52:54,064] INFO Server environment:java.home=/usr/lib/jvm/java-11-zulu-openjdk-ca (org.apache.zookeeper.server.ZooKeeperServer)
policy-apex-pdp | sasl.login.callback.handler.class = null
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpsubgroup_supportedpolicytypes (name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, parentLocalName VARCHAR(120) DEFAULT NULL, localName VARCHAR(120) DEFAULT NULL, parentKeyVersion VARCHAR(15) DEFAULT NULL, parentKeyName VARCHAR(120) DEFAULT NULL)
policy-pap | auto.offset.reset = latest
simulator | 2024-04-26 08:52:57,128 INFO jetty-11.0.20; built: 2024-01-29T21:04:22.394Z; git: 922f8dc188f7011e60d0361de585fd4ac4d63064; jvm 17.0.11+9-alpine-r0
grafana | logger=migrator t=2024-04-26T08:52:52.638371346Z level=info msg="Migration successfully executed" id="copy dashboard_provisioning v1 to v2" duration=447.061µs
kafka | [2024-04-26 08:52:59,064] INFO Client environment:java.home=/usr/lib/jvm/java-11-zulu-openjdk-ca (org.apache.zookeeper.ZooKeeper)
policy-apex-pdp | sasl.login.class = null
zookeeper | [2024-04-26 08:52:54,064] INFO Server
environment:java.class.path=/usr/bin/../share/java/kafka/scala-logging_2.13-3.9.4.jar:/usr/bin/../share/java/kafka/jersey-common-2.39.1.jar:/usr/bin/../share/java/kafka/swagger-annotations-2.2.8.jar:/usr/bin/../share/java/kafka/connect-json-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/kafka-storage-api-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/commons-validator-1.7.jar:/usr/bin/../share/java/kafka/javax.servlet-api-3.1.0.jar:/usr/bin/../share/java/kafka/aopalliance-repackaged-2.6.1.jar:/usr/bin/../share/java/kafka/netty-transport-4.1.100.Final.jar:/usr/bin/../share/java/kafka/rocksdbjni-7.9.2.jar:/usr/bin/../share/java/kafka/jetty-http-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/kafka-log4j-appender-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jackson-annotations-2.13.5.jar:/usr/bin/../share/java/kafka/commons-io-2.11.0.jar:/usr/bin/../share/java/kafka/kafka-clients-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/javax.activation-api-1.2.0.jar:/usr/bin/../share/java/kafka/jetty-security-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/commons-cli-1.4.jar:/usr/bin/../share/java/kafka/jetty-server-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/slf4j-reload4j-1.7.36.jar:/usr/bin/../share/java/kafka/scala-reflect-2.13.11.jar:/usr/bin/../share/java/kafka/kafka-server-common-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/kafka-group-coordinator-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/netty-buffer-4.1.100.Final.jar:/usr/bin/../share/java/kafka/connect-runtime-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/trogdor-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/connect-api-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jakarta.ws.rs-api-2.1.6.jar:/usr/bin/../share/java/kafka/jakarta.annotation-api-1.3.5.jar:/usr/bin/../share/java/kafka/kafka-streams-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/scala-java8-compat_2.13-1.0.2.jar:/usr/bin/../share/java/kafka/javax.ws.rs-api-2.1.1.jar:/usr/bin/../share/java/kafka/zookeeper-jute-3.8.4.jar:/usr/bin/../share/java/kafka/netty-resolver-4.1.100.Fi
nal.jar:/usr/bin/../share/java/kafka/hk2-api-2.6.1.jar:/usr/bin/../share/java/kafka/kafka-storage-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/zstd-jni-1.5.5-1.jar:/usr/bin/../share/java/kafka/jackson-dataformat-csv-2.13.5.jar:/usr/bin/../share/java/kafka/kafka.jar:/usr/bin/../share/java/kafka/scala-library-2.13.11.jar:/usr/bin/../share/java/kafka/jakarta.inject-2.6.1.jar:/usr/bin/../share/java/kafka/jakarta.xml.bind-api-2.3.3.jar:/usr/bin/../share/java/kafka/jetty-continuation-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/connect-transforms-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jose4j-0.9.4.jar:/usr/bin/../share/java/kafka/hk2-locator-2.6.1.jar:/usr/bin/../share/java/kafka/reflections-0.10.2.jar:/usr/bin/../share/java/kafka/slf4j-api-1.7.36.jar:/usr/bin/../share/java/kafka/paranamer-2.8.jar:/usr/bin/../share/java/kafka/commons-beanutils-1.9.4.jar:/usr/bin/../share/java/kafka/jaxb-api-2.3.1.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-2.39.1.jar:/usr/bin/../share/java/kafka/hk2-utils-2.6.1.jar:/usr/bin/../share/java/kafka/jackson-module-scala_2.13-2.13.5.jar:/usr/bin/../share/java/kafka/reload4j-1.2.25.jar:/usr/bin/../share/java/kafka/connect-mirror-client-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jetty-servlet-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/jackson-core-2.13.5.jar:/usr/bin/../share/java/kafka/jersey-hk2-2.39.1.jar:/usr/bin/../share/java/kafka/jackson-databind-2.13.5.jar:/usr/bin/../share/java/kafka/jetty-servlets-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/jersey-client-2.39.1.jar:/usr/bin/../share/java/kafka/osgi-resource-locator-1.0.3.jar:/usr/bin/../share/java/kafka/commons-digester-2.1.jar:/usr/bin/../share/java/kafka/netty-transport-native-epoll-4.1.100.Final.jar:/usr/bin/../share/java/kafka/argparse4j-0.7.0.jar:/usr/bin/../share/java/kafka/connect-mirror-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jackson-datatype-jdk8-2.13.5.jar:/usr/bin/../share/java/kafka/audience-annotations-0.12.0.jar:/usr/bin/../share/java/ka
fka/jackson-module-jaxb-annotations-2.13.5.jar:/usr/bin/../share/java/kafka/kafka-raft-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/javax.annotation-api-1.3.2.jar:/usr/bin/../share/java/kafka/maven-artifact-3.8.8.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-json-provider-2.13.5.jar:/usr/bin/../share/java/kafka/jakarta.validation-api-2.0.2.jar:/usr/bin/../share/java/kafka/zookeeper-3.8.4.jar:/usr/bin/../share/java/kafka/jersey-server-2.39.1.jar:/usr/bin/../share/java/kafka/commons-lang3-3.8.1.jar:/usr/bin/../share/java/kafka/jopt-simple-5.0.4.jar:/usr/bin/../share/java/kafka/error_prone_annotations-2.10.0.jar:/usr/bin/../share/java/kafka/lz4-java-1.8.0.jar:/usr/bin/../share/java/kafka/jetty-util-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/kafka-tools-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jakarta.activation-api-1.2.2.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-core-2.39.1.jar:/usr/bin/../share/java/kafka/checker-qual-3.19.0.jar:/usr/bin/../share/java/kafka/kafka-metadata-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/pcollections-4.0.1.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-base-2.13.5.jar:/usr/bin/../share/java/kafka/commons-logging-1.2.jar:/usr/bin/../share/java/kafka/jsr305-3.0.2.jar:/usr/bin/../share/java/kafka/jetty-io-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/netty-codec-4.1.100.Final.jar:/usr/bin/../share/java/kafka/netty-transport-native-unix-common-4.1.100.Final.jar:/usr/bin/../share/java/kafka/scala-collection-compat_2.13-2.10.0.jar:/usr/bin/../share/java/kafka/metrics-core-2.2.0.jar:/usr/bin/../share/java/kafka/kafka-streams-test-utils-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/kafka-streams-examples-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/netty-handler-4.1.100.Final.jar:/usr/bin/../share/java/kafka/commons-collections-3.2.2.jar:/usr/bin/../share/java/kafka/javassist-3.29.2-GA.jar:/usr/bin/../share/java/kafka/caffeine-2.9.3.jar:/usr/bin/../share/java/kafka/plexus-utils-3.3.1.jar:/usr/bin/../share/java/kafka/kafka-tool
s-api-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/activation-1.1.1.jar:/usr/bin/../share/java/kafka/netty-common-4.1.100.Final.jar:/usr/bin/../share/java/kafka/kafka-streams-scala_2.13-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/metrics-core-4.1.12.1.jar:/usr/bin/../share/java/kafka/jline-3.25.1.jar:/usr/bin/../share/java/kafka/jetty-client-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/kafka_2.13-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/connect-basic-auth-extension-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jetty-util-ajax-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/snappy-java-1.1.10.5.jar:/usr/bin/../share/java/kafka/kafka-shell-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/netty-transport-classes-epoll-4.1.100.Final.jar:/usr/bin/../share/java/confluent-telemetry/* (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2024-04-26 08:52:54,064] INFO Server environment:java.library.path=/usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.server.ZooKeeperServer) policy-pap | bootstrap.servers = [kafka:9092] simulator | 2024-04-26 08:52:57,136 INFO Session workerName=node0 grafana | logger=migrator t=2024-04-26T08:52:52.642391677Z level=info msg="Executing migration" id="drop dashboard_provisioning_tmp_qwerty" kafka | [2024-04-26 08:52:59,064] INFO Client 
environment:java.class.path=/usr/bin/../share/java/kafka/scala-logging_2.13-3.9.4.jar:/usr/bin/../share/java/kafka/jersey-common-2.39.1.jar:/usr/bin/../share/java/kafka/swagger-annotations-2.2.8.jar:/usr/bin/../share/java/kafka/connect-json-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/kafka-storage-api-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/commons-validator-1.7.jar:/usr/bin/../share/java/kafka/javax.servlet-api-3.1.0.jar:/usr/bin/../share/java/kafka/aopalliance-repackaged-2.6.1.jar:/usr/bin/../share/java/kafka/netty-transport-4.1.100.Final.jar:/usr/bin/../share/java/kafka/rocksdbjni-7.9.2.jar:/usr/bin/../share/java/kafka/jetty-http-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/kafka-log4j-appender-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jackson-annotations-2.13.5.jar:/usr/bin/../share/java/kafka/commons-io-2.11.0.jar:/usr/bin/../share/java/kafka/kafka-clients-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/javax.activation-api-1.2.0.jar:/usr/bin/../share/java/kafka/jetty-security-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/commons-cli-1.4.jar:/usr/bin/../share/java/kafka/jetty-server-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/slf4j-reload4j-1.7.36.jar:/usr/bin/../share/java/kafka/scala-reflect-2.13.11.jar:/usr/bin/../share/java/kafka/kafka-server-common-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/kafka-group-coordinator-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/netty-buffer-4.1.100.Final.jar:/usr/bin/../share/java/kafka/connect-runtime-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/trogdor-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/connect-api-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jakarta.ws.rs-api-2.1.6.jar:/usr/bin/../share/java/kafka/jakarta.annotation-api-1.3.5.jar:/usr/bin/../share/java/kafka/kafka-streams-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/scala-java8-compat_2.13-1.0.2.jar:/usr/bin/../share/java/kafka/javax.ws.rs-api-2.1.1.jar:/usr/bin/../share/java/kafka/zookeeper-jute-3.8.4.jar:/usr/bin/../share/java/kafka/netty-resolver-4.1.100.Fi
nal.jar:/usr/bin/../share/java/kafka/hk2-api-2.6.1.jar:/usr/bin/../share/java/kafka/kafka-storage-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/zstd-jni-1.5.5-1.jar:/usr/bin/../share/java/kafka/jackson-dataformat-csv-2.13.5.jar:/usr/bin/../share/java/kafka/kafka.jar:/usr/bin/../share/java/kafka/scala-library-2.13.11.jar:/usr/bin/../share/java/kafka/jakarta.inject-2.6.1.jar:/usr/bin/../share/java/kafka/jakarta.xml.bind-api-2.3.3.jar:/usr/bin/../share/java/kafka/jetty-continuation-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/connect-transforms-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jose4j-0.9.4.jar:/usr/bin/../share/java/kafka/hk2-locator-2.6.1.jar:/usr/bin/../share/java/kafka/reflections-0.10.2.jar:/usr/bin/../share/java/kafka/slf4j-api-1.7.36.jar:/usr/bin/../share/java/kafka/paranamer-2.8.jar:/usr/bin/../share/java/kafka/commons-beanutils-1.9.4.jar:/usr/bin/../share/java/kafka/jaxb-api-2.3.1.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-2.39.1.jar:/usr/bin/../share/java/kafka/hk2-utils-2.6.1.jar:/usr/bin/../share/java/kafka/jackson-module-scala_2.13-2.13.5.jar:/usr/bin/../share/java/kafka/reload4j-1.2.25.jar:/usr/bin/../share/java/kafka/connect-mirror-client-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jetty-servlet-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/jackson-core-2.13.5.jar:/usr/bin/../share/java/kafka/jersey-hk2-2.39.1.jar:/usr/bin/../share/java/kafka/jackson-databind-2.13.5.jar:/usr/bin/../share/java/kafka/jetty-servlets-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/jersey-client-2.39.1.jar:/usr/bin/../share/java/kafka/osgi-resource-locator-1.0.3.jar:/usr/bin/../share/java/kafka/commons-digester-2.1.jar:/usr/bin/../share/java/kafka/netty-transport-native-epoll-4.1.100.Final.jar:/usr/bin/../share/java/kafka/argparse4j-0.7.0.jar:/usr/bin/../share/java/kafka/connect-mirror-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jackson-datatype-jdk8-2.13.5.jar:/usr/bin/../share/java/kafka/audience-annotations-0.12.0.jar:/usr/bin/../share/java/ka
fka/jackson-module-jaxb-annotations-2.13.5.jar:/usr/bin/../share/java/kafka/kafka-raft-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/javax.annotation-api-1.3.2.jar:/usr/bin/../share/java/kafka/maven-artifact-3.8.8.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-json-provider-2.13.5.jar:/usr/bin/../share/java/kafka/jakarta.validation-api-2.0.2.jar:/usr/bin/../share/java/kafka/zookeeper-3.8.4.jar:/usr/bin/../share/java/kafka/jersey-server-2.39.1.jar:/usr/bin/../share/java/kafka/commons-lang3-3.8.1.jar:/usr/bin/../share/java/kafka/jopt-simple-5.0.4.jar:/usr/bin/../share/java/kafka/error_prone_annotations-2.10.0.jar:/usr/bin/../share/java/kafka/lz4-java-1.8.0.jar:/usr/bin/../share/java/kafka/jetty-util-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/kafka-tools-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jakarta.activation-api-1.2.2.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-core-2.39.1.jar:/usr/bin/../share/java/kafka/checker-qual-3.19.0.jar:/usr/bin/../share/java/kafka/kafka-metadata-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/pcollections-4.0.1.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-base-2.13.5.jar:/usr/bin/../share/java/kafka/commons-logging-1.2.jar:/usr/bin/../share/java/kafka/jsr305-3.0.2.jar:/usr/bin/../share/java/kafka/jetty-io-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/netty-codec-4.1.100.Final.jar:/usr/bin/../share/java/kafka/netty-transport-native-unix-common-4.1.100.Final.jar:/usr/bin/../share/java/kafka/scala-collection-compat_2.13-2.10.0.jar:/usr/bin/../share/java/kafka/metrics-core-2.2.0.jar:/usr/bin/../share/java/kafka/kafka-streams-test-utils-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/kafka-streams-examples-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/netty-handler-4.1.100.Final.jar:/usr/bin/../share/java/kafka/commons-collections-3.2.2.jar:/usr/bin/../share/java/kafka/javassist-3.29.2-GA.jar:/usr/bin/../share/java/kafka/caffeine-2.9.3.jar:/usr/bin/../share/java/kafka/plexus-utils-3.3.1.jar:/usr/bin/../share/java/kafka/kafka-tool
s-api-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/activation-1.1.1.jar:/usr/bin/../share/java/kafka/netty-common-4.1.100.Final.jar:/usr/bin/../share/java/kafka/kafka-streams-scala_2.13-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/metrics-core-4.1.12.1.jar:/usr/bin/../share/java/kafka/jline-3.25.1.jar:/usr/bin/../share/java/kafka/jetty-client-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/kafka_2.13-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/connect-basic-auth-extension-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jetty-util-ajax-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/snappy-java-1.1.10.5.jar:/usr/bin/../share/java/kafka/kafka-shell-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/netty-transport-classes-epoll-4.1.100.Final.jar:/usr/bin/../share/java/confluent-telemetry/* (org.apache.zookeeper.ZooKeeper) policy-apex-pdp | sasl.login.connect.timeout.ms = null zookeeper | [2024-04-26 08:52:54,064] INFO Server environment:java.io.tmpdir=/tmp (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2024-04-26 08:52:54,064] INFO Server environment:java.compiler= (org.apache.zookeeper.server.ZooKeeperServer) policy-pap | check.crcs = true simulator | 2024-04-26 08:52:57,185 INFO Using GSON for REST calls grafana | logger=migrator t=2024-04-26T08:52:52.643164155Z level=info msg="Migration successfully executed" id="drop dashboard_provisioning_tmp_qwerty" duration=772.477µs kafka | [2024-04-26 08:52:59,064] INFO Client environment:java.library.path=/usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.ZooKeeper) policy-apex-pdp | sasl.login.read.timeout.ms = null zookeeper | [2024-04-26 08:52:54,064] INFO Server environment:os.name=Linux (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2024-04-26 08:52:54,064] INFO Server environment:os.arch=amd64 (org.apache.zookeeper.server.ZooKeeperServer) policy-pap | client.dns.lookup = use_all_dns_ips simulator | 2024-04-26 08:52:57,194 INFO Started 
o.e.j.s.ServletContextHandler@19553973{/,null,AVAILABLE} grafana | logger=migrator t=2024-04-26T08:52:52.647620266Z level=info msg="Executing migration" id="Add check_sum column" kafka | [2024-04-26 08:52:59,064] INFO Client environment:java.io.tmpdir=/tmp (org.apache.zookeeper.ZooKeeper) policy-apex-pdp | sasl.login.refresh.buffer.seconds = 300 zookeeper | [2024-04-26 08:52:54,064] INFO Server environment:os.version=4.15.0-192-generic (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2024-04-26 08:52:54,064] INFO Server environment:user.name=appuser (org.apache.zookeeper.server.ZooKeeperServer) policy-pap | client.id = consumer-c2be2c80-205d-4227-951f-9a7c12c2d5ee-1 simulator | 2024-04-26 08:52:57,195 INFO Started VFC simulator@1c3146bc{HTTP/1.1, (http/1.1)}{0.0.0.0:6670} grafana | logger=migrator t=2024-04-26T08:52:52.649762098Z level=info msg="Migration successfully executed" id="Add check_sum column" duration=2.141482ms kafka | [2024-04-26 08:52:59,064] INFO Client environment:java.compiler= (org.apache.zookeeper.ZooKeeper) policy-apex-pdp | sasl.login.refresh.min.period.seconds = 60 zookeeper | [2024-04-26 08:52:54,065] INFO Server environment:user.home=/home/appuser (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2024-04-26 08:52:54,065] INFO Server environment:user.dir=/home/appuser (org.apache.zookeeper.server.ZooKeeperServer) policy-pap | client.rack = simulator | 2024-04-26 08:52:57,195 INFO Started Server@f478a81{STARTING}[11.0.20,sto=0] @1943ms grafana | logger=migrator t=2024-04-26T08:52:52.653556778Z level=info msg="Executing migration" id="Add index for dashboard_title" kafka | [2024-04-26 08:52:59,064] INFO Client environment:os.name=Linux (org.apache.zookeeper.ZooKeeper) policy-apex-pdp | sasl.login.refresh.window.factor = 0.8 zookeeper | [2024-04-26 08:52:54,065] INFO Server environment:os.memory.free=491MB (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2024-04-26 08:52:54,065] INFO Server 
environment:os.memory.max=512MB (org.apache.zookeeper.server.ZooKeeperServer) policy-pap | connections.max.idle.ms = 540000 simulator | 2024-04-26 08:52:57,195 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-47a86fbb==org.glassfish.jersey.servlet.ServletContainer@f8e3b478{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}}, swaggerId=null, toString()=JettyServletServer(name=VFC simulator, host=0.0.0.0, port=6670, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@f478a81{STARTED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@19553973{/,null,AVAILABLE}, connector=VFC simulator@1c3146bc{HTTP/1.1, (http/1.1)}{0.0.0.0:6670}, jettyThread=Thread[VFC simulator-6670,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-47a86fbb==org.glassfish.jersey.servlet.ServletContainer@f8e3b478{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}})]: pending time is 4929 ms. grafana | logger=migrator t=2024-04-26T08:52:52.654359247Z level=info msg="Migration successfully executed" id="Add index for dashboard_title" duration=803.039µs kafka | [2024-04-26 08:52:59,064] INFO Client environment:os.arch=amd64 (org.apache.zookeeper.ZooKeeper) policy-apex-pdp | sasl.login.refresh.window.jitter = 0.05 zookeeper | [2024-04-26 08:52:54,065] INFO Server environment:os.memory.total=512MB (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2024-04-26 08:52:54,065] INFO zookeeper.enableEagerACLCheck = false (org.apache.zookeeper.server.ZooKeeperServer) policy-pap | default.api.timeout.ms = 60000 simulator | 2024-04-26 08:52:57,197 INFO org.onap.policy.models.simulators started grafana | logger=migrator t=2024-04-26T08:52:52.65779778Z level=info msg="Executing migration" id="delete tags for deleted dashboards" kafka | [2024-04-26 08:52:59,064] INFO Client environment:os.version=4.15.0-192-generic (org.apache.zookeeper.ZooKeeper) policy-apex-pdp | 
sasl.login.retry.backoff.max.ms = 10000 zookeeper | [2024-04-26 08:52:54,065] INFO zookeeper.digest.enabled = true (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2024-04-26 08:52:54,065] INFO zookeeper.closeSessionTxn.enabled = true (org.apache.zookeeper.server.ZooKeeperServer) policy-pap | enable.auto.commit = true kafka | [2024-04-26 08:52:59,064] INFO Client environment:user.name=appuser (org.apache.zookeeper.ZooKeeper) grafana | logger=migrator t=2024-04-26T08:52:52.657974059Z level=info msg="Migration successfully executed" id="delete tags for deleted dashboards" duration=178.369µs policy-apex-pdp | sasl.login.retry.backoff.ms = 100 zookeeper | [2024-04-26 08:52:54,065] INFO zookeeper.flushDelay = 0 ms (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2024-04-26 08:52:54,065] INFO zookeeper.maxWriteQueuePollTime = 0 ms (org.apache.zookeeper.server.ZooKeeperServer) policy-pap | exclude.internal.topics = true kafka | [2024-04-26 08:52:59,064] INFO Client environment:user.home=/home/appuser (org.apache.zookeeper.ZooKeeper) grafana | logger=migrator t=2024-04-26T08:52:52.71266992Z level=info msg="Executing migration" id="delete stars for deleted dashboards" policy-apex-pdp | sasl.mechanism = GSSAPI zookeeper | [2024-04-26 08:52:54,065] INFO zookeeper.maxBatchSize=1000 (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2024-04-26 08:52:54,065] INFO zookeeper.intBufferStartingSizeBytes = 1024 (org.apache.zookeeper.server.ZooKeeperServer) policy-pap | fetch.max.bytes = 52428800 kafka | [2024-04-26 08:52:59,064] INFO Client environment:user.dir=/home/appuser (org.apache.zookeeper.ZooKeeper) grafana | logger=migrator t=2024-04-26T08:52:52.712925743Z level=info msg="Migration successfully executed" id="delete stars for deleted dashboards" duration=256.844µs policy-apex-pdp | sasl.oauthbearer.clock.skew.seconds = 30 zookeeper | [2024-04-26 08:52:54,066] INFO Weighed connection throttling is disabled (org.apache.zookeeper.server.BlueThrottle) 
zookeeper | [2024-04-26 08:52:54,067] INFO minSessionTimeout set to 4000 ms (org.apache.zookeeper.server.ZooKeeperServer) policy-pap | fetch.max.wait.ms = 500 kafka | [2024-04-26 08:52:59,064] INFO Client environment:os.memory.free=1008MB (org.apache.zookeeper.ZooKeeper) grafana | logger=migrator t=2024-04-26T08:52:52.717678148Z level=info msg="Executing migration" id="Add index for dashboard_is_folder" policy-apex-pdp | sasl.oauthbearer.expected.audience = null zookeeper | [2024-04-26 08:52:54,067] INFO maxSessionTimeout set to 40000 ms (org.apache.zookeeper.server.ZooKeeperServer) policy-db-migrator | -------------- policy-pap | fetch.min.bytes = 1 kafka | [2024-04-26 08:52:59,064] INFO Client environment:os.memory.max=1024MB (org.apache.zookeeper.ZooKeeper) grafana | logger=migrator t=2024-04-26T08:52:52.718985861Z level=info msg="Migration successfully executed" id="Add index for dashboard_is_folder" duration=1.302212ms policy-apex-pdp | sasl.oauthbearer.expected.issuer = null zookeeper | [2024-04-26 08:52:54,068] INFO getData response cache size is initialized with value 400. (org.apache.zookeeper.server.ResponseCache) policy-db-migrator | policy-pap | group.id = c2be2c80-205d-4227-951f-9a7c12c2d5ee kafka | [2024-04-26 08:52:59,064] INFO Client environment:os.memory.total=1024MB (org.apache.zookeeper.ZooKeeper) grafana | logger=migrator t=2024-04-26T08:52:52.723518656Z level=info msg="Executing migration" id="Add isPublic for dashboard" policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 zookeeper | [2024-04-26 08:52:54,068] INFO getChildren response cache size is initialized with value 400. 
(org.apache.zookeeper.server.ResponseCache) policy-db-migrator | policy-pap | group.instance.id = null kafka | [2024-04-26 08:52:59,067] INFO Initiating client connection, connectString=zookeeper:2181 sessionTimeout=18000 watcher=kafka.zookeeper.ZooKeeperClient$ZooKeeperClientWatcher$@447a020 (org.apache.zookeeper.ZooKeeper) grafana | logger=migrator t=2024-04-26T08:52:52.726741349Z level=info msg="Migration successfully executed" id="Add isPublic for dashboard" duration=3.248345ms policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 zookeeper | [2024-04-26 08:52:54,068] INFO zookeeper.pathStats.slotCapacity = 60 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) policy-db-migrator | > upgrade 0150-jpatoscacapabilityassignment_attributes.sql policy-pap | heartbeat.interval.ms = 3000 kafka | [2024-04-26 08:52:59,070] INFO jute.maxbuffer value is 4194304 Bytes (org.apache.zookeeper.ClientCnxnSocket) grafana | logger=migrator t=2024-04-26T08:52:52.730586272Z level=info msg="Executing migration" id="create data_source table" policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 zookeeper | [2024-04-26 08:52:54,069] INFO zookeeper.pathStats.slotDuration = 15 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) policy-db-migrator | -------------- policy-pap | interceptor.classes = [] kafka | [2024-04-26 08:52:59,076] INFO zookeeper.request.timeout value is 0. 
feature enabled=false (org.apache.zookeeper.ClientCnxn) grafana | logger=migrator t=2024-04-26T08:52:52.73160071Z level=info msg="Migration successfully executed" id="create data_source table" duration=1.020088ms policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.url = null zookeeper | [2024-04-26 08:52:54,069] INFO zookeeper.pathStats.maxDepth = 6 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilityassignment_attributes (name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, ATTRIBUTES LONGTEXT DEFAULT NULL, ATTRIBUTES_KEY VARCHAR(255) DEFAULT NULL) policy-pap | internal.leave.group.on.close = true kafka | [2024-04-26 08:52:59,080] INFO [ZooKeeperClient Kafka server] Waiting until connected. (kafka.zookeeper.ZooKeeperClient) grafana | logger=migrator t=2024-04-26T08:52:52.736328035Z level=info msg="Executing migration" id="add index data_source.account_id" policy-apex-pdp | sasl.oauthbearer.scope.claim.name = scope zookeeper | [2024-04-26 08:52:54,069] INFO zookeeper.pathStats.initialDelay = 5 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) policy-db-migrator | -------------- policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false kafka | [2024-04-26 08:52:59,084] INFO Opening socket connection to server zookeeper/172.17.0.2:2181. 
(org.apache.zookeeper.ClientCnxn) grafana | logger=migrator t=2024-04-26T08:52:52.737144654Z level=info msg="Migration successfully executed" id="add index data_source.account_id" duration=816.469µs policy-apex-pdp | sasl.oauthbearer.sub.claim.name = sub policy-db-migrator | zookeeper | [2024-04-26 08:52:54,069] INFO zookeeper.pathStats.delay = 5 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) policy-pap | isolation.level = read_uncommitted kafka | [2024-04-26 08:52:59,091] INFO Socket connection established, initiating session, client: /172.17.0.6:47468, server: zookeeper/172.17.0.2:2181 (org.apache.zookeeper.ClientCnxn) grafana | logger=migrator t=2024-04-26T08:52:52.741607517Z level=info msg="Executing migration" id="add unique index data_source.account_id_name" policy-apex-pdp | sasl.oauthbearer.token.endpoint.url = null policy-db-migrator | zookeeper | [2024-04-26 08:52:54,069] INFO zookeeper.pathStats.enabled = false (org.apache.zookeeper.server.util.RequestPathMetricsCollector) policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer kafka | [2024-04-26 08:52:59,102] INFO Session establishment complete on server zookeeper/172.17.0.2:2181, session id = 0x10000046b1e0001, negotiated timeout = 18000 (org.apache.zookeeper.ClientCnxn) grafana | logger=migrator t=2024-04-26T08:52:52.742456967Z level=info msg="Migration successfully executed" id="add unique index data_source.account_id_name" duration=849.781µs policy-apex-pdp | security.protocol = PLAINTEXT policy-db-migrator | > upgrade 0160-jpatoscacapabilityassignment_metadata.sql zookeeper | [2024-04-26 08:52:54,071] INFO The max bytes for all large requests are set to 104857600 (org.apache.zookeeper.server.ZooKeeperServer) policy-pap | max.partition.fetch.bytes = 1048576 kafka | [2024-04-26 08:52:59,107] INFO [ZooKeeperClient Kafka server] Connected. 
(kafka.zookeeper.ZooKeeperClient) grafana | logger=migrator t=2024-04-26T08:52:52.746686638Z level=info msg="Executing migration" id="drop index IDX_data_source_account_id - v1" policy-apex-pdp | security.providers = null policy-db-migrator | -------------- zookeeper | [2024-04-26 08:52:54,071] INFO The large request threshold is set to -1 (org.apache.zookeeper.server.ZooKeeperServer) kafka | [2024-04-26 08:52:59,642] INFO Cluster ID = rK8eMBuCRaO0vWITIr3dSg (kafka.server.KafkaServer) grafana | logger=migrator t=2024-04-26T08:52:52.747455915Z level=info msg="Migration successfully executed" id="drop index IDX_data_source_account_id - v1" duration=768.517µs policy-apex-pdp | send.buffer.bytes = 131072 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilityassignment_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) zookeeper | [2024-04-26 08:52:54,071] INFO zookeeper.enforce.auth.enabled = false (org.apache.zookeeper.server.AuthenticationHelper) policy-pap | max.poll.interval.ms = 300000 kafka | [2024-04-26 08:52:59,647] WARN No meta.properties file under dir /var/lib/kafka/data/meta.properties (kafka.server.BrokerMetadataCheckpoint) grafana | logger=migrator t=2024-04-26T08:52:52.752455963Z level=info msg="Executing migration" id="drop index UQE_data_source_account_id_name - v1" policy-apex-pdp | session.timeout.ms = 45000 policy-db-migrator | -------------- zookeeper | [2024-04-26 08:52:54,071] INFO zookeeper.enforce.auth.schemes = [] (org.apache.zookeeper.server.AuthenticationHelper) policy-pap | max.poll.records = 500 kafka | [2024-04-26 08:52:59,705] INFO KafkaConfig values: grafana | logger=migrator t=2024-04-26T08:52:52.753571895Z level=info msg="Migration successfully executed" id="drop index UQE_data_source_account_id_name - v1" duration=1.115993ms policy-apex-pdp | socket.connection.setup.timeout.max.ms = 30000 policy-db-migrator | zookeeper | [2024-04-26 08:52:54,071] INFO 
Created server with tickTime 2000 ms minSessionTimeout 4000 ms maxSessionTimeout 40000 ms clientPortListenBacklog -1 datadir /var/lib/zookeeper/log/version-2 snapdir /var/lib/zookeeper/data/version-2 (org.apache.zookeeper.server.ZooKeeperServer) policy-pap | metadata.max.age.ms = 300000 kafka | advertised.listeners = PLAINTEXT://kafka:9092,PLAINTEXT_HOST://localhost:29092 grafana | logger=migrator t=2024-04-26T08:52:52.757750845Z level=info msg="Executing migration" id="Rename table data_source to data_source_v1 - v1" policy-apex-pdp | socket.connection.setup.timeout.ms = 10000 policy-db-migrator | zookeeper | [2024-04-26 08:52:54,091] INFO Logging initialized @538ms to org.eclipse.jetty.util.log.Slf4jLog (org.eclipse.jetty.util.log) policy-pap | metric.reporters = [] kafka | alter.config.policy.class.name = null grafana | logger=migrator t=2024-04-26T08:52:52.765585256Z level=info msg="Migration successfully executed" id="Rename table data_source to data_source_v1 - v1" duration=7.830092ms policy-apex-pdp | ssl.cipher.suites = null policy-db-migrator | > upgrade 0170-jpatoscacapabilityassignment_occurrences.sql zookeeper | [2024-04-26 08:52:54,175] WARN o.e.j.s.ServletContextHandler@6d5620ce{/,null,STOPPED} contextPath ends with /* (org.eclipse.jetty.server.handler.ContextHandler) policy-pap | metrics.num.samples = 2 kafka | alter.log.dirs.replication.quota.window.num = 11 grafana | logger=migrator t=2024-04-26T08:52:52.771013625Z level=info msg="Executing migration" id="create data_source table v2" policy-apex-pdp | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] policy-db-migrator | -------------- zookeeper | [2024-04-26 08:52:54,175] WARN Empty contextPath (org.eclipse.jetty.server.handler.ContextHandler) policy-pap | metrics.recording.level = INFO kafka | alter.log.dirs.replication.quota.window.size.seconds = 1 grafana | logger=migrator t=2024-04-26T08:52:52.77216369Z level=info msg="Migration successfully executed" id="create data_source table v2" 
duration=1.156296ms policy-apex-pdp | ssl.endpoint.identification.algorithm = https policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilityassignment_occurrences (name VARCHAR(120) NULL, version VARCHAR(20) NULL, OCCURRENCES INT DEFAULT NULL) zookeeper | [2024-04-26 08:52:54,194] INFO jetty-9.4.54.v20240208; built: 2024-02-08T19:42:39.027Z; git: cef3fbd6d736a21e7d541a5db490381d95a2047d; jvm 11.0.22+7-LTS (org.eclipse.jetty.server.Server) policy-pap | metrics.sample.window.ms = 30000 kafka | authorizer.class.name = grafana | logger=migrator t=2024-04-26T08:52:52.775584182Z level=info msg="Executing migration" id="create index IDX_data_source_org_id - v2" policy-apex-pdp | ssl.engine.factory.class = null policy-db-migrator | -------------- zookeeper | [2024-04-26 08:52:54,235] INFO DefaultSessionIdManager workerName=node0 (org.eclipse.jetty.server.session) policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] kafka | auto.create.topics.enable = true grafana | logger=migrator t=2024-04-26T08:52:52.776478995Z level=info msg="Migration successfully executed" id="create index IDX_data_source_org_id - v2" duration=894.773µs policy-apex-pdp | ssl.key.password = null policy-db-migrator | zookeeper | [2024-04-26 08:52:54,235] INFO No SessionScavenger set, using defaults (org.eclipse.jetty.server.session) policy-pap | receive.buffer.bytes = 65536 kafka | auto.include.jmx.reporter = true grafana | logger=migrator t=2024-04-26T08:52:52.780111917Z level=info msg="Executing migration" id="create index UQE_data_source_org_id_name - v2" policy-apex-pdp | ssl.keymanager.algorithm = SunX509 policy-db-migrator | zookeeper | [2024-04-26 08:52:54,236] INFO node0 Scavenging every 600000ms (org.eclipse.jetty.server.session) policy-pap | reconnect.backoff.max.ms = 1000 kafka | auto.leader.rebalance.enable = true grafana | logger=migrator 
t=2024-04-26T08:52:52.781031882Z level=info msg="Migration successfully executed" id="create index UQE_data_source_org_id_name - v2" duration=920.284µs policy-apex-pdp | ssl.keystore.certificate.chain = null policy-db-migrator | > upgrade 0180-jpatoscacapabilityassignment_properties.sql zookeeper | [2024-04-26 08:52:54,239] WARN ServletContext@o.e.j.s.ServletContextHandler@6d5620ce{/,null,STARTING} has uncovered http methods for path: /* (org.eclipse.jetty.security.SecurityHandler) policy-pap | reconnect.backoff.ms = 50 kafka | background.threads = 10 grafana | logger=migrator t=2024-04-26T08:52:52.785840891Z level=info msg="Executing migration" id="Drop old table data_source_v1 #2" policy-apex-pdp | ssl.keystore.key = null policy-db-migrator | -------------- zookeeper | [2024-04-26 08:52:54,250] INFO Started o.e.j.s.ServletContextHandler@6d5620ce{/,null,AVAILABLE} (org.eclipse.jetty.server.handler.ContextHandler) policy-pap | request.timeout.ms = 30000 kafka | broker.heartbeat.interval.ms = 2000 grafana | logger=migrator t=2024-04-26T08:52:52.786411648Z level=info msg="Migration successfully executed" id="Drop old table data_source_v1 #2" duration=570.697µs policy-apex-pdp | ssl.keystore.location = null policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilityassignment_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGTEXT NULL, PROPERTIES_KEY VARCHAR(255) NULL) zookeeper | [2024-04-26 08:52:54,273] INFO Started ServerConnector@4d1bf319{HTTP/1.1, (http/1.1)}{0.0.0.0:8080} (org.eclipse.jetty.server.AbstractConnector) policy-pap | retry.backoff.ms = 100 kafka | broker.id = 1 grafana | logger=migrator t=2024-04-26T08:52:52.790822317Z level=info msg="Executing migration" id="Add column with_credentials" policy-apex-pdp | ssl.keystore.password = null policy-db-migrator | -------------- zookeeper | [2024-04-26 08:52:54,277] INFO Started @723ms (org.eclipse.jetty.server.Server) policy-pap | sasl.client.callback.handler.class = null 
kafka | broker.id.generation.enable = true grafana | logger=migrator t=2024-04-26T08:52:52.793331376Z level=info msg="Migration successfully executed" id="Add column with_credentials" duration=2.509559ms policy-apex-pdp | ssl.keystore.type = JKS policy-db-migrator | policy-pap | sasl.jaas.config = null kafka | broker.rack = null grafana | logger=migrator t=2024-04-26T08:52:52.796848373Z level=info msg="Executing migration" id="Add secure json data column" policy-apex-pdp | ssl.protocol = TLSv1.3 policy-db-migrator | zookeeper | [2024-04-26 08:52:54,277] INFO Started AdminServer on address 0.0.0.0, port 8080 and command URL /commands (org.apache.zookeeper.server.admin.JettyAdminServer) policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit kafka | broker.session.timeout.ms = 9000 grafana | logger=migrator t=2024-04-26T08:52:52.799262968Z level=info msg="Migration successfully executed" id="Add secure json data column" duration=2.413655ms policy-apex-pdp | ssl.provider = null policy-db-migrator | > upgrade 0190-jpatoscacapabilitytype_metadata.sql zookeeper | [2024-04-26 08:52:54,285] INFO Using org.apache.zookeeper.server.NIOServerCnxnFactory as server connection factory (org.apache.zookeeper.server.ServerCnxnFactory) policy-pap | sasl.kerberos.min.time.before.relogin = 60000 kafka | client.quota.callback.class = null grafana | logger=migrator t=2024-04-26T08:52:52.803601395Z level=info msg="Executing migration" id="Update data_source table charset" policy-apex-pdp | ssl.secure.random.implementation = null policy-db-migrator | -------------- zookeeper | [2024-04-26 08:52:54,286] WARN maxCnxns is not configured, using default value 0. 
(org.apache.zookeeper.server.ServerCnxnFactory) policy-pap | sasl.kerberos.service.name = null kafka | compression.type = producer grafana | logger=migrator t=2024-04-26T08:52:52.803628206Z level=info msg="Migration successfully executed" id="Update data_source table charset" duration=26.861µs policy-apex-pdp | ssl.trustmanager.algorithm = PKIX policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilitytype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) zookeeper | [2024-04-26 08:52:54,292] INFO Configuring NIO connection handler with 10s sessionless connection timeout, 2 selector thread(s), 16 worker threads, and 64 kB direct buffers. (org.apache.zookeeper.server.NIOServerCnxnFactory) zookeeper | [2024-04-26 08:52:54,295] INFO binding to port 0.0.0.0/0.0.0.0:2181 (org.apache.zookeeper.server.NIOServerCnxnFactory) kafka | connection.failed.authentication.delay.ms = 100 grafana | logger=migrator t=2024-04-26T08:52:52.807227237Z level=info msg="Executing migration" id="Update initial version to 1" policy-apex-pdp | ssl.truststore.certificates = null policy-db-migrator | -------------- zookeeper | [2024-04-26 08:52:54,313] INFO Using org.apache.zookeeper.server.watch.WatchManager as watch manager (org.apache.zookeeper.server.watch.WatchManagerFactory) zookeeper | [2024-04-26 08:52:54,314] INFO Using org.apache.zookeeper.server.watch.WatchManager as watch manager (org.apache.zookeeper.server.watch.WatchManagerFactory) kafka | connections.max.idle.ms = 600000 grafana | logger=migrator t=2024-04-26T08:52:52.807466988Z level=info msg="Migration successfully executed" id="Update initial version to 1" duration=239.901µs policy-apex-pdp | ssl.truststore.location = null policy-db-migrator | policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 zookeeper | [2024-04-26 08:52:54,315] INFO zookeeper.snapshotSizeFactor = 0.33 (org.apache.zookeeper.server.ZKDatabase) kafka | connections.max.reauth.ms = 
0 grafana | logger=migrator t=2024-04-26T08:52:52.810965655Z level=info msg="Executing migration" id="Add read_only data column" policy-apex-pdp | ssl.truststore.password = null policy-db-migrator | policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 zookeeper | [2024-04-26 08:52:54,315] INFO zookeeper.commitLogCount=500 (org.apache.zookeeper.server.ZKDatabase) kafka | control.plane.listener.name = null grafana | logger=migrator t=2024-04-26T08:52:52.813446973Z level=info msg="Migration successfully executed" id="Add read_only data column" duration=2.481289ms policy-apex-pdp | ssl.truststore.type = JKS policy-db-migrator | > upgrade 0200-jpatoscacapabilitytype_properties.sql policy-pap | sasl.login.callback.handler.class = null zookeeper | [2024-04-26 08:52:54,320] INFO zookeeper.snapshot.compression.method = CHECKED (org.apache.zookeeper.server.persistence.SnapStream) kafka | controlled.shutdown.enable = true grafana | logger=migrator t=2024-04-26T08:52:52.817298627Z level=info msg="Executing migration" id="Migrate logging ds to loki ds" policy-apex-pdp | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-db-migrator | -------------- policy-pap | sasl.login.class = null zookeeper | [2024-04-26 08:52:54,320] INFO Snapshotting: 0x0 to /var/lib/zookeeper/data/version-2/snapshot.0 (org.apache.zookeeper.server.persistence.FileTxnSnapLog) kafka | controlled.shutdown.max.retries = 3 grafana | logger=migrator t=2024-04-26T08:52:52.817538788Z level=info msg="Migration successfully executed" id="Migrate logging ds to loki ds" duration=240.372µs policy-apex-pdp | policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilitytype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL) policy-pap | sasl.login.connect.timeout.ms = null zookeeper | [2024-04-26 08:52:54,323] INFO Snapshot loaded in 9 ms, highest zxid is 0x0, digest is 1371985504 
(org.apache.zookeeper.server.ZKDatabase) kafka | controlled.shutdown.retry.backoff.ms = 5000 grafana | logger=migrator t=2024-04-26T08:52:52.822596349Z level=info msg="Executing migration" id="Update json_data with nulls" policy-apex-pdp | [2024-04-26T08:53:36.024+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 policy-db-migrator | -------------- policy-pap | sasl.login.read.timeout.ms = null zookeeper | [2024-04-26 08:52:54,324] INFO Snapshotting: 0x0 to /var/lib/zookeeper/data/version-2/snapshot.0 (org.apache.zookeeper.server.persistence.FileTxnSnapLog) kafka | controller.listener.names = null grafana | logger=migrator t=2024-04-26T08:52:52.82284161Z level=info msg="Migration successfully executed" id="Update json_data with nulls" duration=246.502µs policy-apex-pdp | [2024-04-26T08:53:36.024+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 policy-db-migrator | policy-pap | sasl.login.refresh.buffer.seconds = 300 zookeeper | [2024-04-26 08:52:54,324] INFO Snapshot taken in 0 ms (org.apache.zookeeper.server.ZooKeeperServer) kafka | controller.quorum.append.linger.ms = 25 grafana | logger=migrator t=2024-04-26T08:52:52.826661562Z level=info msg="Executing migration" id="Add uid column" policy-apex-pdp | [2024-04-26T08:53:36.024+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1714121616022 policy-db-migrator | policy-pap | sasl.login.refresh.min.period.seconds = 60 zookeeper | [2024-04-26 08:52:54,335] INFO zookeeper.request_throttler.shutdownTimeout = 10000 ms (org.apache.zookeeper.server.RequestThrottler) kafka | controller.quorum.election.backoff.max.ms = 1000 grafana | logger=migrator t=2024-04-26T08:52:52.830447831Z level=info msg="Migration successfully executed" id="Add uid column" duration=3.78626ms policy-apex-pdp | [2024-04-26T08:53:36.026+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-47b1e3a1-a4a9-4bf2-95ae-f10384287681-1, groupId=47b1e3a1-a4a9-4bf2-95ae-f10384287681] Subscribed to topic(s): policy-pdp-pap 
policy-db-migrator | > upgrade 0210-jpatoscadatatype_constraints.sql policy-pap | sasl.login.refresh.window.factor = 0.8 zookeeper | [2024-04-26 08:52:54,336] INFO PrepRequestProcessor (sid:0) started, reconfigEnabled=false (org.apache.zookeeper.server.PrepRequestProcessor) kafka | controller.quorum.election.timeout.ms = 1000 grafana | logger=migrator t=2024-04-26T08:52:52.833879805Z level=info msg="Executing migration" id="Update uid value" policy-apex-pdp | [2024-04-26T08:53:36.039+00:00|INFO|ServiceManager|main] service manager starting policy-db-migrator | -------------- policy-pap | sasl.login.refresh.window.jitter = 0.05 zookeeper | [2024-04-26 08:52:54,352] INFO Using checkIntervalMs=60000 maxPerMinute=10000 maxNeverUsedIntervalMs=0 (org.apache.zookeeper.server.ContainerManager) kafka | controller.quorum.fetch.timeout.ms = 2000 grafana | logger=migrator t=2024-04-26T08:52:52.834092255Z level=info msg="Migration successfully executed" id="Update uid value" duration=212.74µs policy-apex-pdp | [2024-04-26T08:53:36.040+00:00|INFO|ServiceManager|main] service manager starting topics policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscadatatype_constraints (name VARCHAR(120) NULL, version VARCHAR(20) NULL, CONSTRAINTS VARCHAR(255) NULL) policy-pap | sasl.login.retry.backoff.max.ms = 10000 zookeeper | [2024-04-26 08:52:54,353] INFO ZooKeeper audit is disabled. 
(org.apache.zookeeper.audit.ZKAuditProvider) kafka | controller.quorum.request.timeout.ms = 2000 grafana | logger=migrator t=2024-04-26T08:52:52.838295465Z level=info msg="Executing migration" id="Add unique index datasource_org_id_uid" policy-apex-pdp | [2024-04-26T08:53:36.041+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=47b1e3a1-a4a9-4bf2-95ae-f10384287681, consumerInstance=policy-apex-pdp, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: starting policy-db-migrator | -------------- policy-pap | sasl.login.retry.backoff.ms = 100 zookeeper | [2024-04-26 08:52:57,565] INFO Creating new log file: log.1 (org.apache.zookeeper.server.persistence.FileTxnLog) kafka | controller.quorum.retry.backoff.ms = 20 grafana | logger=migrator t=2024-04-26T08:52:52.839122015Z level=info msg="Migration successfully executed" id="Add unique index datasource_org_id_uid" duration=821.699µs policy-apex-pdp | [2024-04-26T08:53:36.061+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: policy-db-migrator | policy-pap | sasl.mechanism = GSSAPI kafka | controller.quorum.voters = [] grafana | logger=migrator t=2024-04-26T08:52:52.842972267Z level=info msg="Executing migration" id="add unique index datasource_org_id_is_default" policy-apex-pdp | allow.auto.create.topics = true policy-db-migrator | policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 policy-pap | sasl.oauthbearer.expected.audience = null grafana | logger=migrator t=2024-04-26T08:52:52.843793346Z level=info msg="Migration successfully executed" id="add unique 
index datasource_org_id_is_default" duration=820.859µs policy-apex-pdp | auto.commit.interval.ms = 5000 policy-db-migrator | > upgrade 0220-jpatoscadatatype_metadata.sql policy-pap | sasl.oauthbearer.expected.issuer = null policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 grafana | logger=migrator t=2024-04-26T08:52:52.848023958Z level=info msg="Executing migration" id="create api_key table" policy-apex-pdp | auto.include.jmx.reporter = true policy-db-migrator | -------------- policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 grafana | logger=migrator t=2024-04-26T08:52:52.849353041Z level=info msg="Migration successfully executed" id="create api_key table" duration=1.331622ms policy-apex-pdp | auto.offset.reset = latest policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscadatatype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) policy-pap | sasl.oauthbearer.jwks.endpoint.url = null policy-pap | sasl.oauthbearer.scope.claim.name = scope grafana | logger=migrator t=2024-04-26T08:52:52.855962716Z level=info msg="Executing migration" id="add index api_key.account_id" policy-apex-pdp | bootstrap.servers = [kafka:9092] policy-db-migrator | -------------- policy-pap | sasl.oauthbearer.sub.claim.name = sub policy-pap | sasl.oauthbearer.token.endpoint.url = null grafana | logger=migrator t=2024-04-26T08:52:52.857388093Z level=info msg="Migration successfully executed" id="add index api_key.account_id" duration=1.426218ms policy-apex-pdp | check.crcs = true policy-db-migrator | policy-pap | security.protocol = PLAINTEXT policy-pap | security.providers = null grafana | logger=migrator t=2024-04-26T08:52:52.861720259Z level=info msg="Executing migration" id="add index api_key.key" policy-apex-pdp | client.dns.lookup = use_all_dns_ips policy-db-migrator | policy-pap | send.buffer.bytes = 131072 
policy-pap | session.timeout.ms = 45000 policy-apex-pdp | client.id = consumer-47b1e3a1-a4a9-4bf2-95ae-f10384287681-2 grafana | logger=migrator t=2024-04-26T08:52:52.862565179Z level=info msg="Migration successfully executed" id="add index api_key.key" duration=845.76µs policy-db-migrator | > upgrade 0230-jpatoscadatatype_properties.sql policy-pap | socket.connection.setup.timeout.max.ms = 30000 policy-pap | socket.connection.setup.timeout.ms = 10000 policy-apex-pdp | client.rack = grafana | logger=migrator t=2024-04-26T08:52:52.867760217Z level=info msg="Executing migration" id="add index api_key.account_id_name" policy-db-migrator | -------------- policy-pap | ssl.cipher.suites = null kafka | controller.quota.window.num = 11 grafana | logger=migrator t=2024-04-26T08:52:52.868827087Z level=info msg="Migration successfully executed" id="add index api_key.account_id_name" duration=1.065261ms policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscadatatype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL) policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] policy-apex-pdp | connections.max.idle.ms = 540000 kafka | controller.quota.window.size.seconds = 1 grafana | logger=migrator t=2024-04-26T08:52:52.874051256Z level=info msg="Executing migration" id="drop index IDX_api_key_account_id - v1" policy-db-migrator | -------------- policy-pap | ssl.endpoint.identification.algorithm = https policy-apex-pdp | default.api.timeout.ms = 60000 kafka | controller.socket.timeout.ms = 30000 grafana | logger=migrator t=2024-04-26T08:52:52.875043122Z level=info msg="Migration successfully executed" id="drop index IDX_api_key_account_id - v1" duration=992.767µs policy-db-migrator | policy-pap | ssl.engine.factory.class = null policy-apex-pdp | enable.auto.commit = true kafka | create.topic.policy.class.name = null grafana | logger=migrator t=2024-04-26T08:52:52.878563821Z level=info msg="Executing 
migration" id="drop index UQE_api_key_key - v1" policy-db-migrator | policy-pap | ssl.key.password = null policy-apex-pdp | exclude.internal.topics = true kafka | default.replication.factor = 1 grafana | logger=migrator t=2024-04-26T08:52:52.879368209Z level=info msg="Migration successfully executed" id="drop index UQE_api_key_key - v1" duration=808.188µs policy-db-migrator | > upgrade 0240-jpatoscanodetemplate_metadata.sql policy-pap | ssl.keymanager.algorithm = SunX509 policy-apex-pdp | fetch.max.bytes = 52428800 kafka | delegation.token.expiry.check.interval.ms = 3600000 grafana | logger=migrator t=2024-04-26T08:52:52.884031151Z level=info msg="Executing migration" id="drop index UQE_api_key_account_id_name - v1" policy-db-migrator | -------------- policy-pap | ssl.keystore.certificate.chain = null policy-apex-pdp | fetch.max.wait.ms = 500 kafka | delegation.token.expiry.time.ms = 86400000 grafana | logger=migrator t=2024-04-26T08:52:52.885212726Z level=info msg="Migration successfully executed" id="drop index UQE_api_key_account_id_name - v1" duration=1.181536ms policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscanodetemplate_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) policy-pap | ssl.keystore.key = null policy-apex-pdp | fetch.min.bytes = 1 kafka | delegation.token.master.key = null grafana | logger=migrator t=2024-04-26T08:52:52.890319179Z level=info msg="Executing migration" id="Rename table api_key to api_key_v1 - v1" policy-db-migrator | -------------- policy-pap | ssl.keystore.location = null policy-apex-pdp | group.id = 47b1e3a1-a4a9-4bf2-95ae-f10384287681 kafka | delegation.token.max.lifetime.ms = 604800000 grafana | logger=migrator t=2024-04-26T08:52:52.902071528Z level=info msg="Migration successfully executed" id="Rename table api_key to api_key_v1 - v1" duration=11.746629ms policy-db-migrator | policy-pap | ssl.keystore.password = null policy-apex-pdp | 
group.instance.id = null kafka | delegation.token.secret.key = null grafana | logger=migrator t=2024-04-26T08:52:52.906528011Z level=info msg="Executing migration" id="create api_key table v2" policy-db-migrator | policy-pap | ssl.keystore.type = JKS policy-apex-pdp | heartbeat.interval.ms = 3000 kafka | delete.records.purgatory.purge.interval.requests = 1 grafana | logger=migrator t=2024-04-26T08:52:52.907175071Z level=info msg="Migration successfully executed" id="create api_key table v2" duration=652.821µs policy-db-migrator | > upgrade 0250-jpatoscanodetemplate_properties.sql policy-pap | ssl.protocol = TLSv1.3 policy-apex-pdp | interceptor.classes = [] kafka | delete.topic.enable = true grafana | logger=migrator t=2024-04-26T08:52:52.911616462Z level=info msg="Executing migration" id="create index IDX_api_key_org_id - v2" policy-db-migrator | -------------- policy-pap | ssl.provider = null policy-apex-pdp | internal.leave.group.on.close = true kafka | early.start.listeners = null grafana | logger=migrator t=2024-04-26T08:52:52.912806459Z level=info msg="Migration successfully executed" id="create index IDX_api_key_org_id - v2" duration=1.189887ms policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscanodetemplate_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGTEXT NULL, PROPERTIES_KEY VARCHAR(255) NULL) policy-pap | ssl.secure.random.implementation = null policy-apex-pdp | internal.throw.on.fetch.stable.offset.unsupported = false kafka | fetch.max.bytes = 57671680 grafana | logger=migrator t=2024-04-26T08:52:52.91682706Z level=info msg="Executing migration" id="create index UQE_api_key_key - v2" policy-db-migrator | -------------- policy-pap | ssl.trustmanager.algorithm = PKIX policy-apex-pdp | isolation.level = read_uncommitted kafka | fetch.purgatory.purge.interval.requests = 1000 grafana | logger=migrator t=2024-04-26T08:52:52.918107091Z level=info msg="Migration successfully executed" id="create index UQE_api_key_key - v2" 
duration=1.281271ms policy-db-migrator | policy-apex-pdp | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer kafka | group.consumer.assignors = [org.apache.kafka.coordinator.group.assignor.RangeAssignor] policy-pap | ssl.truststore.certificates = null grafana | logger=migrator t=2024-04-26T08:52:52.922328981Z level=info msg="Executing migration" id="create index UQE_api_key_org_id_name - v2" policy-db-migrator | policy-apex-pdp | max.partition.fetch.bytes = 1048576 kafka | group.consumer.heartbeat.interval.ms = 5000 policy-pap | ssl.truststore.location = null grafana | logger=migrator t=2024-04-26T08:52:52.923203104Z level=info msg="Migration successfully executed" id="create index UQE_api_key_org_id_name - v2" duration=879.493µs policy-db-migrator | > upgrade 0260-jpatoscanodetype_metadata.sql policy-apex-pdp | max.poll.interval.ms = 300000 kafka | group.consumer.max.heartbeat.interval.ms = 15000 policy-pap | ssl.truststore.password = null grafana | logger=migrator t=2024-04-26T08:52:52.95278817Z level=info msg="Executing migration" id="copy api_key v1 to v2" policy-db-migrator | -------------- policy-apex-pdp | max.poll.records = 500 kafka | group.consumer.max.session.timeout.ms = 60000 policy-pap | ssl.truststore.type = JKS grafana | logger=migrator t=2024-04-26T08:52:52.953554656Z level=info msg="Migration successfully executed" id="copy api_key v1 to v2" duration=771.596µs policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscanodetype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) policy-apex-pdp | metadata.max.age.ms = 300000 kafka | group.consumer.max.size = 2147483647 policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer grafana | logger=migrator t=2024-04-26T08:52:52.99692875Z level=info msg="Executing migration" id="Drop old table api_key_v1" policy-db-migrator | -------------- policy-apex-pdp | 
metric.reporters = [] kafka | group.consumer.min.heartbeat.interval.ms = 5000 policy-pap | grafana | logger=migrator t=2024-04-26T08:52:52.997534269Z level=info msg="Migration successfully executed" id="Drop old table api_key_v1" duration=601.808µs policy-db-migrator | policy-apex-pdp | metrics.num.samples = 2 kafka | group.consumer.min.session.timeout.ms = 45000 policy-pap | [2024-04-26T08:53:32.634+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 grafana | logger=migrator t=2024-04-26T08:52:53.001473226Z level=info msg="Executing migration" id="Update api_key table charset" policy-db-migrator | policy-apex-pdp | metrics.recording.level = INFO kafka | group.consumer.session.timeout.ms = 45000 policy-pap | [2024-04-26T08:53:32.634+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 grafana | logger=migrator t=2024-04-26T08:52:53.001498558Z level=info msg="Migration successfully executed" id="Update api_key table charset" duration=22.161µs policy-db-migrator | > upgrade 0270-jpatoscanodetype_properties.sql policy-apex-pdp | metrics.sample.window.ms = 30000 kafka | group.coordinator.new.enable = false policy-pap | [2024-04-26T08:53:32.634+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1714121612632 grafana | logger=migrator t=2024-04-26T08:52:53.005079018Z level=info msg="Executing migration" id="Add expires to api_key table" policy-db-migrator | -------------- policy-apex-pdp | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] kafka | group.coordinator.threads = 1 policy-pap | [2024-04-26T08:53:32.636+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-c2be2c80-205d-4227-951f-9a7c12c2d5ee-1, groupId=c2be2c80-205d-4227-951f-9a7c12c2d5ee] Subscribed to topic(s): policy-pdp-pap grafana | logger=migrator t=2024-04-26T08:52:53.007713051Z level=info msg="Migration successfully executed" id="Add expires to api_key table" duration=2.633663ms 
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscanodetype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL)
policy-apex-pdp | receive.buffer.bytes = 65536
kafka | group.initial.rebalance.delay.ms = 3000
policy-pap | [2024-04-26T08:53:32.637+00:00|INFO|ConsumerConfig|main] ConsumerConfig values:
grafana | logger=migrator t=2024-04-26T08:52:53.016577634Z level=info msg="Executing migration" id="Add service account foreign key"
policy-db-migrator | --------------
policy-apex-pdp | reconnect.backoff.max.ms = 1000
kafka | group.max.session.timeout.ms = 1800000
policy-pap | allow.auto.create.topics = true
grafana | logger=migrator t=2024-04-26T08:52:53.019007821Z level=info msg="Migration successfully executed" id="Add service account foreign key" duration=2.430036ms
policy-db-migrator |
policy-apex-pdp | reconnect.backoff.ms = 50
kafka | group.max.size = 2147483647
policy-pap | auto.commit.interval.ms = 5000
grafana | logger=migrator t=2024-04-26T08:52:53.026495137Z level=info msg="Executing migration" id="set service account foreign key to nil if 0"
policy-db-migrator |
policy-apex-pdp | request.timeout.ms = 30000
kafka | group.min.session.timeout.ms = 6000
policy-pap | auto.include.jmx.reporter = true
grafana | logger=migrator t=2024-04-26T08:52:53.026673034Z level=info msg="Migration successfully executed" id="set service account foreign key to nil if 0" duration=169.387µs
policy-db-migrator | > upgrade 0280-jpatoscapolicy_metadata.sql
policy-apex-pdp | retry.backoff.ms = 100
kafka | initial.broker.registration.timeout.ms = 60000
policy-pap | auto.offset.reset = latest
grafana | logger=migrator t=2024-04-26T08:52:53.032148313Z level=info msg="Executing migration" id="Add last_used_at to api_key table"
policy-db-migrator | --------------
policy-apex-pdp | sasl.client.callback.handler.class = null
kafka | inter.broker.listener.name = PLAINTEXT
policy-pap | bootstrap.servers = [kafka:9092]
grafana | logger=migrator t=2024-04-26T08:52:53.036199941Z level=info msg="Migration successfully executed" id="Add last_used_at to api_key table" duration=4.051417ms
policy-apex-pdp | sasl.jaas.config = null
kafka | inter.broker.protocol.version = 3.6-IV2
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicy_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL)
policy-pap | check.crcs = true
grafana | logger=migrator t=2024-04-26T08:52:53.04100359Z level=info msg="Executing migration" id="Add is_revoked column to api_key table"
policy-apex-pdp | sasl.kerberos.kinit.cmd = /usr/bin/kinit
kafka | kafka.metrics.polling.interval.secs = 10
policy-db-migrator | --------------
policy-pap | client.dns.lookup = use_all_dns_ips
grafana | logger=migrator t=2024-04-26T08:52:53.043991261Z level=info msg="Migration successfully executed" id="Add is_revoked column to api_key table" duration=2.987201ms
policy-apex-pdp | sasl.kerberos.min.time.before.relogin = 60000
kafka | kafka.metrics.reporters = []
policy-db-migrator |
policy-pap | client.id = consumer-policy-pap-2
grafana | logger=migrator t=2024-04-26T08:52:53.049291362Z level=info msg="Executing migration" id="create dashboard_snapshot table v4"
policy-apex-pdp | sasl.kerberos.service.name = null
kafka | leader.imbalance.check.interval.seconds = 300
policy-db-migrator |
policy-pap | client.rack =
grafana | logger=migrator t=2024-04-26T08:52:53.050060055Z level=info msg="Migration successfully executed" id="create dashboard_snapshot table v4" duration=768.093µs
policy-apex-pdp | sasl.kerberos.ticket.renew.jitter = 0.05
kafka | leader.imbalance.per.broker.percentage = 10
policy-db-migrator | > upgrade 0290-jpatoscapolicy_properties.sql
policy-pap | connections.max.idle.ms = 540000
grafana | logger=migrator t=2024-04-26T08:52:53.053999087Z level=info msg="Executing migration" id="drop table dashboard_snapshot_v4 #1"
policy-apex-pdp | sasl.kerberos.ticket.renew.window.factor = 0.8
kafka | listener.security.protocol.map = PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
policy-db-migrator | --------------
policy-pap | default.api.timeout.ms = 60000
grafana | logger=migrator t=2024-04-26T08:52:53.054764001Z level=info msg="Migration successfully executed" id="drop table dashboard_snapshot_v4 #1" duration=760.914µs
policy-apex-pdp | sasl.login.callback.handler.class = null
kafka | listeners = PLAINTEXT://0.0.0.0:9092,PLAINTEXT_HOST://0.0.0.0:29092
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicy_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGTEXT NULL, PROPERTIES_KEY VARCHAR(255) NULL)
policy-pap | enable.auto.commit = true
grafana | logger=migrator t=2024-04-26T08:52:53.05934969Z level=info msg="Executing migration" id="create dashboard_snapshot table v5 #2"
policy-apex-pdp | sasl.login.class = null
kafka | log.cleaner.backoff.ms = 15000
policy-db-migrator | --------------
policy-pap | exclude.internal.topics = true
grafana | logger=migrator t=2024-04-26T08:52:53.060658718Z level=info msg="Migration successfully executed" id="create dashboard_snapshot table v5 #2" duration=1.308358ms
policy-apex-pdp | sasl.login.connect.timeout.ms = null
kafka | log.cleaner.dedupe.buffer.size = 134217728
policy-db-migrator |
policy-pap | fetch.max.bytes = 52428800
grafana | logger=migrator t=2024-04-26T08:52:53.064458463Z level=info msg="Executing migration" id="create index UQE_dashboard_snapshot_key - v5"
policy-apex-pdp | sasl.login.read.timeout.ms = null
kafka | log.cleaner.delete.retention.ms = 86400000
policy-db-migrator |
policy-pap | fetch.max.wait.ms = 500
grafana | logger=migrator t=2024-04-26T08:52:53.065221316Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_snapshot_key - v5" duration=762.363µs
policy-apex-pdp | sasl.login.refresh.buffer.seconds = 300
kafka | log.cleaner.enable = true
policy-db-migrator | > upgrade 0300-jpatoscapolicy_targets.sql
policy-pap | fetch.min.bytes = 1
grafana | logger=migrator t=2024-04-26T08:52:53.068816414Z level=info msg="Executing migration" id="create index UQE_dashboard_snapshot_delete_key - v5"
policy-apex-pdp | sasl.login.refresh.min.period.seconds = 60
kafka | log.cleaner.io.buffer.load.factor = 0.9
policy-db-migrator | --------------
policy-pap | group.id = policy-pap
grafana | logger=migrator t=2024-04-26T08:52:53.06965473Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_snapshot_delete_key - v5" duration=838.136µs
policy-apex-pdp | sasl.login.refresh.window.factor = 0.8
kafka | log.cleaner.io.buffer.size = 524288
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicy_targets (name VARCHAR(120) NULL, version VARCHAR(20) NULL)
policy-pap | group.instance.id = null
grafana | logger=migrator t=2024-04-26T08:52:53.074159826Z level=info msg="Executing migration" id="create index IDX_dashboard_snapshot_user_id - v5"
policy-apex-pdp | sasl.login.refresh.window.jitter = 0.05
kafka | log.cleaner.io.max.bytes.per.second = 1.7976931348623157E308
policy-db-migrator | --------------
policy-pap | heartbeat.interval.ms = 3000
grafana | logger=migrator t=2024-04-26T08:52:53.0749254Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_snapshot_user_id - v5" duration=763.954µs
policy-apex-pdp | sasl.login.retry.backoff.max.ms = 10000
kafka | log.cleaner.max.compaction.lag.ms = 9223372036854775807
policy-db-migrator |
policy-pap | interceptor.classes = []
grafana | logger=migrator t=2024-04-26T08:52:53.079363933Z level=info msg="Executing migration" id="alter dashboard_snapshot to mediumtext v2"
policy-apex-pdp | sasl.login.retry.backoff.ms = 100
kafka | log.cleaner.min.cleanable.ratio = 0.5
policy-db-migrator |
policy-pap | internal.leave.group.on.close = true
grafana | logger=migrator t=2024-04-26T08:52:53.079469708Z level=info msg="Migration successfully executed" id="alter dashboard_snapshot to mediumtext v2" duration=106.325µs
policy-apex-pdp | sasl.mechanism = GSSAPI
kafka | log.cleaner.min.compaction.lag.ms = 0
policy-db-migrator | > upgrade 0310-jpatoscapolicytype_metadata.sql
policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false
grafana | logger=migrator t=2024-04-26T08:52:53.083553217Z level=info msg="Executing migration" id="Update dashboard_snapshot table charset"
policy-apex-pdp | sasl.oauthbearer.clock.skew.seconds = 30
kafka | log.cleaner.threads = 1
policy-db-migrator | --------------
policy-pap | isolation.level = read_uncommitted
grafana | logger=migrator t=2024-04-26T08:52:53.083594508Z level=info msg="Migration successfully executed" id="Update dashboard_snapshot table charset" duration=44.882µs
policy-apex-pdp | sasl.oauthbearer.expected.audience = null
kafka | log.cleanup.policy = [delete]
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicytype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL)
policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
grafana | logger=migrator t=2024-04-26T08:52:53.088275412Z level=info msg="Executing migration" id="Add column external_delete_url to dashboard_snapshots table"
policy-apex-pdp | sasl.oauthbearer.expected.issuer = null
kafka | log.dir = /tmp/kafka-logs
policy-db-migrator | --------------
policy-pap | max.partition.fetch.bytes = 1048576
policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
grafana | logger=migrator t=2024-04-26T08:52:53.092453075Z level=info msg="Migration successfully executed" id="Add column external_delete_url to dashboard_snapshots table" duration=4.177183ms
kafka | log.dirs = /var/lib/kafka/data
policy-db-migrator |
policy-pap | max.poll.interval.ms = 300000
policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
grafana | logger=migrator t=2024-04-26T08:52:53.096285562Z level=info msg="Executing migration" id="Add encrypted dashboard json column"
kafka | log.flush.interval.messages = 9223372036854775807
policy-db-migrator |
policy-pap | max.poll.records = 500
policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
grafana | logger=migrator t=2024-04-26T08:52:53.098893805Z level=info msg="Migration successfully executed" id="Add encrypted dashboard json column" duration=2.608334ms
kafka | log.flush.interval.ms = null
policy-db-migrator | > upgrade 0320-jpatoscapolicytype_properties.sql
policy-pap | metadata.max.age.ms = 300000
policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.url = null
grafana | logger=migrator t=2024-04-26T08:52:53.103600741Z level=info msg="Executing migration" id="Change dashboard_encrypted column to MEDIUMBLOB"
kafka | log.flush.offset.checkpoint.interval.ms = 60000
policy-db-migrator | --------------
policy-pap | metric.reporters = []
policy-apex-pdp | sasl.oauthbearer.scope.claim.name = scope
grafana | logger=migrator t=2024-04-26T08:52:53.103663044Z level=info msg="Migration successfully executed" id="Change dashboard_encrypted column to MEDIUMBLOB" duration=62.063µs
kafka | log.flush.scheduler.interval.ms = 9223372036854775807
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicytype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL)
policy-pap | metrics.num.samples = 2
policy-apex-pdp | sasl.oauthbearer.sub.claim.name = sub
grafana | logger=migrator t=2024-04-26T08:52:53.108700364Z level=info msg="Executing migration" id="create quota table v1"
kafka | log.flush.start.offset.checkpoint.interval.ms = 60000
policy-db-migrator | --------------
policy-pap | metrics.recording.level = INFO
policy-apex-pdp | sasl.oauthbearer.token.endpoint.url = null
grafana | logger=migrator t=2024-04-26T08:52:53.110271782Z level=info msg="Migration successfully executed" id="create quota table v1" duration=1.583139ms
kafka | log.index.interval.bytes = 4096
policy-db-migrator |
policy-pap | metrics.sample.window.ms = 30000
policy-apex-pdp | security.protocol = PLAINTEXT
grafana | logger=migrator t=2024-04-26T08:52:53.115645656Z level=info msg="Executing migration" id="create index UQE_quota_org_id_user_id_target - v1"
kafka | log.index.size.max.bytes = 10485760
policy-db-migrator |
policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor]
policy-apex-pdp | security.providers = null
grafana | logger=migrator t=2024-04-26T08:52:53.116965724Z level=info msg="Migration successfully executed" id="create index UQE_quota_org_id_user_id_target - v1" duration=1.314338ms
kafka | log.local.retention.bytes = -2
policy-db-migrator | > upgrade 0330-jpatoscapolicytype_targets.sql
policy-pap | receive.buffer.bytes = 65536
policy-apex-pdp | send.buffer.bytes = 131072
grafana | logger=migrator t=2024-04-26T08:52:53.121416148Z level=info msg="Executing migration" id="Update quota table charset"
kafka | log.local.retention.ms = -2
policy-db-migrator | --------------
policy-pap | reconnect.backoff.max.ms = 1000
policy-apex-pdp | session.timeout.ms = 45000
grafana | logger=migrator t=2024-04-26T08:52:53.121448869Z level=info msg="Migration successfully executed" id="Update quota table charset" duration=32.481µs
kafka | log.message.downconversion.enable = true
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicytype_targets (name VARCHAR(120) NULL, version VARCHAR(20) NULL)
policy-pap | reconnect.backoff.ms = 50
policy-apex-pdp | socket.connection.setup.timeout.max.ms = 30000
grafana | logger=migrator t=2024-04-26T08:52:53.125067348Z level=info msg="Executing migration" id="create plugin_setting table"
kafka | log.message.format.version = 3.0-IV1
policy-db-migrator | --------------
policy-pap | request.timeout.ms = 30000
policy-apex-pdp | socket.connection.setup.timeout.ms = 10000
grafana | logger=migrator t=2024-04-26T08:52:53.125872253Z level=info msg="Migration successfully executed" id="create plugin_setting table" duration=804.625µs
kafka | log.message.timestamp.after.max.ms = 9223372036854775807
policy-db-migrator |
policy-pap | retry.backoff.ms = 100
policy-apex-pdp | ssl.cipher.suites = null
grafana | logger=migrator t=2024-04-26T08:52:53.131710117Z level=info msg="Executing migration" id="create index UQE_plugin_setting_org_id_plugin_id - v1"
kafka | log.message.timestamp.before.max.ms = 9223372036854775807
policy-db-migrator |
policy-pap | sasl.client.callback.handler.class = null
policy-apex-pdp | ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
grafana | logger=migrator t=2024-04-26T08:52:53.133474565Z level=info msg="Migration successfully executed" id="create index UQE_plugin_setting_org_id_plugin_id - v1" duration=1.768358ms
kafka | log.message.timestamp.difference.max.ms = 9223372036854775807
policy-db-migrator | > upgrade 0340-jpatoscapolicytype_triggers.sql
policy-pap | sasl.jaas.config = null
policy-apex-pdp | ssl.endpoint.identification.algorithm = https
grafana | logger=migrator t=2024-04-26T08:52:53.137847475Z level=info msg="Executing migration" id="Add column plugin_version to plugin_settings"
kafka | log.message.timestamp.type = CreateTime
policy-db-migrator | --------------
policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit
policy-apex-pdp | ssl.engine.factory.class = null
grafana | logger=migrator t=2024-04-26T08:52:53.142393104Z level=info msg="Migration successfully executed" id="Add column plugin_version to plugin_settings" duration=4.541328ms
kafka | log.preallocate = false
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicytype_triggers (name VARCHAR(120) NULL, version VARCHAR(20) NULL, TRIGGERS VARCHAR(255) NULL)
policy-db-migrator | --------------
policy-db-migrator |
grafana | logger=migrator t=2024-04-26T08:52:53.146304224Z level=info msg="Executing migration" id="Update plugin_setting table charset"
policy-pap | sasl.kerberos.min.time.before.relogin = 60000
policy-apex-pdp | ssl.key.password = null
policy-db-migrator |
kafka | log.retention.bytes = -1
grafana | logger=migrator t=2024-04-26T08:52:53.146333785Z level=info msg="Migration successfully executed" id="Update plugin_setting table charset" duration=28.281µs
policy-pap | sasl.kerberos.service.name = null
policy-apex-pdp | ssl.keymanager.algorithm = SunX509
policy-db-migrator | > upgrade 0350-jpatoscaproperty_constraints.sql
kafka | log.retention.check.interval.ms = 300000
grafana | logger=migrator t=2024-04-26T08:52:53.149859799Z level=info msg="Executing migration" id="create session table"
policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05
policy-apex-pdp | ssl.keystore.certificate.chain = null
policy-db-migrator | --------------
kafka | log.retention.hours = 168
grafana | logger=migrator t=2024-04-26T08:52:53.150911405Z level=info msg="Migration successfully executed" id="create session table" duration=1.050826ms
policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8
policy-apex-pdp | ssl.keystore.key = null
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscaproperty_constraints (parentLocalName VARCHAR(120) NULL, localName VARCHAR(120) NULL, parentKeyVersion VARCHAR(15) NULL, parentKeyName VARCHAR(120) NULL, CONSTRAINTS VARCHAR(255) NULL)
kafka | log.retention.minutes = null
grafana | logger=migrator t=2024-04-26T08:52:53.155708495Z level=info msg="Executing migration" id="Drop old table playlist table"
policy-pap | sasl.login.callback.handler.class = null
policy-apex-pdp | ssl.keystore.location = null
policy-db-migrator | --------------
kafka | log.retention.ms = null
grafana | logger=migrator t=2024-04-26T08:52:53.155803469Z level=info msg="Migration successfully executed" id="Drop old table playlist table" duration=91.384µs
policy-pap | sasl.login.class = null
policy-apex-pdp | ssl.keystore.password = null
policy-db-migrator |
kafka | log.roll.hours = 168
grafana | logger=migrator t=2024-04-26T08:52:53.15927121Z level=info msg="Executing migration" id="Drop old table playlist_item table"
policy-pap | sasl.login.connect.timeout.ms = null
policy-apex-pdp | ssl.keystore.type = JKS
policy-db-migrator |
kafka | log.roll.jitter.hours = 0
grafana | logger=migrator t=2024-04-26T08:52:53.159349263Z level=info msg="Migration successfully executed" id="Drop old table playlist_item table" duration=78.193µs
policy-pap | sasl.login.read.timeout.ms = null
policy-apex-pdp | ssl.protocol = TLSv1.3
policy-db-migrator | > upgrade 0360-jpatoscaproperty_metadata.sql
kafka | log.roll.jitter.ms = null
grafana | logger=migrator t=2024-04-26T08:52:53.164290399Z level=info msg="Executing migration" id="create playlist table v2"
policy-pap | sasl.login.refresh.buffer.seconds = 300
policy-apex-pdp | ssl.provider = null
policy-db-migrator | --------------
kafka | log.roll.ms = null
kafka | log.segment.bytes = 1073741824
policy-pap | sasl.login.refresh.min.period.seconds = 60
policy-apex-pdp | ssl.secure.random.implementation = null
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscaproperty_metadata (parentLocalName VARCHAR(120) NULL, localName VARCHAR(120) NULL, parentKeyVersion VARCHAR(15) NULL, parentKeyName VARCHAR(120) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL)
kafka | log.segment.delete.delay.ms = 60000
policy-pap | sasl.login.refresh.window.factor = 0.8
policy-apex-pdp | ssl.trustmanager.algorithm = PKIX
policy-db-migrator | --------------
kafka | max.connection.creation.rate = 2147483647
policy-pap | sasl.login.refresh.window.jitter = 0.05
policy-apex-pdp | ssl.truststore.certificates = null
policy-db-migrator |
kafka | max.connections = 2147483647
policy-pap | sasl.login.retry.backoff.max.ms = 10000
policy-apex-pdp | ssl.truststore.location = null
policy-db-migrator |
kafka | max.connections.per.ip = 2147483647
policy-pap | sasl.login.retry.backoff.ms = 100
policy-apex-pdp | ssl.truststore.password = null
policy-db-migrator | > upgrade 0370-jpatoscarelationshiptype_metadata.sql
kafka | max.connections.per.ip.overrides =
policy-pap | sasl.mechanism = GSSAPI
policy-apex-pdp | ssl.truststore.type = JKS
policy-db-migrator | --------------
kafka | max.incremental.fetch.session.cache.slots = 1000
policy-pap | sasl.oauthbearer.clock.skew.seconds = 30
policy-apex-pdp | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarelationshiptype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL)
kafka | message.max.bytes = 1048588
policy-pap | sasl.oauthbearer.expected.audience = null
policy-apex-pdp |
kafka | metadata.log.dir = null
policy-pap | sasl.oauthbearer.expected.issuer = null
policy-apex-pdp | [2024-04-26T08:53:36.070+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1
policy-db-migrator | --------------
kafka | metadata.log.max.record.bytes.between.snapshots = 20971520
policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
policy-apex-pdp | [2024-04-26T08:53:36.070+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5
policy-db-migrator |
kafka | metadata.log.max.snapshot.interval.ms = 3600000
policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
policy-apex-pdp | [2024-04-26T08:53:36.070+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1714121616070
policy-db-migrator |
kafka | metadata.log.segment.bytes = 1073741824
policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
policy-apex-pdp | [2024-04-26T08:53:36.070+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-47b1e3a1-a4a9-4bf2-95ae-f10384287681-2, groupId=47b1e3a1-a4a9-4bf2-95ae-f10384287681] Subscribed to topic(s): policy-pdp-pap
policy-db-migrator | > upgrade 0380-jpatoscarelationshiptype_properties.sql
kafka | metadata.log.segment.min.bytes = 8388608
policy-pap | sasl.oauthbearer.jwks.endpoint.url = null
grafana | logger=migrator t=2024-04-26T08:52:53.16500816Z level=info msg="Migration successfully executed" id="create playlist table v2" duration=717.831µs
policy-apex-pdp | [2024-04-26T08:53:36.071+00:00|INFO|InlineBusTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=282d8d00-7dbf-4c91-af96-04a7263f55b3, alive=false, publisher=null]]: starting
policy-db-migrator | --------------
kafka | metadata.log.segment.ms = 604800000
policy-pap | sasl.oauthbearer.scope.claim.name = scope
grafana | logger=migrator t=2024-04-26T08:52:53.171825267Z level=info msg="Executing migration" id="create playlist item table v2"
policy-apex-pdp | [2024-04-26T08:53:36.085+00:00|INFO|ProducerConfig|main] ProducerConfig values:
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarelationshiptype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL)
kafka | metadata.max.idle.interval.ms = 500
policy-pap | sasl.oauthbearer.sub.claim.name = sub
grafana | logger=migrator t=2024-04-26T08:52:53.172944427Z level=info msg="Migration successfully executed" id="create playlist item table v2" duration=1.118789ms
policy-apex-pdp | acks = -1
policy-db-migrator | --------------
kafka | metadata.max.retention.bytes = 104857600
policy-pap | sasl.oauthbearer.token.endpoint.url = null
grafana | logger=migrator t=2024-04-26T08:52:53.17737965Z level=info msg="Executing migration" id="Update playlist table charset"
policy-apex-pdp | auto.include.jmx.reporter = true
policy-db-migrator |
kafka | metadata.max.retention.ms = 604800000
policy-pap | security.protocol = PLAINTEXT
grafana | logger=migrator t=2024-04-26T08:52:53.177420531Z level=info msg="Migration successfully executed" id="Update playlist table charset" duration=42.212µs
policy-apex-pdp | batch.size = 16384
policy-db-migrator |
kafka | metric.reporters = []
policy-pap | security.providers = null
grafana | logger=migrator t=2024-04-26T08:52:53.181808303Z level=info msg="Executing migration" id="Update playlist_item table charset"
policy-apex-pdp | bootstrap.servers = [kafka:9092]
policy-db-migrator | > upgrade 0390-jpatoscarequirement_metadata.sql
kafka | metrics.num.samples = 2
policy-pap | send.buffer.bytes = 131072
grafana | logger=migrator t=2024-04-26T08:52:53.181846465Z level=info msg="Migration successfully executed" id="Update playlist_item table charset" duration=38.622µs
policy-apex-pdp | buffer.memory = 33554432
policy-db-migrator | --------------
kafka | metrics.recording.level = INFO
policy-pap | session.timeout.ms = 45000
grafana | logger=migrator t=2024-04-26T08:52:53.186661405Z level=info msg="Executing migration" id="Add playlist column created_at"
policy-apex-pdp | client.dns.lookup = use_all_dns_ips
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarequirement_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL)
kafka | metrics.sample.window.ms = 30000
policy-pap | socket.connection.setup.timeout.max.ms = 30000
grafana | logger=migrator t=2024-04-26T08:52:53.189707628Z level=info msg="Migration successfully executed" id="Add playlist column created_at" duration=3.045193ms
policy-apex-pdp | client.id = producer-1
policy-db-migrator | --------------
kafka | min.insync.replicas = 1
policy-pap | socket.connection.setup.timeout.ms = 10000
grafana | logger=migrator t=2024-04-26T08:52:53.193840828Z level=info msg="Executing migration" id="Add playlist column updated_at"
policy-apex-pdp | compression.type = none
policy-db-migrator |
kafka | node.id = 1
policy-pap | ssl.cipher.suites = null
grafana | logger=migrator t=2024-04-26T08:52:53.196917632Z level=info msg="Migration successfully executed" id="Add playlist column updated_at" duration=3.076424ms
policy-apex-pdp | connections.max.idle.ms = 540000
policy-db-migrator |
kafka | num.io.threads = 8
policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
grafana | logger=migrator t=2024-04-26T08:52:53.201541814Z level=info msg="Executing migration" id="drop preferences table v2"
policy-apex-pdp | delivery.timeout.ms = 120000
policy-db-migrator | > upgrade 0400-jpatoscarequirement_occurrences.sql
kafka | num.network.threads = 3
policy-pap | ssl.endpoint.identification.algorithm = https
grafana | logger=migrator t=2024-04-26T08:52:53.201626228Z level=info msg="Migration successfully executed" id="drop preferences table v2" duration=84.634µs
policy-apex-pdp | enable.idempotence = true
policy-db-migrator | --------------
kafka | num.partitions = 1
policy-pap | ssl.engine.factory.class = null
grafana | logger=migrator t=2024-04-26T08:52:53.205157682Z level=info msg="Executing migration" id="drop preferences table v3"
policy-apex-pdp | interceptor.classes = []
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarequirement_occurrences (name VARCHAR(120) NULL, version VARCHAR(20) NULL, OCCURRENCES INT DEFAULT NULL)
kafka | num.recovery.threads.per.data.dir = 1
policy-pap | ssl.key.password = null
grafana | logger=migrator t=2024-04-26T08:52:53.205254036Z level=info msg="Migration successfully executed" id="drop preferences table v3" duration=96.075µs
policy-apex-pdp | key.serializer = class org.apache.kafka.common.serialization.StringSerializer
policy-db-migrator | --------------
kafka | num.replica.alter.log.dirs.threads = null
policy-pap | ssl.keymanager.algorithm = SunX509
grafana | logger=migrator t=2024-04-26T08:52:53.210192951Z level=info msg="Executing migration" id="create preferences table v3"
policy-apex-pdp | linger.ms = 0
policy-db-migrator |
kafka | num.replica.fetchers = 1
policy-pap | ssl.keystore.certificate.chain = null
grafana | logger=migrator t=2024-04-26T08:52:53.211455496Z level=info msg="Migration successfully executed" id="create preferences table v3" duration=1.266285ms
policy-apex-pdp | max.block.ms = 60000
policy-db-migrator |
kafka | offset.metadata.max.bytes = 4096
policy-pap | ssl.keystore.key = null
grafana | logger=migrator t=2024-04-26T08:52:53.215952393Z level=info msg="Executing migration" id="Update preferences table charset"
policy-apex-pdp | max.in.flight.requests.per.connection = 5
policy-db-migrator | > upgrade 0410-jpatoscarequirement_properties.sql
kafka | offsets.commit.required.acks = -1
policy-pap | ssl.keystore.location = null
grafana | logger=migrator t=2024-04-26T08:52:53.215996405Z level=info msg="Migration successfully executed" id="Update preferences table charset" duration=44.632µs
policy-apex-pdp | max.request.size = 1048576
policy-db-migrator | --------------
kafka | offsets.commit.timeout.ms = 5000
policy-pap | ssl.keystore.password = null
grafana | logger=migrator t=2024-04-26T08:52:53.219941566Z level=info msg="Executing migration" id="Add column team_id in preferences"
policy-apex-pdp | metadata.max.age.ms = 300000
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarequirement_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGTEXT NULL, PROPERTIES_KEY VARCHAR(255) NULL)
kafka | offsets.load.buffer.size = 5242880
policy-pap | ssl.keystore.type = JKS
policy-apex-pdp | metadata.max.idle.ms = 300000
policy-db-migrator | --------------
kafka | offsets.retention.check.interval.ms = 600000
policy-pap | ssl.protocol = TLSv1.3
grafana | logger=migrator t=2024-04-26T08:52:53.222963719Z level=info msg="Migration successfully executed" id="Add column team_id in preferences" duration=3.021403ms
policy-apex-pdp | metric.reporters = []
policy-db-migrator |
kafka | offsets.retention.minutes = 10080
policy-pap | ssl.provider = null
grafana | logger=migrator t=2024-04-26T08:52:53.227847211Z level=info msg="Executing migration" id="Update team_id column values in preferences"
policy-apex-pdp | metrics.num.samples = 2
policy-db-migrator |
kafka | offsets.topic.compression.codec = 0
policy-pap | ssl.secure.random.implementation = null
grafana | logger=migrator t=2024-04-26T08:52:53.227997828Z level=info msg="Migration successfully executed" id="Update team_id column values in preferences" duration=150.727µs
policy-apex-pdp | metrics.recording.level = INFO
policy-db-migrator | > upgrade 0420-jpatoscaservicetemplate_metadata.sql
kafka | offsets.topic.num.partitions = 50
policy-pap | ssl.trustmanager.algorithm = PKIX
grafana | logger=migrator t=2024-04-26T08:52:53.231972352Z level=info msg="Executing migration" id="Add column week_start in preferences"
policy-apex-pdp | metrics.sample.window.ms = 30000
policy-db-migrator | --------------
kafka | offsets.topic.replication.factor = 1
policy-pap | ssl.truststore.certificates = null
grafana | logger=migrator t=2024-04-26T08:52:53.235629901Z level=info msg="Migration successfully executed" id="Add column week_start in preferences" duration=3.653299ms
policy-apex-pdp | partitioner.adaptive.partitioning.enable = true
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscaservicetemplate_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL)
kafka | offsets.topic.segment.bytes = 104857600
policy-pap | ssl.truststore.location = null
grafana | logger=migrator t=2024-04-26T08:52:53.239694639Z level=info msg="Executing migration" id="Add column preferences.json_data"
policy-apex-pdp | partitioner.availability.timeout.ms = 0
policy-db-migrator | --------------
kafka | password.encoder.cipher.algorithm = AES/CBC/PKCS5Padding
policy-pap | ssl.truststore.password = null
grafana | logger=migrator t=2024-04-26T08:52:53.243660332Z level=info msg="Migration successfully executed" id="Add column preferences.json_data" duration=3.965732ms
policy-apex-pdp | partitioner.class = null
policy-db-migrator |
kafka | password.encoder.iterations = 4096
policy-pap | ssl.truststore.type = JKS
policy-apex-pdp | partitioner.ignore.keys = false
grafana | logger=migrator t=2024-04-26T08:52:53.247466957Z level=info msg="Executing migration" id="alter preferences.json_data to mediumtext v1"
policy-db-migrator |
kafka | password.encoder.key.length = 128
policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
policy-apex-pdp | receive.buffer.bytes = 32768
grafana | logger=migrator t=2024-04-26T08:52:53.247560171Z level=info msg="Migration successfully executed" id="alter preferences.json_data to mediumtext v1" duration=93.134µs
policy-db-migrator | > upgrade 0430-jpatoscatopologytemplate_inputs.sql
kafka | password.encoder.keyfactory.algorithm = null
policy-apex-pdp | reconnect.backoff.max.ms = 1000
grafana | logger=migrator t=2024-04-26T08:52:53.253353084Z level=info msg="Executing migration" id="Add preferences index org_id"
policy-db-migrator | --------------
kafka | password.encoder.old.secret = null
policy-pap |
policy-apex-pdp | reconnect.backoff.ms = 50
grafana | logger=migrator t=2024-04-26T08:52:53.254264834Z level=info msg="Migration successfully executed" id="Add preferences index org_id" duration=911.74µs
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscatopologytemplate_inputs (parentLocalName VARCHAR(120) NULL, localName VARCHAR(120) NULL, parentKeyVersion VARCHAR(15) NULL, parentKeyName VARCHAR(120) NULL, INPUTS LONGBLOB DEFAULT NULL, INPUTS_KEY VARCHAR(255) NULL)
kafka | password.encoder.secret = null
policy-pap | [2024-04-26T08:53:32.643+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1
policy-apex-pdp | request.timeout.ms = 30000
grafana | logger=migrator t=2024-04-26T08:52:53.258812042Z level=info msg="Executing migration" id="Add preferences index user_id"
policy-db-migrator | --------------
kafka | principal.builder.class = class org.apache.kafka.common.security.authenticator.DefaultKafkaPrincipalBuilder
policy-pap | [2024-04-26T08:53:32.643+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5
policy-apex-pdp | retries = 2147483647
grafana | logger=migrator t=2024-04-26T08:52:53.25967134Z level=info msg="Migration successfully executed" id="Add preferences index user_id" duration=856.508µs
policy-db-migrator |
kafka | process.roles = []
policy-pap | [2024-04-26T08:53:32.643+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1714121612643
policy-apex-pdp | retry.backoff.ms = 100
grafana | logger=migrator t=2024-04-26T08:52:53.263704866Z level=info msg="Executing migration" id="create alert table v1"
policy-db-migrator |
kafka | producer.id.expiration.check.interval.ms = 600000
policy-pap | [2024-04-26T08:53:32.643+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-policy-pap-2, groupId=policy-pap] Subscribed to topic(s): policy-pdp-pap
policy-apex-pdp | sasl.client.callback.handler.class = null
grafana | logger=migrator t=2024-04-26T08:52:53.264790673Z level=info msg="Migration successfully executed" id="create alert table v1" duration=1.085107ms
policy-db-migrator | > upgrade 0440-pdpgroup_pdpsubgroup.sql
kafka | producer.id.expiration.ms = 86400000
policy-pap | [2024-04-26T08:53:32.927+00:00|INFO|PapDatabaseInitializer|main] Created initial pdpGroup in DB - PdpGroups(groups=[PdpGroup(name=defaultGroup, description=The default group that registers all supported policy types and pdps., pdpGroupState=ACTIVE, properties=null, pdpSubgroups=[PdpSubGroup(pdpType=apex, supportedPolicyTypes=[onap.policies.controlloop.operational.common.Apex 1.0.0, onap.policies.native.Apex 1.0.0], policies=[], currentInstanceCount=0, desiredInstanceCount=1, properties=null, pdpInstances=null)])]) from /opt/app/policy/pap/etc/mounted/groups.json
policy-apex-pdp | sasl.jaas.config = null
grafana | logger=migrator t=2024-04-26T08:52:53.269613384Z level=info msg="Executing migration" id="add index alert org_id & id "
policy-db-migrator | --------------
kafka | producer.purgatory.purge.interval.requests = 1000
policy-pap | [2024-04-26T08:53:33.063+00:00|WARN|JpaBaseConfiguration$JpaWebConfiguration|main] spring.jpa.open-in-view is enabled by default. Therefore, database queries may be performed during view rendering. Explicitly configure spring.jpa.open-in-view to disable this warning
policy-apex-pdp | sasl.kerberos.kinit.cmd = /usr/bin/kinit
grafana | logger=migrator t=2024-04-26T08:52:53.270752974Z level=info msg="Migration successfully executed" id="add index alert org_id & id " duration=1.136231ms
policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpgroup_pdpsubgroup (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_PDPGROUP_PDPSUBGROUP (name, version, parentLocalName, localName, parentKeyVersion, parentKeyName))
kafka | queued.max.request.bytes = -1
policy-pap | [2024-04-26T08:53:33.291+00:00|INFO|DefaultSecurityFilterChain|main] Will secure any request with [org.springframework.security.web.session.DisableEncodeUrlFilter@78ea700f, org.springframework.security.web.context.request.async.WebAsyncManagerIntegrationFilter@cd93621, org.springframework.security.web.context.SecurityContextHolderFilter@18b58c77, org.springframework.security.web.header.HeaderWriterFilter@5ccc971e, org.springframework.security.web.authentication.logout.LogoutFilter@333a2df2, org.springframework.security.web.authentication.www.BasicAuthenticationFilter@3051e476, org.springframework.security.web.savedrequest.RequestCacheAwareFilter@3c20e9d6, org.springframework.security.web.servletapi.SecurityContextHolderAwareRequestFilter@42805abe, org.springframework.security.web.authentication.AnonymousAuthenticationFilter@3b1137b0, org.springframework.security.web.access.ExceptionTranslationFilter@1f6d7e7c, org.springframework.security.web.access.intercept.AuthorizationFilter@31c0c7e5]
policy-apex-pdp | sasl.kerberos.min.time.before.relogin = 60000
grafana | logger=migrator t=2024-04-26T08:52:53.275165806Z level=info msg="Executing migration" id="add index alert state"
policy-db-migrator | --------------
kafka | queued.max.requests = 500
policy-pap | [2024-04-26T08:53:33.996+00:00|INFO|EndpointLinksResolver|main] Exposing 3 endpoint(s) beneath base path ''
policy-apex-pdp | sasl.kerberos.service.name = null
grafana | logger=migrator t=2024-04-26T08:52:53.276510145Z level=info msg="Migration successfully executed" id="add index alert state" duration=1.344289ms
policy-db-migrator |
kafka | quota.window.num = 11
policy-pap | [2024-04-26T08:53:34.098+00:00|INFO|Http11NioProtocol|main] Starting ProtocolHandler ["http-nio-6969"]
policy-apex-pdp | sasl.kerberos.ticket.renew.jitter = 0.05
grafana | logger=migrator t=2024-04-26T08:52:53.281067613Z level=info msg="Executing migration" id="add index alert dashboard_id"
policy-db-migrator |
kafka | quota.window.size.seconds = 1
policy-apex-pdp | sasl.kerberos.ticket.renew.window.factor = 0.8
grafana | logger=migrator t=2024-04-26T08:52:53.28189293Z level=info msg="Migration successfully executed" id="add index alert dashboard_id" duration=826.927µs
policy-pap | [2024-04-26T08:53:34.120+00:00|INFO|TomcatWebServer|main] Tomcat started on port(s): 6969 (http) with context path '/policy/pap/v1'
policy-db-migrator | > upgrade 0450-pdpgroup.sql
kafka | remote.log.index.file.cache.total.size.bytes = 1073741824
policy-apex-pdp | sasl.login.callback.handler.class = null
grafana | logger=migrator t=2024-04-26T08:52:53.286045071Z level=info msg="Executing migration" id="Create alert_rule_tag table v1"
policy-pap | [2024-04-26T08:53:34.139+00:00|INFO|ServiceManager|main] Policy PAP starting
policy-db-migrator | --------------
kafka | remote.log.manager.task.interval.ms = 30000
policy-apex-pdp | sasl.login.class = null
grafana | logger=migrator t=2024-04-26T08:52:53.287032273Z level=info msg="Migration successfully executed" id="Create alert_rule_tag table v1" duration=987.603µs
policy-pap | [2024-04-26T08:53:34.139+00:00|INFO|ServiceManager|main] Policy PAP starting Meter Registry
policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpgroup (`DESCRIPTION` VARCHAR(255) NULL, PDPGROUPSTATE INT DEFAULT NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_PDPGROUP (name, version))
kafka | remote.log.manager.task.retry.backoff.max.ms = 30000
policy-apex-pdp | sasl.login.connect.timeout.ms = null
grafana | logger=migrator t=2024-04-26T08:52:53.291124982Z level=info msg="Executing migration" id="Add unique index alert_rule_tag.alert_id_tag_id"
policy-pap | [2024-04-26T08:53:34.140+00:00|INFO|ServiceManager|main] Policy PAP starting PAP parameters
policy-db-migrator | --------------
kafka | remote.log.manager.task.retry.backoff.ms = 500
policy-apex-pdp | sasl.login.read.timeout.ms = null
grafana | logger=migrator t=2024-04-26T08:52:53.292476491Z level=info msg="Migration successfully executed" id="Add unique index alert_rule_tag.alert_id_tag_id" duration=1.351039ms
policy-pap | [2024-04-26T08:53:34.141+00:00|INFO|ServiceManager|main] Policy PAP starting Pdp Heartbeat Listener
policy-db-migrator |
kafka | remote.log.manager.task.retry.jitter = 0.2
policy-apex-pdp | sasl.login.refresh.buffer.seconds = 300
grafana | logger=migrator t=2024-04-26T08:52:53.298255153Z level=info msg="Executing migration" id="drop index UQE_alert_rule_tag_alert_id_tag_id - v1"
policy-pap | [2024-04-26T08:53:34.141+00:00|INFO|ServiceManager|main] Policy PAP starting Response Request ID Dispatcher
policy-db-migrator |
kafka | remote.log.manager.thread.pool.size = 10
policy-apex-pdp | sasl.login.refresh.min.period.seconds = 60
grafana | logger=migrator t=2024-04-26T08:52:53.299918556Z level=info msg="Migration successfully executed" id="drop index UQE_alert_rule_tag_alert_id_tag_id - v1" duration=1.667913ms
policy-pap | [2024-04-26T08:53:34.141+00:00|INFO|ServiceManager|main] Policy PAP starting Heartbeat Request ID Dispatcher
policy-db-migrator | > upgrade 0460-pdppolicystatus.sql
kafka | remote.log.metadata.custom.metadata.max.bytes = 128
policy-apex-pdp |
sasl.login.refresh.window.factor = 0.8 grafana | logger=migrator t=2024-04-26T08:52:53.304168262Z level=info msg="Executing migration" id="Rename table alert_rule_tag to alert_rule_tag_v1 - v1" policy-pap | [2024-04-26T08:53:34.141+00:00|INFO|ServiceManager|main] Policy PAP starting Response Message Dispatcher policy-db-migrator | -------------- kafka | remote.log.metadata.manager.class.name = org.apache.kafka.server.log.remote.metadata.storage.TopicBasedRemoteLogMetadataManager policy-apex-pdp | sasl.login.refresh.window.jitter = 0.05 grafana | logger=migrator t=2024-04-26T08:52:53.314584216Z level=info msg="Migration successfully executed" id="Rename table alert_rule_tag to alert_rule_tag_v1 - v1" duration=10.417153ms policy-pap | [2024-04-26T08:53:34.143+00:00|INFO|TopicBase|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=c2be2c80-205d-4227-951f-9a7c12c2d5ee, consumerInstance=policy-pap, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: registering org.onap.policy.common.endpoints.listeners.MessageTypeDispatcher@75ef1392 policy-db-migrator | CREATE TABLE IF NOT EXISTS pdppolicystatus (DEPLOY BOOLEAN DEFAULT 0, PDPGROUP VARCHAR(255) DEFAULT NULL, PDPTYPE VARCHAR(255) DEFAULT NULL, STATE INT DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, PRIMARY KEY PK_PDPPOLICYSTATUS (parentLocalName, localName, parentKeyVersion, parentKeyName)) kafka | 
remote.log.metadata.manager.class.path = null policy-apex-pdp | sasl.login.retry.backoff.max.ms = 10000 grafana | logger=migrator t=2024-04-26T08:52:53.318283268Z level=info msg="Executing migration" id="Create alert_rule_tag table v2" policy-pap | [2024-04-26T08:53:34.155+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=c2be2c80-205d-4227-951f-9a7c12c2d5ee, consumerInstance=policy-pap, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting policy-db-migrator | -------------- kafka | remote.log.metadata.manager.impl.prefix = rlmm.config. 
policy-apex-pdp | sasl.login.retry.backoff.ms = 100 grafana | logger=migrator t=2024-04-26T08:52:53.318867203Z level=info msg="Migration successfully executed" id="Create alert_rule_tag table v2" duration=580.485µs policy-pap | [2024-04-26T08:53:34.156+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: policy-db-migrator | kafka | remote.log.metadata.manager.listener.name = null policy-apex-pdp | sasl.mechanism = GSSAPI grafana | logger=migrator t=2024-04-26T08:52:53.322885878Z level=info msg="Executing migration" id="create index UQE_alert_rule_tag_alert_id_tag_id - Add unique index alert_rule_tag.alert_id_tag_id V2" policy-pap | allow.auto.create.topics = true policy-db-migrator | kafka | remote.log.reader.max.pending.tasks = 100 policy-apex-pdp | sasl.oauthbearer.clock.skew.seconds = 30 grafana | logger=migrator t=2024-04-26T08:52:53.324333921Z level=info msg="Migration successfully executed" id="create index UQE_alert_rule_tag_alert_id_tag_id - Add unique index alert_rule_tag.alert_id_tag_id V2" duration=1.446723ms policy-pap | auto.commit.interval.ms = 5000 policy-db-migrator | > upgrade 0470-pdp.sql kafka | remote.log.reader.threads = 10 policy-apex-pdp | sasl.oauthbearer.expected.audience = null grafana | logger=migrator t=2024-04-26T08:52:53.328490532Z level=info msg="Executing migration" id="copy alert_rule_tag v1 to v2" policy-pap | auto.include.jmx.reporter = true policy-db-migrator | -------------- kafka | remote.log.storage.manager.class.name = null policy-apex-pdp | sasl.oauthbearer.expected.issuer = null grafana | logger=migrator t=2024-04-26T08:52:53.328920971Z level=info msg="Migration successfully executed" id="copy alert_rule_tag v1 to v2" duration=430.209µs policy-pap | auto.offset.reset = latest policy-db-migrator | CREATE TABLE IF NOT EXISTS pdp (HEALTHY INT DEFAULT NULL, MESSAGE VARCHAR(255) DEFAULT NULL, PDPSTATE INT DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT 
NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_PDP (parentLocalName, localName, parentKeyVersion, parentKeyName)) kafka | remote.log.storage.manager.class.path = null policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 grafana | logger=migrator t=2024-04-26T08:52:53.332813201Z level=info msg="Executing migration" id="drop table alert_rule_tag_v1" policy-pap | bootstrap.servers = [kafka:9092] policy-db-migrator | -------------- kafka | remote.log.storage.manager.impl.prefix = rsm.config. policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 grafana | logger=migrator t=2024-04-26T08:52:53.333347244Z level=info msg="Migration successfully executed" id="drop table alert_rule_tag_v1" duration=533.513µs policy-pap | check.crcs = true policy-db-migrator | kafka | remote.log.storage.system.enable = false policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 grafana | logger=migrator t=2024-04-26T08:52:53.337804529Z level=info msg="Executing migration" id="create alert_notification table v1" policy-pap | client.dns.lookup = use_all_dns_ips policy-db-migrator | kafka | replica.fetch.backoff.ms = 1000 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.url = null grafana | logger=migrator t=2024-04-26T08:52:53.338558602Z level=info msg="Migration successfully executed" id="create alert_notification table v1" duration=754.433µs policy-pap | client.id = consumer-c2be2c80-205d-4227-951f-9a7c12c2d5ee-3 policy-db-migrator | > upgrade 0480-pdpstatistics.sql kafka | replica.fetch.max.bytes = 1048576 policy-apex-pdp | sasl.oauthbearer.scope.claim.name = scope grafana | logger=migrator t=2024-04-26T08:52:53.342529475Z level=info msg="Executing migration" id="Add column is_default" policy-pap | client.rack = policy-db-migrator | -------------- kafka | replica.fetch.min.bytes = 1 policy-apex-pdp | sasl.oauthbearer.sub.claim.name = sub grafana | logger=migrator t=2024-04-26T08:52:53.34860886Z level=info msg="Migration 
successfully executed" id="Add column is_default" duration=6.077725ms policy-pap | connections.max.idle.ms = 540000 policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpstatistics (PDPGROUPNAME VARCHAR(120) NULL, PDPSUBGROUPNAME VARCHAR(120) NULL, POLICYDEPLOYCOUNT BIGINT DEFAULT NULL, POLICYDEPLOYFAILCOUNT BIGINT DEFAULT NULL, POLICYDEPLOYSUCCESSCOUNT BIGINT DEFAULT NULL, POLICYEXECUTEDCOUNT BIGINT DEFAULT NULL, POLICYEXECUTEDFAILCOUNT BIGINT DEFAULT NULL, POLICYEXECUTEDSUCCESSCOUNT BIGINT DEFAULT NULL, timeStamp datetime NOT NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_PDPSTATISTICS (timeStamp, name, version)) kafka | replica.fetch.response.max.bytes = 10485760 policy-apex-pdp | sasl.oauthbearer.token.endpoint.url = null grafana | logger=migrator t=2024-04-26T08:52:53.353002302Z level=info msg="Executing migration" id="Add column frequency" policy-pap | default.api.timeout.ms = 60000 policy-db-migrator | -------------- kafka | replica.fetch.wait.max.ms = 500 grafana | logger=migrator t=2024-04-26T08:52:53.356473973Z level=info msg="Migration successfully executed" id="Add column frequency" duration=3.471611ms policy-apex-pdp | security.protocol = PLAINTEXT policy-pap | enable.auto.commit = true policy-db-migrator | kafka | replica.high.watermark.checkpoint.interval.ms = 5000 grafana | logger=migrator t=2024-04-26T08:52:53.361340426Z level=info msg="Executing migration" id="Add column send_reminder" policy-apex-pdp | security.providers = null policy-pap | exclude.internal.topics = true policy-db-migrator | kafka | replica.lag.time.max.ms = 30000 grafana | logger=migrator t=2024-04-26T08:52:53.364820957Z level=info msg="Migration successfully executed" id="Add column send_reminder" duration=3.480261ms policy-apex-pdp | send.buffer.bytes = 131072 policy-pap | fetch.max.bytes = 52428800 policy-db-migrator | > upgrade 0490-pdpsubgroup_pdp.sql grafana | logger=migrator t=2024-04-26T08:52:53.368076699Z level=info msg="Executing migration" 
id="Add column disable_resolve_message" policy-apex-pdp | socket.connection.setup.timeout.max.ms = 30000 kafka | replica.selector.class = null policy-pap | fetch.max.wait.ms = 500 policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-26T08:52:53.371550792Z level=info msg="Migration successfully executed" id="Add column disable_resolve_message" duration=3.471682ms policy-apex-pdp | socket.connection.setup.timeout.ms = 10000 kafka | replica.socket.receive.buffer.bytes = 65536 policy-pap | fetch.min.bytes = 1 grafana | logger=migrator t=2024-04-26T08:52:53.375901221Z level=info msg="Executing migration" id="add index alert_notification org_id & name" policy-apex-pdp | ssl.cipher.suites = null policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpsubgroup_pdp (pdpParentKeyName VARCHAR(120) NOT NULL, pdpParentKeyVersion VARCHAR(15) NOT NULL, pdpParentLocalName VARCHAR(120) NOT NULL, pdpLocalName VARCHAR(120) NOT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_PDPSUBGROUP_PDP (pdpParentKeyName, pdpParentKeyVersion, pdpParentLocalName, pdpLocalName, parentLocalName, localName, parentKeyVersion, parentKeyName)) kafka | replica.socket.timeout.ms = 30000 policy-pap | group.id = c2be2c80-205d-4227-951f-9a7c12c2d5ee grafana | logger=migrator t=2024-04-26T08:52:53.376729277Z level=info msg="Migration successfully executed" id="add index alert_notification org_id & name" duration=827.856µs policy-apex-pdp | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] policy-db-migrator | -------------- kafka | replication.quota.window.num = 11 policy-pap | group.instance.id = null grafana | logger=migrator t=2024-04-26T08:52:53.380444919Z level=info msg="Executing migration" id="Update alert table charset" policy-apex-pdp | ssl.endpoint.identification.algorithm = https policy-db-migrator | kafka | replication.quota.window.size.seconds = 1 policy-pap | 
heartbeat.interval.ms = 3000 grafana | logger=migrator t=2024-04-26T08:52:53.380471201Z level=info msg="Migration successfully executed" id="Update alert table charset" duration=26.671µs policy-apex-pdp | ssl.engine.factory.class = null policy-db-migrator | kafka | request.timeout.ms = 30000 policy-pap | interceptor.classes = [] grafana | logger=migrator t=2024-04-26T08:52:53.385026249Z level=info msg="Executing migration" id="Update alert_notification table charset" policy-apex-pdp | ssl.key.password = null policy-db-migrator | > upgrade 0500-pdpsubgroup.sql kafka | reserved.broker.max.id = 1000 grafana | logger=migrator t=2024-04-26T08:52:53.38506585Z level=info msg="Migration successfully executed" id="Update alert_notification table charset" duration=40.332µs policy-db-migrator | -------------- kafka | sasl.client.callback.handler.class = null policy-pap | internal.leave.group.on.close = true policy-apex-pdp | ssl.keymanager.algorithm = SunX509 grafana | logger=migrator t=2024-04-26T08:52:53.388291282Z level=info msg="Executing migration" id="create notification_journal table v1" policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpsubgroup (CURRENTINSTANCECOUNT INT DEFAULT NULL, DESIREDINSTANCECOUNT INT DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_PDPSUBGROUP (parentLocalName, localName, parentKeyVersion, parentKeyName)) kafka | sasl.enabled.mechanisms = [GSSAPI] policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false policy-apex-pdp | ssl.keystore.certificate.chain = null grafana | logger=migrator t=2024-04-26T08:52:53.389473343Z level=info msg="Migration successfully executed" id="create notification_journal table v1" duration=1.181891ms policy-db-migrator | -------------- kafka | sasl.jaas.config = null policy-pap | isolation.level = read_uncommitted policy-apex-pdp | ssl.keystore.key = null grafana | 
logger=migrator t=2024-04-26T08:52:53.39331144Z level=info msg="Executing migration" id="add index notification_journal org_id & alert_id & notifier_id" policy-db-migrator | kafka | sasl.kerberos.kinit.cmd = /usr/bin/kinit policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-apex-pdp | ssl.keystore.location = null grafana | logger=migrator t=2024-04-26T08:52:53.394178569Z level=info msg="Migration successfully executed" id="add index notification_journal org_id & alert_id & notifier_id" duration=866.559µs policy-db-migrator | kafka | sasl.kerberos.min.time.before.relogin = 60000 policy-pap | max.partition.fetch.bytes = 1048576 policy-apex-pdp | ssl.keystore.password = null grafana | logger=migrator t=2024-04-26T08:52:53.399078172Z level=info msg="Executing migration" id="drop alert_notification_journal" policy-db-migrator | > upgrade 0510-toscacapabilityassignment.sql kafka | sasl.kerberos.principal.to.local.rules = [DEFAULT] policy-pap | max.poll.interval.ms = 300000 policy-apex-pdp | ssl.keystore.type = JKS grafana | logger=migrator t=2024-04-26T08:52:53.400281445Z level=info msg="Migration successfully executed" id="drop alert_notification_journal" duration=1.199802ms policy-db-migrator | -------------- kafka | sasl.kerberos.service.name = null policy-pap | max.poll.records = 500 policy-apex-pdp | ssl.protocol = TLSv1.3 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilityassignment (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, type_name VARCHAR(255) NULL, type_version VARCHAR(255) NULL, PRIMARY KEY PK_TOSCACAPABILITYASSIGNMENT(name, version)) grafana | logger=migrator t=2024-04-26T08:52:53.40543559Z level=info msg="Executing migration" id="create alert_notification_state table v1" kafka | sasl.kerberos.ticket.renew.jitter = 0.05 policy-pap | metadata.max.age.ms = 300000 
policy-apex-pdp | ssl.provider = null policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-26T08:52:53.406671143Z level=info msg="Migration successfully executed" id="create alert_notification_state table v1" duration=1.235023ms kafka | sasl.kerberos.ticket.renew.window.factor = 0.8 policy-pap | metric.reporters = [] policy-apex-pdp | ssl.secure.random.implementation = null policy-db-migrator | grafana | logger=migrator t=2024-04-26T08:52:53.500950477Z level=info msg="Executing migration" id="add index alert_notification_state org_id & alert_id & notifier_id" kafka | sasl.login.callback.handler.class = null policy-pap | metrics.num.samples = 2 policy-apex-pdp | ssl.trustmanager.algorithm = PKIX policy-db-migrator | grafana | logger=migrator t=2024-04-26T08:52:53.501919909Z level=info msg="Migration successfully executed" id="add index alert_notification_state org_id & alert_id & notifier_id" duration=970.392µs kafka | sasl.login.class = null policy-pap | metrics.recording.level = INFO policy-apex-pdp | ssl.truststore.certificates = null policy-db-migrator | > upgrade 0520-toscacapabilityassignments.sql grafana | logger=migrator t=2024-04-26T08:52:53.506778111Z level=info msg="Executing migration" id="Add for to alert table" kafka | sasl.login.connect.timeout.ms = null policy-pap | metrics.sample.window.ms = 30000 policy-apex-pdp | ssl.truststore.location = null policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-26T08:52:53.510446561Z level=info msg="Migration successfully executed" id="Add for to alert table" duration=3.665289ms kafka | sasl.login.read.timeout.ms = null policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] policy-apex-pdp | ssl.truststore.password = null policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilityassignments (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY 
PK_TOSCACAPABILITYASSIGNMENTS (name, version)) grafana | logger=migrator t=2024-04-26T08:52:53.538501275Z level=info msg="Executing migration" id="Add column uid in alert_notification" kafka | sasl.login.refresh.buffer.seconds = 300 policy-pap | receive.buffer.bytes = 65536 policy-apex-pdp | ssl.truststore.type = JKS policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-26T08:52:53.541147Z level=info msg="Migration successfully executed" id="Add column uid in alert_notification" duration=2.645415ms kafka | sasl.login.refresh.min.period.seconds = 60 policy-pap | reconnect.backoff.max.ms = 1000 policy-apex-pdp | transaction.timeout.ms = 60000 policy-db-migrator | grafana | logger=migrator t=2024-04-26T08:52:53.558203944Z level=info msg="Executing migration" id="Update uid column values in alert_notification" kafka | sasl.login.refresh.window.factor = 0.8 policy-pap | reconnect.backoff.ms = 50 policy-apex-pdp | transactional.id = null policy-db-migrator | grafana | logger=migrator t=2024-04-26T08:52:53.55834033Z level=info msg="Migration successfully executed" id="Update uid column values in alert_notification" duration=136.266µs kafka | sasl.login.refresh.window.jitter = 0.05 policy-pap | request.timeout.ms = 30000 policy-apex-pdp | value.serializer = class org.apache.kafka.common.serialization.StringSerializer policy-db-migrator | > upgrade 0530-toscacapabilityassignments_toscacapabilityassignment.sql grafana | logger=migrator t=2024-04-26T08:52:53.563776357Z level=info msg="Executing migration" id="Add unique index alert_notification_org_id_uid" kafka | sasl.login.retry.backoff.max.ms = 10000 policy-pap | retry.backoff.ms = 100 policy-apex-pdp | policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-26T08:52:53.564401644Z level=info msg="Migration successfully executed" id="Add unique index alert_notification_org_id_uid" duration=624.687µs kafka | sasl.login.retry.backoff.ms = 100 policy-pap | sasl.client.callback.handler.class = 
null policy-apex-pdp | [2024-04-26T08:53:36.095+00:00|INFO|KafkaProducer|main] [Producer clientId=producer-1] Instantiated an idempotent producer. policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilityassignments_toscacapabilityassignment (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCACAPABILITYASSIGNMENTS_TOSCACAPABILITYASSIGNMENT (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) grafana | logger=migrator t=2024-04-26T08:52:53.569045937Z level=info msg="Executing migration" id="Remove unique index org_id_name" kafka | sasl.mechanism.controller.protocol = GSSAPI policy-pap | sasl.jaas.config = null policy-apex-pdp | [2024-04-26T08:53:36.113+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-26T08:52:53.570296402Z level=info msg="Migration successfully executed" id="Remove unique index org_id_name" duration=1.249585ms kafka | sasl.mechanism.inter.broker.protocol = GSSAPI policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit policy-apex-pdp | [2024-04-26T08:53:36.113+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 policy-db-migrator | grafana | logger=migrator t=2024-04-26T08:52:53.574234544Z level=info msg="Executing migration" id="Add column secure_settings in alert_notification" kafka | sasl.oauthbearer.clock.skew.seconds = 30 policy-pap | sasl.kerberos.min.time.before.relogin = 60000 policy-apex-pdp | [2024-04-26T08:53:36.113+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1714121616113 policy-db-migrator | grafana | logger=migrator t=2024-04-26T08:52:53.577923605Z level=info msg="Migration successfully executed" id="Add column secure_settings in alert_notification" duration=3.6878ms kafka | 
sasl.oauthbearer.expected.audience = null policy-pap | sasl.kerberos.service.name = null policy-apex-pdp | [2024-04-26T08:53:36.113+00:00|INFO|InlineKafkaTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=282d8d00-7dbf-4c91-af96-04a7263f55b3, alive=false, publisher=KafkaPublisherWrapper []]]: KAFKA SINK created policy-db-migrator | > upgrade 0540-toscacapabilitytype.sql grafana | logger=migrator t=2024-04-26T08:52:53.582366809Z level=info msg="Executing migration" id="alter alert.settings to mediumtext" kafka | sasl.oauthbearer.expected.issuer = null policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 policy-apex-pdp | [2024-04-26T08:53:36.114+00:00|INFO|ServiceManager|main] service manager starting set alive policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-26T08:52:53.582430832Z level=info msg="Migration successfully executed" id="alter alert.settings to mediumtext" duration=64.262µs kafka | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 policy-apex-pdp | [2024-04-26T08:53:36.114+00:00|INFO|ServiceManager|main] service manager starting register pdp status context object policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilitytype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCACAPABILITYTYPE (name, version)) grafana | logger=migrator t=2024-04-26T08:52:53.585542107Z level=info msg="Executing migration" id="Add non-unique index alert_notification_state_alert_id" kafka | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 policy-pap | sasl.login.callback.handler.class = null policy-apex-pdp | [2024-04-26T08:53:36.115+00:00|INFO|ServiceManager|main] service manager starting topic sinks policy-db-migrator | -------------- grafana | logger=migrator 
t=2024-04-26T08:52:53.586368743Z level=info msg="Migration successfully executed" id="Add non-unique index alert_notification_state_alert_id" duration=825.876µs kafka | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 policy-pap | sasl.login.class = null policy-apex-pdp | [2024-04-26T08:53:36.115+00:00|INFO|ServiceManager|main] service manager starting Pdp Status publisher policy-db-migrator | grafana | logger=migrator t=2024-04-26T08:52:53.589841725Z level=info msg="Executing migration" id="Add non-unique index alert_rule_tag_alert_id" kafka | sasl.oauthbearer.jwks.endpoint.url = null policy-pap | sasl.login.connect.timeout.ms = null policy-apex-pdp | [2024-04-26T08:53:36.117+00:00|INFO|ServiceManager|main] service manager starting Register pdp update listener policy-db-migrator | grafana | logger=migrator t=2024-04-26T08:52:53.590767215Z level=info msg="Migration successfully executed" id="Add non-unique index alert_rule_tag_alert_id" duration=921.14µs kafka | sasl.oauthbearer.scope.claim.name = scope policy-pap | sasl.login.read.timeout.ms = null policy-apex-pdp | [2024-04-26T08:53:36.117+00:00|INFO|ServiceManager|main] service manager starting Register pdp state change request dispatcher policy-db-migrator | > upgrade 0550-toscacapabilitytypes.sql grafana | logger=migrator t=2024-04-26T08:52:53.596304027Z level=info msg="Executing migration" id="Drop old annotation table v4" kafka | sasl.oauthbearer.sub.claim.name = sub policy-pap | sasl.login.refresh.buffer.seconds = 300 policy-apex-pdp | [2024-04-26T08:53:36.117+00:00|INFO|ServiceManager|main] service manager starting Message Dispatcher policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-26T08:52:53.59638789Z level=info msg="Migration successfully executed" id="Drop old annotation table v4" duration=83.763µs kafka | sasl.oauthbearer.token.endpoint.url = null policy-pap | sasl.login.refresh.min.period.seconds = 60 policy-apex-pdp | [2024-04-26T08:53:36.117+00:00|INFO|TopicBase|main] 
SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=47b1e3a1-a4a9-4bf2-95ae-f10384287681, consumerInstance=policy-apex-pdp, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-pdp-pap,5,main], topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: registering org.onap.policy.common.endpoints.listeners.MessageTypeDispatcher@60a2630a policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilitytypes (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCACAPABILITYTYPES (name, version)) grafana | logger=migrator t=2024-04-26T08:52:53.601624059Z level=info msg="Executing migration" id="create annotation table v5" kafka | sasl.server.callback.handler.class = null policy-pap | sasl.login.refresh.window.factor = 0.8 policy-apex-pdp | [2024-04-26T08:53:36.118+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=47b1e3a1-a4a9-4bf2-95ae-f10384287681, consumerInstance=policy-apex-pdp, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-pdp-pap,5,main], topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: register: start not attempted policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-26T08:52:53.603007399Z level=info msg="Migration 
successfully executed" id="create annotation table v5" duration=1.38081ms kafka | sasl.server.max.receive.size = 524288 policy-pap | sasl.login.refresh.window.jitter = 0.05 policy-apex-pdp | [2024-04-26T08:53:36.118+00:00|INFO|ServiceManager|main] service manager starting Create REST server policy-db-migrator | grafana | logger=migrator t=2024-04-26T08:52:53.606771144Z level=info msg="Executing migration" id="add index annotation 0 v3" kafka | security.inter.broker.protocol = PLAINTEXT policy-pap | sasl.login.retry.backoff.max.ms = 10000 policy-apex-pdp | [2024-04-26T08:53:36.133+00:00|INFO|OrderedServiceImpl|Timer-0] ***** OrderedServiceImpl implementers: policy-db-migrator | grafana | logger=migrator t=2024-04-26T08:52:53.608105092Z level=info msg="Migration successfully executed" id="add index annotation 0 v3" duration=1.333317ms kafka | security.providers = null policy-pap | sasl.login.retry.backoff.ms = 100 policy-apex-pdp | [] policy-db-migrator | > upgrade 0560-toscacapabilitytypes_toscacapabilitytype.sql grafana | logger=migrator t=2024-04-26T08:52:53.612661251Z level=info msg="Executing migration" id="add index annotation 1 v3" kafka | server.max.startup.time.ms = 9223372036854775807 policy-pap | sasl.mechanism = GSSAPI policy-apex-pdp | [2024-04-26T08:53:36.135+00:00|INFO|network|Timer-0] [OUT|KAFKA|policy-pdp-pap] policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-26T08:52:53.613547289Z level=info msg="Migration successfully executed" id="add index annotation 1 v3" duration=885.178µs kafka | socket.connection.setup.timeout.max.ms = 30000 policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"722ef4f9-8fa3-4127-9611-57d2993af39e","timestampMs":1714121616117,"name":"apex-dc1391b6-addb-4085-8ebc-9ab258599529","pdpGroup":"defaultGroup"} policy-db-migrator | CREATE TABLE IF NOT EXISTS 
toscacapabilitytypes_toscacapabilitytype (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCACAPABILITYTYPES_TOSCACAPABILITYTYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) grafana | logger=migrator t=2024-04-26T08:52:53.616881704Z level=info msg="Executing migration" id="add index annotation 2 v3" kafka | socket.connection.setup.timeout.ms = 10000 policy-pap | sasl.oauthbearer.expected.audience = null policy-apex-pdp | [2024-04-26T08:53:36.297+00:00|INFO|ServiceManager|main] service manager starting Rest Server policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-26T08:52:53.617760312Z level=info msg="Migration successfully executed" id="add index annotation 2 v3" duration=875.888µs kafka | socket.listen.backlog.size = 50 policy-pap | sasl.oauthbearer.expected.issuer = null policy-apex-pdp | [2024-04-26T08:53:36.297+00:00|INFO|ServiceManager|main] service manager starting policy-db-migrator | kafka | socket.receive.buffer.bytes = 102400 policy-apex-pdp | [2024-04-26T08:53:36.297+00:00|INFO|ServiceManager|main] service manager starting REST RestServerParameters policy-db-migrator | grafana | logger=migrator t=2024-04-26T08:52:53.621892343Z level=info msg="Executing migration" id="add index annotation 3 v3" policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 kafka | socket.request.max.bytes = 104857600 policy-apex-pdp | [2024-04-26T08:53:36.297+00:00|INFO|JettyServletServer|main] JettyJerseyServer [Jerseyservlets={/metrics=io.prometheus.client.servlet.jakarta.exporter.MetricsServlet-72b16078==io.prometheus.client.servlet.jakarta.exporter.MetricsServlet@aa16c20f{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:null,STOPPED}, 
/*=org.glassfish.jersey.servlet.ServletContainer-d78795==org.glassfish.jersey.servlet.ServletContainer@b1764d3c{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=swagger-6969, toString()=JettyServletServer(name=RestServerParameters, host=0.0.0.0, port=6969, sniHostCheck=false, user=policyadmin, password=zb!XztG34, contextPath=/, jettyServer=Server@72c927f1{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@1ac85b0c{/,null,STOPPED}, connector=RestServerParameters@63c5efee{HTTP/1.1, (http/1.1)}{0.0.0.0:6969}, jettyThread=null, servlets={/metrics=io.prometheus.client.servlet.jakarta.exporter.MetricsServlet-72b16078==io.prometheus.client.servlet.jakarta.exporter.MetricsServlet@aa16c20f{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:null,STOPPED}, /*=org.glassfish.jersey.servlet.ServletContainer-d78795==org.glassfish.jersey.servlet.ServletContainer@b1764d3c{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING policy-db-migrator | > upgrade 0570-toscadatatype.sql grafana | logger=migrator t=2024-04-26T08:52:53.622812493Z level=info msg="Migration successfully executed" id="add index annotation 3 v3" duration=919.36µs policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 kafka | socket.send.buffer.bytes = 102400 policy-apex-pdp | [2024-04-26T08:53:36.307+00:00|INFO|ServiceManager|main] service manager started policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-26T08:52:53.628108084Z level=info msg="Executing migration" id="add index annotation 4 v3" policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 kafka | ssl.cipher.suites = [] policy-apex-pdp | [2024-04-26T08:53:36.307+00:00|INFO|ServiceManager|main] service manager started policy-db-migrator | CREATE TABLE IF NOT EXISTS toscadatatype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT 
NULL, PRIMARY KEY PK_TOSCADATATYPE (name, version)) grafana | logger=migrator t=2024-04-26T08:52:53.629022615Z level=info msg="Migration successfully executed" id="add index annotation 4 v3" duration=914.02µs policy-pap | sasl.oauthbearer.jwks.endpoint.url = null kafka | ssl.client.auth = none policy-apex-pdp | [2024-04-26T08:53:36.308+00:00|INFO|ApexStarterMain|main] Started policy-apex-pdp service successfully. policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-26T08:52:53.632993497Z level=info msg="Executing migration" id="Update annotation table charset" policy-pap | sasl.oauthbearer.scope.claim.name = scope kafka | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] policy-db-migrator | policy-apex-pdp | [2024-04-26T08:53:36.307+00:00|INFO|JettyServletServer|RestServerParameters-6969] JettyJerseyServer [Jerseyservlets={/metrics=io.prometheus.client.servlet.jakarta.exporter.MetricsServlet-72b16078==io.prometheus.client.servlet.jakarta.exporter.MetricsServlet@aa16c20f{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:null,STOPPED}, /*=org.glassfish.jersey.servlet.ServletContainer-d78795==org.glassfish.jersey.servlet.ServletContainer@b1764d3c{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=swagger-6969, toString()=JettyServletServer(name=RestServerParameters, host=0.0.0.0, port=6969, sniHostCheck=false, user=policyadmin, password=zb!XztG34, contextPath=/, jettyServer=Server@72c927f1{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@1ac85b0c{/,null,STOPPED}, connector=RestServerParameters@63c5efee{HTTP/1.1, (http/1.1)}{0.0.0.0:6969}, jettyThread=Thread[RestServerParameters-6969,5,main], servlets={/metrics=io.prometheus.client.servlet.jakarta.exporter.MetricsServlet-72b16078==io.prometheus.client.servlet.jakarta.exporter.MetricsServlet@aa16c20f{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:null,STOPPED}, 
/*=org.glassfish.jersey.servlet.ServletContainer-d78795==org.glassfish.jersey.servlet.ServletContainer@b1764d3c{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING grafana | logger=migrator t=2024-04-26T08:52:53.633019579Z level=info msg="Migration successfully executed" id="Update annotation table charset" duration=26.131µs policy-pap | sasl.oauthbearer.sub.claim.name = sub kafka | ssl.endpoint.identification.algorithm = https policy-db-migrator | policy-apex-pdp | [2024-04-26T08:53:36.465+00:00|INFO|Metadata|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Cluster ID: rK8eMBuCRaO0vWITIr3dSg grafana | logger=migrator t=2024-04-26T08:52:53.636903728Z level=info msg="Executing migration" id="Add column region_id to annotation table" policy-pap | sasl.oauthbearer.token.endpoint.url = null kafka | ssl.engine.factory.class = null policy-db-migrator | > upgrade 0580-toscadatatypes.sql policy-apex-pdp | [2024-04-26T08:53:36.465+00:00|INFO|Metadata|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-47b1e3a1-a4a9-4bf2-95ae-f10384287681-2, groupId=47b1e3a1-a4a9-4bf2-95ae-f10384287681] Cluster ID: rK8eMBuCRaO0vWITIr3dSg grafana | logger=migrator t=2024-04-26T08:52:53.64382643Z level=info msg="Migration successfully executed" id="Add column region_id to annotation table" duration=6.921942ms policy-pap | security.protocol = PLAINTEXT kafka | ssl.key.password = null policy-db-migrator | -------------- policy-apex-pdp | [2024-04-26T08:53:36.467+00:00|INFO|TransactionManager|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] ProducerId set to 2 with epoch 0 grafana | logger=migrator t=2024-04-26T08:52:53.648747165Z level=info msg="Executing migration" id="Drop category_id index" policy-pap | security.providers = null kafka | ssl.keymanager.algorithm = SunX509 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscadatatypes (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY 
PK_TOSCADATATYPES (name, version)) policy-apex-pdp | [2024-04-26T08:53:36.468+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-47b1e3a1-a4a9-4bf2-95ae-f10384287681-2, groupId=47b1e3a1-a4a9-4bf2-95ae-f10384287681] Discovered group coordinator kafka:9092 (id: 2147483646 rack: null) grafana | logger=migrator t=2024-04-26T08:52:53.649665615Z level=info msg="Migration successfully executed" id="Drop category_id index" duration=921.08µs policy-pap | send.buffer.bytes = 131072 kafka | ssl.keystore.certificate.chain = null policy-db-migrator | -------------- policy-apex-pdp | [2024-04-26T08:53:36.474+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-47b1e3a1-a4a9-4bf2-95ae-f10384287681-2, groupId=47b1e3a1-a4a9-4bf2-95ae-f10384287681] (Re-)joining group grafana | logger=migrator t=2024-04-26T08:52:53.699667337Z level=info msg="Executing migration" id="Add column tags to annotation table" policy-pap | session.timeout.ms = 45000 kafka | ssl.keystore.key = null kafka | ssl.keystore.location = null policy-apex-pdp | [2024-04-26T08:53:36.493+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-47b1e3a1-a4a9-4bf2-95ae-f10384287681-2, groupId=47b1e3a1-a4a9-4bf2-95ae-f10384287681] Request joining group due to: need to re-join with the given member-id: consumer-47b1e3a1-a4a9-4bf2-95ae-f10384287681-2-8831a781-330f-42a7-8b90-ae48ec91c5ff grafana | logger=migrator t=2024-04-26T08:52:53.706807257Z level=info msg="Migration successfully executed" id="Add column tags to annotation table" duration=7.139381ms policy-pap | socket.connection.setup.timeout.max.ms = 30000 policy-db-migrator | kafka | ssl.keystore.password = null policy-apex-pdp | [2024-04-26T08:53:36.494+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-47b1e3a1-a4a9-4bf2-95ae-f10384287681-2, groupId=47b1e3a1-a4a9-4bf2-95ae-f10384287681] Request joining group due to: rebalance 
failed due to 'The group member needs to have a valid member id before actually entering a consumer group.' (MemberIdRequiredException) grafana | logger=migrator t=2024-04-26T08:52:53.710965079Z level=info msg="Executing migration" id="Create annotation_tag table v2" policy-pap | socket.connection.setup.timeout.ms = 10000 policy-db-migrator | kafka | ssl.keystore.type = JKS policy-apex-pdp | [2024-04-26T08:53:36.494+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-47b1e3a1-a4a9-4bf2-95ae-f10384287681-2, groupId=47b1e3a1-a4a9-4bf2-95ae-f10384287681] (Re-)joining group grafana | logger=migrator t=2024-04-26T08:52:53.71167795Z level=info msg="Migration successfully executed" id="Create annotation_tag table v2" duration=712.921µs policy-pap | ssl.cipher.suites = null policy-db-migrator | > upgrade 0590-toscadatatypes_toscadatatype.sql kafka | ssl.principal.mapping.rules = DEFAULT policy-apex-pdp | [2024-04-26T08:53:36.935+00:00|INFO|YamlMessageBodyHandler|RestServerParameters-6969] Accepting YAML for REST calls grafana | logger=migrator t=2024-04-26T08:52:53.716368185Z level=info msg="Executing migration" id="Add unique index annotation_tag.annotation_id_tag_id" policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] policy-db-migrator | -------------- kafka | ssl.protocol = TLSv1.3 policy-apex-pdp | [2024-04-26T08:53:36.936+00:00|INFO|GsonMessageBodyHandler|RestServerParameters-6969] Using GSON for REST calls grafana | logger=migrator t=2024-04-26T08:52:53.717277155Z level=info msg="Migration successfully executed" id="Add unique index annotation_tag.annotation_id_tag_id" duration=907.879µs policy-pap | ssl.endpoint.identification.algorithm = https policy-db-migrator | CREATE TABLE IF NOT EXISTS toscadatatypes_toscadatatype (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) 
NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCADATATYPES_TOSCADATATYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) kafka | ssl.provider = null policy-apex-pdp | [2024-04-26T08:53:39.500+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-47b1e3a1-a4a9-4bf2-95ae-f10384287681-2, groupId=47b1e3a1-a4a9-4bf2-95ae-f10384287681] Successfully joined group with generation Generation{generationId=1, memberId='consumer-47b1e3a1-a4a9-4bf2-95ae-f10384287681-2-8831a781-330f-42a7-8b90-ae48ec91c5ff', protocol='range'} grafana | logger=migrator t=2024-04-26T08:52:53.720688363Z level=info msg="Executing migration" id="drop index UQE_annotation_tag_annotation_id_tag_id - v2" policy-pap | ssl.engine.factory.class = null policy-db-migrator | -------------- kafka | ssl.secure.random.implementation = null policy-apex-pdp | [2024-04-26T08:53:39.511+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-47b1e3a1-a4a9-4bf2-95ae-f10384287681-2, groupId=47b1e3a1-a4a9-4bf2-95ae-f10384287681] Finished assignment for group at generation 1: {consumer-47b1e3a1-a4a9-4bf2-95ae-f10384287681-2-8831a781-330f-42a7-8b90-ae48ec91c5ff=Assignment(partitions=[policy-pdp-pap-0])} grafana | logger=migrator t=2024-04-26T08:52:53.721499769Z level=info msg="Migration successfully executed" id="drop index UQE_annotation_tag_annotation_id_tag_id - v2" duration=812.596µs policy-pap | ssl.key.password = null policy-db-migrator | kafka | ssl.trustmanager.algorithm = PKIX policy-apex-pdp | [2024-04-26T08:53:39.520+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-47b1e3a1-a4a9-4bf2-95ae-f10384287681-2, groupId=47b1e3a1-a4a9-4bf2-95ae-f10384287681] Successfully synced group in generation Generation{generationId=1, memberId='consumer-47b1e3a1-a4a9-4bf2-95ae-f10384287681-2-8831a781-330f-42a7-8b90-ae48ec91c5ff', protocol='range'} grafana | logger=migrator 
t=2024-04-26T08:52:53.724812514Z level=info msg="Executing migration" id="Rename table annotation_tag to annotation_tag_v2 - v2" policy-pap | ssl.keymanager.algorithm = SunX509 policy-db-migrator | kafka | ssl.truststore.certificates = null policy-apex-pdp | [2024-04-26T08:53:39.520+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-47b1e3a1-a4a9-4bf2-95ae-f10384287681-2, groupId=47b1e3a1-a4a9-4bf2-95ae-f10384287681] Notifying assignor about the new Assignment(partitions=[policy-pdp-pap-0]) policy-pap | ssl.keystore.certificate.chain = null grafana | logger=migrator t=2024-04-26T08:52:53.737033226Z level=info msg="Migration successfully executed" id="Rename table annotation_tag to annotation_tag_v2 - v2" duration=12.219883ms policy-db-migrator | > upgrade 0600-toscanodetemplate.sql kafka | ssl.truststore.location = null policy-apex-pdp | [2024-04-26T08:53:39.521+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-47b1e3a1-a4a9-4bf2-95ae-f10384287681-2, groupId=47b1e3a1-a4a9-4bf2-95ae-f10384287681] Adding newly assigned partitions: policy-pdp-pap-0 policy-apex-pdp | [2024-04-26T08:53:39.530+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-47b1e3a1-a4a9-4bf2-95ae-f10384287681-2, groupId=47b1e3a1-a4a9-4bf2-95ae-f10384287681] Found no committed offset for partition policy-pdp-pap-0 grafana | logger=migrator t=2024-04-26T08:52:53.741348585Z level=info msg="Executing migration" id="Create annotation_tag table v3" policy-db-migrator | -------------- kafka | ssl.truststore.password = null policy-pap | ssl.keystore.key = null policy-apex-pdp | [2024-04-26T08:53:39.538+00:00|INFO|SubscriptionState|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-47b1e3a1-a4a9-4bf2-95ae-f10384287681-2, groupId=47b1e3a1-a4a9-4bf2-95ae-f10384287681] Resetting offset for partition policy-pdp-pap-0 to position FetchPosition{offset=1, offsetEpoch=Optional.empty, 
currentLeader=LeaderAndEpoch{leader=Optional[kafka:9092 (id: 1 rack: null)], epoch=0}}. grafana | logger=migrator t=2024-04-26T08:52:53.741883878Z level=info msg="Migration successfully executed" id="Create annotation_tag table v3" duration=535.093µs policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetemplate (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, type_name VARCHAR(255) NULL, type_version VARCHAR(255) NULL, capabilitiesName VARCHAR(120) NULL, capabilitiesVersion VARCHAR(20) NULL, requirementsVersion VARCHAR(20) NULL, requirementsName VARCHAR(120) NULL, PRIMARY KEY PK_TOSCANODETEMPLATE (name, version)) kafka | ssl.truststore.type = JKS policy-pap | ssl.keystore.location = null policy-apex-pdp | [2024-04-26T08:53:56.117+00:00|INFO|network|Timer-0] [OUT|KAFKA|policy-pdp-pap] grafana | logger=migrator t=2024-04-26T08:52:53.74536049Z level=info msg="Executing migration" id="create index UQE_annotation_tag_annotation_id_tag_id - Add unique index annotation_tag.annotation_id_tag_id V3" kafka | transaction.abort.timed.out.transaction.cleanup.interval.ms = 10000 policy-pap | ssl.keystore.password = null policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"9071b0a9-a991-4e9b-80ca-5faa4ea251c7","timestampMs":1714121636117,"name":"apex-dc1391b6-addb-4085-8ebc-9ab258599529","pdpGroup":"defaultGroup"} grafana | logger=migrator t=2024-04-26T08:52:53.745987757Z level=info msg="Migration successfully executed" id="create index UQE_annotation_tag_annotation_id_tag_id - Add unique index annotation_tag.annotation_id_tag_id V3" duration=626.727µs policy-db-migrator | -------------- kafka | transaction.max.timeout.ms = 900000 policy-pap | ssl.keystore.type = JKS policy-apex-pdp | [2024-04-26T08:53:56.136+00:00|INFO|network|KAFKA-source-policy-pdp-pap] 
[IN|KAFKA|policy-pdp-pap] grafana | logger=migrator t=2024-04-26T08:52:53.749811444Z level=info msg="Executing migration" id="copy annotation_tag v2 to v3" policy-db-migrator | kafka | transaction.partition.verification.enable = true policy-pap | ssl.protocol = TLSv1.3 policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"9071b0a9-a991-4e9b-80ca-5faa4ea251c7","timestampMs":1714121636117,"name":"apex-dc1391b6-addb-4085-8ebc-9ab258599529","pdpGroup":"defaultGroup"} grafana | logger=migrator t=2024-04-26T08:52:53.750282205Z level=info msg="Migration successfully executed" id="copy annotation_tag v2 to v3" duration=474.931µs policy-db-migrator | kafka | transaction.remove.expired.transaction.cleanup.interval.ms = 3600000 policy-pap | ssl.provider = null policy-apex-pdp | [2024-04-26T08:53:56.138+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS grafana | logger=migrator t=2024-04-26T08:52:53.755383477Z level=info msg="Executing migration" id="drop table annotation_tag_v2" policy-db-migrator | > upgrade 0610-toscanodetemplates.sql kafka | transaction.state.log.load.buffer.size = 5242880 policy-pap | ssl.secure.random.implementation = null policy-apex-pdp | [2024-04-26T08:53:56.163+00:00|INFO|RequestLog|qtp739264372-33] 172.17.0.3 - policyadmin [26/Apr/2024:08:53:56 +0000] "GET /metrics HTTP/1.1" 200 10648 "-" "Prometheus/2.51.2" grafana | logger=migrator t=2024-04-26T08:52:53.756290807Z level=info msg="Migration successfully executed" id="drop table annotation_tag_v2" duration=904.129µs policy-db-migrator | -------------- kafka | transaction.state.log.min.isr = 2 policy-pap | ssl.trustmanager.algorithm = PKIX policy-apex-pdp | [2024-04-26T08:53:56.272+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] grafana | logger=migrator t=2024-04-26T08:52:53.760573773Z level=info msg="Executing migration" id="Update alert 
annotations and set TEXT to empty" policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetemplates (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCANODETEMPLATES (name, version)) kafka | transaction.state.log.num.partitions = 50 policy-pap | ssl.truststore.certificates = null policy-apex-pdp | {"source":"pap-bcd81757-1fa3-469d-bcb3-86a23a71bea1","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"messageName":"PDP_UPDATE","requestId":"f5c6f62f-8e3e-4f65-89bc-4c3464718b45","timestampMs":1714121636220,"name":"apex-dc1391b6-addb-4085-8ebc-9ab258599529","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} grafana | logger=migrator t=2024-04-26T08:52:53.760847616Z level=info msg="Migration successfully executed" id="Update alert annotations and set TEXT to empty" duration=273.633µs policy-db-migrator | -------------- kafka | transaction.state.log.replication.factor = 3 policy-pap | ssl.truststore.location = null policy-apex-pdp | [2024-04-26T08:53:56.280+00:00|WARN|Registry|KAFKA-source-policy-pdp-pap] replacing previously registered: object:pdp/status/publisher grafana | logger=migrator t=2024-04-26T08:52:53.765757589Z level=info msg="Executing migration" id="Add created time to annotation table" policy-db-migrator | kafka | transaction.state.log.segment.bytes = 104857600 policy-pap | ssl.truststore.password = null policy-apex-pdp | [2024-04-26T08:53:56.281+00:00|INFO|network|Timer-1] [OUT|KAFKA|policy-pdp-pap] grafana | logger=migrator t=2024-04-26T08:52:53.770137311Z level=info msg="Migration successfully executed" id="Add created time to annotation table" duration=4.376292ms policy-db-migrator | kafka | transactional.id.expiration.ms = 604800000 policy-pap | ssl.truststore.type = JKS policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp 
Heartbeat","messageName":"PDP_STATUS","requestId":"87746640-debc-4467-b40b-24ebe64c2235","timestampMs":1714121636281,"name":"apex-dc1391b6-addb-4085-8ebc-9ab258599529","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} grafana | logger=migrator t=2024-04-26T08:52:53.775250124Z level=info msg="Executing migration" id="Add updated time to annotation table" policy-db-migrator | > upgrade 0620-toscanodetemplates_toscanodetemplate.sql kafka | unclean.leader.election.enable = false policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-apex-pdp | [2024-04-26T08:53:56.284+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [OUT|KAFKA|policy-pdp-pap] grafana | logger=migrator t=2024-04-26T08:52:53.779273999Z level=info msg="Migration successfully executed" id="Add updated time to annotation table" duration=4.020585ms policy-db-migrator | -------------- kafka | unstable.api.versions.enable = false policy-pap | policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"f5c6f62f-8e3e-4f65-89bc-4c3464718b45","responseStatus":"SUCCESS","responseMessage":"Pdp update successful."},"messageName":"PDP_STATUS","requestId":"272d55b7-aa0f-4aa3-9199-323c887dccf3","timestampMs":1714121636284,"name":"apex-dc1391b6-addb-4085-8ebc-9ab258599529","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} grafana | logger=migrator t=2024-04-26T08:52:53.78317224Z level=info msg="Executing migration" id="Add index for created in annotation table" policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetemplates_toscanodetemplate (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCANODETEMPLATES_TOSCANODETEMPLATE (conceptContainerMapName, 
concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) kafka | zookeeper.clientCnxnSocket = null policy-pap | [2024-04-26T08:53:34.162+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 policy-apex-pdp | [2024-04-26T08:53:56.296+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] grafana | logger=migrator t=2024-04-26T08:52:53.784034727Z level=info msg="Migration successfully executed" id="Add index for created in annotation table" duration=861.497µs policy-db-migrator | -------------- kafka | zookeeper.connect = zookeeper:2181 policy-pap | [2024-04-26T08:53:34.162+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"87746640-debc-4467-b40b-24ebe64c2235","timestampMs":1714121636281,"name":"apex-dc1391b6-addb-4085-8ebc-9ab258599529","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} grafana | logger=migrator t=2024-04-26T08:52:53.787659805Z level=info msg="Executing migration" id="Add index for updated in annotation table" policy-db-migrator | kafka | zookeeper.connection.timeout.ms = null policy-pap | [2024-04-26T08:53:34.162+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1714121614162 policy-apex-pdp | [2024-04-26T08:53:56.296+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS grafana | logger=migrator t=2024-04-26T08:52:53.788534713Z level=info msg="Migration successfully executed" id="Add index for updated in annotation table" duration=874.588µs policy-db-migrator | kafka | zookeeper.max.in.flight.requests = 10 policy-pap | [2024-04-26T08:53:34.162+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-c2be2c80-205d-4227-951f-9a7c12c2d5ee-3, groupId=c2be2c80-205d-4227-951f-9a7c12c2d5ee] Subscribed to topic(s): policy-pdp-pap grafana | logger=migrator t=2024-04-26T08:52:53.794103146Z level=info msg="Executing 
migration" id="Convert existing annotations from seconds to milliseconds" policy-apex-pdp | [2024-04-26T08:53:56.304+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-db-migrator | > upgrade 0630-toscanodetype.sql kafka | zookeeper.metadata.migration.enable = false policy-pap | [2024-04-26T08:53:34.163+00:00|INFO|ServiceManager|main] Policy PAP starting Heartbeat Message Dispatcher grafana | logger=migrator t=2024-04-26T08:52:53.794426761Z level=info msg="Migration successfully executed" id="Convert existing annotations from seconds to milliseconds" duration=323.155µs policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"f5c6f62f-8e3e-4f65-89bc-4c3464718b45","responseStatus":"SUCCESS","responseMessage":"Pdp update successful."},"messageName":"PDP_STATUS","requestId":"272d55b7-aa0f-4aa3-9199-323c887dccf3","timestampMs":1714121636284,"name":"apex-dc1391b6-addb-4085-8ebc-9ab258599529","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-db-migrator | -------------- kafka | zookeeper.metadata.migration.min.batch.size = 200 grafana | logger=migrator t=2024-04-26T08:52:53.799214339Z level=info msg="Executing migration" id="Add epoch_end column" policy-apex-pdp | [2024-04-26T08:53:56.304+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, requirementsVersion VARCHAR(20) NULL, requirementsName VARCHAR(120) NULL, PRIMARY KEY PK_TOSCANODETYPE (name, version)) policy-pap | [2024-04-26T08:53:34.163+00:00|INFO|TopicBase|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=policy-pap, 
consumerInstance=cecb7d84-0274-42c7-b3cd-cefaad5f8f13, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-heartbeat, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: registering org.onap.policy.common.endpoints.listeners.MessageTypeDispatcher@2a2b3aff kafka | zookeeper.session.timeout.ms = 18000 grafana | logger=migrator t=2024-04-26T08:52:53.806266377Z level=info msg="Migration successfully executed" id="Add epoch_end column" duration=7.052618ms policy-apex-pdp | [2024-04-26T08:53:56.324+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-db-migrator | -------------- policy-pap | [2024-04-26T08:53:34.163+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=policy-pap, consumerInstance=cecb7d84-0274-42c7-b3cd-cefaad5f8f13, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-heartbeat, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting kafka | zookeeper.set.acl = false grafana | logger=migrator t=2024-04-26T08:52:53.810505132Z level=info msg="Executing migration" id="Add index for epoch_end" policy-apex-pdp | 
{"source":"pap-bcd81757-1fa3-469d-bcb3-86a23a71bea1","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"b3f5aba9-960f-4f53-9f6a-46c1f7b5d673","timestampMs":1714121636221,"name":"apex-dc1391b6-addb-4085-8ebc-9ab258599529","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-db-migrator | policy-pap | [2024-04-26T08:53:34.163+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: kafka | zookeeper.ssl.cipher.suites = null grafana | logger=migrator t=2024-04-26T08:52:53.811395801Z level=info msg="Migration successfully executed" id="Add index for epoch_end" duration=889.959µs policy-apex-pdp | [2024-04-26T08:53:56.326+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [OUT|KAFKA|policy-pdp-pap] policy-db-migrator | policy-pap | allow.auto.create.topics = true kafka | zookeeper.ssl.client.enable = false grafana | logger=migrator t=2024-04-26T08:52:53.814961906Z level=info msg="Executing migration" id="Make epoch_end the same as epoch" policy-apex-pdp | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpStateChange","policies":[],"response":{"responseTo":"b3f5aba9-960f-4f53-9f6a-46c1f7b5d673","responseStatus":"SUCCESS","responseMessage":"State changed to active. 
No policies found."},"messageName":"PDP_STATUS","requestId":"278d5e40-e6f4-489b-b28a-d250a3fd93a9","timestampMs":1714121636326,"name":"apex-dc1391b6-addb-4085-8ebc-9ab258599529","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
policy-db-migrator | > upgrade 0640-toscanodetypes.sql
policy-pap | auto.commit.interval.ms = 5000
kafka | zookeeper.ssl.crl.enable = false
grafana | logger=migrator t=2024-04-26T08:52:53.815127614Z level=info msg="Migration successfully executed" id="Make epoch_end the same as epoch" duration=165.817µs
policy-apex-pdp | [2024-04-26T08:53:56.338+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
policy-db-migrator | --------------
policy-pap | auto.include.jmx.reporter = true
kafka | zookeeper.ssl.enabled.protocols = null
grafana | logger=migrator t=2024-04-26T08:52:53.819249153Z level=info msg="Executing migration" id="Move region to single row"
policy-apex-pdp | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpStateChange","policies":[],"response":{"responseTo":"b3f5aba9-960f-4f53-9f6a-46c1f7b5d673","responseStatus":"SUCCESS","responseMessage":"State changed to active. No policies found."},"messageName":"PDP_STATUS","requestId":"278d5e40-e6f4-489b-b28a-d250a3fd93a9","timestampMs":1714121636326,"name":"apex-dc1391b6-addb-4085-8ebc-9ab258599529","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetypes (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCANODETYPES (name, version))
policy-pap | auto.offset.reset = latest
kafka | zookeeper.ssl.endpoint.identification.algorithm = HTTPS
grafana | logger=migrator t=2024-04-26T08:52:53.819657502Z level=info msg="Migration successfully executed" id="Move region to single row" duration=408.378µs
policy-apex-pdp | [2024-04-26T08:53:56.338+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS
policy-db-migrator | --------------
policy-pap | bootstrap.servers = [kafka:9092]
kafka | zookeeper.ssl.keystore.location = null
grafana | logger=migrator t=2024-04-26T08:52:53.823009588Z level=info msg="Executing migration" id="Remove index org_id_epoch from annotation table"
policy-apex-pdp | [2024-04-26T08:53:56.406+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
policy-db-migrator |
policy-pap | check.crcs = true
kafka | zookeeper.ssl.keystore.password = null
grafana | logger=migrator t=2024-04-26T08:52:53.82420806Z level=info msg="Migration successfully executed" id="Remove index org_id_epoch from annotation table" duration=1.197991ms
policy-apex-pdp | {"source":"pap-bcd81757-1fa3-469d-bcb3-86a23a71bea1","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"d504a7a0-8924-4423-91ec-dde2e6acb62c","timestampMs":1714121636366,"name":"apex-dc1391b6-addb-4085-8ebc-9ab258599529","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
policy-db-migrator |
policy-pap | client.dns.lookup = use_all_dns_ips
kafka | zookeeper.ssl.keystore.type = null
grafana | logger=migrator t=2024-04-26T08:52:53.828725247Z level=info msg="Executing migration" id="Remove index org_id_dashboard_id_panel_id_epoch from annotation table"
policy-apex-pdp | [2024-04-26T08:53:56.408+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [OUT|KAFKA|policy-pdp-pap]
policy-db-migrator | > upgrade 0650-toscanodetypes_toscanodetype.sql
policy-pap | client.id = consumer-policy-pap-4
kafka | zookeeper.ssl.ocsp.enable = false
grafana | logger=migrator t=2024-04-26T08:52:53.82996028Z level=info msg="Migration successfully executed" id="Remove index org_id_dashboard_id_panel_id_epoch from annotation table" duration=1.235253ms
policy-apex-pdp | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"d504a7a0-8924-4423-91ec-dde2e6acb62c","responseStatus":"SUCCESS","responseMessage":"Pdp already updated"},"messageName":"PDP_STATUS","requestId":"c39081f3-1dfd-48f5-8850-b37b33cc4fde","timestampMs":1714121636408,"name":"apex-dc1391b6-addb-4085-8ebc-9ab258599529","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
policy-db-migrator | --------------
policy-pap | client.rack =
kafka | zookeeper.ssl.protocol = TLSv1.2
grafana | logger=migrator t=2024-04-26T08:52:53.834616304Z level=info msg="Executing migration" id="Add index for org_id_dashboard_id_epoch_end_epoch on annotation table"
policy-apex-pdp | [2024-04-26T08:53:56.414+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetypes_toscanodetype (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCANODETYPES_TOSCANODETYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion))
policy-pap | connections.max.idle.ms = 540000
kafka | zookeeper.ssl.truststore.location = null
grafana | logger=migrator t=2024-04-26T08:52:53.835553394Z level=info msg="Migration successfully executed" id="Add index for org_id_dashboard_id_epoch_end_epoch on annotation table" duration=936.53µs
policy-apex-pdp | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"d504a7a0-8924-4423-91ec-dde2e6acb62c","responseStatus":"SUCCESS","responseMessage":"Pdp already updated"},"messageName":"PDP_STATUS","requestId":"c39081f3-1dfd-48f5-8850-b37b33cc4fde","timestampMs":1714121636408,"name":"apex-dc1391b6-addb-4085-8ebc-9ab258599529","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
policy-db-migrator | --------------
kafka | zookeeper.ssl.truststore.password = null
grafana | logger=migrator t=2024-04-26T08:52:53.839012695Z level=info msg="Executing migration" id="Add index for org_id_epoch_end_epoch on annotation table"
policy-apex-pdp | [2024-04-26T08:53:56.414+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS
policy-pap | default.api.timeout.ms = 60000
policy-db-migrator |
kafka | zookeeper.ssl.truststore.type = null
grafana | logger=migrator t=2024-04-26T08:52:53.839925096Z level=info msg="Migration successfully executed" id="Add index for org_id_epoch_end_epoch on annotation table" duration=911.57µs
policy-apex-pdp | [2024-04-26T08:54:56.084+00:00|INFO|RequestLog|qtp739264372-28] 172.17.0.3 - policyadmin [26/Apr/2024:08:54:56 +0000] "GET /metrics HTTP/1.1" 200 10650 "-" "Prometheus/2.51.2"
policy-pap | enable.auto.commit = true
policy-db-migrator |
kafka | (kafka.server.KafkaConfig)
grafana | logger=migrator t=2024-04-26T08:52:53.844224613Z level=info msg="Executing migration" id="Remove index org_id_epoch_epoch_end from annotation table"
policy-pap | exclude.internal.topics = true
policy-db-migrator | > upgrade 0660-toscaparameter.sql
kafka | [2024-04-26 08:52:59,735] INFO [ThrottledChannelReaper-Fetch]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
grafana | logger=migrator t=2024-04-26T08:52:53.845024608Z level=info msg="Migration successfully executed" id="Remove index org_id_epoch_epoch_end from annotation table" duration=799.925µs
policy-pap | fetch.max.bytes = 52428800
policy-db-migrator | --------------
kafka | [2024-04-26 08:52:59,737] INFO [ThrottledChannelReaper-Produce]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
grafana | logger=migrator t=2024-04-26T08:52:53.848408376Z level=info msg="Executing migration" id="Add index for alert_id on annotation table"
policy-pap | fetch.max.wait.ms = 500
kafka | [2024-04-26 08:52:59,740] INFO [ThrottledChannelReaper-ControllerMutation]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
grafana | logger=migrator t=2024-04-26T08:52:53.849235841Z level=info msg="Migration successfully executed" id="Add index for alert_id on annotation table" duration=827.385µs
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscaparameter (VALUE VARCHAR(255) NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCAPARAMETER (parentLocalName, localName, parentKeyVersion, parentKeyName))
policy-db-migrator | --------------
kafka | [2024-04-26 08:52:59,739] INFO [ThrottledChannelReaper-Request]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
grafana | logger=migrator t=2024-04-26T08:52:53.853559141Z level=info msg="Executing migration" id="Increase tags column to length 4096"
policy-db-migrator |
kafka | [2024-04-26 08:52:59,792] INFO Loading logs from log dirs ArraySeq(/var/lib/kafka/data) (kafka.log.LogManager)
grafana | logger=migrator t=2024-04-26T08:52:53.853665065Z level=info msg="Migration successfully executed" id="Increase tags column to length 4096" duration=109.755µs
policy-db-migrator |
policy-pap | fetch.min.bytes = 1
kafka | [2024-04-26 08:52:59,798] INFO No logs found to be loaded in /var/lib/kafka/data (kafka.log.LogManager)
grafana | logger=migrator t=2024-04-26T08:52:53.858309057Z level=info msg="Executing migration" id="create test_data table"
policy-pap | group.id = policy-pap
kafka | [2024-04-26 08:52:59,808] INFO Loaded 0 logs in 16ms (kafka.log.LogManager)
policy-db-migrator | > upgrade 0670-toscapolicies.sql
grafana | logger=migrator t=2024-04-26T08:52:53.859317112Z level=info msg="Migration successfully executed" id="create test_data table" duration=1.008305ms
kafka | [2024-04-26 08:52:59,810] INFO Starting log cleanup with a period of 300000 ms. (kafka.log.LogManager)
policy-db-migrator | --------------
policy-pap | group.instance.id = null
kafka | [2024-04-26 08:52:59,811] INFO Starting log flusher with a default period of 9223372036854775807 ms. (kafka.log.LogManager)
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicies (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCAPOLICIES (name, version))
grafana | logger=migrator t=2024-04-26T08:52:53.862869096Z level=info msg="Executing migration" id="create dashboard_version table v1"
kafka | [2024-04-26 08:52:59,823] INFO Starting the log cleaner (kafka.log.LogCleaner)
policy-db-migrator | --------------
policy-pap | heartbeat.interval.ms = 3000
kafka | [2024-04-26 08:52:59,876] INFO [kafka-log-cleaner-thread-0]: Starting (kafka.log.LogCleaner$CleanerThread)
policy-db-migrator |
grafana | logger=migrator t=2024-04-26T08:52:53.863710793Z level=info msg="Migration successfully executed" id="create dashboard_version table v1" duration=840.627µs
policy-pap | interceptor.classes = []
policy-db-migrator |
kafka | [2024-04-26 08:52:59,895] INFO [feature-zk-node-event-process-thread]: Starting (kafka.server.FinalizedFeatureChangeListener$ChangeNotificationProcessorThread)
grafana | logger=migrator t=2024-04-26T08:52:53.867224297Z level=info msg="Executing migration" id="add index dashboard_version.dashboard_id"
policy-pap | internal.leave.group.on.close = true
policy-db-migrator | > upgrade 0680-toscapolicies_toscapolicy.sql
kafka | [2024-04-26 08:52:59,912] INFO Feature ZK node at path: /feature does not exist (kafka.server.FinalizedFeatureChangeListener)
grafana | logger=migrator t=2024-04-26T08:52:53.86823026Z level=info msg="Migration successfully executed" id="add index dashboard_version.dashboard_id" duration=1.005023ms
policy-db-migrator | --------------
kafka | [2024-04-26 08:52:59,976] INFO [zk-broker-1-to-controller-forwarding-channel-manager]: Starting (kafka.server.BrokerToControllerRequestThread)
policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicies_toscapolicy (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCAPOLICIES_TOSCAPOLICY (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion))
kafka | [2024-04-26 08:53:00,328] INFO Updated connection-accept-rate max connection creation rate to 2147483647 (kafka.network.ConnectionQuotas)
grafana | logger=migrator t=2024-04-26T08:52:53.87189226Z level=info msg="Executing migration" id="add unique index dashboard_version.dashboard_id and dashboard_version.version"
policy-db-migrator | --------------
kafka | [2024-04-26 08:53:00,364] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1] Created data-plane acceptor and processors for endpoint : ListenerName(PLAINTEXT) (kafka.network.SocketServer)
policy-pap | isolation.level = read_uncommitted
grafana | logger=migrator t=2024-04-26T08:52:53.872838442Z level=info msg="Migration successfully executed" id="add unique index dashboard_version.dashboard_id and dashboard_version.version" duration=945.202µs
kafka | [2024-04-26 08:53:00,365] INFO Updated connection-accept-rate max connection creation rate to 2147483647 (kafka.network.ConnectionQuotas)
policy-db-migrator |
grafana | logger=migrator t=2024-04-26T08:52:53.878863225Z level=info msg="Executing migration" id="Set dashboard version to 1 where 0"
policy-db-migrator |
policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
kafka | [2024-04-26 08:53:00,373] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1] Created data-plane acceptor and processors for endpoint : ListenerName(PLAINTEXT_HOST) (kafka.network.SocketServer)
grafana | logger=migrator t=2024-04-26T08:52:53.879065773Z level=info msg="Migration successfully executed" id="Set dashboard version to 1 where 0" duration=182.878µs
policy-db-migrator | > upgrade 0690-toscapolicy.sql
policy-pap | max.partition.fetch.bytes = 1048576
kafka | [2024-04-26 08:53:00,378] INFO [zk-broker-1-to-controller-alter-partition-channel-manager]: Starting (kafka.server.BrokerToControllerRequestThread)
grafana | logger=migrator t=2024-04-26T08:52:53.88266658Z level=info msg="Executing migration" id="save existing dashboard data in dashboard_version table v1"
policy-db-migrator | --------------
policy-pap | max.poll.interval.ms = 300000
kafka | [2024-04-26 08:53:00,408] INFO [ExpirationReaper-1-Produce]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
grafana | logger=migrator t=2024-04-26T08:52:53.883014255Z level=info msg="Migration successfully executed" id="save existing dashboard data in dashboard_version table v1" duration=352.305µs
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicy (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, type_name VARCHAR(255) NULL, type_version VARCHAR(255) NULL, PRIMARY KEY PK_TOSCAPOLICY (name, version))
policy-pap | max.poll.records = 500
kafka | [2024-04-26 08:53:00,411] INFO [ExpirationReaper-1-Fetch]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
grafana | logger=migrator t=2024-04-26T08:52:53.886286718Z level=info msg="Executing migration" id="alter dashboard_version.data to mediumtext v1"
policy-db-migrator | --------------
policy-pap | metadata.max.age.ms = 300000
kafka | [2024-04-26 08:53:00,412] INFO [ExpirationReaper-1-DeleteRecords]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
grafana | logger=migrator t=2024-04-26T08:52:53.88634959Z level=info msg="Migration successfully executed" id="alter dashboard_version.data to mediumtext v1" duration=63.152µs
policy-db-migrator |
policy-pap | metric.reporters = []
kafka | [2024-04-26 08:53:00,413] INFO [ExpirationReaper-1-ElectLeader]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
grafana | logger=migrator t=2024-04-26T08:52:53.891016594Z level=info msg="Executing migration" id="create team table"
policy-db-migrator |
policy-pap | metrics.num.samples = 2
kafka | [2024-04-26 08:53:00,415] INFO [ExpirationReaper-1-RemoteFetch]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
grafana | logger=migrator t=2024-04-26T08:52:53.891732615Z level=info msg="Migration successfully executed" id="create team table" duration=715.951µs
policy-db-migrator | > upgrade 0700-toscapolicytype.sql
policy-pap | metrics.recording.level = INFO
kafka | [2024-04-26 08:53:00,432] INFO [LogDirFailureHandler]: Starting (kafka.server.ReplicaManager$LogDirFailureHandler)
grafana | logger=migrator t=2024-04-26T08:52:53.896280664Z level=info msg="Executing migration" id="add index team.org_id"
policy-db-migrator | --------------
policy-pap | metrics.sample.window.ms = 30000
kafka | [2024-04-26 08:53:00,433] INFO [AddPartitionsToTxnSenderThread-1]: Starting (kafka.server.AddPartitionsToTxnManager)
grafana | logger=migrator t=2024-04-26T08:52:53.897226965Z level=info msg="Migration successfully executed" id="add index team.org_id" duration=946.441µs
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicytype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCAPOLICYTYPE (name, version))
policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor]
kafka | [2024-04-26 08:53:00,461] INFO Creating /brokers/ids/1 (is it secure? false) (kafka.zk.KafkaZkClient)
grafana | logger=migrator t=2024-04-26T08:52:53.900867425Z level=info msg="Executing migration" id="add unique index team_org_id_name"
policy-db-migrator | --------------
policy-pap | receive.buffer.bytes = 65536
kafka | [2024-04-26 08:53:00,488] INFO Stat of the created znode at /brokers/ids/1 is: 27,27,1714121580475,1714121580475,1,0,0,72057613014925313,258,0,27
grafana | logger=migrator t=2024-04-26T08:52:53.901746853Z level=info msg="Migration successfully executed" id="add unique index team_org_id_name" duration=879.348µs
policy-db-migrator |
policy-pap | reconnect.backoff.max.ms = 1000
kafka | (kafka.zk.KafkaZkClient)
grafana | logger=migrator t=2024-04-26T08:52:53.906426957Z level=info msg="Executing migration" id="Add column uid in team"
policy-db-migrator |
policy-pap | reconnect.backoff.ms = 50
kafka | [2024-04-26 08:53:00,489] INFO Registered broker 1 at path /brokers/ids/1 with addresses: PLAINTEXT://kafka:9092,PLAINTEXT_HOST://localhost:29092, czxid (broker epoch): 27 (kafka.zk.KafkaZkClient)
grafana | logger=migrator t=2024-04-26T08:52:53.911065719Z level=info msg="Migration successfully executed" id="Add column uid in team" duration=4.638162ms
policy-db-migrator | > upgrade 0710-toscapolicytypes.sql
policy-pap | request.timeout.ms = 30000
kafka | [2024-04-26 08:53:00,684] INFO [ControllerEventThread controllerId=1] Starting (kafka.controller.ControllerEventManager$ControllerEventThread)
grafana | logger=migrator t=2024-04-26T08:52:53.91520059Z level=info msg="Executing migration" id="Update uid column values in team"
policy-db-migrator | --------------
policy-pap | retry.backoff.ms = 100
kafka | [2024-04-26 08:53:00,691] INFO [ExpirationReaper-1-topic]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
grafana | logger=migrator t=2024-04-26T08:52:53.915365797Z level=info msg="Migration successfully executed" id="Update uid column values in team" duration=164.857µs
policy-pap | sasl.client.callback.handler.class = null
kafka | [2024-04-26 08:53:00,698] INFO [ExpirationReaper-1-Heartbeat]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicytypes (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCAPOLICYTYPES (name, version))
grafana | logger=migrator t=2024-04-26T08:52:53.945545423Z level=info msg="Executing migration" id="Add unique index team_org_id_uid"
kafka | [2024-04-26 08:53:00,698] INFO [ExpirationReaper-1-Rebalance]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
policy-db-migrator | --------------
policy-pap | sasl.jaas.config = null
grafana | logger=migrator t=2024-04-26T08:52:53.946897632Z level=info msg="Migration successfully executed" id="Add unique index team_org_id_uid" duration=1.351039ms
policy-db-migrator |
policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit
kafka | [2024-04-26 08:53:00,712] INFO Successfully created /controller_epoch with initial epoch 0 (kafka.zk.KafkaZkClient)
grafana | logger=migrator t=2024-04-26T08:52:53.97916513Z level=info msg="Executing migration" id="create team member table"
policy-pap | sasl.kerberos.min.time.before.relogin = 60000
kafka | [2024-04-26 08:53:00,714] INFO [GroupCoordinator 1]: Starting up. (kafka.coordinator.group.GroupCoordinator)
policy-db-migrator |
grafana | logger=migrator t=2024-04-26T08:52:53.980317941Z level=info msg="Migration successfully executed" id="create team member table" duration=1.15329ms
kafka | [2024-04-26 08:53:00,723] INFO [Controller id=1] 1 successfully elected as the controller. Epoch incremented to 1 and epoch zk version is now 1 (kafka.controller.KafkaController)
policy-db-migrator | > upgrade 0720-toscapolicytypes_toscapolicytype.sql
policy-pap | sasl.kerberos.service.name = null
grafana | logger=migrator t=2024-04-26T08:52:53.985487616Z level=info msg="Executing migration" id="add index team_member.org_id"
policy-db-migrator | --------------
policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05
kafka | [2024-04-26 08:53:00,724] INFO [GroupCoordinator 1]: Startup complete. (kafka.coordinator.group.GroupCoordinator)
grafana | logger=migrator t=2024-04-26T08:52:53.986312862Z level=info msg="Migration successfully executed" id="add index team_member.org_id" duration=825.046µs
policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8
kafka | [2024-04-26 08:53:00,728] INFO [Controller id=1] Creating FeatureZNode at path: /feature with contents: FeatureZNode(2,Enabled,Map()) (kafka.controller.KafkaController)
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicytypes_toscapolicytype (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCAPOLICYTYPES_TOSCAPOLICYTYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion))
policy-pap | sasl.login.callback.handler.class = null
kafka | [2024-04-26 08:53:00,733] INFO Feature ZK node created at path: /feature (kafka.server.FinalizedFeatureChangeListener)
policy-db-migrator | --------------
grafana | logger=migrator t=2024-04-26T08:52:53.98994508Z level=info msg="Executing migration" id="add unique index team_member_org_id_team_id_user_id"
policy-pap | sasl.login.class = null
kafka | [2024-04-26 08:53:00,745] INFO [TransactionCoordinator id=1] Starting up. (kafka.coordinator.transaction.TransactionCoordinator)
policy-db-migrator |
grafana | logger=migrator t=2024-04-26T08:52:53.991261058Z level=info msg="Migration successfully executed" id="add unique index team_member_org_id_team_id_user_id" duration=1.314448ms
policy-pap | sasl.login.connect.timeout.ms = null
kafka | [2024-04-26 08:53:00,749] INFO [TxnMarkerSenderThread-1]: Starting (kafka.coordinator.transaction.TransactionMarkerChannelManager)
policy-db-migrator |
grafana | logger=migrator t=2024-04-26T08:52:53.995397648Z level=info msg="Executing migration" id="add index team_member.team_id"
policy-pap | sasl.login.read.timeout.ms = null
kafka | [2024-04-26 08:53:00,749] INFO [TransactionCoordinator id=1] Startup complete. (kafka.coordinator.transaction.TransactionCoordinator)
policy-db-migrator | > upgrade 0730-toscaproperty.sql
grafana | logger=migrator t=2024-04-26T08:52:53.996825061Z level=info msg="Migration successfully executed" id="add index team_member.team_id" duration=1.426963ms
policy-pap | sasl.login.refresh.buffer.seconds = 300
kafka | [2024-04-26 08:53:00,764] INFO [MetadataCache brokerId=1] Updated cache from existing None to latest Features(version=3.6-IV2, finalizedFeatures={}, finalizedFeaturesEpoch=0). (kafka.server.metadata.ZkMetadataCache)
policy-db-migrator | --------------
grafana | logger=migrator t=2024-04-26T08:52:54.00117222Z level=info msg="Executing migration" id="Add column email to team table"
policy-pap | sasl.login.refresh.min.period.seconds = 60
kafka | [2024-04-26 08:53:00,764] INFO [Controller id=1] Registering handlers (kafka.controller.KafkaController)
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscaproperty (DEFAULTVALUE VARCHAR(255) DEFAULT NULL, `DESCRIPTION` VARCHAR(255) DEFAULT NULL, ENTRYSCHEMA LONGBLOB DEFAULT NULL, REQUIRED BOOLEAN DEFAULT 0, STATUS INT DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, PRIMARY KEY PK_TOSCAPROPERTY (parentLocalName, localName, parentKeyVersion, parentKeyName))
grafana | logger=migrator t=2024-04-26T08:52:54.005663186Z level=info msg="Migration successfully executed" id="Add column email to team table" duration=4.492146ms
policy-pap | sasl.login.refresh.window.factor = 0.8
kafka | [2024-04-26 08:53:00,774] INFO [Controller id=1] Deleting log dir event notifications (kafka.controller.KafkaController)
policy-db-migrator | --------------
grafana | logger=migrator t=2024-04-26T08:52:54.00919674Z level=info msg="Executing migration" id="Add column external to team_member table"
kafka | [2024-04-26 08:53:00,778] INFO [Controller id=1] Deleting isr change notifications (kafka.controller.KafkaController)
policy-db-migrator |
policy-pap | sasl.login.refresh.window.jitter = 0.05
grafana | logger=migrator t=2024-04-26T08:52:54.013739609Z level=info msg="Migration successfully executed" id="Add column external to team_member table" duration=4.542929ms
kafka | [2024-04-26 08:53:00,781] INFO [Controller id=1] Initializing controller context (kafka.controller.KafkaController)
policy-db-migrator |
policy-pap | sasl.login.retry.backoff.max.ms = 10000
grafana | logger=migrator t=2024-04-26T08:52:54.017699421Z level=info msg="Executing migration" id="Add column permission to team_member table"
policy-db-migrator | > upgrade 0740-toscarelationshiptype.sql
policy-pap | sasl.login.retry.backoff.ms = 100
kafka | [2024-04-26 08:53:00,785] INFO [ExpirationReaper-1-AlterAcls]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
grafana | logger=migrator t=2024-04-26T08:52:54.0222478Z level=info msg="Migration successfully executed" id="Add column permission to team_member table" duration=4.543409ms
policy-pap | sasl.mechanism = GSSAPI
kafka | [2024-04-26 08:53:00,802] INFO [Controller id=1] Initialized broker epochs cache: HashMap(1 -> 27) (kafka.controller.KafkaController)
policy-db-migrator | --------------
grafana | logger=migrator t=2024-04-26T08:52:54.026952866Z level=info msg="Executing migration" id="create dashboard acl table"
policy-pap | sasl.oauthbearer.clock.skew.seconds = 30
kafka | [2024-04-26 08:53:00,808] DEBUG [Controller id=1] Register BrokerModifications handler for Set(1) (kafka.controller.KafkaController)
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarelationshiptype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCARELATIONSHIPTYPE (name, version))
policy-pap | sasl.oauthbearer.expected.audience = null
kafka | [2024-04-26 08:53:00,810] INFO [/config/changes-event-process-thread]: Starting (kafka.common.ZkNodeChangeNotificationListener$ChangeEventProcessThread)
grafana | logger=migrator t=2024-04-26T08:52:54.027893806Z level=info msg="Migration successfully executed" id="create dashboard acl table" duration=940.541µs
policy-db-migrator | --------------
policy-pap | sasl.oauthbearer.expected.issuer = null
kafka | [2024-04-26 08:53:00,814] DEBUG [Channel manager on controller 1]: Controller 1 trying to connect to broker 1 (kafka.controller.ControllerChannelManager)
grafana | logger=migrator t=2024-04-26T08:52:54.031606488Z level=info msg="Executing migration" id="add index dashboard_acl_dashboard_id"
policy-db-migrator |
policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
kafka | [2024-04-26 08:53:00,823] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1] Enabling request processing. (kafka.network.SocketServer)
grafana | logger=migrator t=2024-04-26T08:52:54.032466876Z level=info msg="Migration successfully executed" id="add index dashboard_acl_dashboard_id" duration=859.718µs
policy-db-migrator |
policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
kafka | [2024-04-26 08:53:00,824] INFO [RequestSendThread controllerId=1] Starting (kafka.controller.RequestSendThread)
grafana | logger=migrator t=2024-04-26T08:52:54.03668849Z level=info msg="Executing migration" id="add unique index dashboard_acl_dashboard_id_user_id"
policy-db-migrator | > upgrade 0750-toscarelationshiptypes.sql
policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
kafka | [2024-04-26 08:53:00,825] INFO [Controller id=1] Currently active brokers in the cluster: Set(1) (kafka.controller.KafkaController)
grafana | logger=migrator t=2024-04-26T08:52:54.037660672Z level=info msg="Migration successfully executed" id="add unique index dashboard_acl_dashboard_id_user_id" duration=972.082µs
policy-db-migrator | --------------
policy-pap | sasl.oauthbearer.jwks.endpoint.url = null
kafka | [2024-04-26 08:53:00,826] INFO [Controller id=1] Currently shutting brokers in the cluster: HashSet() (kafka.controller.KafkaController)
grafana | logger=migrator t=2024-04-26T08:52:54.042398709Z level=info msg="Executing migration" id="add unique index dashboard_acl_dashboard_id_team_id"
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarelationshiptypes (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCARELATIONSHIPTYPES (name, version))
policy-pap | sasl.oauthbearer.scope.claim.name = scope
kafka | [2024-04-26 08:53:00,827] INFO [Controller id=1] Current list of topics in the cluster: HashSet() (kafka.controller.KafkaController)
grafana | logger=migrator t=2024-04-26T08:52:54.043935756Z level=info msg="Migration successfully executed" id="add unique index dashboard_acl_dashboard_id_team_id" duration=1.533627ms
policy-db-migrator | --------------
policy-pap | sasl.oauthbearer.sub.claim.name = sub
kafka | [2024-04-26 08:53:00,827] INFO [Controller id=1] Fetching topic deletions in progress (kafka.controller.KafkaController)
grafana | logger=migrator t=2024-04-26T08:52:54.047583255Z level=info msg="Executing migration" id="add index dashboard_acl_user_id"
policy-db-migrator |
policy-pap | sasl.oauthbearer.token.endpoint.url = null
kafka | [2024-04-26 08:53:00,828] INFO Awaiting socket connections on 0.0.0.0:9092. (kafka.network.DataPlaneAcceptor)
grafana | logger=migrator t=2024-04-26T08:52:54.048991426Z level=info msg="Migration successfully executed" id="add index dashboard_acl_user_id" duration=1.407581ms
policy-db-migrator |
policy-pap | security.protocol = PLAINTEXT
kafka | [2024-04-26 08:53:00,829] INFO [Controller id=1] List of topics to be deleted: (kafka.controller.KafkaController)
policy-db-migrator | > upgrade 0760-toscarelationshiptypes_toscarelationshiptype.sql
kafka | [2024-04-26 08:53:00,830] INFO [Controller id=1] List of topics ineligible for deletion: (kafka.controller.KafkaController)
grafana | logger=migrator t=2024-04-26T08:52:54.053596167Z level=info msg="Executing migration" id="add index dashboard_acl_team_id"
policy-pap | security.providers = null
policy-db-migrator | --------------
kafka | [2024-04-26 08:53:00,830] INFO Awaiting socket connections on 0.0.0.0:29092. (kafka.network.DataPlaneAcceptor)
grafana | logger=migrator t=2024-04-26T08:52:54.054521497Z level=info msg="Migration successfully executed" id="add index dashboard_acl_team_id" duration=924.77µs
policy-pap | send.buffer.bytes = 131072
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarelationshiptypes_toscarelationshiptype (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCARELATIONSHIPTYPES_TOSCARELATIONSHIPTYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion))
kafka | [2024-04-26 08:53:00,830] INFO [Controller id=1] Initializing topic deletion manager (kafka.controller.KafkaController)
grafana | logger=migrator t=2024-04-26T08:52:54.059722554Z level=info msg="Executing migration" id="add index dashboard_acl_org_id_role"
policy-pap | session.timeout.ms = 45000
policy-db-migrator | --------------
kafka | [2024-04-26 08:53:00,830] INFO [Topic Deletion Manager 1] Initializing manager with initial deletions: Set(), initial ineligible deletions: HashSet() (kafka.controller.TopicDeletionManager)
grafana | logger=migrator t=2024-04-26T08:52:54.061490882Z level=info msg="Migration successfully executed" id="add index dashboard_acl_org_id_role" duration=1.767598ms
policy-pap | socket.connection.setup.timeout.max.ms = 30000
policy-db-migrator |
kafka | [2024-04-26 08:53:00,831] INFO [Controller id=1] Sending update metadata request (kafka.controller.KafkaController)
grafana | logger=migrator t=2024-04-26T08:52:54.066550783Z level=info msg="Executing migration" id="add index dashboard_permission"
policy-pap | socket.connection.setup.timeout.ms = 10000
policy-db-migrator |
kafka | [2024-04-26 08:53:00,834] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet(1) for 0 partitions (state.change.logger)
grafana | logger=migrator t=2024-04-26T08:52:54.068117451Z level=info msg="Migration successfully executed" id="add index dashboard_permission" duration=1.565858ms
policy-pap | ssl.cipher.suites = null
policy-db-migrator | > upgrade 0770-toscarequirement.sql
kafka | [2024-04-26 08:53:00,842] INFO Kafka version: 7.6.1-ccs (org.apache.kafka.common.utils.AppInfoParser)
grafana | logger=migrator t=2024-04-26T08:52:54.073332878Z level=info msg="Executing migration" id="save default acl rules in dashboard_acl table"
policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
policy-db-migrator | --------------
kafka | [2024-04-26 08:53:00,842] INFO Kafka commitId: 11e81ad2a49db00b1d2b8c731409cd09e563de67 (org.apache.kafka.common.utils.AppInfoParser)
grafana | logger=migrator t=2024-04-26T08:52:54.073803389Z level=info msg="Migration successfully executed" id="save default acl rules in dashboard_acl table" duration=471.121µs
policy-pap | ssl.endpoint.identification.algorithm = https
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarequirement (CAPABILITY VARCHAR(255) NULL, `DESCRIPTION` VARCHAR(255) NULL, NODE VARCHAR(255) NULL, RELATIONSHIP VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, type_name VARCHAR(255) NULL, type_version VARCHAR(255) NULL, PRIMARY KEY PK_TOSCAREQUIREMENT (name, version))
kafka | [2024-04-26 08:53:00,842] INFO Kafka startTimeMs: 1714121580837 (org.apache.kafka.common.utils.AppInfoParser)
grafana | logger=migrator t=2024-04-26T08:52:54.080486811Z level=info msg="Executing migration" id="delete acl rules for deleted dashboards and folders"
policy-pap | ssl.engine.factory.class = null
policy-db-migrator | --------------
kafka | [2024-04-26 08:53:00,843] INFO [KafkaServer id=1] started (kafka.server.KafkaServer)
policy-pap | ssl.key.password = null
grafana | logger=migrator t=2024-04-26T08:52:54.080746432Z level=info msg="Migration successfully executed" id="delete acl rules for deleted dashboards and folders" duration=260.121µs
policy-db-migrator |
kafka | [2024-04-26 08:53:00,848] INFO [ReplicaStateMachine controllerId=1] Initializing replica state (kafka.controller.ZkReplicaStateMachine)
policy-pap | ssl.keymanager.algorithm = SunX509
grafana | logger=migrator t=2024-04-26T08:52:54.084976806Z level=info msg="Executing migration" id="create tag table"
policy-db-migrator |
kafka | [2024-04-26 08:53:00,848] INFO [ReplicaStateMachine controllerId=1] Triggering online replica state changes (kafka.controller.ZkReplicaStateMachine)
policy-pap | ssl.keystore.certificate.chain = null
grafana | logger=migrator t=2024-04-26T08:52:54.08618761Z level=info msg="Migration successfully executed" id="create tag table" duration=1.212314ms
policy-db-migrator | > upgrade 0780-toscarequirements.sql
kafka | [2024-04-26 08:53:00,853] INFO [ReplicaStateMachine controllerId=1] Triggering offline replica state changes (kafka.controller.ZkReplicaStateMachine)
policy-pap | ssl.keystore.key = null
grafana | logger=migrator t=2024-04-26T08:52:54.091598616Z level=info msg="Executing migration" id="add index tag.key_value"
policy-db-migrator | --------------
policy-pap | ssl.keystore.location = null
grafana | logger=migrator t=2024-04-26T08:52:54.093131332Z level=info msg="Migration successfully executed" id="add index tag.key_value" duration=1.533716ms
kafka | [2024-04-26 08:53:00,853] DEBUG [ReplicaStateMachine controllerId=1] Started replica state machine with initial state -> HashMap() (kafka.controller.ZkReplicaStateMachine)
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarequirements (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCAREQUIREMENTS (name, version))
policy-pap | ssl.keystore.password = null
grafana | logger=migrator t=2024-04-26T08:52:54.097395179Z level=info msg="Executing migration" id="create login attempt table"
kafka | [2024-04-26 08:53:00,853] INFO
[PartitionStateMachine controllerId=1] Initializing partition state (kafka.controller.ZkPartitionStateMachine) policy-db-migrator | -------------- policy-pap | ssl.keystore.type = JKS grafana | logger=migrator t=2024-04-26T08:52:54.098295328Z level=info msg="Migration successfully executed" id="create login attempt table" duration=897.119µs kafka | [2024-04-26 08:53:00,854] INFO [PartitionStateMachine controllerId=1] Triggering online partition state changes (kafka.controller.ZkPartitionStateMachine) policy-db-migrator | policy-pap | ssl.protocol = TLSv1.3 grafana | logger=migrator t=2024-04-26T08:52:54.101971788Z level=info msg="Executing migration" id="add index login_attempt.username" kafka | [2024-04-26 08:53:00,857] DEBUG [PartitionStateMachine controllerId=1] Started partition state machine with initial state -> HashMap() (kafka.controller.ZkPartitionStateMachine) policy-db-migrator | policy-pap | ssl.provider = null grafana | logger=migrator t=2024-04-26T08:52:54.102906779Z level=info msg="Migration successfully executed" id="add index login_attempt.username" duration=927.061µs kafka | [2024-04-26 08:53:00,857] INFO [Controller id=1] Ready to serve as the new controller with epoch 1 (kafka.controller.KafkaController) policy-db-migrator | > upgrade 0790-toscarequirements_toscarequirement.sql policy-pap | ssl.secure.random.implementation = null grafana | logger=migrator t=2024-04-26T08:52:54.1079885Z level=info msg="Executing migration" id="drop index IDX_login_attempt_username - v1" kafka | [2024-04-26 08:53:00,868] INFO [Controller id=1] Partitions undergoing preferred replica election: (kafka.controller.KafkaController) policy-db-migrator | -------------- policy-pap | ssl.trustmanager.algorithm = PKIX grafana | logger=migrator t=2024-04-26T08:52:54.109094699Z level=info msg="Migration successfully executed" id="drop index IDX_login_attempt_username - v1" duration=1.106429ms kafka | [2024-04-26 08:53:00,868] INFO [Controller id=1] Partitions that completed 
preferred replica election: (kafka.controller.KafkaController) policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarequirements_toscarequirement (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCAREQUIREMENTS_TOSCAREQUIREMENT (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) policy-pap | ssl.truststore.certificates = null grafana | logger=migrator t=2024-04-26T08:52:54.112416304Z level=info msg="Executing migration" id="Rename table login_attempt to login_attempt_tmp_qwerty - v1" kafka | [2024-04-26 08:53:00,868] INFO [RequestSendThread controllerId=1] Controller 1 connected to kafka:9092 (id: 1 rack: null) for sending state change requests (kafka.controller.RequestSendThread) policy-db-migrator | -------------- policy-pap | ssl.truststore.location = null grafana | logger=migrator t=2024-04-26T08:52:54.130624138Z level=info msg="Migration successfully executed" id="Rename table login_attempt to login_attempt_tmp_qwerty - v1" duration=18.204515ms kafka | [2024-04-26 08:53:00,868] INFO [Controller id=1] Skipping preferred replica election for partitions due to topic deletion: (kafka.controller.KafkaController) policy-db-migrator | policy-pap | ssl.truststore.password = null grafana | logger=migrator t=2024-04-26T08:52:54.135088533Z level=info msg="Executing migration" id="create login_attempt v2" kafka | [2024-04-26 08:53:00,869] INFO [Controller id=1] Resuming preferred replica election for partitions: (kafka.controller.KafkaController) policy-db-migrator | policy-pap | ssl.truststore.type = JKS grafana | logger=migrator t=2024-04-26T08:52:54.135853606Z level=info msg="Migration successfully executed" id="create login_attempt v2" duration=765.043µs kafka | [2024-04-26 08:53:00,870] INFO [Controller id=1] 
Starting replica leader election (PREFERRED) for partitions triggered by ZkTriggered (kafka.controller.KafkaController) policy-db-migrator | > upgrade 0800-toscaservicetemplate.sql policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer grafana | logger=migrator t=2024-04-26T08:52:54.138754622Z level=info msg="Executing migration" id="create index IDX_login_attempt_username - v2" kafka | [2024-04-26 08:53:00,885] INFO [Controller id=1] Starting the controller scheduler (kafka.controller.KafkaController) policy-db-migrator | -------------- policy-pap | grafana | logger=migrator t=2024-04-26T08:52:54.140856844Z level=info msg="Migration successfully executed" id="create index IDX_login_attempt_username - v2" duration=2.101192ms kafka | [2024-04-26 08:53:00,953] TRACE [Controller id=1 epoch=1] Received response UpdateMetadataResponseData(errorCode=0) for request UPDATE_METADATA with correlation id 0 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger) policy-db-migrator | CREATE TABLE IF NOT EXISTS toscaservicetemplate (`DESCRIPTION` VARCHAR(255) NULL, TOSCADEFINITIONSVERSION VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, capabilityTypesVersion VARCHAR(20) NULL, capabilityTypesName VARCHAR(120) NULL, dataTypesName VARCHAR(120) NULL, dataTypesVersion VARCHAR(20) NULL, nodeTypesVersion VARCHAR(20) NULL, nodeTypesName VARCHAR(120) NULL, policyTypesName VARCHAR(120) NULL, policyTypesVersion VARCHAR(20) NULL, relationshipTypesVersion VARCHAR(20) NULL, relationshipTypesName VARCHAR(120) NULL, topologyTemplateLocalName VARCHAR(120) NULL, topologyTemplateParentKeyName VARCHAR(120) NULL, topologyTemplateParentKeyVersion VARCHAR(15) NULL, topologyTemplateParentLocalName VARCHAR(120) NULL, PRIMARY KEY PK_TOSCASERVICETEMPLATE (name, version)) policy-pap | [2024-04-26T08:53:34.168+00:00|INFO|AppInfoParser|main] Kafka 
version: 3.6.1 grafana | logger=migrator t=2024-04-26T08:52:54.144536735Z level=info msg="Executing migration" id="copy login_attempt v1 to v2" kafka | [2024-04-26 08:53:00,985] INFO [zk-broker-1-to-controller-alter-partition-channel-manager]: Recorded new controller, from now on will use node kafka:9092 (id: 1 rack: null) (kafka.server.BrokerToControllerRequestThread) policy-db-migrator | -------------- policy-pap | [2024-04-26T08:53:34.168+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 grafana | logger=migrator t=2024-04-26T08:52:54.145000946Z level=info msg="Migration successfully executed" id="copy login_attempt v1 to v2" duration=464.88µs kafka | [2024-04-26 08:53:00,992] INFO [zk-broker-1-to-controller-forwarding-channel-manager]: Recorded new controller, from now on will use node kafka:9092 (id: 1 rack: null) (kafka.server.BrokerToControllerRequestThread) policy-db-migrator | policy-pap | [2024-04-26T08:53:34.168+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1714121614168 grafana | logger=migrator t=2024-04-26T08:52:54.148619033Z level=info msg="Executing migration" id="drop login_attempt_tmp_qwerty" kafka | [2024-04-26 08:53:05,887] INFO [Controller id=1] Processing automatic preferred replica leader election (kafka.controller.KafkaController) policy-db-migrator | policy-pap | [2024-04-26T08:53:34.168+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Subscribed to topic(s): policy-pdp-pap kafka | [2024-04-26 08:53:05,888] TRACE [Controller id=1] Checking need to trigger auto leader balancing (kafka.controller.KafkaController) policy-db-migrator | > upgrade 0810-toscatopologytemplate.sql grafana | logger=migrator t=2024-04-26T08:52:54.149226599Z level=info msg="Migration successfully executed" id="drop login_attempt_tmp_qwerty" duration=607.846µs policy-pap | [2024-04-26T08:53:34.168+00:00|INFO|ServiceManager|main] Policy PAP starting topics kafka | [2024-04-26 08:53:34,708] INFO Creating topic 
__consumer_offsets with configuration {compression.type=producer, cleanup.policy=compact, segment.bytes=104857600} and initial partition assignment HashMap(0 -> ArrayBuffer(1), 1 -> ArrayBuffer(1), 2 -> ArrayBuffer(1), 3 -> ArrayBuffer(1), 4 -> ArrayBuffer(1), 5 -> ArrayBuffer(1), 6 -> ArrayBuffer(1), 7 -> ArrayBuffer(1), 8 -> ArrayBuffer(1), 9 -> ArrayBuffer(1), 10 -> ArrayBuffer(1), 11 -> ArrayBuffer(1), 12 -> ArrayBuffer(1), 13 -> ArrayBuffer(1), 14 -> ArrayBuffer(1), 15 -> ArrayBuffer(1), 16 -> ArrayBuffer(1), 17 -> ArrayBuffer(1), 18 -> ArrayBuffer(1), 19 -> ArrayBuffer(1), 20 -> ArrayBuffer(1), 21 -> ArrayBuffer(1), 22 -> ArrayBuffer(1), 23 -> ArrayBuffer(1), 24 -> ArrayBuffer(1), 25 -> ArrayBuffer(1), 26 -> ArrayBuffer(1), 27 -> ArrayBuffer(1), 28 -> ArrayBuffer(1), 29 -> ArrayBuffer(1), 30 -> ArrayBuffer(1), 31 -> ArrayBuffer(1), 32 -> ArrayBuffer(1), 33 -> ArrayBuffer(1), 34 -> ArrayBuffer(1), 35 -> ArrayBuffer(1), 36 -> ArrayBuffer(1), 37 -> ArrayBuffer(1), 38 -> ArrayBuffer(1), 39 -> ArrayBuffer(1), 40 -> ArrayBuffer(1), 41 -> ArrayBuffer(1), 42 -> ArrayBuffer(1), 43 -> ArrayBuffer(1), 44 -> ArrayBuffer(1), 45 -> ArrayBuffer(1), 46 -> ArrayBuffer(1), 47 -> ArrayBuffer(1), 48 -> ArrayBuffer(1), 49 -> ArrayBuffer(1)) (kafka.zk.AdminZkClient) policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-26T08:52:54.152698571Z level=info msg="Executing migration" id="create user auth table" policy-pap | [2024-04-26T08:53:34.168+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=policy-pap, consumerInstance=cecb7d84-0274-42c7-b3cd-cefaad5f8f13, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-heartbeat,5,main], topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, 
toString()=TopicBase [servers=[kafka:9092], topic=policy-heartbeat, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting kafka | [2024-04-26 08:53:34,709] INFO Creating topic policy-pdp-pap with configuration {} and initial partition assignment HashMap(0 -> ArrayBuffer(1)) (kafka.zk.AdminZkClient) policy-db-migrator | CREATE TABLE IF NOT EXISTS toscatopologytemplate (`description` VARCHAR(255) NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, nodeTemplatessVersion VARCHAR(20) NULL, nodeTemplatesName VARCHAR(120) NULL, policyVersion VARCHAR(20) NULL, policyName VARCHAR(120) NULL, PRIMARY KEY PK_TOSCATOPOLOGYTEMPLATE (parentLocalName, localName, parentKeyVersion, parentKeyName)) grafana | logger=migrator t=2024-04-26T08:52:54.153286847Z level=info msg="Migration successfully executed" id="create user auth table" duration=587.716µs policy-pap | [2024-04-26T08:53:34.168+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=c2be2c80-205d-4227-951f-9a7c12c2d5ee, consumerInstance=policy-pap, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-pdp-pap,5,main], topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting kafka | [2024-04-26 08:53:34,712] DEBUG [Controller id=1] There is no producerId block yet (Zk path version 0), creating the first block (kafka.controller.KafkaController) policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-26T08:52:54.157443378Z level=info msg="Executing 
migration" id="create index IDX_user_auth_auth_module_auth_id - v1" policy-pap | [2024-04-26T08:53:34.168+00:00|INFO|InlineBusTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=396c773d-1fdc-4e6e-b2c1-c73ec689b5af, alive=false, publisher=null]]: starting kafka | [2024-04-26 08:53:34,723] INFO [Controller id=1] Acquired new producerId block ProducerIdsBlock(assignedBrokerId=1, firstProducerId=0, size=1000) by writing to Zk with path version 1 (kafka.controller.KafkaController) policy-db-migrator | grafana | logger=migrator t=2024-04-26T08:52:54.158352467Z level=info msg="Migration successfully executed" id="create index IDX_user_auth_auth_module_auth_id - v1" duration=904.689µs policy-pap | [2024-04-26T08:53:34.183+00:00|INFO|ProducerConfig|main] ProducerConfig values: policy-db-migrator | kafka | [2024-04-26 08:53:34,760] INFO [Controller id=1] New topics: [Set(policy-pdp-pap, __consumer_offsets)], deleted topics: [HashSet()], new partition replica assignment [Set(TopicIdReplicaAssignment(policy-pdp-pap,Some(LsF0wvMZRGucbuSK9bj6lg),Map(policy-pdp-pap-0 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=))), TopicIdReplicaAssignment(__consumer_offsets,Some(buULhLhhTIOJjtnuKy1oCQ),HashMap(__consumer_offsets-22 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-30 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-25 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-35 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-37 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-38 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-13 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-8 -> ReplicaAssignment(replicas=1, 
addingReplicas=, removingReplicas=), __consumer_offsets-21 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-4 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-27 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-7 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-9 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-46 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-41 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-33 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-23 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-49 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-47 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-16 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-28 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-31 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-36 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-42 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-3 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-18 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-15 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-24 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-17 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-48 -> 
ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-19 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-11 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-2 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-43 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-6 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-14 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-20 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-0 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-44 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-39 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-12 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-45 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-1 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-5 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-26 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-29 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-34 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-10 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-32 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-40 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=))))] (kafka.controller.KafkaController) grafana | logger=migrator t=2024-04-26T08:52:54.161327398Z 
level=info msg="Executing migration" id="alter user_auth.auth_id to length 190" policy-pap | acks = -1 policy-db-migrator | > upgrade 0820-toscatrigger.sql kafka | [2024-04-26 08:53:34,765] INFO [Controller id=1] New partition creation callback for __consumer_offsets-22,__consumer_offsets-30,__consumer_offsets-25,__consumer_offsets-35,__consumer_offsets-38,__consumer_offsets-13,__consumer_offsets-8,__consumer_offsets-21,__consumer_offsets-4,__consumer_offsets-27,__consumer_offsets-7,__consumer_offsets-9,__consumer_offsets-46,__consumer_offsets-41,__consumer_offsets-33,__consumer_offsets-23,__consumer_offsets-49,__consumer_offsets-47,__consumer_offsets-16,__consumer_offsets-28,__consumer_offsets-31,__consumer_offsets-36,__consumer_offsets-42,__consumer_offsets-3,__consumer_offsets-18,__consumer_offsets-37,policy-pdp-pap-0,__consumer_offsets-15,__consumer_offsets-24,__consumer_offsets-17,__consumer_offsets-48,__consumer_offsets-19,__consumer_offsets-11,__consumer_offsets-2,__consumer_offsets-43,__consumer_offsets-6,__consumer_offsets-14,__consumer_offsets-20,__consumer_offsets-0,__consumer_offsets-44,__consumer_offsets-39,__consumer_offsets-12,__consumer_offsets-45,__consumer_offsets-1,__consumer_offsets-5,__consumer_offsets-26,__consumer_offsets-29,__consumer_offsets-34,__consumer_offsets-10,__consumer_offsets-32,__consumer_offsets-40 (kafka.controller.KafkaController) grafana | logger=migrator t=2024-04-26T08:52:54.161394921Z level=info msg="Migration successfully executed" id="alter user_auth.auth_id to length 190" duration=68.193µs policy-db-migrator | -------------- kafka | [2024-04-26 08:53:34,769] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-22 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-pap | auto.include.jmx.reporter = true grafana | logger=migrator t=2024-04-26T08:52:54.164534128Z level=info msg="Executing migration" id="Add OAuth access token to user_auth" kafka | [2024-04-26 
08:53:34,769] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-30 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-pap | batch.size = 16384 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscatrigger (ACTION VARCHAR(255) NULL, toscaCondition LONGBLOB DEFAULT NULL, toscaConstraint LONGBLOB DEFAULT NULL, `DESCRIPTION` VARCHAR(255) NULL, EVALUATIONS INT DEFAULT NULL, EVENTTYPE VARCHAR(255) NULL, METHOD VARCHAR(255) NULL, `PERIOD` LONGBLOB DEFAULT NULL, SCHEDULE LONGBLOB DEFAULT NULL, TARGETFILTER LONGBLOB DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_TOSCATRIGGER (parentLocalName, localName, parentKeyVersion, parentKeyName)) grafana | logger=migrator t=2024-04-26T08:52:54.170578521Z level=info msg="Migration successfully executed" id="Add OAuth access token to user_auth" duration=6.044233ms kafka | [2024-04-26 08:53:34,769] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-25 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-pap | bootstrap.servers = [kafka:9092] policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-26T08:52:54.174330725Z level=info msg="Executing migration" id="Add OAuth refresh token to user_auth" kafka | [2024-04-26 08:53:34,769] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-35 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-pap | buffer.memory = 33554432 policy-db-migrator | grafana | logger=migrator t=2024-04-26T08:52:54.182738622Z level=info msg="Migration successfully executed" id="Add OAuth refresh token to user_auth" duration=8.407086ms kafka | [2024-04-26 08:53:34,770] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-38 state from NonExistentPartition to 
NewPartition with assigned replicas 1 (state.change.logger) policy-pap | client.dns.lookup = use_all_dns_ips policy-db-migrator | grafana | logger=migrator t=2024-04-26T08:52:54.186206883Z level=info msg="Executing migration" id="Add OAuth token type to user_auth" kafka | [2024-04-26 08:53:34,770] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-13 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-pap | client.id = producer-1 policy-db-migrator | > upgrade 0830-FK_ToscaNodeTemplate_capabilitiesName.sql grafana | logger=migrator t=2024-04-26T08:52:54.189733967Z level=info msg="Migration successfully executed" id="Add OAuth token type to user_auth" duration=3.523934ms kafka | [2024-04-26 08:53:34,770] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-8 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-pap | compression.type = none policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-26T08:52:54.192840243Z level=info msg="Executing migration" id="Add OAuth expiry to user_auth" kafka | [2024-04-26 08:53:34,770] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-21 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-pap | connections.max.idle.ms = 540000 policy-db-migrator | CREATE INDEX FK_ToscaNodeTemplate_capabilitiesName ON toscanodetemplate(capabilitiesName, capabilitiesVersion) grafana | logger=migrator t=2024-04-26T08:52:54.197897563Z level=info msg="Migration successfully executed" id="Add OAuth expiry to user_auth" duration=5.05641ms kafka | [2024-04-26 08:53:34,770] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-4 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-pap | delivery.timeout.ms = 120000 policy-db-migrator | -------------- grafana | logger=migrator 
t=2024-04-26T08:52:54.20218271Z level=info msg="Executing migration" id="Add index to user_id column in user_auth" kafka | [2024-04-26 08:53:34,770] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-27 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-pap | enable.idempotence = true policy-db-migrator | grafana | logger=migrator t=2024-04-26T08:52:54.203046887Z level=info msg="Migration successfully executed" id="Add index to user_id column in user_auth" duration=863.837µs kafka | [2024-04-26 08:53:34,770] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-7 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-pap | interceptor.classes = [] policy-db-migrator | grafana | logger=migrator t=2024-04-26T08:52:54.206243388Z level=info msg="Executing migration" id="Add OAuth ID token to user_auth" kafka | [2024-04-26 08:53:34,770] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-9 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-pap | key.serializer = class org.apache.kafka.common.serialization.StringSerializer policy-db-migrator | > upgrade 0840-FK_ToscaNodeTemplate_requirementsName.sql grafana | logger=migrator t=2024-04-26T08:52:54.211253375Z level=info msg="Migration successfully executed" id="Add OAuth ID token to user_auth" duration=5.009987ms kafka | [2024-04-26 08:53:34,771] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-46 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-pap | linger.ms = 0 policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-26T08:52:54.215989282Z level=info msg="Executing migration" id="create server_lock table" kafka | [2024-04-26 08:53:34,771] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-41 state from NonExistentPartition 
to NewPartition with assigned replicas 1 (state.change.logger) policy-pap | max.block.ms = 60000 policy-db-migrator | CREATE INDEX FK_ToscaNodeTemplate_requirementsName ON toscanodetemplate(requirementsName, requirementsVersion) grafana | logger=migrator t=2024-04-26T08:52:54.216748535Z level=info msg="Migration successfully executed" id="create server_lock table" duration=759.403µs kafka | [2024-04-26 08:53:34,771] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-33 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-pap | max.in.flight.requests.per.connection = 5 policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-26T08:52:54.221164618Z level=info msg="Executing migration" id="add index server_lock.operation_uid" kafka | [2024-04-26 08:53:34,771] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-23 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-pap | max.request.size = 1048576 policy-db-migrator | grafana | logger=migrator t=2024-04-26T08:52:54.222053247Z level=info msg="Migration successfully executed" id="add index server_lock.operation_uid" duration=888.359µs kafka | [2024-04-26 08:53:34,771] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-49 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-pap | metadata.max.age.ms = 300000 policy-db-migrator | grafana | logger=migrator t=2024-04-26T08:52:54.225620802Z level=info msg="Executing migration" id="create user auth token table" kafka | [2024-04-26 08:53:34,771] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-47 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-pap | metadata.max.idle.ms = 300000 policy-db-migrator | > upgrade 0850-FK_ToscaNodeType_requirementsName.sql grafana | logger=migrator 
grafana | logger=migrator t=2024-04-26T08:52:54.227021604Z level=info msg="Migration successfully executed" id="create user auth token table" duration=1.400092ms
kafka | [2024-04-26 08:53:34,772] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-16 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
policy-pap | metric.reporters = []
policy-db-migrator | --------------
grafana | logger=migrator t=2024-04-26T08:52:54.233087228Z level=info msg="Executing migration" id="add unique index user_auth_token.auth_token"
kafka | [2024-04-26 08:53:34,772] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-28 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
policy-pap | metrics.num.samples = 2
policy-db-migrator | CREATE INDEX FK_ToscaNodeType_requirementsName ON toscanodetype(requirementsName, requirementsVersion)
grafana | logger=migrator t=2024-04-26T08:52:54.2347146Z level=info msg="Migration successfully executed" id="add unique index user_auth_token.auth_token" duration=1.627561ms
kafka | [2024-04-26 08:53:34,772] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-31 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
policy-pap | metrics.recording.level = INFO
policy-db-migrator | --------------
grafana | logger=migrator t=2024-04-26T08:52:54.239399053Z level=info msg="Executing migration" id="add unique index user_auth_token.prev_auth_token"
kafka | [2024-04-26 08:53:34,772] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-36 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
policy-pap | metrics.sample.window.ms = 30000
policy-db-migrator |
grafana | logger=migrator t=2024-04-26T08:52:54.241027445Z level=info msg="Migration successfully executed" id="add unique index user_auth_token.prev_auth_token" duration=1.628212ms
kafka | [2024-04-26 08:53:34,772] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-42 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
policy-pap | partitioner.adaptive.partitioning.enable = true
policy-db-migrator |
grafana | logger=migrator t=2024-04-26T08:52:54.244562979Z level=info msg="Executing migration" id="add index user_auth_token.user_id"
kafka | [2024-04-26 08:53:34,772] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-3 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
policy-pap | partitioner.availability.timeout.ms = 0
policy-db-migrator | > upgrade 0860-FK_ToscaServiceTemplate_capabilityTypesName.sql
grafana | logger=migrator t=2024-04-26T08:52:54.245578954Z level=info msg="Migration successfully executed" id="add index user_auth_token.user_id" duration=1.015355ms
kafka | [2024-04-26 08:53:34,772] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-18 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
policy-pap | partitioner.class = null
policy-db-migrator | --------------
grafana | logger=migrator t=2024-04-26T08:52:54.249058695Z level=info msg="Executing migration" id="Add revoked_at to the user auth token"
policy-pap | partitioner.ignore.keys = false
kafka | [2024-04-26 08:53:34,773] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-37 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
policy-db-migrator | CREATE INDEX FK_ToscaServiceTemplate_capabilityTypesName ON toscaservicetemplate(capabilityTypesName, capabilityTypesVersion)
policy-pap | receive.buffer.bytes = 32768
grafana | logger=migrator t=2024-04-26T08:52:54.254650779Z level=info msg="Migration successfully executed" id="Add revoked_at to the user auth token" duration=5.591644ms
kafka | [2024-04-26 08:53:34,773] INFO [Controller id=1 epoch=1] Changed partition policy-pdp-pap-0 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
policy-db-migrator | --------------
policy-pap | reconnect.backoff.max.ms = 1000
grafana | logger=migrator t=2024-04-26T08:52:54.258916765Z level=info msg="Executing migration" id="add index user_auth_token.revoked_at"
kafka | [2024-04-26 08:53:34,773] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-15 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
policy-db-migrator |
policy-pap | reconnect.backoff.ms = 50
grafana | logger=migrator t=2024-04-26T08:52:54.259973332Z level=info msg="Migration successfully executed" id="add index user_auth_token.revoked_at" duration=1.056846ms
kafka | [2024-04-26 08:53:34,773] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-24 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
policy-db-migrator |
policy-pap | request.timeout.ms = 30000
grafana | logger=migrator t=2024-04-26T08:52:54.264347102Z level=info msg="Executing migration" id="create cache_data table"
kafka | [2024-04-26 08:53:34,773] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-17 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
policy-db-migrator | > upgrade 0870-FK_ToscaServiceTemplate_dataTypesName.sql
policy-pap | retries = 2147483647
grafana | logger=migrator t=2024-04-26T08:52:54.265815276Z level=info msg="Migration successfully executed" id="create cache_data table" duration=1.468024ms
kafka | [2024-04-26 08:53:34,773] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-48 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
policy-db-migrator | --------------
policy-pap | retry.backoff.ms = 100
grafana | logger=migrator t=2024-04-26T08:52:54.271819088Z level=info msg="Executing migration" id="add unique index cache_data.cache_key"
kafka | [2024-04-26 08:53:34,773] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-19 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
policy-db-migrator | CREATE INDEX FK_ToscaServiceTemplate_dataTypesName ON toscaservicetemplate(dataTypesName, dataTypesVersion)
policy-pap | sasl.client.callback.handler.class = null
grafana | logger=migrator t=2024-04-26T08:52:54.272966288Z level=info msg="Migration successfully executed" id="add unique index cache_data.cache_key" duration=1.15243ms
kafka | [2024-04-26 08:53:34,773] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-11 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
policy-db-migrator | --------------
policy-pap | sasl.jaas.config = null
grafana | logger=migrator t=2024-04-26T08:52:54.280245906Z level=info msg="Executing migration" id="create short_url table v1"
kafka | [2024-04-26 08:53:34,774] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-2 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
policy-db-migrator |
policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit
grafana | logger=migrator t=2024-04-26T08:52:54.281066131Z level=info msg="Migration successfully executed" id="create short_url table v1" duration=820.035µs
kafka | [2024-04-26 08:53:34,774] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-43 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
policy-db-migrator |
policy-pap | sasl.kerberos.min.time.before.relogin = 60000
grafana | logger=migrator t=2024-04-26T08:52:54.285086477Z level=info msg="Executing migration" id="add index short_url.org_id-uid"
kafka | [2024-04-26 08:53:34,774] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-6 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
policy-db-migrator | > upgrade 0880-FK_ToscaServiceTemplate_nodeTypesName.sql
policy-pap | sasl.kerberos.service.name = null
grafana | logger=migrator t=2024-04-26T08:52:54.28584588Z level=info msg="Migration successfully executed" id="add index short_url.org_id-uid" duration=759.573µs
kafka | [2024-04-26 08:53:34,774] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-14 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
policy-db-migrator | --------------
policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05
grafana | logger=migrator t=2024-04-26T08:52:54.290372168Z level=info msg="Executing migration" id="alter table short_url alter column created_by type to bigint"
kafka | [2024-04-26 08:53:34,774] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-20 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
policy-db-migrator | CREATE INDEX FK_ToscaServiceTemplate_nodeTypesName ON toscaservicetemplate(nodeTypesName, nodeTypesVersion)
policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8
grafana | logger=migrator t=2024-04-26T08:52:54.29042426Z level=info msg="Migration successfully executed" id="alter table short_url alter column created_by type to bigint" duration=51.902µs
kafka | [2024-04-26 08:53:34,774] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-0 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
policy-db-migrator | --------------
policy-pap | sasl.login.callback.handler.class = null
grafana | logger=migrator t=2024-04-26T08:52:54.296733855Z level=info msg="Executing migration" id="delete alert_definition table"
kafka | [2024-04-26 08:53:34,774] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-44 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
policy-db-migrator |
policy-pap | sasl.login.class = null
grafana | logger=migrator t=2024-04-26T08:52:54.296802298Z level=info msg="Migration successfully executed" id="delete alert_definition table" duration=68.133µs
kafka | [2024-04-26 08:53:34,774] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-39 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
policy-db-migrator |
grafana | logger=migrator t=2024-04-26T08:52:54.299998517Z level=info msg="Executing migration" id="recreate alert_definition table"
kafka | [2024-04-26 08:53:34,775] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-12 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
policy-pap | sasl.login.connect.timeout.ms = null
policy-db-migrator | > upgrade 0890-FK_ToscaServiceTemplate_policyTypesName.sql
kafka | [2024-04-26 08:53:34,775] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-45 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
policy-pap | sasl.login.read.timeout.ms = null
grafana | logger=migrator t=2024-04-26T08:52:54.30098734Z level=info msg="Migration successfully executed" id="recreate alert_definition table" duration=985.463µs
kafka | [2024-04-26 08:53:34,775] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-1 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
policy-pap | sasl.login.refresh.buffer.seconds = 300
grafana | logger=migrator t=2024-04-26T08:52:54.306814864Z level=info msg="Executing migration" id="add index in alert_definition on org_id and title columns"
policy-db-migrator | --------------
kafka | [2024-04-26 08:53:34,775] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-5 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
policy-pap | sasl.login.refresh.min.period.seconds = 60
grafana | logger=migrator t=2024-04-26T08:52:54.307851Z level=info msg="Migration successfully executed" id="add index in alert_definition on org_id and title columns" duration=1.037066ms
policy-db-migrator | CREATE INDEX FK_ToscaServiceTemplate_policyTypesName ON toscaservicetemplate(policyTypesName, policyTypesVersion)
kafka | [2024-04-26 08:53:34,775] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-26 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
policy-pap | sasl.login.refresh.window.factor = 0.8
grafana | logger=migrator t=2024-04-26T08:52:54.314168336Z level=info msg="Executing migration" id="add index in alert_definition on org_id and uid columns"
policy-db-migrator | --------------
kafka | [2024-04-26 08:53:34,775] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-29 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
policy-pap | sasl.login.refresh.window.jitter = 0.05
grafana | logger=migrator t=2024-04-26T08:52:54.315147658Z level=info msg="Migration successfully executed" id="add index in alert_definition on org_id and uid columns" duration=978.752µs
policy-db-migrator |
kafka | [2024-04-26 08:53:34,775] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-34 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
policy-pap | sasl.login.retry.backoff.max.ms = 10000
grafana | logger=migrator t=2024-04-26T08:52:54.318353539Z level=info msg="Executing migration" id="alter alert_definition table data column to mediumtext in mysql"
policy-db-migrator |
kafka | [2024-04-26 08:53:34,775] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-10 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
policy-pap | sasl.login.retry.backoff.ms = 100
grafana | logger=migrator t=2024-04-26T08:52:54.318424432Z level=info msg="Migration successfully executed" id="alter alert_definition table data column to mediumtext in mysql" duration=71.243µs
policy-db-migrator | > upgrade 0900-FK_ToscaServiceTemplate_relationshipTypesName.sql
kafka | [2024-04-26 08:53:34,775] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-32 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
policy-pap | sasl.mechanism = GSSAPI
grafana | logger=migrator t=2024-04-26T08:52:54.322158024Z level=info msg="Executing migration" id="drop index in alert_definition on org_id and title columns"
policy-db-migrator | --------------
kafka | [2024-04-26 08:53:34,776] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-40 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
policy-pap | sasl.oauthbearer.clock.skew.seconds = 30
grafana | logger=migrator t=2024-04-26T08:52:54.325514841Z level=info msg="Migration successfully executed" id="drop index in alert_definition on org_id and title columns" duration=3.356456ms
policy-db-migrator | CREATE INDEX FK_ToscaServiceTemplate_relationshipTypesName ON toscaservicetemplate(relationshipTypesName, relationshipTypesVersion)
kafka | [2024-04-26 08:53:34,776] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger)
policy-pap | sasl.oauthbearer.expected.audience = null
grafana | logger=migrator t=2024-04-26T08:52:54.342864767Z level=info msg="Executing migration" id="drop index in alert_definition on org_id and uid columns"
policy-db-migrator | --------------
kafka | [2024-04-26 08:53:34,789] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-32 from NonExistentReplica to NewReplica (state.change.logger)
policy-pap | sasl.oauthbearer.expected.issuer = null
grafana | logger=migrator t=2024-04-26T08:52:54.344256089Z level=info msg="Migration successfully executed" id="drop index in alert_definition on org_id and uid columns" duration=1.396971ms
policy-db-migrator |
kafka | [2024-04-26 08:53:34,789] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-5 from NonExistentReplica to NewReplica (state.change.logger)
policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
grafana | logger=migrator t=2024-04-26T08:52:54.34955199Z level=info msg="Executing migration" id="add unique index in alert_definition on org_id and title columns"
policy-db-migrator |
kafka | [2024-04-26 08:53:34,789] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-44 from NonExistentReplica to NewReplica (state.change.logger)
policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
grafana | logger=migrator t=2024-04-26T08:52:54.350524932Z level=info msg="Migration successfully executed" id="add unique index in alert_definition on org_id and title columns" duration=972.952µs
policy-db-migrator | > upgrade 0910-FK_ToscaTopologyTemplate_nodeTemplatesName.sql
kafka | [2024-04-26 08:53:34,789] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-48 from NonExistentReplica to NewReplica (state.change.logger)
policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
grafana | logger=migrator t=2024-04-26T08:52:54.355568212Z level=info msg="Executing migration" id="add unique index in alert_definition on org_id and uid columns"
policy-db-migrator | --------------
kafka | [2024-04-26 08:53:34,789] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-46 from NonExistentReplica to NewReplica (state.change.logger)
policy-pap | sasl.oauthbearer.jwks.endpoint.url = null
grafana | logger=migrator t=2024-04-26T08:52:54.356864408Z level=info msg="Migration successfully executed" id="add unique index in alert_definition on org_id and uid columns" duration=1.297056ms
policy-db-migrator | CREATE INDEX FK_ToscaTopologyTemplate_nodeTemplatesName ON toscatopologytemplate(nodeTemplatesName, nodeTemplatessVersion)
kafka | [2024-04-26 08:53:34,789] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-20 from NonExistentReplica to NewReplica (state.change.logger)
policy-pap | sasl.oauthbearer.scope.claim.name = scope
grafana | logger=migrator t=2024-04-26T08:52:54.363034198Z level=info msg="Executing migration" id="Add column paused in alert_definition"
policy-db-migrator | --------------
kafka | [2024-04-26 08:53:34,790] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition policy-pdp-pap-0 from NonExistentReplica to NewReplica (state.change.logger)
policy-pap | sasl.oauthbearer.sub.claim.name = sub
grafana | logger=migrator t=2024-04-26T08:52:54.372609355Z level=info msg="Migration successfully executed" id="Add column paused in alert_definition" duration=9.574927ms
policy-db-migrator |
kafka | [2024-04-26 08:53:34,790] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-43 from NonExistentReplica to NewReplica (state.change.logger)
policy-pap | sasl.oauthbearer.token.endpoint.url = null
grafana | logger=migrator t=2024-04-26T08:52:54.37845138Z level=info msg="Executing migration" id="drop alert_definition table"
policy-db-migrator |
kafka | [2024-04-26 08:53:34,790] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-24 from NonExistentReplica to NewReplica (state.change.logger)
policy-pap | security.protocol = PLAINTEXT
grafana | logger=migrator t=2024-04-26T08:52:54.379663053Z level=info msg="Migration successfully executed" id="drop alert_definition table" duration=1.211213ms
policy-db-migrator | > upgrade 0920-FK_ToscaTopologyTemplate_policyName.sql
kafka | [2024-04-26 08:53:34,791] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-6 from NonExistentReplica to NewReplica (state.change.logger)
policy-pap | security.providers = null
grafana | logger=migrator t=2024-04-26T08:52:54.385580612Z level=info msg="Executing migration" id="delete alert_definition_version table"
policy-db-migrator | --------------
kafka | [2024-04-26 08:53:34,791] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-18 from NonExistentReplica to NewReplica (state.change.logger)
policy-pap | send.buffer.bytes = 131072
grafana | logger=migrator t=2024-04-26T08:52:54.385787171Z level=info msg="Migration successfully executed" id="delete alert_definition_version table" duration=217.75µs
policy-db-migrator | CREATE INDEX FK_ToscaTopologyTemplate_policyName ON toscatopologytemplate(policyName, policyVersion)
kafka | [2024-04-26 08:53:34,791] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-21 from NonExistentReplica to NewReplica (state.change.logger)
policy-pap | socket.connection.setup.timeout.max.ms = 30000
grafana | logger=migrator t=2024-04-26T08:52:54.392769525Z level=info msg="Executing migration" id="recreate alert_definition_version table"
policy-db-migrator | --------------
kafka | [2024-04-26 08:53:34,791] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-1 from NonExistentReplica to NewReplica (state.change.logger)
policy-pap | socket.connection.setup.timeout.ms = 10000
grafana | logger=migrator t=2024-04-26T08:52:54.39425693Z level=info msg="Migration successfully executed" id="recreate alert_definition_version table" duration=1.490045ms
policy-db-migrator |
kafka | [2024-04-26 08:53:34,791] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-14 from NonExistentReplica to NewReplica (state.change.logger)
policy-pap | ssl.cipher.suites = null
grafana | logger=migrator t=2024-04-26T08:52:54.491168258Z level=info msg="Executing migration" id="add index in alert_definition_version table on alert_definition_id and version columns"
policy-db-migrator |
policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
kafka | [2024-04-26 08:53:34,791] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-34 from NonExistentReplica to NewReplica (state.change.logger)
grafana | logger=migrator t=2024-04-26T08:52:54.492926825Z level=info msg="Migration successfully executed" id="add index in alert_definition_version table on alert_definition_id and version columns" duration=1.753677ms
policy-db-migrator | > upgrade 0940-PdpPolicyStatus_PdpGroup.sql
policy-pap | ssl.endpoint.identification.algorithm = https
kafka | [2024-04-26 08:53:34,792] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-16 from NonExistentReplica to NewReplica (state.change.logger)
grafana | logger=migrator t=2024-04-26T08:52:54.496694199Z level=info msg="Executing migration" id="add index in alert_definition_version table on alert_definition_uid and version columns"
policy-db-migrator | --------------
policy-pap | ssl.engine.factory.class = null
kafka | [2024-04-26 08:53:34,792] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-29 from NonExistentReplica to NewReplica (state.change.logger)
grafana | logger=migrator t=2024-04-26T08:52:54.497931943Z level=info msg="Migration successfully executed" id="add index in alert_definition_version table on alert_definition_uid and version columns" duration=1.238244ms
policy-db-migrator | CREATE INDEX PdpPolicyStatus_PdpGroup ON pdppolicystatus(PDPGROUP)
policy-pap | ssl.key.password = null
kafka | [2024-04-26 08:53:34,792] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-11 from NonExistentReplica to NewReplica (state.change.logger)
grafana | logger=migrator t=2024-04-26T08:52:54.54784921Z level=info msg="Executing migration" id="alter alert_definition_version table data column to mediumtext in mysql"
policy-db-migrator | --------------
policy-pap | ssl.keymanager.algorithm = SunX509
kafka | [2024-04-26 08:53:34,792] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-0 from NonExistentReplica to NewReplica (state.change.logger)
grafana | logger=migrator t=2024-04-26T08:52:54.54804044Z level=info msg="Migration successfully executed" id="alter alert_definition_version table data column to mediumtext in mysql" duration=195.639µs
policy-db-migrator |
policy-pap | ssl.keystore.certificate.chain = null
kafka | [2024-04-26 08:53:34,792] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-22 from NonExistentReplica to NewReplica (state.change.logger)
grafana | logger=migrator t=2024-04-26T08:52:54.555125908Z level=info msg="Executing migration" id="drop alert_definition_version table"
policy-db-migrator |
policy-pap | ssl.keystore.key = null
kafka | [2024-04-26 08:53:34,792] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-47 from NonExistentReplica to NewReplica (state.change.logger)
grafana | logger=migrator t=2024-04-26T08:52:54.55677112Z level=info msg="Migration successfully executed" id="drop alert_definition_version table" duration=1.653892ms
policy-db-migrator | > upgrade 0950-TscaServiceTemplatetopologyTemplateParentLocalName.sql
policy-pap | ssl.keystore.location = null
kafka | [2024-04-26 08:53:34,792] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-36 from NonExistentReplica to NewReplica (state.change.logger)
grafana | logger=migrator t=2024-04-26T08:52:54.565845156Z level=info msg="Executing migration" id="create alert_instance table"
policy-db-migrator | --------------
policy-pap | ssl.keystore.password = null
kafka | [2024-04-26 08:53:34,792] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-28 from NonExistentReplica to NewReplica (state.change.logger)
grafana | logger=migrator t=2024-04-26T08:52:54.567264657Z level=info msg="Migration successfully executed" id="create alert_instance table" duration=1.419361ms
policy-db-migrator | CREATE INDEX TscaServiceTemplatetopologyTemplateParentLocalName ON toscaservicetemplate(topologyTemplateParentLocalName, topologyTemplateLocalName, topologyTemplateParentKeyVersion, topologyTemplateParentKeyName)
policy-pap | ssl.keystore.type = JKS
kafka | [2024-04-26 08:53:34,792] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-42 from NonExistentReplica to NewReplica (state.change.logger)
policy-db-migrator | --------------
grafana | logger=migrator t=2024-04-26T08:52:54.571761904Z level=info msg="Executing migration" id="add index in alert_instance table on def_org_id, def_uid and current_state columns"
policy-pap | ssl.protocol = TLSv1.3
kafka | [2024-04-26 08:53:34,792] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-9 from NonExistentReplica to NewReplica (state.change.logger)
policy-db-migrator |
grafana | logger=migrator t=2024-04-26T08:52:54.57440891Z level=info msg="Migration successfully executed" id="add index in alert_instance table on def_org_id, def_uid and current_state columns" duration=2.551761ms
policy-pap | ssl.provider = null
kafka | [2024-04-26 08:53:34,792] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-37 from NonExistentReplica to NewReplica (state.change.logger)
policy-db-migrator |
grafana | logger=migrator t=2024-04-26T08:52:54.58108101Z level=info msg="Executing migration" id="add index in alert_instance table on def_org_id, current_state columns"
policy-pap | ssl.secure.random.implementation = null
kafka | [2024-04-26 08:53:34,793] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-13 from NonExistentReplica to NewReplica (state.change.logger)
policy-db-migrator | > upgrade 0960-FK_ToscaNodeTemplate_capabilitiesName.sql
grafana | logger=migrator t=2024-04-26T08:52:54.582485842Z level=info msg="Migration successfully executed" id="add index in alert_instance table on def_org_id, current_state columns" duration=1.405342ms
policy-pap | ssl.trustmanager.algorithm = PKIX
kafka | [2024-04-26 08:53:34,793] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-30 from NonExistentReplica to NewReplica (state.change.logger)
policy-db-migrator | --------------
grafana | logger=migrator t=2024-04-26T08:52:54.588184971Z level=info msg="Executing migration" id="add column current_state_end to alert_instance"
policy-pap | ssl.truststore.certificates = null
kafka | [2024-04-26 08:53:34,793] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-35 from NonExistentReplica to NewReplica (state.change.logger)
policy-db-migrator | ALTER TABLE toscanodetemplate ADD CONSTRAINT FK_ToscaNodeTemplate_capabilitiesName FOREIGN KEY (capabilitiesName, capabilitiesVersion) REFERENCES toscacapabilityassignments (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT
grafana | logger=migrator t=2024-04-26T08:52:54.59529555Z level=info msg="Migration successfully executed" id="add column current_state_end to alert_instance" duration=7.105899ms
policy-pap | ssl.truststore.location = null
kafka | [2024-04-26 08:53:34,793] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-39 from NonExistentReplica to NewReplica (state.change.logger)
policy-db-migrator | --------------
grafana | logger=migrator t=2024-04-26T08:52:54.606042289Z level=info msg="Executing migration" id="remove index def_org_id, def_uid, current_state on alert_instance"
policy-pap | ssl.truststore.password = null
kafka | [2024-04-26 08:53:34,793] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-12 from NonExistentReplica to NewReplica (state.change.logger)
policy-db-migrator |
grafana | logger=migrator t=2024-04-26T08:52:54.606910198Z level=info msg="Migration successfully executed" id="remove index def_org_id, def_uid, current_state on alert_instance" duration=869.909µs
policy-pap | ssl.truststore.type = JKS
kafka | [2024-04-26 08:53:34,794] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-27 from NonExistentReplica to NewReplica (state.change.logger)
policy-db-migrator |
grafana | logger=migrator t=2024-04-26T08:52:54.610936483Z level=info msg="Executing migration" id="remove index def_org_id, current_state on alert_instance"
policy-pap | transaction.timeout.ms = 60000
kafka | [2024-04-26 08:53:34,794] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-45 from NonExistentReplica to NewReplica (state.change.logger)
policy-db-migrator | > upgrade 0970-FK_ToscaNodeTemplate_requirementsName.sql
grafana | logger=migrator t=2024-04-26T08:52:54.611629003Z level=info msg="Migration successfully executed" id="remove index def_org_id, current_state on alert_instance" duration=692µs
kafka | [2024-04-26 08:53:34,794] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-19 from NonExistentReplica to NewReplica (state.change.logger)
grafana | logger=migrator t=2024-04-26T08:52:54.662247291Z level=info msg="Executing migration" id="rename def_org_id to rule_org_id in alert_instance"
policy-pap | transactional.id = null
policy-db-migrator | --------------
kafka | [2024-04-26 08:53:34,794] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-49 from NonExistentReplica to NewReplica (state.change.logger)
policy-pap | value.serializer = class org.apache.kafka.common.serialization.StringSerializer
policy-db-migrator | ALTER TABLE toscanodetemplate ADD CONSTRAINT FK_ToscaNodeTemplate_requirementsName FOREIGN KEY (requirementsName, requirementsVersion) REFERENCES toscarequirements (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT
kafka | [2024-04-26 08:53:34,794] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-40 from NonExistentReplica to NewReplica (state.change.logger)
grafana | logger=migrator t=2024-04-26T08:52:54.690159969Z level=info msg="Migration successfully executed" id="rename def_org_id to rule_org_id in alert_instance" duration=27.914648ms
policy-pap |
policy-db-migrator | --------------
kafka | [2024-04-26 08:53:34,794] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-41 from NonExistentReplica to NewReplica (state.change.logger)
grafana | logger=migrator t=2024-04-26T08:52:54.694257689Z level=info msg="Executing migration" id="rename def_uid to rule_uid in alert_instance"
policy-pap | [2024-04-26T08:53:34.193+00:00|INFO|KafkaProducer|main] [Producer clientId=producer-1] Instantiated an idempotent producer.
policy-db-migrator |
kafka | [2024-04-26 08:53:34,794] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-38 from NonExistentReplica to NewReplica (state.change.logger)
grafana | logger=migrator t=2024-04-26T08:52:54.719980101Z level=info msg="Migration successfully executed" id="rename def_uid to rule_uid in alert_instance" duration=25.721602ms
policy-pap | [2024-04-26T08:53:34.207+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1
policy-db-migrator |
kafka | [2024-04-26 08:53:34,794] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-8 from NonExistentReplica to NewReplica (state.change.logger)
grafana | logger=migrator t=2024-04-26T08:52:54.731823207Z level=info msg="Executing migration" id="add index rule_org_id, rule_uid, current_state on alert_instance"
policy-pap | [2024-04-26T08:53:34.207+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5
policy-db-migrator | > upgrade 0980-FK_ToscaNodeType_requirementsName.sql
kafka | [2024-04-26 08:53:34,795] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-7 from NonExistentReplica to NewReplica (state.change.logger)
grafana | logger=migrator t=2024-04-26T08:52:54.733675538Z level=info msg="Migration successfully executed" id="add index rule_org_id, rule_uid, current_state on alert_instance" duration=1.852771ms
policy-pap | [2024-04-26T08:53:34.207+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1714121614207
policy-db-migrator | --------------
kafka | [2024-04-26 08:53:34,795] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-33 from NonExistentReplica to NewReplica (state.change.logger)
grafana | logger=migrator t=2024-04-26T08:52:54.7378568Z level=info msg="Executing migration" id="add index rule_org_id, current_state on alert_instance"
policy-pap | [2024-04-26T08:53:34.208+00:00|INFO|InlineKafkaTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=396c773d-1fdc-4e6e-b2c1-c73ec689b5af, alive=false, publisher=KafkaPublisherWrapper []]]: KAFKA SINK created
policy-db-migrator | ALTER TABLE toscanodetype ADD CONSTRAINT FK_ToscaNodeType_requirementsName FOREIGN KEY (requirementsName, requirementsVersion) REFERENCES toscarequirements (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT
kafka | [2024-04-26 08:53:34,795] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-25 from NonExistentReplica to NewReplica (state.change.logger)
grafana | logger=migrator t=2024-04-26T08:52:54.739616077Z level=info msg="Migration successfully executed" id="add index rule_org_id, current_state on alert_instance" duration=1.759937ms
policy-pap | [2024-04-26T08:53:34.208+00:00|INFO|InlineBusTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=ad4db956-3eff-4eda-8528-10b044261a2f, alive=false, publisher=null]]: starting
policy-db-migrator | --------------
kafka | [2024-04-26 08:53:34,795] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-31 from NonExistentReplica to NewReplica (state.change.logger)
grafana | logger=migrator t=2024-04-26T08:52:54.748099427Z level=info msg="Executing migration" id="add current_reason column related to current_state"
policy-pap | [2024-04-26T08:53:34.208+00:00|INFO|ProducerConfig|main] ProducerConfig values:
policy-db-migrator |
kafka | [2024-04-26 08:53:34,795] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-23 from NonExistentReplica to NewReplica (state.change.logger)
grafana | logger=migrator t=2024-04-26T08:52:54.754560719Z level=info msg="Migration successfully executed" id="add current_reason column related to current_state" duration=6.463072ms
policy-pap | acks = -1
policy-db-migrator |
kafka | [2024-04-26 08:53:34,795] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-10 from NonExistentReplica to NewReplica (state.change.logger)
grafana | logger=migrator t=2024-04-26T08:52:54.770299486Z level=info msg="Executing migration" id="add result_fingerprint column to alert_instance"
policy-pap | auto.include.jmx.reporter = true
policy-db-migrator | > upgrade 0990-FK_ToscaServiceTemplate_capabilityTypesName.sql
kafka | [2024-04-26 08:53:34,796] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-2 from NonExistentReplica to NewReplica (state.change.logger)
grafana | logger=migrator t=2024-04-26T08:52:54.775535935Z level=info msg="Migration successfully executed" id="add result_fingerprint column to alert_instance" duration=5.236578ms
policy-pap | batch.size = 16384
policy-db-migrator | --------------
kafka | [2024-04-26 08:53:34,796] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-17 from NonExistentReplica to NewReplica (state.change.logger)
grafana | logger=migrator t=2024-04-26T08:52:54.780156225Z level=info msg="Executing migration" id="create alert_rule table"
policy-pap | bootstrap.servers = [kafka:9092]
policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT FK_ToscaServiceTemplate_capabilityTypesName FOREIGN KEY (capabilityTypesName, capabilityTypesVersion) REFERENCES toscacapabilitytypes (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT
kafka | [2024-04-26 08:53:34,797] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-4 from NonExistentReplica to NewReplica (state.change.logger)
grafana | logger=migrator t=2024-04-26T08:52:54.780884358Z level=info msg="Migration successfully executed" id="create alert_rule table" duration=728.303µs
policy-pap | buffer.memory = 33554432
policy-db-migrator | --------------
kafka | [2024-04-26 08:53:34,798] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-15 from NonExistentReplica to NewReplica (state.change.logger)
grafana | logger=migrator t=2024-04-26T08:52:54.789082636Z level=info msg="Executing migration" id="add index in alert_rule on org_id and title columns"
policy-pap | client.dns.lookup = use_all_dns_ips
policy-db-migrator |
kafka | [2024-04-26 08:53:34,799] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-26 from NonExistentReplica to NewReplica (state.change.logger)
grafana | logger=migrator t=2024-04-26T08:52:54.790057318Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id and title columns" duration=974.832µs
policy-pap | client.id = producer-2
policy-db-migrator |
kafka | [2024-04-26 08:53:34,800] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-3 from NonExistentReplica to NewReplica (state.change.logger)
grafana | logger=migrator t=2024-04-26T08:52:54.799330992Z level=info msg="Executing migration" id="add index in alert_rule on org_id and uid columns"
kafka | [2024-04-26 08:53:34,800] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger)
grafana | logger=migrator t=2024-04-26T08:52:54.800997265Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id and uid columns" duration=1.665603ms
policy-pap | compression.type = none
policy-db-migrator | > upgrade 1000-FK_ToscaServiceTemplate_dataTypesName.sql
kafka | [2024-04-26 08:53:34,993] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-22 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
grafana | logger=migrator t=2024-04-26T08:52:54.81097646Z level=info msg="Executing migration" id="add index in alert_rule on org_id, namespace_uid, group_uid columns"
policy-pap | connections.max.idle.ms = 540000
policy-db-migrator | --------------
kafka | [2024-04-26 08:53:34,993] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-30 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
grafana | logger=migrator t=2024-04-26T08:52:54.811836278Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, namespace_uid, group_uid columns" duration=860.248µs
policy-pap | delivery.timeout.ms = 120000
grafana | logger=migrator t=2024-04-26T08:52:54.822915372Z level=info msg="Executing migration" id="alter alert_rule table data column to mediumtext in mysql"
kafka | [2024-04-26 08:53:34,993] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-25 from
NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT FK_ToscaServiceTemplate_dataTypesName FOREIGN KEY (dataTypesName, dataTypesVersion) REFERENCES toscadatatypes (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT policy-pap | enable.idempotence = true grafana | logger=migrator t=2024-04-26T08:52:54.823015696Z level=info msg="Migration successfully executed" id="alter alert_rule table data column to mediumtext in mysql" duration=102.234µs kafka | [2024-04-26 08:53:34,993] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-35 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-db-migrator | -------------- policy-pap | interceptor.classes = [] grafana | logger=migrator t=2024-04-26T08:52:54.830163347Z level=info msg="Executing migration" id="add column for to alert_rule" kafka | [2024-04-26 08:53:34,993] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-38 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-db-migrator | policy-pap | key.serializer = class org.apache.kafka.common.serialization.StringSerializer grafana | logger=migrator t=2024-04-26T08:52:54.839309507Z level=info msg="Migration successfully executed" id="add column for to alert_rule" duration=9.14625ms kafka | [2024-04-26 08:53:34,993] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-13 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, 
leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-db-migrator | policy-pap | linger.ms = 0 grafana | logger=migrator t=2024-04-26T08:52:54.845742117Z level=info msg="Executing migration" id="add column annotations to alert_rule" kafka | [2024-04-26 08:53:34,993] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-8 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-db-migrator | > upgrade 1010-FK_ToscaServiceTemplate_nodeTypesName.sql policy-pap | max.block.ms = 60000 grafana | logger=migrator t=2024-04-26T08:52:54.851648685Z level=info msg="Migration successfully executed" id="add column annotations to alert_rule" duration=5.904718ms kafka | [2024-04-26 08:53:34,994] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-21 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-db-migrator | -------------- policy-pap | max.in.flight.requests.per.connection = 5 grafana | logger=migrator t=2024-04-26T08:52:54.855236441Z level=info msg="Executing migration" id="add column labels to alert_rule" kafka | [2024-04-26 08:53:34,994] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-4 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT FK_ToscaServiceTemplate_nodeTypesName FOREIGN KEY (nodeTypesName, nodeTypesVersion) REFERENCES 
toscanodetypes (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT policy-pap | max.request.size = 1048576 grafana | logger=migrator t=2024-04-26T08:52:54.861046945Z level=info msg="Migration successfully executed" id="add column labels to alert_rule" duration=5.809424ms kafka | [2024-04-26 08:53:34,994] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-27 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-db-migrator | -------------- policy-pap | metadata.max.age.ms = 300000 grafana | logger=migrator t=2024-04-26T08:52:54.867326759Z level=info msg="Executing migration" id="remove unique index from alert_rule on org_id, title columns" kafka | [2024-04-26 08:53:34,994] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-7 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-db-migrator | policy-pap | metadata.max.idle.ms = 300000 grafana | logger=migrator t=2024-04-26T08:52:54.868365224Z level=info msg="Migration successfully executed" id="remove unique index from alert_rule on org_id, title columns" duration=1.039115ms kafka | [2024-04-26 08:53:34,994] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-9 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-db-migrator | policy-pap | metric.reporters = [] grafana | logger=migrator t=2024-04-26T08:52:54.874254441Z level=info msg="Executing migration" id="add index in alert_rule on org_id, namespase_uid and title columns" kafka | [2024-04-26 
08:53:34,994] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-46 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-db-migrator | > upgrade 1020-FK_ToscaServiceTemplate_policyTypesName.sql policy-pap | metrics.num.samples = 2 grafana | logger=migrator t=2024-04-26T08:52:54.875335568Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, namespase_uid and title columns" duration=1.078557ms kafka | [2024-04-26 08:53:34,994] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-41 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-db-migrator | -------------- policy-pap | metrics.recording.level = INFO grafana | logger=migrator t=2024-04-26T08:52:54.879934009Z level=info msg="Executing migration" id="add dashboard_uid column to alert_rule" kafka | [2024-04-26 08:53:34,994] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-33 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT FK_ToscaServiceTemplate_policyTypesName FOREIGN KEY (policyTypesName, policyTypesVersion) REFERENCES toscapolicytypes (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT policy-pap | metrics.sample.window.ms = 30000 grafana | logger=migrator t=2024-04-26T08:52:54.888025742Z level=info msg="Migration successfully executed" id="add dashboard_uid column to alert_rule" duration=8.073822ms kafka | [2024-04-26 08:53:34,994] INFO 
[Controller id=1 epoch=1] Changed partition __consumer_offsets-23 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-db-migrator | -------------- policy-pap | partitioner.adaptive.partitioning.enable = true grafana | logger=migrator t=2024-04-26T08:52:54.894559517Z level=info msg="Executing migration" id="add panel_id column to alert_rule" kafka | [2024-04-26 08:53:34,994] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-49 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-db-migrator | policy-pap | partitioner.availability.timeout.ms = 0 grafana | logger=migrator t=2024-04-26T08:52:54.900501676Z level=info msg="Migration successfully executed" id="add panel_id column to alert_rule" duration=5.942119ms kafka | [2024-04-26 08:53:34,994] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-47 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-db-migrator | policy-pap | partitioner.class = null grafana | logger=migrator t=2024-04-26T08:52:54.903357531Z level=info msg="Executing migration" id="add index in alert_rule on org_id, dashboard_uid and panel_id columns" kafka | [2024-04-26 08:53:34,994] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-16 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-db-migrator | > 
upgrade 1030-FK_ToscaServiceTemplate_relationshipTypesName.sql policy-pap | partitioner.ignore.keys = false kafka | [2024-04-26 08:53:34,994] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-28 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) grafana | logger=migrator t=2024-04-26T08:52:54.904865137Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, dashboard_uid and panel_id columns" duration=1.507017ms policy-db-migrator | -------------- policy-pap | receive.buffer.bytes = 32768 kafka | [2024-04-26 08:53:34,994] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-31 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) grafana | logger=migrator t=2024-04-26T08:52:54.911626841Z level=info msg="Executing migration" id="add rule_group_idx column to alert_rule" policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT FK_ToscaServiceTemplate_relationshipTypesName FOREIGN KEY (relationshipTypesName, relationshipTypesVersion) REFERENCES toscarelationshiptypes (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT policy-pap | reconnect.backoff.max.ms = 1000 kafka | [2024-04-26 08:53:34,994] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-36 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) grafana | logger=migrator t=2024-04-26T08:52:54.92074814Z level=info msg="Migration successfully executed" id="add rule_group_idx column to alert_rule" duration=9.116289ms 
policy-db-migrator | -------------- policy-pap | reconnect.backoff.ms = 50 kafka | [2024-04-26 08:53:34,994] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-42 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) grafana | logger=migrator t=2024-04-26T08:52:54.956766041Z level=info msg="Executing migration" id="add is_paused column to alert_rule table" policy-db-migrator | policy-pap | request.timeout.ms = 30000 kafka | [2024-04-26 08:53:34,994] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-3 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) grafana | logger=migrator t=2024-04-26T08:52:54.96613595Z level=info msg="Migration successfully executed" id="add is_paused column to alert_rule table" duration=9.37067ms policy-db-migrator | policy-pap | retries = 2147483647 kafka | [2024-04-26 08:53:34,994] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-18 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) grafana | logger=migrator t=2024-04-26T08:52:54.970973201Z level=info msg="Executing migration" id="fix is_paused column for alert_rule table" policy-db-migrator | > upgrade 1040-FK_ToscaTopologyTemplate_nodeTemplatesName.sql policy-pap | retry.backoff.ms = 100 kafka | [2024-04-26 08:53:34,994] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-37 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, 
brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) grafana | logger=migrator t=2024-04-26T08:52:54.971067875Z level=info msg="Migration successfully executed" id="fix is_paused column for alert_rule table" duration=96.124µs policy-db-migrator | -------------- policy-pap | sasl.client.callback.handler.class = null kafka | [2024-04-26 08:53:34,994] INFO [Controller id=1 epoch=1] Changed partition policy-pdp-pap-0 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) grafana | logger=migrator t=2024-04-26T08:52:54.97530227Z level=info msg="Executing migration" id="create alert_rule_version table" policy-db-migrator | ALTER TABLE toscatopologytemplate ADD CONSTRAINT FK_ToscaTopologyTemplate_nodeTemplatesName FOREIGN KEY (nodeTemplatesName, nodeTemplatessVersion) REFERENCES toscanodetemplates (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT policy-pap | sasl.jaas.config = null grafana | logger=migrator t=2024-04-26T08:52:54.977092868Z level=info msg="Migration successfully executed" id="create alert_rule_version table" duration=1.786749ms policy-db-migrator | -------------- policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit kafka | [2024-04-26 08:53:34,994] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-15 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) grafana | logger=migrator t=2024-04-26T08:52:54.984314623Z level=info msg="Executing migration" id="add index in alert_rule_version table on rule_org_id, rule_uid and version columns" policy-db-migrator | policy-pap | sasl.kerberos.min.time.before.relogin = 60000 kafka | [2024-04-26 08:53:34,994] INFO [Controller id=1 
epoch=1] Changed partition __consumer_offsets-24 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) grafana | logger=migrator t=2024-04-26T08:52:54.985440882Z level=info msg="Migration successfully executed" id="add index in alert_rule_version table on rule_org_id, rule_uid and version columns" duration=1.125809ms policy-pap | sasl.kerberos.service.name = null kafka | [2024-04-26 08:53:34,995] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-17 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) grafana | logger=migrator t=2024-04-26T08:52:54.992466908Z level=info msg="Executing migration" id="add index in alert_rule_version table on rule_org_id, rule_namespace_uid and rule_group columns" policy-db-migrator | policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 kafka | [2024-04-26 08:53:34,995] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-48 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) grafana | logger=migrator t=2024-04-26T08:52:54.994275648Z level=info msg="Migration successfully executed" id="add index in alert_rule_version table on rule_org_id, rule_namespace_uid and rule_group columns" duration=1.807979ms policy-db-migrator | > upgrade 1050-FK_ToscaTopologyTemplate_policyName.sql policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 kafka | [2024-04-26 08:53:34,995] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-19 from NewPartition to OnlinePartition with state 
LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) grafana | logger=migrator t=2024-04-26T08:52:55.000325761Z level=info msg="Executing migration" id="alter alert_rule_version table data column to mediumtext in mysql" policy-db-migrator | -------------- policy-pap | sasl.login.callback.handler.class = null kafka | [2024-04-26 08:53:34,995] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-11 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-db-migrator | ALTER TABLE toscatopologytemplate ADD CONSTRAINT FK_ToscaTopologyTemplate_policyName FOREIGN KEY (policyName, policyVersion) REFERENCES toscapolicies (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT kafka | [2024-04-26 08:53:34,995] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-2 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) grafana | logger=migrator t=2024-04-26T08:52:55.000419645Z level=info msg="Migration successfully executed" id="alter alert_rule_version table data column to mediumtext in mysql" duration=93.624µs policy-pap | sasl.login.class = null kafka | [2024-04-26 08:53:34,995] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-43 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-pap | sasl.login.connect.timeout.ms = null policy-db-migrator | -------------- grafana | logger=migrator 
t=2024-04-26T08:52:55.007337528Z level=info msg="Executing migration" id="add column for to alert_rule_version" kafka | [2024-04-26 08:53:34,995] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-6 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-pap | sasl.login.read.timeout.ms = null policy-db-migrator | grafana | logger=migrator t=2024-04-26T08:52:55.016684813Z level=info msg="Migration successfully executed" id="add column for to alert_rule_version" duration=9.345545ms policy-pap | sasl.login.refresh.buffer.seconds = 300 policy-db-migrator | kafka | [2024-04-26 08:53:34,995] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-14 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) grafana | logger=migrator t=2024-04-26T08:52:55.021194182Z level=info msg="Executing migration" id="add column annotations to alert_rule_version" policy-db-migrator | > upgrade 1060-TscaServiceTemplatetopologyTemplateParentLocalName.sql kafka | [2024-04-26 08:53:34,995] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-20 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-pap | sasl.login.refresh.min.period.seconds = 60 policy-db-migrator | -------------- policy-pap | sasl.login.refresh.window.factor = 0.8 grafana | logger=migrator t=2024-04-26T08:52:55.025617809Z level=info msg="Migration successfully executed" id="add column annotations to alert_rule_version" duration=4.420787ms grafana | logger=migrator 
t=2024-04-26T08:52:55.031549812Z level=info msg="Executing migration" id="add column labels to alert_rule_version" policy-pap | sasl.login.refresh.window.jitter = 0.05 kafka | [2024-04-26 08:53:34,995] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-0 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) grafana | logger=migrator t=2024-04-26T08:52:55.0398474Z level=info msg="Migration successfully executed" id="add column labels to alert_rule_version" duration=8.293907ms policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT TscaServiceTemplatetopologyTemplateParentLocalName FOREIGN KEY (topologyTemplateParentLocalName, topologyTemplateLocalName, topologyTemplateParentKeyVersion, topologyTemplateParentKeyName) REFERENCES toscatopologytemplate (parentLocalName, localName, parentKeyVersion, parentKeyName) ON UPDATE RESTRICT ON DELETE RESTRICT policy-pap | sasl.login.retry.backoff.max.ms = 10000 kafka | [2024-04-26 08:53:34,995] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-44 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) grafana | logger=migrator t=2024-04-26T08:52:55.044442744Z level=info msg="Executing migration" id="add rule_group_idx column to alert_rule_version" policy-db-migrator | -------------- policy-pap | sasl.login.retry.backoff.ms = 100 kafka | [2024-04-26 08:53:34,995] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-39 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 
grafana | logger=migrator t=2024-04-26T08:52:55.048810907Z level=info msg="Migration successfully executed" id="add rule_group_idx column to alert_rule_version" duration=4.367773ms policy-db-migrator | policy-pap | sasl.mechanism = GSSAPI kafka | [2024-04-26 08:53:34,995] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-12 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) grafana | logger=migrator t=2024-04-26T08:52:55.05383376Z level=info msg="Executing migration" id="add is_paused column to alert_rule_versions table" policy-db-migrator | policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 kafka | [2024-04-26 08:53:34,995] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-45 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) grafana | logger=migrator t=2024-04-26T08:52:55.060571229Z level=info msg="Migration successfully executed" id="add is_paused column to alert_rule_versions table" duration=6.734219ms policy-db-migrator | > upgrade 0100-pdp.sql policy-pap | sasl.oauthbearer.expected.audience = null kafka | [2024-04-26 08:53:34,995] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-1 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) grafana | logger=migrator t=2024-04-26T08:52:55.066387947Z level=info msg="Executing migration" id="fix is_paused column for alert_rule_version table" policy-db-migrator | -------------- policy-pap | sasl.oauthbearer.expected.issuer = null kafka | [2024-04-26 
08:53:34,995] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-5 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) grafana | logger=migrator t=2024-04-26T08:52:55.066484621Z level=info msg="Migration successfully executed" id="fix is_paused column for alert_rule_version table" duration=97.074µs policy-db-migrator | ALTER TABLE pdp ADD COLUMN LASTUPDATE datetime NOT NULL DEFAULT CURRENT_TIMESTAMP AFTER HEALTHY policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 kafka | [2024-04-26 08:53:34,995] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-26 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) grafana | logger=migrator t=2024-04-26T08:52:55.070467508Z level=info msg="Executing migration" id=create_alert_configuration_table policy-db-migrator | -------------- policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 kafka | [2024-04-26 08:53:34,995] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-29 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) grafana | logger=migrator t=2024-04-26T08:52:55.071771116Z level=info msg="Migration successfully executed" id=create_alert_configuration_table duration=1.303719ms policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 kafka | [2024-04-26 08:53:34,995] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-34 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, 
isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) grafana | logger=migrator t=2024-04-26T08:52:55.077234927Z level=info msg="Executing migration" id="Add column default in alert_configuration" kafka | [2024-04-26 08:53:34,996] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-10 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-db-migrator | policy-pap | sasl.oauthbearer.jwks.endpoint.url = null grafana | logger=migrator t=2024-04-26T08:52:55.083616431Z level=info msg="Migration successfully executed" id="Add column default in alert_configuration" duration=6.383654ms kafka | [2024-04-26 08:53:34,996] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-32 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-db-migrator | policy-pap | sasl.oauthbearer.scope.claim.name = scope grafana | logger=migrator t=2024-04-26T08:52:55.088532588Z level=info msg="Executing migration" id="alert alert_configuration alertmanager_configuration column from TEXT to MEDIUMTEXT if mysql" kafka | [2024-04-26 08:53:34,996] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-40 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-db-migrator | > upgrade 0110-idx_tsidx1.sql policy-pap | sasl.oauthbearer.sub.claim.name = sub policy-db-migrator | -------------- kafka | [2024-04-26 08:53:35,008] TRACE [Controller id=1 epoch=1] Sending 
become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=13, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-13 (state.change.logger) policy-pap | sasl.oauthbearer.token.endpoint.url = null grafana | logger=migrator t=2024-04-26T08:52:55.088625553Z level=info msg="Migration successfully executed" id="alert alert_configuration alertmanager_configuration column from TEXT to MEDIUMTEXT if mysql" duration=93.005µs policy-db-migrator | CREATE INDEX IDX_TSIDX1 ON pdpstatistics(timeStamp, name, version) kafka | [2024-04-26 08:53:35,008] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=46, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-46 (state.change.logger) policy-pap | security.protocol = PLAINTEXT grafana | logger=migrator t=2024-04-26T08:52:55.094969994Z level=info msg="Executing migration" id="add column org_id in alert_configuration" policy-db-migrator | -------------- kafka | [2024-04-26 08:53:35,009] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=9, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-9 (state.change.logger) policy-pap | security.providers = null grafana | logger=migrator t=2024-04-26T08:52:55.10526137Z level=info msg="Migration successfully executed" id="add column org_id in alert_configuration" duration=10.285766ms policy-db-migrator | 
kafka | [2024-04-26 08:53:35,009] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=42, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-42 (state.change.logger) policy-pap | send.buffer.bytes = 131072 policy-db-migrator | policy-pap | socket.connection.setup.timeout.max.ms = 30000 grafana | logger=migrator t=2024-04-26T08:52:55.112156286Z level=info msg="Executing migration" id="add index in alert_configuration table on org_id column" kafka | [2024-04-26 08:53:35,009] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=21, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-21 (state.change.logger) policy-db-migrator | > upgrade 0120-pk_pdpstatistics.sql policy-pap | socket.connection.setup.timeout.ms = 10000 grafana | logger=migrator t=2024-04-26T08:52:55.112894989Z level=info msg="Migration successfully executed" id="add index in alert_configuration table on org_id column" duration=738.303µs kafka | [2024-04-26 08:53:35,009] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=17, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-17 (state.change.logger) policy-pap | ssl.cipher.suites = null grafana | logger=migrator t=2024-04-26T08:52:55.119422849Z level=info msg="Executing migration" id="add configuration_hash column 
to alert_configuration" kafka | [2024-04-26 08:53:35,009] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=30, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-30 (state.change.logger) policy-db-migrator | -------------- policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] grafana | logger=migrator t=2024-04-26T08:52:55.12938938Z level=info msg="Migration successfully executed" id="add configuration_hash column to alert_configuration" duration=9.963721ms kafka | [2024-04-26 08:53:35,010] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=26, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-26 (state.change.logger) policy-db-migrator | ALTER TABLE pdpstatistics DROP PRIMARY KEY policy-pap | ssl.endpoint.identification.algorithm = https grafana | logger=migrator t=2024-04-26T08:52:55.134986828Z level=info msg="Executing migration" id=create_ngalert_configuration_table kafka | [2024-04-26 08:53:35,010] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=5, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-5 (state.change.logger) policy-db-migrator | -------------- policy-pap | ssl.engine.factory.class = null grafana | logger=migrator t=2024-04-26T08:52:55.135565344Z level=info msg="Migration successfully executed" 
id=create_ngalert_configuration_table duration=578.846µs kafka | [2024-04-26 08:53:35,010] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=38, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-38 (state.change.logger) policy-db-migrator | policy-pap | ssl.key.password = null kafka | [2024-04-26 08:53:35,010] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=1, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-1 (state.change.logger) policy-pap | ssl.keymanager.algorithm = SunX509 grafana | logger=migrator t=2024-04-26T08:52:55.140533224Z level=info msg="Executing migration" id="add index in ngalert_configuration on org_id column" policy-db-migrator | kafka | [2024-04-26 08:53:35,011] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=34, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-34 (state.change.logger) policy-pap | ssl.keystore.certificate.chain = null grafana | logger=migrator t=2024-04-26T08:52:55.142336275Z level=info msg="Migration successfully executed" id="add index in ngalert_configuration on org_id column" duration=1.801011ms policy-db-migrator | > upgrade 0130-pdpstatistics.sql kafka | [2024-04-26 08:53:35,011] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request 
LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=16, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-16 (state.change.logger) policy-pap | ssl.keystore.key = null grafana | logger=migrator t=2024-04-26T08:52:55.146868485Z level=info msg="Executing migration" id="add column send_alerts_to in ngalert_configuration" kafka | [2024-04-26 08:53:35,011] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=45, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-45 (state.change.logger) policy-pap | ssl.keystore.location = null policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-26T08:52:55.153774602Z level=info msg="Migration successfully executed" id="add column send_alerts_to in ngalert_configuration" duration=6.906477ms kafka | [2024-04-26 08:53:35,011] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=12, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-12 (state.change.logger) policy-pap | ssl.keystore.password = null policy-db-migrator | ALTER TABLE pdpstatistics ADD COLUMN POLICYUNDEPLOYCOUNT BIGINT DEFAULT NULL AFTER POLICYEXECUTEDSUCCESSCOUNT, ADD COLUMN POLICYUNDEPLOYFAILCOUNT BIGINT DEFAULT NULL, ADD COLUMN POLICYUNDEPLOYSUCCESSCOUNT BIGINT DEFAULT NULL, ADD COLUMN ID BIGINT NOT NULL kafka | [2024-04-26 08:53:35,011] TRACE [Controller id=1 epoch=1] Sending become-leader 
LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=41, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-41 (state.change.logger) policy-pap | ssl.keystore.type = JKS grafana | logger=migrator t=2024-04-26T08:52:55.158083482Z level=info msg="Executing migration" id="create provenance_type table" policy-db-migrator | -------------- kafka | [2024-04-26 08:53:35,012] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=24, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-24 (state.change.logger) policy-pap | ssl.protocol = TLSv1.3 grafana | logger=migrator t=2024-04-26T08:52:55.158863518Z level=info msg="Migration successfully executed" id="create provenance_type table" duration=779.806µs policy-db-migrator | kafka | [2024-04-26 08:53:35,012] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=20, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-20 (state.change.logger) policy-pap | ssl.provider = null grafana | logger=migrator t=2024-04-26T08:52:55.16793071Z level=info msg="Executing migration" id="add index to uniquify (record_key, record_type, org_id) columns" policy-db-migrator | policy-db-migrator | > upgrade 0140-pk_pdpstatistics.sql policy-pap | ssl.secure.random.implementation = null kafka | [2024-04-26 08:53:35,012] TRACE [Controller id=1 epoch=1] Sending become-leader 
LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=49, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-49 (state.change.logger) policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-26T08:52:55.169989621Z level=info msg="Migration successfully executed" id="add index to uniquify (record_key, record_type, org_id) columns" duration=2.058352ms policy-pap | ssl.trustmanager.algorithm = PKIX policy-db-migrator | UPDATE pdpstatistics as p JOIN (SELECT name, version, timeStamp, ROW_NUMBER() OVER (ORDER BY timeStamp ASC) AS row_num FROM pdpstatistics GROUP BY name, version, timeStamp) AS t ON (p.name=t.name AND p.version=t.version AND p.timeStamp = t.timeStamp) SET p.id=t.row_num kafka | [2024-04-26 08:53:35,012] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-0 (state.change.logger) grafana | logger=migrator t=2024-04-26T08:52:55.177051874Z level=info msg="Executing migration" id="create alert_image table" policy-pap | ssl.truststore.certificates = null policy-db-migrator | -------------- kafka | [2024-04-26 08:53:35,012] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=29, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-29 (state.change.logger) grafana | logger=migrator 
t=2024-04-26T08:52:55.178334041Z level=info msg="Migration successfully executed" id="create alert_image table" duration=1.281647ms policy-pap | ssl.truststore.location = null policy-db-migrator | kafka | [2024-04-26 08:53:35,013] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=25, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-25 (state.change.logger) grafana | logger=migrator t=2024-04-26T08:52:55.185129982Z level=info msg="Executing migration" id="add unique index on token to alert_image table" policy-pap | ssl.truststore.password = null policy-db-migrator | -------------- kafka | [2024-04-26 08:53:35,013] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=8, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-8 (state.change.logger) grafana | logger=migrator t=2024-04-26T08:52:55.186760685Z level=info msg="Migration successfully executed" id="add unique index on token to alert_image table" duration=1.630753ms policy-pap | ssl.truststore.type = JKS policy-db-migrator | ALTER TABLE pdpstatistics ADD CONSTRAINT PK_PDPSTATISTICS PRIMARY KEY (ID, name, version) kafka | [2024-04-26 08:53:35,013] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=37, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-37 
(state.change.logger) grafana | logger=migrator t=2024-04-26T08:52:55.191927143Z level=info msg="Executing migration" id="support longer URLs in alert_image table" policy-pap | transaction.timeout.ms = 60000 policy-db-migrator | -------------- kafka | [2024-04-26 08:53:35,013] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=4, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-4 (state.change.logger) grafana | logger=migrator t=2024-04-26T08:52:55.192021617Z level=info msg="Migration successfully executed" id="support longer URLs in alert_image table" duration=95.114µs policy-pap | transactional.id = null policy-db-migrator | kafka | [2024-04-26 08:53:35,014] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=33, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-33 (state.change.logger) grafana | logger=migrator t=2024-04-26T08:52:55.201251637Z level=info msg="Executing migration" id=create_alert_configuration_history_table policy-pap | value.serializer = class org.apache.kafka.common.serialization.StringSerializer policy-db-migrator | kafka | [2024-04-26 08:53:35,014] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=15, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-15 (state.change.logger) grafana | 
logger=migrator t=2024-04-26T08:52:55.202769244Z level=info msg="Migration successfully executed" id=create_alert_configuration_history_table duration=1.517787ms policy-pap | policy-db-migrator | > upgrade 0150-pdpstatistics.sql kafka | [2024-04-26 08:53:35,014] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=48, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-48 (state.change.logger) policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-26T08:52:55.207321516Z level=info msg="Executing migration" id="drop non-unique orgID index on alert_configuration" policy-pap | [2024-04-26T08:53:34.209+00:00|INFO|KafkaProducer|main] [Producer clientId=producer-2] Instantiated an idempotent producer. kafka | [2024-04-26 08:53:35,014] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=11, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-11 (state.change.logger) policy-db-migrator | ALTER TABLE pdpstatistics MODIFY COLUMN timeStamp datetime(6) NULL policy-pap | [2024-04-26T08:53:34.212+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 kafka | [2024-04-26 08:53:35,014] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=44, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-44 (state.change.logger) 
policy-db-migrator | -------------- kafka | [2024-04-26 08:53:35,014] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=23, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-23 (state.change.logger) grafana | logger=migrator t=2024-04-26T08:52:55.208971649Z level=info msg="Migration successfully executed" id="drop non-unique orgID index on alert_configuration" duration=1.651023ms policy-pap | [2024-04-26T08:53:34.212+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 policy-db-migrator | grafana | logger=migrator t=2024-04-26T08:52:55.213284291Z level=info msg="Executing migration" id="drop unique orgID index on alert_configuration if exists" policy-pap | [2024-04-26T08:53:34.212+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1714121614212 kafka | [2024-04-26 08:53:35,014] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=19, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-19 (state.change.logger) policy-db-migrator | grafana | logger=migrator t=2024-04-26T08:52:55.21372683Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop unique orgID index on alert_configuration if exists" policy-pap | [2024-04-26T08:53:34.212+00:00|INFO|InlineKafkaTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=ad4db956-3eff-4eda-8528-10b044261a2f, alive=false, publisher=KafkaPublisherWrapper []]]: KAFKA SINK created kafka | [2024-04-26 08:53:35,015] TRACE 
[Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=32, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-32 (state.change.logger) policy-db-migrator | > upgrade 0160-jpapdpstatistics_enginestats.sql grafana | logger=migrator t=2024-04-26T08:52:55.222208936Z level=info msg="Executing migration" id="extract alertmanager configuration history to separate table" policy-pap | [2024-04-26T08:53:34.212+00:00|INFO|ServiceManager|main] Policy PAP starting PAP Activator kafka | [2024-04-26 08:53:35,015] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=28, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-28 (state.change.logger) policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-26T08:52:55.222940508Z level=info msg="Migration successfully executed" id="extract alertmanager configuration history to separate table" duration=730.732µs policy-pap | [2024-04-26T08:53:34.212+00:00|INFO|ServiceManager|main] Policy PAP starting PDP publisher kafka | [2024-04-26 08:53:35,015] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=7, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-7 (state.change.logger) policy-db-migrator | ALTER TABLE jpapdpstatistics_enginestats ADD COLUMN ID BIGINT DEFAULT NULL AFTER UPTIME grafana | 
logger=migrator t=2024-04-26T08:52:55.231225096Z level=info msg="Executing migration" id="add unique index on orgID to alert_configuration" policy-pap | [2024-04-26T08:53:34.215+00:00|INFO|ServiceManager|main] Policy PAP starting Policy Notification publisher kafka | [2024-04-26 08:53:35,015] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=40, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-40 (state.change.logger) policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-26T08:52:55.232298184Z level=info msg="Migration successfully executed" id="add unique index on orgID to alert_configuration" duration=1.073368ms policy-pap | [2024-04-26T08:53:34.215+00:00|INFO|ServiceManager|main] Policy PAP starting PDP update timers kafka | [2024-04-26 08:53:35,015] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=3, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-3 (state.change.logger) policy-db-migrator | grafana | logger=migrator t=2024-04-26T08:52:55.237688183Z level=info msg="Executing migration" id="add last_applied column to alert_configuration_history" policy-pap | [2024-04-26T08:53:34.217+00:00|INFO|ServiceManager|main] Policy PAP starting PDP state-change timers kafka | [2024-04-26 08:53:35,015] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=36, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], 
removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-36 (state.change.logger) policy-db-migrator | grafana | logger=migrator t=2024-04-26T08:52:55.244413611Z level=info msg="Migration successfully executed" id="add last_applied column to alert_configuration_history" duration=6.725148ms policy-pap | [2024-04-26T08:53:34.218+00:00|INFO|ServiceManager|main] Policy PAP starting PDP modification lock kafka | [2024-04-26 08:53:35,015] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=47, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-47 (state.change.logger) policy-db-migrator | > upgrade 0170-jpapdpstatistics_enginestats.sql grafana | logger=migrator t=2024-04-26T08:52:55.250234389Z level=info msg="Executing migration" id="create library_element table v1" policy-pap | [2024-04-26T08:53:34.218+00:00|INFO|ServiceManager|main] Policy PAP starting PDP modification requests policy-db-migrator | -------------- kafka | [2024-04-26 08:53:35,015] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=14, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-14 (state.change.logger) grafana | logger=migrator t=2024-04-26T08:52:55.251330838Z level=info msg="Migration successfully executed" id="create library_element table v1" duration=1.096519ms policy-pap | [2024-04-26T08:53:34.234+00:00|INFO|ServiceManager|main] Policy PAP starting PDP expiration timer policy-db-migrator | UPDATE jpapdpstatistics_enginestats a kafka | [2024-04-26 
08:53:35,015] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=43, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-43 (state.change.logger) grafana | logger=migrator t=2024-04-26T08:52:55.258896683Z level=info msg="Executing migration" id="add index library_element org_id-folder_id-name-kind" policy-pap | [2024-04-26T08:53:34.236+00:00|INFO|TimerManager|Thread-9] timer manager update started policy-db-migrator | JOIN pdpstatistics b kafka | [2024-04-26 08:53:35,015] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=10, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-10 (state.change.logger) grafana | logger=migrator t=2024-04-26T08:52:55.260060245Z level=info msg="Migration successfully executed" id="add index library_element org_id-folder_id-name-kind" duration=1.157762ms policy-pap | [2024-04-26T08:53:34.235+00:00|INFO|TimerManager|Thread-10] timer manager state-change started policy-db-migrator | ON a.name = b.name AND a.version = b.version AND a.timeStamp = b.timeStamp kafka | [2024-04-26 08:53:35,016] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=22, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-22 (state.change.logger) grafana | logger=migrator t=2024-04-26T08:52:55.264825476Z level=info 
msg="Executing migration" id="create library_element_connection table v1"
policy-pap | [2024-04-26T08:53:34.239+00:00|INFO|ServiceManager|main] Policy PAP started
policy-db-migrator | SET a.id = b.id
kafka | [2024-04-26 08:53:35,016] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=18, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-18 (state.change.logger)
grafana | logger=migrator t=2024-04-26T08:52:55.265672474Z level=info msg="Migration successfully executed" id="create library_element_connection table v1" duration=846.778µs
policy-pap | [2024-04-26T08:53:34.241+00:00|INFO|PolicyPapApplication|main] Started PolicyPapApplication in 9.827 seconds (process running for 10.427)
policy-db-migrator | --------------
kafka | [2024-04-26 08:53:35,016] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=31, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-31 (state.change.logger)
policy-pap | [2024-04-26T08:53:34.694+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-c2be2c80-205d-4227-951f-9a7c12c2d5ee-3, groupId=c2be2c80-205d-4227-951f-9a7c12c2d5ee] Error while fetching metadata with correlation id 2 : {policy-pdp-pap=UNKNOWN_TOPIC_OR_PARTITION}
policy-db-migrator |
grafana | logger=migrator t=2024-04-26T08:52:55.269233051Z level=info msg="Executing migration" id="add index library_element_connection element_id-kind-connection_id"
kafka | [2024-04-26 08:53:35,016] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=27, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-27 (state.change.logger)
policy-pap | [2024-04-26T08:53:34.695+00:00|INFO|Metadata|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-c2be2c80-205d-4227-951f-9a7c12c2d5ee-3, groupId=c2be2c80-205d-4227-951f-9a7c12c2d5ee] Cluster ID: rK8eMBuCRaO0vWITIr3dSg
policy-db-migrator |
grafana | logger=migrator t=2024-04-26T08:52:55.270353011Z level=info msg="Migration successfully executed" id="add index library_element_connection element_id-kind-connection_id" duration=1.11958ms
kafka | [2024-04-26 08:53:35,016] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=39, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-39 (state.change.logger)
policy-pap | [2024-04-26T08:53:34.695+00:00|INFO|Metadata|kafka-producer-network-thread | producer-2] [Producer clientId=producer-2] Cluster ID: rK8eMBuCRaO0vWITIr3dSg
policy-db-migrator | > upgrade 0180-jpapdpstatistics_enginestats.sql
grafana | logger=migrator t=2024-04-26T08:52:55.274079717Z level=info msg="Executing migration" id="add unique index library_element org_id_uid"
kafka | [2024-04-26 08:53:35,016] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=6, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-6 (state.change.logger)
policy-pap | [2024-04-26T08:53:34.696+00:00|INFO|Metadata|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Cluster ID: rK8eMBuCRaO0vWITIr3dSg
policy-db-migrator | --------------
grafana | logger=migrator t=2024-04-26T08:52:55.275152404Z level=info msg="Migration successfully executed" id="add unique index library_element org_id_uid" duration=1.072407ms
kafka | [2024-04-26 08:53:35,016] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=35, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-35 (state.change.logger)
policy-pap | [2024-04-26T08:53:34.749+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 2 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
policy-db-migrator | ALTER TABLE jpapdpstatistics_enginestats DROP COLUMN timeStamp
grafana | logger=migrator t=2024-04-26T08:52:55.280758472Z level=info msg="Executing migration" id="increase max description length to 2048"
kafka | [2024-04-26 08:53:35,016] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='policy-pdp-pap', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition policy-pdp-pap-0 (state.change.logger)
policy-pap | [2024-04-26T08:53:34.749+00:00|INFO|Metadata|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Cluster ID: rK8eMBuCRaO0vWITIr3dSg
grafana | logger=migrator t=2024-04-26T08:52:55.280790074Z level=info msg="Migration successfully executed" id="increase max description length to 2048" duration=32.592µs
kafka | [2024-04-26 08:53:35,016] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=2, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-2 (state.change.logger)
policy-pap | [2024-04-26T08:53:34.798+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-c2be2c80-205d-4227-951f-9a7c12c2d5ee-3, groupId=c2be2c80-205d-4227-951f-9a7c12c2d5ee] Error while fetching metadata with correlation id 4 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
policy-db-migrator | --------------
grafana | logger=migrator t=2024-04-26T08:52:55.285000541Z level=info msg="Executing migration" id="alter library_element model to mediumtext"
kafka | [2024-04-26 08:53:35,017] INFO [Controller id=1 epoch=1] Sending LeaderAndIsr request to broker 1 with 51 become-leader and 0 become-follower partitions (state.change.logger)
policy-pap | [2024-04-26T08:53:34.819+00:00|INFO|TransactionManager|kafka-producer-network-thread | producer-2] [Producer clientId=producer-2] ProducerId set to 0 with epoch 0
policy-db-migrator |
grafana | logger=migrator t=2024-04-26T08:52:55.285132757Z level=info msg="Migration successfully executed" id="alter library_element model to mediumtext" duration=131.826µs
kafka | [2024-04-26 08:53:35,020] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet(1) for 51 partitions (state.change.logger)
policy-pap | [2024-04-26T08:53:34.891+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 4 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
policy-db-migrator |
grafana | logger=migrator t=2024-04-26T08:52:55.289442247Z level=info msg="Executing migration" id="clone move dashboard alerts to unified alerting"
kafka | [2024-04-26 08:53:35,022] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-32 from NewReplica to OnlineReplica (state.change.logger)
policy-pap | [2024-04-26T08:53:34.924+00:00|INFO|TransactionManager|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] ProducerId set to 1 with epoch 0
policy-db-migrator | > upgrade 0190-jpapolicyaudit.sql
grafana | logger=migrator t=2024-04-26T08:52:55.289940669Z level=info msg="Migration successfully executed" id="clone move dashboard alerts to unified alerting" duration=498.042µs
kafka | [2024-04-26 08:53:35,022] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-5 from NewReplica to OnlineReplica (state.change.logger)
policy-pap | [2024-04-26T08:53:34.942+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-c2be2c80-205d-4227-951f-9a7c12c2d5ee-3, groupId=c2be2c80-205d-4227-951f-9a7c12c2d5ee] Error while fetching metadata with correlation id 6 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
policy-db-migrator | --------------
grafana | logger=migrator t=2024-04-26T08:52:55.295222404Z level=info msg="Executing migration" id="create data_keys table"
kafka | [2024-04-26 08:53:35,022] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-44 from NewReplica to OnlineReplica (state.change.logger)
policy-pap | [2024-04-26T08:53:35.001+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 6 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapolicyaudit (ACTION INT DEFAULT NULL, PDPGROUP VARCHAR(255) NULL, PDPTYPE VARCHAR(255) NULL, TIMESTAMP datetime DEFAULT NULL, USER VARCHAR(255) NULL, ID BIGINT NOT NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_JPAPOLICYAUDIT (ID, name, version))
grafana | logger=migrator t=2024-04-26T08:52:55.296857557Z level=info msg="Migration successfully executed" id="create data_keys table" duration=1.634322ms
kafka | [2024-04-26 08:53:35,023] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-48 from NewReplica to OnlineReplica (state.change.logger)
policy-pap | [2024-04-26T08:53:35.051+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-c2be2c80-205d-4227-951f-9a7c12c2d5ee-3, groupId=c2be2c80-205d-4227-951f-9a7c12c2d5ee] Error while fetching metadata with correlation id 8 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
policy-db-migrator | --------------
kafka | [2024-04-26 08:53:35,023] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-46 from NewReplica to OnlineReplica (state.change.logger)
policy-pap | [2024-04-26T08:53:35.118+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 8 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
kafka | [2024-04-26 08:53:35,023] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-20 from NewReplica to OnlineReplica (state.change.logger)
policy-pap | [2024-04-26T08:53:35.169+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-c2be2c80-205d-4227-951f-9a7c12c2d5ee-3, groupId=c2be2c80-205d-4227-951f-9a7c12c2d5ee] Error while fetching metadata with correlation id 10 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
grafana | logger=migrator t=2024-04-26T08:52:55.304572749Z level=info msg="Executing migration" id="create secrets table"
policy-db-migrator |
kafka | [2024-04-26 08:53:35,023] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition policy-pdp-pap-0 from NewReplica to OnlineReplica (state.change.logger)
policy-pap | [2024-04-26T08:53:35.228+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 10 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
grafana | logger=migrator t=2024-04-26T08:52:55.305473109Z level=info msg="Migration successfully executed" id="create secrets table" duration=901.079µs
policy-db-migrator |
kafka | [2024-04-26 08:53:35,023] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-43 from NewReplica to OnlineReplica (state.change.logger)
policy-pap | [2024-04-26T08:53:35.276+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-c2be2c80-205d-4227-951f-9a7c12c2d5ee-3, groupId=c2be2c80-205d-4227-951f-9a7c12c2d5ee] Error while fetching metadata with correlation id 12 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
grafana | logger=migrator t=2024-04-26T08:52:55.30912068Z level=info msg="Executing migration" id="rename data_keys name column to id"
policy-db-migrator | > upgrade 0200-JpaPolicyAuditIndex_timestamp.sql
kafka | [2024-04-26 08:53:35,023] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-24 from NewReplica to OnlineReplica (state.change.logger)
policy-pap | [2024-04-26T08:53:35.333+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 12 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
grafana | logger=migrator t=2024-04-26T08:52:55.339824602Z level=info msg="Migration successfully executed" id="rename data_keys name column to id" duration=30.702911ms
policy-db-migrator | --------------
kafka | [2024-04-26 08:53:35,023] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-6 from NewReplica to OnlineReplica (state.change.logger)
grafana | logger=migrator t=2024-04-26T08:52:55.34428979Z level=info msg="Executing migration" id="add name column into data_keys"
policy-db-migrator | CREATE INDEX JpaPolicyAuditIndex_timestamp ON jpapolicyaudit(TIMESTAMP)
kafka | [2024-04-26 08:53:35,023] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-18 from NewReplica to OnlineReplica (state.change.logger)
policy-pap | [2024-04-26T08:53:35.385+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-c2be2c80-205d-4227-951f-9a7c12c2d5ee-3, groupId=c2be2c80-205d-4227-951f-9a7c12c2d5ee] Error while fetching metadata with correlation id 14 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
policy-db-migrator | --------------
policy-pap | [2024-04-26T08:53:35.439+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 14 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
grafana | logger=migrator t=2024-04-26T08:52:55.349340184Z level=info msg="Migration successfully executed" id="add name column into data_keys" duration=5.049684ms
kafka | [2024-04-26 08:53:35,023] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-21 from NewReplica to OnlineReplica (state.change.logger)
policy-pap | [2024-04-26T08:53:35.498+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-c2be2c80-205d-4227-951f-9a7c12c2d5ee-3, groupId=c2be2c80-205d-4227-951f-9a7c12c2d5ee] Error while fetching metadata with correlation id 16 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
grafana | logger=migrator t=2024-04-26T08:52:55.356063692Z level=info msg="Executing migration" id="copy data_keys id column values into name"
kafka | [2024-04-26 08:53:35,023] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-1 from NewReplica to OnlineReplica (state.change.logger)
policy-db-migrator |
policy-pap | [2024-04-26T08:53:35.545+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 16 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
grafana | logger=migrator t=2024-04-26T08:52:55.356192057Z level=info msg="Migration successfully executed" id="copy data_keys id column values into name" duration=127.065µs
kafka | [2024-04-26 08:53:35,024] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-14 from NewReplica to OnlineReplica (state.change.logger)
policy-db-migrator |
policy-pap | [2024-04-26T08:53:35.605+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-c2be2c80-205d-4227-951f-9a7c12c2d5ee-3, groupId=c2be2c80-205d-4227-951f-9a7c12c2d5ee] Error while fetching metadata with correlation id 18 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
policy-db-migrator | > upgrade 0210-sequence.sql
grafana | logger=migrator t=2024-04-26T08:52:55.361503823Z level=info msg="Executing migration" id="rename data_keys name column to label"
kafka | [2024-04-26 08:53:35,024] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-34 from NewReplica to OnlineReplica (state.change.logger)
policy-pap | [2024-04-26T08:53:35.648+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 18 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
policy-db-migrator | --------------
policy-pap | [2024-04-26T08:53:35.713+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-c2be2c80-205d-4227-951f-9a7c12c2d5ee-3, groupId=c2be2c80-205d-4227-951f-9a7c12c2d5ee] Error while fetching metadata with correlation id 20 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
kafka | [2024-04-26 08:53:35,024] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-16 from NewReplica to OnlineReplica (state.change.logger)
grafana | logger=migrator t=2024-04-26T08:52:55.388335223Z level=info msg="Migration successfully executed" id="rename data_keys name column to label" duration=26.82934ms
policy-pap | [2024-04-26T08:53:35.753+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 20 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
kafka | [2024-04-26 08:53:35,024] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-29 from NewReplica to OnlineReplica (state.change.logger)
grafana | logger=migrator t=2024-04-26T08:52:55.473212267Z level=info msg="Executing migration" id="rename data_keys id column back to name"
kafka | [2024-04-26 08:53:35,024] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-11 from NewReplica to OnlineReplica (state.change.logger)
policy-db-migrator | CREATE TABLE IF NOT EXISTS sequence (SEQ_NAME VARCHAR(50) NOT NULL, SEQ_COUNT DECIMAL(38) DEFAULT NULL, PRIMARY KEY PK_SEQUENCE (SEQ_NAME))
policy-pap | [2024-04-26T08:53:35.821+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-c2be2c80-205d-4227-951f-9a7c12c2d5ee-3, groupId=c2be2c80-205d-4227-951f-9a7c12c2d5ee] Error while fetching metadata with correlation id 22 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
grafana | logger=migrator t=2024-04-26T08:52:55.507017016Z level=info msg="Migration successfully executed" id="rename data_keys id column back to name" duration=33.805689ms
policy-db-migrator | --------------
policy-pap | [2024-04-26T08:53:35.828+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-c2be2c80-205d-4227-951f-9a7c12c2d5ee-3, groupId=c2be2c80-205d-4227-951f-9a7c12c2d5ee] Discovered group coordinator kafka:9092 (id: 2147483646 rack: null)
kafka | [2024-04-26 08:53:35,024] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-0 from NewReplica to OnlineReplica (state.change.logger)
grafana | logger=migrator t=2024-04-26T08:52:55.552386317Z level=info msg="Executing migration" id="create kv_store table v1"
policy-pap | [2024-04-26T08:53:35.838+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-c2be2c80-205d-4227-951f-9a7c12c2d5ee-3, groupId=c2be2c80-205d-4227-951f-9a7c12c2d5ee] (Re-)joining group
kafka | [2024-04-26 08:53:35,024] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-22 from NewReplica to OnlineReplica (state.change.logger)
policy-db-migrator |
grafana | logger=migrator t=2024-04-26T08:52:55.553270326Z level=info msg="Migration successfully executed" id="create kv_store table v1" duration=884.489µs
kafka | [2024-04-26 08:53:35,024] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-47 from NewReplica to OnlineReplica (state.change.logger)
policy-pap | [2024-04-26T08:53:35.875+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-c2be2c80-205d-4227-951f-9a7c12c2d5ee-3, groupId=c2be2c80-205d-4227-951f-9a7c12c2d5ee] Request joining group due to: need to re-join with the given member-id: consumer-c2be2c80-205d-4227-951f-9a7c12c2d5ee-3-9d406649-fe3d-4ae6-beec-b09399ffbd74
policy-db-migrator |
grafana | logger=migrator t=2024-04-26T08:52:55.558400504Z level=info msg="Executing migration" id="add index kv_store.org_id-namespace-key"
kafka | [2024-04-26 08:53:35,024] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-36 from NewReplica to OnlineReplica (state.change.logger)
policy-pap | [2024-04-26T08:53:35.876+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-c2be2c80-205d-4227-951f-9a7c12c2d5ee-3, groupId=c2be2c80-205d-4227-951f-9a7c12c2d5ee] Request joining group due to: rebalance failed due to 'The group member needs to have a valid member id before actually entering a consumer group.' (MemberIdRequiredException)
policy-db-migrator | > upgrade 0220-sequence.sql
grafana | logger=migrator t=2024-04-26T08:52:55.559545525Z level=info msg="Migration successfully executed" id="add index kv_store.org_id-namespace-key" duration=1.143431ms
kafka | [2024-04-26 08:53:35,024] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-28 from NewReplica to OnlineReplica (state.change.logger)
policy-pap | [2024-04-26T08:53:35.876+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-c2be2c80-205d-4227-951f-9a7c12c2d5ee-3, groupId=c2be2c80-205d-4227-951f-9a7c12c2d5ee] (Re-)joining group
policy-db-migrator | --------------
grafana | logger=migrator t=2024-04-26T08:52:55.567429104Z level=info msg="Executing migration" id="update dashboard_uid and panel_id from existing annotations"
kafka | [2024-04-26 08:53:35,024] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-42 from NewReplica to OnlineReplica (state.change.logger)
policy-pap | [2024-04-26T08:53:35.876+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Discovered group coordinator kafka:9092 (id: 2147483646 rack: null)
grafana | logger=migrator t=2024-04-26T08:52:55.567919867Z level=info msg="Migration successfully executed" id="update dashboard_uid and panel_id from existing annotations" duration=490.932µs
policy-pap | [2024-04-26T08:53:35.880+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] (Re-)joining group
policy-db-migrator | INSERT INTO sequence(SEQ_NAME, SEQ_COUNT) VALUES('SEQ_GEN', (SELECT IFNULL(max(id),0) FROM pdpstatistics))
kafka | [2024-04-26 08:53:35,025] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-9 from NewReplica to OnlineReplica (state.change.logger)
grafana | logger=migrator t=2024-04-26T08:52:55.574683976Z level=info msg="Executing migration" id="create permission table"
policy-db-migrator | --------------
kafka | [2024-04-26 08:53:35,025] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-37 from NewReplica to OnlineReplica (state.change.logger)
policy-pap | [2024-04-26T08:53:35.892+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Request joining group due to: need to re-join with the given member-id: consumer-policy-pap-4-d5fc21b9-d736-4544-aa25-242859a1ee13
grafana | logger=migrator t=2024-04-26T08:52:55.5761053Z level=info msg="Migration successfully executed" id="create permission table" duration=1.420533ms
kafka | [2024-04-26 08:53:35,025] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-13 from NewReplica to OnlineReplica (state.change.logger)
policy-pap | [2024-04-26T08:53:35.893+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Request joining group due to: rebalance failed due to 'The group member needs to have a valid member id before actually entering a consumer group.' (MemberIdRequiredException)
policy-db-migrator |
grafana | logger=migrator t=2024-04-26T08:52:55.634677526Z level=info msg="Executing migration" id="add unique index permission.role_id"
kafka | [2024-04-26 08:53:35,025] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-30 from NewReplica to OnlineReplica (state.change.logger)
policy-pap | [2024-04-26T08:53:35.893+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] (Re-)joining group
policy-db-migrator |
grafana | logger=migrator t=2024-04-26T08:52:55.636896355Z level=info msg="Migration successfully executed" id="add unique index permission.role_id" duration=2.224728ms
kafka | [2024-04-26 08:53:35,025] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-35 from NewReplica to OnlineReplica (state.change.logger)
policy-pap | [2024-04-26T08:53:38.908+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-c2be2c80-205d-4227-951f-9a7c12c2d5ee-3, groupId=c2be2c80-205d-4227-951f-9a7c12c2d5ee] Successfully joined group with generation Generation{generationId=1, memberId='consumer-c2be2c80-205d-4227-951f-9a7c12c2d5ee-3-9d406649-fe3d-4ae6-beec-b09399ffbd74', protocol='range'}
policy-db-migrator | > upgrade 0100-jpatoscapolicy_targets.sql
grafana | logger=migrator t=2024-04-26T08:52:55.642972784Z level=info msg="Executing migration" id="add unique index role_id_action_scope"
kafka | [2024-04-26 08:53:35,025] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-39 from NewReplica to OnlineReplica (state.change.logger)
policy-pap | [2024-04-26T08:53:38.921+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Successfully joined group with generation Generation{generationId=1, memberId='consumer-policy-pap-4-d5fc21b9-d736-4544-aa25-242859a1ee13', protocol='range'}
policy-db-migrator | --------------
grafana | logger=migrator t=2024-04-26T08:52:55.644086024Z level=info msg="Migration successfully executed" id="add unique index role_id_action_scope" duration=1.11296ms
kafka | [2024-04-26 08:53:35,025] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-12 from NewReplica to OnlineReplica (state.change.logger)
policy-pap | [2024-04-26T08:53:38.922+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Finished assignment for group at generation 1: {consumer-policy-pap-4-d5fc21b9-d736-4544-aa25-242859a1ee13=Assignment(partitions=[policy-pdp-pap-0])}
policy-db-migrator | ALTER TABLE jpatoscapolicy_targets ADD COLUMN toscaPolicyName VARCHAR(120) NOT NULL, ADD COLUMN toscaPolicyVersion VARCHAR(20) NOT NULL, ADD CONSTRAINT PK_JPATOSCAPOLICY_TARGETS PRIMARY KEY (toscaPolicyName, toscaPolicyVersion)
grafana | logger=migrator t=2024-04-26T08:52:55.648718429Z level=info msg="Executing migration" id="create role table"
kafka | [2024-04-26 08:53:35,025] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-27 from NewReplica to OnlineReplica (state.change.logger)
policy-pap | [2024-04-26T08:53:38.922+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-c2be2c80-205d-4227-951f-9a7c12c2d5ee-3, groupId=c2be2c80-205d-4227-951f-9a7c12c2d5ee] Finished assignment for group at generation 1: {consumer-c2be2c80-205d-4227-951f-9a7c12c2d5ee-3-9d406649-fe3d-4ae6-beec-b09399ffbd74=Assignment(partitions=[policy-pdp-pap-0])}
policy-db-migrator | --------------
grafana | logger=migrator t=2024-04-26T08:52:55.650195865Z level=info msg="Migration successfully executed" id="create role table" duration=1.476936ms
kafka | [2024-04-26 08:53:35,025] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-45 from NewReplica to OnlineReplica (state.change.logger)
policy-pap | [2024-04-26T08:53:38.951+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Successfully synced group in generation Generation{generationId=1, memberId='consumer-policy-pap-4-d5fc21b9-d736-4544-aa25-242859a1ee13', protocol='range'}
policy-db-migrator |
grafana | logger=migrator t=2024-04-26T08:52:55.657160793Z level=info msg="Executing migration" id="add column display_name"
kafka | [2024-04-26 08:53:35,027] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-19 from NewReplica to OnlineReplica (state.change.logger)
policy-pap | [2024-04-26T08:53:38.951+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-c2be2c80-205d-4227-951f-9a7c12c2d5ee-3, groupId=c2be2c80-205d-4227-951f-9a7c12c2d5ee] Successfully synced group in generation Generation{generationId=1, memberId='consumer-c2be2c80-205d-4227-951f-9a7c12c2d5ee-3-9d406649-fe3d-4ae6-beec-b09399ffbd74', protocol='range'}
policy-db-migrator |
grafana | logger=migrator t=2024-04-26T08:52:55.667602826Z level=info msg="Migration successfully executed" id="add column display_name" duration=10.448693ms
kafka | [2024-04-26 08:53:35,027] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-49 from NewReplica to OnlineReplica (state.change.logger)
policy-pap | [2024-04-26T08:53:38.951+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-c2be2c80-205d-4227-951f-9a7c12c2d5ee-3, groupId=c2be2c80-205d-4227-951f-9a7c12c2d5ee] Notifying assignor about the new Assignment(partitions=[policy-pdp-pap-0])
policy-db-migrator | > upgrade 0110-jpatoscapolicytype_targets.sql
grafana | logger=migrator t=2024-04-26T08:52:55.672931553Z level=info msg="Executing migration" id="add column group_name"
kafka | [2024-04-26 08:53:35,027] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-40 from NewReplica to OnlineReplica (state.change.logger)
policy-pap | [2024-04-26T08:53:38.951+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Notifying assignor about the new Assignment(partitions=[policy-pdp-pap-0])
policy-db-migrator | --------------
grafana | logger=migrator t=2024-04-26T08:52:55.679696133Z level=info msg="Migration successfully executed" id="add column group_name" duration=6.76071ms
kafka | [2024-04-26 08:53:35,027] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-41 from NewReplica to OnlineReplica (state.change.logger)
policy-pap | [2024-04-26T08:53:38.958+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Adding newly assigned partitions: policy-pdp-pap-0
policy-db-migrator | ALTER TABLE jpatoscapolicytype_targets ADD COLUMN toscaPolicyTypeName VARCHAR(120) NOT NULL, ADD COLUMN toscaPolicyTypeVersion VARCHAR(20) NOT NULL, ADD CONSTRAINT PK_JPATOSCAPOLICYTYPE_TARGETS PRIMARY KEY (toscaPolicyTypeName, toscaPolicyTypeVersion)
grafana | logger=migrator t=2024-04-26T08:52:55.686382409Z level=info msg="Executing migration" id="add index role.org_id"
kafka | [2024-04-26 08:53:35,027] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-38 from NewReplica to OnlineReplica (state.change.logger)
policy-pap | [2024-04-26T08:53:38.959+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-c2be2c80-205d-4227-951f-9a7c12c2d5ee-3, groupId=c2be2c80-205d-4227-951f-9a7c12c2d5ee] Adding newly assigned partitions: policy-pdp-pap-0
policy-db-migrator | --------------
grafana | logger=migrator t=2024-04-26T08:52:55.688026482Z level=info msg="Migration successfully executed" id="add index role.org_id" duration=1.641132ms
kafka | [2024-04-26 08:53:35,027] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-8 from NewReplica to OnlineReplica (state.change.logger)
policy-db-migrator |
policy-pap | [2024-04-26T08:53:38.979+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Found no committed offset for partition policy-pdp-pap-0
grafana | logger=migrator t=2024-04-26T08:52:55.696268297Z level=info msg="Executing migration" id="add unique index role_org_id_name"
kafka | [2024-04-26 08:53:35,027] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-7 from NewReplica to OnlineReplica (state.change.logger)
policy-db-migrator |
policy-pap | [2024-04-26T08:53:38.981+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-c2be2c80-205d-4227-951f-9a7c12c2d5ee-3, groupId=c2be2c80-205d-4227-951f-9a7c12c2d5ee] Found no committed offset for partition policy-pdp-pap-0
grafana | logger=migrator t=2024-04-26T08:52:55.698228144Z level=info msg="Migration successfully executed" id="add unique index role_org_id_name" duration=1.959237ms
kafka | [2024-04-26 08:53:35,027] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-33 from NewReplica to OnlineReplica (state.change.logger)
policy-db-migrator | > upgrade 0120-toscatrigger.sql
policy-pap | [2024-04-26T08:53:38.999+00:00|INFO|SubscriptionState|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-c2be2c80-205d-4227-951f-9a7c12c2d5ee-3, groupId=c2be2c80-205d-4227-951f-9a7c12c2d5ee] Resetting offset for partition policy-pdp-pap-0 to position FetchPosition{offset=1, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[kafka:9092 (id: 1 rack: null)], epoch=0}}.
grafana | logger=migrator t=2024-04-26T08:52:55.704255061Z level=info msg="Executing migration" id="add index role_org_id_uid"
kafka | [2024-04-26 08:53:35,027] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-25 from NewReplica to OnlineReplica (state.change.logger)
policy-db-migrator | --------------
policy-pap | [2024-04-26T08:53:39.002+00:00|INFO|SubscriptionState|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Resetting offset for partition policy-pdp-pap-0 to position FetchPosition{offset=1, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[kafka:9092 (id: 1 rack: null)], epoch=0}}.
grafana | logger=migrator t=2024-04-26T08:52:55.705445685Z level=info msg="Migration successfully executed" id="add index role_org_id_uid" duration=1.191324ms
kafka | [2024-04-26 08:53:35,027] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-31 from NewReplica to OnlineReplica (state.change.logger)
policy-db-migrator | DROP TABLE IF EXISTS toscatrigger
policy-pap | [2024-04-26T08:53:40.349+00:00|INFO|[/policy/pap/v1]|http-nio-6969-exec-5] Initializing Spring DispatcherServlet 'dispatcherServlet'
grafana | logger=migrator t=2024-04-26T08:52:55.711726163Z level=info msg="Executing migration" id="create team role table"
kafka | [2024-04-26 08:53:35,027] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-23 from NewReplica to OnlineReplica (state.change.logger)
policy-db-migrator | --------------
policy-pap | [2024-04-26T08:53:40.349+00:00|INFO|DispatcherServlet|http-nio-6969-exec-5] Initializing Servlet 'dispatcherServlet'
grafana | logger=migrator t=2024-04-26T08:52:55.713051741Z level=info msg="Migration successfully executed" id="create team role table" duration=1.322548ms
kafka | [2024-04-26 08:53:35,027] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-10 from NewReplica to OnlineReplica (state.change.logger)
policy-db-migrator |
policy-pap | [2024-04-26T08:53:40.350+00:00|INFO|DispatcherServlet|http-nio-6969-exec-5] Completed initialization in 1 ms
grafana | logger=migrator t=2024-04-26T08:52:55.721484865Z level=info msg="Executing migration" id="add index team_role.org_id"
kafka | [2024-04-26 08:53:35,028] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-2 from NewReplica to OnlineReplica (state.change.logger)
policy-db-migrator |
policy-pap | [2024-04-26T08:53:56.158+00:00|INFO|OrderedServiceImpl|KAFKA-source-policy-pdp-pap] ***** OrderedServiceImpl implementers:
grafana | logger=migrator t=2024-04-26T08:52:55.723321977Z level=info msg="Migration successfully executed" id="add index team_role.org_id" duration=1.836042ms
kafka | [2024-04-26 08:53:35,028] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-17 from NewReplica to OnlineReplica (state.change.logger)
policy-db-migrator | > upgrade 0130-jpatoscapolicytype_triggers.sql
policy-pap | []
grafana | logger=migrator t=2024-04-26T08:52:55.734452191Z level=info msg="Executing migration" id="add unique index team_role_org_id_team_id_role_id"
kafka | [2024-04-26 08:53:35,028] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-4 from NewReplica to OnlineReplica (state.change.logger)
policy-pap | [2024-04-26T08:53:56.160+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
grafana | logger=migrator t=2024-04-26T08:52:55.736182668Z level=info msg="Migration successfully executed" id="add unique index team_role_org_id_team_id_role_id" duration=1.731917ms
policy-db-migrator | --------------
kafka | [2024-04-26 08:53:35,028] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-15 from NewReplica to OnlineReplica (state.change.logger)
grafana | logger=migrator t=2024-04-26T08:52:55.741994496Z level=info msg="Executing migration" id="add index team_role.team_id"
policy-db-migrator | ALTER TABLE jpatoscapolicytype_triggers MODIFY COLUMN triggers LONGBLOB
policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"9071b0a9-a991-4e9b-80ca-5faa4ea251c7","timestampMs":1714121636117,"name":"apex-dc1391b6-addb-4085-8ebc-9ab258599529","pdpGroup":"defaultGroup"}
kafka | [2024-04-26 08:53:35,028] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-26 from NewReplica to OnlineReplica (state.change.logger)
policy-db-migrator | --------------
policy-pap | [2024-04-26T08:53:56.160+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat]
grafana | logger=migrator t=2024-04-26T08:52:55.743176087Z level=info msg="Migration successfully executed" id="add index team_role.team_id" duration=1.181092ms
kafka | [2024-04-26 08:53:35,028] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-3 from NewReplica to OnlineReplica (state.change.logger)
policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"9071b0a9-a991-4e9b-80ca-5faa4ea251c7","timestampMs":1714121636117,"name":"apex-dc1391b6-addb-4085-8ebc-9ab258599529","pdpGroup":"defaultGroup"}
grafana | logger=migrator t=2024-04-26T08:52:55.750382507Z level=info msg="Executing migration" id="create user role table"
policy-db-migrator |
kafka | [2024-04-26 08:53:35,028] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger)
policy-pap | [2024-04-26T08:53:56.167+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-pdp-pap] no listeners for autonomous message of type PdpStatus
grafana | logger=migrator t=2024-04-26T08:52:55.75134146Z level=info msg="Migration successfully
executed" id="create user role table" duration=959.732µs policy-db-migrator | kafka | [2024-04-26 08:53:35,028] INFO [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 for 51 partitions (state.change.logger) policy-pap | [2024-04-26T08:53:56.238+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-dc1391b6-addb-4085-8ebc-9ab258599529 PdpUpdate starting grafana | logger=migrator t=2024-04-26T08:52:55.758159613Z level=info msg="Executing migration" id="add index user_role.org_id" policy-db-migrator | > upgrade 0140-toscaparameter.sql kafka | [2024-04-26 08:53:35,030] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='policy-pdp-pap', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-pap | [2024-04-26T08:53:56.238+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-dc1391b6-addb-4085-8ebc-9ab258599529 PdpUpdate starting listener grafana | logger=migrator t=2024-04-26T08:52:55.760048276Z level=info msg="Migration successfully executed" id="add index user_role.org_id" duration=1.884643ms policy-db-migrator | -------------- kafka | [2024-04-26 08:53:35,030] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=13, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-pap | [2024-04-26T08:53:56.239+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-dc1391b6-addb-4085-8ebc-9ab258599529 PdpUpdate starting timer grafana | logger=migrator t=2024-04-26T08:52:55.767092918Z level=info msg="Executing migration" id="add unique index 
user_role_org_id_user_id_role_id" policy-db-migrator | DROP TABLE IF EXISTS toscaparameter policy-pap | [2024-04-26T08:53:56.240+00:00|INFO|TimerManager|KAFKA-source-policy-heartbeat] update timer registered Timer [name=f5c6f62f-8e3e-4f65-89bc-4c3464718b45, expireMs=1714121666240] grafana | logger=migrator t=2024-04-26T08:52:55.768241459Z level=info msg="Migration successfully executed" id="add unique index user_role_org_id_user_id_role_id" duration=1.148001ms policy-db-migrator | -------------- kafka | [2024-04-26 08:53:35,030] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=46, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-pap | [2024-04-26T08:53:56.241+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-dc1391b6-addb-4085-8ebc-9ab258599529 PdpUpdate starting enqueue grafana | logger=migrator t=2024-04-26T08:52:55.777012908Z level=info msg="Executing migration" id="add index user_role.user_id" policy-db-migrator | kafka | [2024-04-26 08:53:35,030] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=9, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-pap | [2024-04-26T08:53:56.241+00:00|INFO|TimerManager|Thread-9] update timer waiting 29999ms Timer [name=f5c6f62f-8e3e-4f65-89bc-4c3464718b45, expireMs=1714121666240] grafana | logger=migrator t=2024-04-26T08:52:55.77886246Z level=info msg="Migration successfully executed" id="add index user_role.user_id" duration=1.849392ms policy-db-migrator | kafka | [2024-04-26 08:53:35,030] TRACE [Broker id=1] 
Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=42, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-pap | [2024-04-26T08:53:56.243+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap] grafana | logger=migrator t=2024-04-26T08:52:55.786577033Z level=info msg="Executing migration" id="create builtin role table" policy-db-migrator | > upgrade 0150-toscaproperty.sql kafka | [2024-04-26 08:53:35,030] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=21, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-pap | {"source":"pap-bcd81757-1fa3-469d-bcb3-86a23a71bea1","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"messageName":"PDP_UPDATE","requestId":"f5c6f62f-8e3e-4f65-89bc-4c3464718b45","timestampMs":1714121636220,"name":"apex-dc1391b6-addb-4085-8ebc-9ab258599529","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} grafana | logger=migrator t=2024-04-26T08:52:55.78743818Z level=info msg="Migration successfully executed" id="create builtin role table" duration=857.808µs policy-db-migrator | -------------- kafka | [2024-04-26 08:53:35,030] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=17, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-pap | [2024-04-26T08:53:56.243+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] 
apex-dc1391b6-addb-4085-8ebc-9ab258599529 PdpUpdate started grafana | logger=migrator t=2024-04-26T08:52:55.793924518Z level=info msg="Executing migration" id="add index builtin_role.role_id" policy-db-migrator | DROP TABLE IF EXISTS jpatoscaproperty_constraints kafka | [2024-04-26 08:53:35,030] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=30, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-pap | [2024-04-26T08:53:56.276+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] grafana | logger=migrator t=2024-04-26T08:52:55.795803831Z level=info msg="Migration successfully executed" id="add index builtin_role.role_id" duration=1.872773ms policy-db-migrator | -------------- kafka | [2024-04-26 08:53:35,030] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=26, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-pap | {"source":"pap-bcd81757-1fa3-469d-bcb3-86a23a71bea1","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"messageName":"PDP_UPDATE","requestId":"f5c6f62f-8e3e-4f65-89bc-4c3464718b45","timestampMs":1714121636220,"name":"apex-dc1391b6-addb-4085-8ebc-9ab258599529","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} grafana | logger=migrator t=2024-04-26T08:52:55.802619983Z level=info msg="Executing migration" id="add index builtin_role.name" policy-db-migrator | kafka | [2024-04-26 08:53:35,030] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=5, 
controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-pap | [2024-04-26T08:53:56.277+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_UPDATE grafana | logger=migrator t=2024-04-26T08:52:55.803982994Z level=info msg="Migration successfully executed" id="add index builtin_role.name" duration=1.363911ms policy-db-migrator | -------------- kafka | [2024-04-26 08:53:35,030] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=38, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-pap | [2024-04-26T08:53:56.280+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] grafana | logger=migrator t=2024-04-26T08:52:55.810291084Z level=info msg="Executing migration" id="Add column org_id to builtin_role table" policy-db-migrator | DROP TABLE IF EXISTS jpatoscaproperty_metadata kafka | [2024-04-26 08:53:35,030] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=1, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-pap | {"source":"pap-bcd81757-1fa3-469d-bcb3-86a23a71bea1","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"messageName":"PDP_UPDATE","requestId":"f5c6f62f-8e3e-4f65-89bc-4c3464718b45","timestampMs":1714121636220,"name":"apex-dc1391b6-addb-4085-8ebc-9ab258599529","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} grafana | 
logger=migrator t=2024-04-26T08:52:55.818640584Z level=info msg="Migration successfully executed" id="Add column org_id to builtin_role table" duration=8.34892ms policy-db-migrator | -------------- kafka | [2024-04-26 08:53:35,030] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=34, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-pap | [2024-04-26T08:53:56.281+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_UPDATE grafana | logger=migrator t=2024-04-26T08:52:55.826416409Z level=info msg="Executing migration" id="add index builtin_role.org_id" policy-db-migrator | kafka | [2024-04-26 08:53:35,030] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=16, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-pap | [2024-04-26T08:53:56.290+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] grafana | logger=migrator t=2024-04-26T08:52:55.827601361Z level=info msg="Migration successfully executed" id="add index builtin_role.org_id" duration=1.183202ms policy-db-migrator | -------------- kafka | [2024-04-26 08:53:35,030] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=45, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-pap | 
{"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"87746640-debc-4467-b40b-24ebe64c2235","timestampMs":1714121636281,"name":"apex-dc1391b6-addb-4085-8ebc-9ab258599529","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} grafana | logger=migrator t=2024-04-26T08:52:55.834266667Z level=info msg="Executing migration" id="add unique index builtin_role_org_id_role_id_role" policy-db-migrator | DROP TABLE IF EXISTS toscaproperty kafka | [2024-04-26 08:53:35,030] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=12, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-pap | [2024-04-26T08:53:56.291+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-pdp-pap] no listeners for autonomous message of type PdpStatus grafana | logger=migrator t=2024-04-26T08:52:55.836232634Z level=info msg="Migration successfully executed" id="add unique index builtin_role_org_id_role_id_role" duration=1.965207ms policy-db-migrator | -------------- kafka | [2024-04-26 08:53:35,030] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=41, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-pap | [2024-04-26T08:53:56.299+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] grafana | logger=migrator t=2024-04-26T08:52:55.844817144Z level=info msg="Executing migration" id="Remove unique index role_org_id_uid" policy-db-migrator | kafka | [2024-04-26 08:53:35,030] TRACE [Broker id=1] Received LeaderAndIsr 
request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=24, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"87746640-debc-4467-b40b-24ebe64c2235","timestampMs":1714121636281,"name":"apex-dc1391b6-addb-4085-8ebc-9ab258599529","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} grafana | logger=migrator t=2024-04-26T08:52:55.846569602Z level=info msg="Migration successfully executed" id="Remove unique index role_org_id_uid" duration=1.753678ms policy-db-migrator | kafka | [2024-04-26 08:53:35,030] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=20, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-pap | [2024-04-26T08:53:56.300+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] grafana | logger=migrator t=2024-04-26T08:52:55.850459204Z level=info msg="Executing migration" id="add unique index role.uid" kafka | [2024-04-26 08:53:35,030] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=49, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp status response message for 
PdpUpdate","policies":[],"response":{"responseTo":"f5c6f62f-8e3e-4f65-89bc-4c3464718b45","responseStatus":"SUCCESS","responseMessage":"Pdp update successful."},"messageName":"PDP_STATUS","requestId":"272d55b7-aa0f-4aa3-9199-323c887dccf3","timestampMs":1714121636284,"name":"apex-dc1391b6-addb-4085-8ebc-9ab258599529","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-db-migrator | > upgrade 0160-jpapolicyaudit_pk.sql grafana | logger=migrator t=2024-04-26T08:52:55.851876868Z level=info msg="Migration successfully executed" id="add unique index role.uid" duration=1.418184ms kafka | [2024-04-26 08:53:35,030] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-pap | [2024-04-26T08:53:56.300+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-dc1391b6-addb-4085-8ebc-9ab258599529 PdpUpdate stopping policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-26T08:52:55.858938111Z level=info msg="Executing migration" id="create seed assignment table" kafka | [2024-04-26 08:53:35,030] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=29, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-pap | [2024-04-26T08:53:56.301+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-dc1391b6-addb-4085-8ebc-9ab258599529 PdpUpdate stopping enqueue policy-db-migrator | ALTER TABLE jpapolicyaudit DROP PRIMARY KEY kafka | [2024-04-26 08:53:35,030] TRACE [Broker id=1] Received LeaderAndIsr request 
LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=25, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-pap | [2024-04-26T08:53:56.301+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-dc1391b6-addb-4085-8ebc-9ab258599529 PdpUpdate stopping timer policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-26T08:52:55.859856252Z level=info msg="Migration successfully executed" id="create seed assignment table" duration=916.941µs kafka | [2024-04-26 08:53:35,030] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=8, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-pap | [2024-04-26T08:53:56.301+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer cancelled Timer [name=f5c6f62f-8e3e-4f65-89bc-4c3464718b45, expireMs=1714121666240] policy-db-migrator | grafana | logger=migrator t=2024-04-26T08:52:55.864567121Z level=info msg="Executing migration" id="add unique index builtin_role_role_name" kafka | [2024-04-26 08:53:35,031] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=37, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-pap | [2024-04-26T08:53:56.301+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-dc1391b6-addb-4085-8ebc-9ab258599529 PdpUpdate stopping listener policy-db-migrator | -------------- grafana | 
logger=migrator t=2024-04-26T08:52:55.866257935Z level=info msg="Migration successfully executed" id="add unique index builtin_role_role_name" duration=1.689754ms kafka | [2024-04-26 08:53:35,031] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=4, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-pap | [2024-04-26T08:53:56.302+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-dc1391b6-addb-4085-8ebc-9ab258599529 PdpUpdate stopped policy-db-migrator | ALTER TABLE jpapolicyaudit ADD CONSTRAINT PK_JPAPOLICYAUDIT PRIMARY KEY (ID) grafana | logger=migrator t=2024-04-26T08:52:55.872671459Z level=info msg="Executing migration" id="add column hidden to role table" kafka | [2024-04-26 08:53:35,031] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=33, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-db-migrator | -------------- policy-pap | [2024-04-26T08:53:56.310+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] apex-dc1391b6-addb-4085-8ebc-9ab258599529 PdpUpdate successful grafana | logger=migrator t=2024-04-26T08:52:55.884489264Z level=info msg="Migration successfully executed" id="add column hidden to role table" duration=11.817225ms kafka | [2024-04-26 08:53:35,031] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=15, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) 
correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-db-migrator | policy-pap | [2024-04-26T08:53:56.310+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] apex-dc1391b6-addb-4085-8ebc-9ab258599529 start publishing next request grafana | logger=migrator t=2024-04-26T08:52:55.890537602Z level=info msg="Executing migration" id="permission kind migration" kafka | [2024-04-26 08:53:35,031] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=48, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-db-migrator | policy-pap | [2024-04-26T08:53:56.310+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-dc1391b6-addb-4085-8ebc-9ab258599529 PdpStateChange starting grafana | logger=migrator t=2024-04-26T08:52:55.899751391Z level=info msg="Migration successfully executed" id="permission kind migration" duration=9.209688ms kafka | [2024-04-26 08:53:35,031] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=11, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-db-migrator | > upgrade 0170-pdpstatistics_pk.sql policy-pap | [2024-04-26T08:53:56.310+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-dc1391b6-addb-4085-8ebc-9ab258599529 PdpStateChange starting listener grafana | logger=migrator t=2024-04-26T08:52:55.903805331Z level=info msg="Executing migration" id="permission attribute migration" kafka | [2024-04-26 08:53:35,031] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', 
partitionIndex=44, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-db-migrator | -------------- policy-pap | [2024-04-26T08:53:56.310+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-dc1391b6-addb-4085-8ebc-9ab258599529 PdpStateChange starting timer grafana | logger=migrator t=2024-04-26T08:52:55.909403589Z level=info msg="Migration successfully executed" id="permission attribute migration" duration=5.600228ms kafka | [2024-04-26 08:53:35,031] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=23, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-db-migrator | ALTER TABLE pdpstatistics DROP PRIMARY KEY policy-pap | [2024-04-26T08:53:56.311+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] state-change timer registered Timer [name=b3f5aba9-960f-4f53-9f6a-46c1f7b5d673, expireMs=1714121666311] grafana | logger=migrator t=2024-04-26T08:52:55.929736631Z level=info msg="Executing migration" id="permission identifier migration" kafka | [2024-04-26 08:53:35,031] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=19, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-db-migrator | -------------- policy-pap | [2024-04-26T08:53:56.311+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-dc1391b6-addb-4085-8ebc-9ab258599529 PdpStateChange starting enqueue grafana | 
logger=migrator t=2024-04-26T08:52:55.939582427Z level=info msg="Migration successfully executed" id="permission identifier migration" duration=9.845997ms
kafka | [2024-04-26 08:53:35,031] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=32, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
policy-db-migrator | 
policy-pap | [2024-04-26T08:53:56.311+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-dc1391b6-addb-4085-8ebc-9ab258599529 PdpStateChange started
grafana | logger=migrator t=2024-04-26T08:52:55.94822956Z level=info msg="Executing migration" id="add permission identifier index"
kafka | [2024-04-26 08:53:35,031] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=28, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
policy-db-migrator | --------------
policy-pap | [2024-04-26T08:53:56.311+00:00|INFO|TimerManager|Thread-10] state-change timer waiting 30000ms Timer [name=b3f5aba9-960f-4f53-9f6a-46c1f7b5d673, expireMs=1714121666311]
grafana | logger=migrator t=2024-04-26T08:52:55.949526418Z level=info msg="Migration successfully executed" id="add permission identifier index" duration=1.296538ms
kafka | [2024-04-26 08:53:35,031] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=7, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
policy-db-migrator | ALTER TABLE pdpstatistics ADD CONSTRAINT PK_PDPSTATISTICS PRIMARY KEY (ID)
policy-pap | [2024-04-26T08:53:56.314+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap]
grafana | logger=migrator t=2024-04-26T08:52:55.956749988Z level=info msg="Executing migration" id="add permission action scope role_id index"
kafka | [2024-04-26 08:53:35,031] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=40, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
policy-db-migrator | --------------
policy-pap | {"source":"pap-bcd81757-1fa3-469d-bcb3-86a23a71bea1","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"b3f5aba9-960f-4f53-9f6a-46c1f7b5d673","timestampMs":1714121636221,"name":"apex-dc1391b6-addb-4085-8ebc-9ab258599529","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
grafana | logger=migrator t=2024-04-26T08:52:55.958836281Z level=info msg="Migration successfully executed" id="add permission action scope role_id index" duration=2.086612ms
kafka | [2024-04-26 08:53:35,031] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=3, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
policy-db-migrator | 
policy-pap | [2024-04-26T08:53:56.376+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
grafana | logger=migrator t=2024-04-26T08:52:55.965048446Z level=info msg="Executing migration" id="remove permission role_id action scope index"
kafka | [2024-04-26 08:53:35,031] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=36, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
policy-db-migrator | 
policy-pap | {"source":"pap-bcd81757-1fa3-469d-bcb3-86a23a71bea1","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"b3f5aba9-960f-4f53-9f6a-46c1f7b5d673","timestampMs":1714121636221,"name":"apex-dc1391b6-addb-4085-8ebc-9ab258599529","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
grafana | logger=migrator t=2024-04-26T08:52:55.966588455Z level=info msg="Migration successfully executed" id="remove permission role_id action scope index" duration=1.539448ms
kafka | [2024-04-26 08:53:35,031] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=47, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
policy-db-migrator | > upgrade 0180-jpatoscanodetemplate_metadata.sql
policy-pap | [2024-04-26T08:53:56.376+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATE_CHANGE
grafana | logger=migrator t=2024-04-26T08:52:55.971999095Z level=info msg="Executing migration" id="create query_history table v1"
kafka | [2024-04-26 08:53:35,031] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=14, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
policy-db-migrator | --------------
policy-pap | [2024-04-26T08:53:56.381+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
grafana | logger=migrator t=2024-04-26T08:52:55.973464959Z level=info msg="Migration successfully executed" id="create query_history table v1" duration=1.463065ms
kafka | [2024-04-26 08:53:35,031] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=43, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
policy-db-migrator | ALTER TABLE jpatoscanodetemplate_metadata MODIFY COLUMN METADATA LONGTEXT
policy-pap | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpStateChange","policies":[],"response":{"responseTo":"b3f5aba9-960f-4f53-9f6a-46c1f7b5d673","responseStatus":"SUCCESS","responseMessage":"State changed to active. No policies found."},"messageName":"PDP_STATUS","requestId":"278d5e40-e6f4-489b-b28a-d250a3fd93a9","timestampMs":1714121636326,"name":"apex-dc1391b6-addb-4085-8ebc-9ab258599529","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
grafana | logger=migrator t=2024-04-26T08:52:55.979869464Z level=info msg="Executing migration" id="add index query_history.org_id-created_by-datasource_uid"
kafka | [2024-04-26 08:53:35,031] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=10, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
policy-db-migrator | --------------
policy-pap | [2024-04-26T08:53:56.398+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-dc1391b6-addb-4085-8ebc-9ab258599529 PdpStateChange stopping
grafana | logger=migrator t=2024-04-26T08:52:55.981164591Z level=info msg="Migration successfully executed" id="add index query_history.org_id-created_by-datasource_uid" duration=1.298787ms
kafka | [2024-04-26 08:53:35,031] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=22, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
policy-db-migrator | 
policy-pap | [2024-04-26T08:53:56.398+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-dc1391b6-addb-4085-8ebc-9ab258599529 PdpStateChange stopping enqueue
grafana | logger=migrator t=2024-04-26T08:52:55.986872034Z level=info msg="Executing migration" id="alter table query_history alter column created_by type to bigint"
policy-db-migrator | 
policy-pap | [2024-04-26T08:53:56.398+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-dc1391b6-addb-4085-8ebc-9ab258599529 PdpStateChange stopping timer
grafana | logger=migrator t=2024-04-26T08:52:55.986981589Z level=info msg="Migration successfully executed" id="alter table query_history alter column created_by type to bigint" duration=106.294µs
kafka | [2024-04-26 08:53:35,031] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=18, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
policy-db-migrator | > upgrade 0100-upgrade.sql
policy-pap | [2024-04-26T08:53:56.398+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] state-change timer cancelled Timer [name=b3f5aba9-960f-4f53-9f6a-46c1f7b5d673, expireMs=1714121666311]
grafana | logger=migrator t=2024-04-26T08:52:55.991568092Z level=info msg="Executing migration" id="rbac disabled migrator"
kafka | [2024-04-26 08:53:35,031] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=31, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
policy-db-migrator | --------------
policy-pap | [2024-04-26T08:53:56.398+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-dc1391b6-addb-4085-8ebc-9ab258599529 PdpStateChange stopping listener
grafana | logger=migrator t=2024-04-26T08:52:55.991624885Z level=info msg="Migration successfully executed" id="rbac disabled migrator" duration=127.626µs
kafka | [2024-04-26 08:53:35,031] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=27, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
policy-db-migrator | select 'upgrade to 1100 completed' as msg
policy-pap | [2024-04-26T08:53:56.398+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-dc1391b6-addb-4085-8ebc-9ab258599529 PdpStateChange stopped
grafana | logger=migrator t=2024-04-26T08:52:55.99917248Z level=info msg="Executing migration" id="teams permissions migration"
kafka | [2024-04-26 08:53:35,031] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=39, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
policy-db-migrator | --------------
policy-pap | [2024-04-26T08:53:56.398+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] apex-dc1391b6-addb-4085-8ebc-9ab258599529 PdpStateChange successful
grafana | logger=migrator t=2024-04-26T08:52:56.000109711Z level=info msg="Migration successfully executed" id="teams permissions migration" duration=937.461µs
kafka | [2024-04-26 08:53:35,031] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=6, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
policy-db-migrator | 
policy-pap | [2024-04-26T08:53:56.398+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] apex-dc1391b6-addb-4085-8ebc-9ab258599529 start publishing next request
grafana | logger=migrator t=2024-04-26T08:52:56.006442652Z level=info msg="Executing migration" id="dashboard permissions"
kafka | [2024-04-26 08:53:35,031] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=35, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
policy-db-migrator | msg
policy-pap | [2024-04-26T08:53:56.398+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-dc1391b6-addb-4085-8ebc-9ab258599529 PdpUpdate starting
grafana | logger=migrator t=2024-04-26T08:52:56.007427256Z level=info msg="Migration successfully executed" id="dashboard permissions" duration=985.465µs
kafka | [2024-04-26 08:53:35,031] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=2, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
policy-db-migrator | upgrade to 1100 completed
policy-pap | [2024-04-26T08:53:56.398+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-dc1391b6-addb-4085-8ebc-9ab258599529 PdpUpdate starting listener
grafana | logger=migrator t=2024-04-26T08:52:56.013136239Z level=info msg="Executing migration" id="dashboard permissions uid scopes"
kafka | [2024-04-26 08:53:35,032] DEBUG [Controller id=1] Read current producerId block ProducerIdsBlock(assignedBrokerId=1, firstProducerId=0, size=1000), Zk path version 1 (kafka.controller.KafkaController)
policy-db-migrator | 
policy-pap | [2024-04-26T08:53:56.398+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-dc1391b6-addb-4085-8ebc-9ab258599529 PdpUpdate starting timer
grafana | logger=migrator t=2024-04-26T08:52:56.01429774Z level=info msg="Migration successfully executed" id="dashboard permissions uid scopes" duration=1.162171ms
kafka | [2024-04-26 08:53:35,035] INFO [Controller id=1] Acquired new producerId block ProducerIdsBlock(assignedBrokerId=1, firstProducerId=1000, size=1000) by writing to Zk with path version 2 (kafka.controller.KafkaController)
policy-db-migrator | > upgrade 0100-jpapolicyaudit_renameuser.sql
policy-pap | [2024-04-26T08:53:56.398+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer registered Timer [name=d504a7a0-8924-4423-91ec-dde2e6acb62c, expireMs=1714121666398]
grafana | logger=migrator t=2024-04-26T08:52:56.019336964Z level=info msg="Executing migration" id="drop managed folder create actions"
kafka | [2024-04-26 08:53:35,070] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-3 (state.change.logger)
policy-db-migrator | --------------
policy-pap | [2024-04-26T08:53:56.398+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-dc1391b6-addb-4085-8ebc-9ab258599529 PdpUpdate starting enqueue
grafana | logger=migrator t=2024-04-26T08:52:56.019807754Z level=info msg="Migration successfully executed" id="drop managed folder create actions" duration=470.49µs
kafka | [2024-04-26 08:53:35,071] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-18 (state.change.logger)
policy-db-migrator | ALTER TABLE jpapolicyaudit RENAME COLUMN USER TO USERNAME
policy-pap | [2024-04-26T08:53:56.398+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-dc1391b6-addb-4085-8ebc-9ab258599529 PdpUpdate started
grafana | logger=migrator t=2024-04-26T08:52:56.024202199Z level=info msg="Executing migration" id="alerting notification permissions"
kafka | [2024-04-26 08:53:35,071] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-41 (state.change.logger)
policy-db-migrator | --------------
policy-pap | [2024-04-26T08:53:56.398+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap]
grafana | logger=migrator t=2024-04-26T08:52:56.024765265Z level=info msg="Migration successfully executed" id="alerting notification permissions" duration=562.905µs
kafka | [2024-04-26 08:53:35,071] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-10 (state.change.logger)
policy-db-migrator | 
policy-pap | {"source":"pap-bcd81757-1fa3-469d-bcb3-86a23a71bea1","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"d504a7a0-8924-4423-91ec-dde2e6acb62c","timestampMs":1714121636366,"name":"apex-dc1391b6-addb-4085-8ebc-9ab258599529","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
grafana | logger=migrator t=2024-04-26T08:52:56.030558051Z level=info msg="Executing migration" id="create query_history_star table v1"
kafka | [2024-04-26 08:53:35,071] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-33 (state.change.logger)
policy-db-migrator | 
policy-pap | [2024-04-26T08:53:56.402+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat]
grafana | logger=migrator t=2024-04-26T08:52:56.031993834Z level=info msg="Migration successfully executed" id="create query_history_star table v1" duration=1.435083ms
kafka | [2024-04-26 08:53:35,071] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-48 (state.change.logger)
policy-db-migrator | > upgrade 0110-idx_tsidx1.sql
policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"f5c6f62f-8e3e-4f65-89bc-4c3464718b45","responseStatus":"SUCCESS","responseMessage":"Pdp update successful."},"messageName":"PDP_STATUS","requestId":"272d55b7-aa0f-4aa3-9199-323c887dccf3","timestampMs":1714121636284,"name":"apex-dc1391b6-addb-4085-8ebc-9ab258599529","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
grafana | logger=migrator t=2024-04-26T08:52:56.036648491Z level=info msg="Executing migration" id="add index query_history.user_id-query_uid"
kafka | [2024-04-26 08:53:35,071] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-19 (state.change.logger)
policy-db-migrator | --------------
policy-pap | [2024-04-26T08:53:56.402+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id f5c6f62f-8e3e-4f65-89bc-4c3464718b45
grafana | logger=migrator t=2024-04-26T08:52:56.037777411Z level=info msg="Migration successfully executed" id="add index query_history.user_id-query_uid" duration=1.12896ms
kafka | [2024-04-26 08:53:35,071] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-34 (state.change.logger)
policy-db-migrator | DROP INDEX IDX_TSIDX1 ON pdpstatistics
policy-pap | [2024-04-26T08:53:56.405+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat]
grafana | logger=migrator t=2024-04-26T08:52:56.042343804Z level=info msg="Executing migration" id="add column org_id in query_history_star"
kafka | [2024-04-26 08:53:35,071] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-4 (state.change.logger)
policy-db-migrator | --------------
policy-pap | {"source":"pap-bcd81757-1fa3-469d-bcb3-86a23a71bea1","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"b3f5aba9-960f-4f53-9f6a-46c1f7b5d673","timestampMs":1714121636221,"name":"apex-dc1391b6-addb-4085-8ebc-9ab258599529","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
grafana | logger=migrator t=2024-04-26T08:52:56.050417752Z level=info msg="Migration successfully executed" id="add column org_id in query_history_star" duration=8.072778ms
kafka | [2024-04-26 08:53:35,071] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-11 (state.change.logger)
policy-db-migrator | 
policy-pap | [2024-04-26T08:53:56.405+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_STATE_CHANGE
grafana | logger=migrator t=2024-04-26T08:52:56.057523486Z level=info msg="Executing migration" id="alter table query_history_star_mig column user_id type to bigint"
kafka | [2024-04-26 08:53:35,071] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-26 (state.change.logger)
policy-db-migrator | --------------
policy-pap | [2024-04-26T08:53:56.405+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat]
grafana | logger=migrator t=2024-04-26T08:52:56.057586419Z level=info msg="Migration successfully executed" id="alter table query_history_star_mig column user_id type to bigint" duration=63.543µs
kafka | [2024-04-26 08:53:35,071] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-49 (state.change.logger)
policy-db-migrator | CREATE INDEX IDXTSIDX1 ON pdpstatistics(timeStamp, name, version)
policy-pap | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpStateChange","policies":[],"response":{"responseTo":"b3f5aba9-960f-4f53-9f6a-46c1f7b5d673","responseStatus":"SUCCESS","responseMessage":"State changed to active. No policies found."},"messageName":"PDP_STATUS","requestId":"278d5e40-e6f4-489b-b28a-d250a3fd93a9","timestampMs":1714121636326,"name":"apex-dc1391b6-addb-4085-8ebc-9ab258599529","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
grafana | logger=migrator t=2024-04-26T08:52:56.066978116Z level=info msg="Executing migration" id="create correlation table v1"
kafka | [2024-04-26 08:53:35,071] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-39 (state.change.logger)
policy-db-migrator | --------------
policy-pap | [2024-04-26T08:53:56.406+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
grafana | logger=migrator t=2024-04-26T08:52:56.068230941Z level=info msg="Migration successfully executed" id="create correlation table v1" duration=1.252285ms
kafka | [2024-04-26 08:53:35,071] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-9 (state.change.logger)
policy-db-migrator | 
policy-pap | {"source":"pap-bcd81757-1fa3-469d-bcb3-86a23a71bea1","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"d504a7a0-8924-4423-91ec-dde2e6acb62c","timestampMs":1714121636366,"name":"apex-dc1391b6-addb-4085-8ebc-9ab258599529","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
grafana | logger=migrator t=2024-04-26T08:52:56.075793447Z level=info msg="Executing migration" id="add index correlations.uid"
kafka | [2024-04-26 08:53:35,071] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-24 (state.change.logger)
policy-db-migrator | 
grafana | logger=migrator t=2024-04-26T08:52:56.076909056Z level=info msg="Migration successfully executed" id="add index correlations.uid" duration=1.115839ms
kafka | [2024-04-26 08:53:35,071] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-31 (state.change.logger)
policy-pap | [2024-04-26T08:53:56.406+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id b3f5aba9-960f-4f53-9f6a-46c1f7b5d673
policy-db-migrator | > upgrade 0120-audit_sequence.sql
grafana | logger=migrator t=2024-04-26T08:52:56.082065265Z level=info msg="Executing migration" id="add index correlations.source_uid"
kafka | [2024-04-26 08:53:35,071] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-46 (state.change.logger)
policy-pap | [2024-04-26T08:53:56.406+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_UPDATE
policy-db-migrator | --------------
grafana | logger=migrator t=2024-04-26T08:52:56.083831304Z level=info msg="Migration successfully executed" id="add index correlations.source_uid" duration=1.765608ms
kafka | [2024-04-26 08:53:35,071] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-1 (state.change.logger)
policy-pap | [2024-04-26T08:53:56.409+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat]
policy-db-migrator | CREATE TABLE IF NOT EXISTS audit_sequence (SEQ_NAME VARCHAR(50) NOT NULL, SEQ_COUNT DECIMAL(38) DEFAULT NULL, PRIMARY KEY PK_SEQUENCE (SEQ_NAME))
grafana | logger=migrator t=2024-04-26T08:52:56.088906759Z level=info msg="Executing migration" id="add correlation config column"
kafka | [2024-04-26 08:53:35,071] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-16 (state.change.logger)
policy-pap | {"source":"pap-bcd81757-1fa3-469d-bcb3-86a23a71bea1","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"d504a7a0-8924-4423-91ec-dde2e6acb62c","timestampMs":1714121636366,"name":"apex-dc1391b6-addb-4085-8ebc-9ab258599529","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
policy-db-migrator | --------------
grafana | logger=migrator t=2024-04-26T08:52:56.097697309Z level=info msg="Migration successfully executed" id="add correlation config column" duration=8.791179ms
kafka | [2024-04-26 08:53:35,071] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-2 (state.change.logger)
policy-pap | [2024-04-26T08:53:56.410+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_UPDATE
policy-db-migrator | 
grafana | logger=migrator t=2024-04-26T08:52:56.102331333Z level=info msg="Executing migration" id="drop index IDX_correlation_uid - v1"
kafka | [2024-04-26 08:53:35,071] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-25 (state.change.logger)
policy-pap | [2024-04-26T08:53:56.415+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat]
policy-db-migrator | --------------
grafana | logger=migrator t=2024-04-26T08:52:56.103126279Z level=info msg="Migration successfully executed" id="drop index IDX_correlation_uid - v1" duration=792.596µs
kafka | [2024-04-26 08:53:35,071] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-40 (state.change.logger)
policy-pap | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"d504a7a0-8924-4423-91ec-dde2e6acb62c","responseStatus":"SUCCESS","responseMessage":"Pdp already updated"},"messageName":"PDP_STATUS","requestId":"c39081f3-1dfd-48f5-8850-b37b33cc4fde","timestampMs":1714121636408,"name":"apex-dc1391b6-addb-4085-8ebc-9ab258599529","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
policy-db-migrator | INSERT INTO audit_sequence(SEQ_NAME, SEQ_COUNT) VALUES('SEQ_GEN', (SELECT IFNULL(max(id),0) FROM jpapolicyaudit))
grafana | logger=migrator t=2024-04-26T08:52:56.112149419Z level=info msg="Executing migration" id="drop index IDX_correlation_source_uid - v1"
policy-pap | [2024-04-26T08:53:56.415+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id d504a7a0-8924-4423-91ec-dde2e6acb62c
kafka | [2024-04-26 08:53:35,071] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-47 (state.change.logger)
policy-db-migrator | --------------
grafana | logger=migrator t=2024-04-26T08:52:56.113868996Z level=info msg="Migration successfully executed" id="drop index IDX_correlation_source_uid - v1" duration=1.719257ms
policy-pap | [2024-04-26T08:53:56.417+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
kafka | [2024-04-26 08:53:35,071] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-17 (state.change.logger)
policy-db-migrator | 
grafana | logger=migrator t=2024-04-26T08:52:56.118371695Z level=info msg="Executing migration" id="Rename table correlation to correlation_tmp_qwerty - v1"
policy-pap | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"d504a7a0-8924-4423-91ec-dde2e6acb62c","responseStatus":"SUCCESS","responseMessage":"Pdp already updated"},"messageName":"PDP_STATUS","requestId":"c39081f3-1dfd-48f5-8850-b37b33cc4fde","timestampMs":1714121636408,"name":"apex-dc1391b6-addb-4085-8ebc-9ab258599529","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
kafka | [2024-04-26 08:53:35,071] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-32 (state.change.logger)
policy-db-migrator | 
grafana | logger=migrator t=2024-04-26T08:52:56.141081082Z level=info msg="Migration successfully executed" id="Rename table correlation to correlation_tmp_qwerty - v1" duration=22.710137ms
policy-pap | [2024-04-26T08:53:56.417+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-dc1391b6-addb-4085-8ebc-9ab258599529 PdpUpdate stopping
kafka | [2024-04-26 08:53:35,071] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-37 (state.change.logger)
policy-db-migrator | > upgrade 0130-statistics_sequence.sql
grafana | logger=migrator t=2024-04-26T08:52:56.146751383Z level=info msg="Executing migration" id="create correlation v2"
policy-pap | [2024-04-26T08:53:56.417+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-dc1391b6-addb-4085-8ebc-9ab258599529 PdpUpdate stopping enqueue
kafka | [2024-04-26 08:53:35,071] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-7 (state.change.logger)
policy-db-migrator | --------------
grafana | logger=migrator t=2024-04-26T08:52:56.147627643Z level=info msg="Migration successfully executed" id="create correlation v2" duration=875.26µs
policy-pap | [2024-04-26T08:53:56.417+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-dc1391b6-addb-4085-8ebc-9ab258599529 PdpUpdate stopping timer
kafka | [2024-04-26 08:53:35,071] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-22 (state.change.logger)
policy-db-migrator | CREATE TABLE IF NOT EXISTS statistics_sequence (SEQ_NAME VARCHAR(50) NOT NULL, SEQ_COUNT DECIMAL(38) DEFAULT NULL, PRIMARY KEY PK_SEQUENCE (SEQ_NAME))
grafana | logger=migrator t=2024-04-26T08:52:56.157625496Z level=info msg="Executing migration" id="create index IDX_correlation_uid - v2"
policy-pap | [2024-04-26T08:53:56.417+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer cancelled Timer [name=d504a7a0-8924-4423-91ec-dde2e6acb62c, expireMs=1714121666398]
kafka | [2024-04-26 08:53:35,071] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-29 (state.change.logger)
policy-db-migrator | --------------
grafana | logger=migrator t=2024-04-26T08:52:56.159420466Z level=info msg="Migration successfully executed" id="create index IDX_correlation_uid - v2" duration=1.79439ms
policy-pap | [2024-04-26T08:53:56.417+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-dc1391b6-addb-4085-8ebc-9ab258599529 PdpUpdate stopping listener
kafka | [2024-04-26 08:53:35,071] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-44 (state.change.logger)
policy-db-migrator | 
grafana | logger=migrator t=2024-04-26T08:52:56.162887659Z level=info msg="Executing migration" id="create index IDX_correlation_source_uid - v2"
policy-pap | [2024-04-26T08:53:56.417+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-dc1391b6-addb-4085-8ebc-9ab258599529 PdpUpdate stopped
kafka | [2024-04-26 08:53:35,071] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-14 (state.change.logger)
policy-db-migrator | --------------
grafana | logger=migrator t=2024-04-26T08:52:56.165647741Z level=info msg="Migration successfully executed" id="create index IDX_correlation_source_uid - v2" duration=2.751922ms
policy-pap | [2024-04-26T08:53:56.421+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] apex-dc1391b6-addb-4085-8ebc-9ab258599529 PdpUpdate successful
kafka | [2024-04-26 08:53:35,071] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-23 (state.change.logger)
policy-db-migrator | INSERT INTO statistics_sequence(SEQ_NAME, SEQ_COUNT) VALUES('SEQ_GEN', (SELECT IFNULL(max(id),0) FROM pdpstatistics))
grafana | logger=migrator t=2024-04-26T08:52:56.173231448Z level=info msg="Executing migration" id="create index IDX_correlation_org_id - v2"
policy-pap | [2024-04-26T08:53:56.421+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] apex-dc1391b6-addb-4085-8ebc-9ab258599529 has no more requests
kafka | [2024-04-26 08:53:35,071] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-38 (state.change.logger)
policy-db-migrator | --------------
grafana | logger=migrator t=2024-04-26T08:52:56.17643496Z level=info msg="Migration successfully executed" id="create index IDX_correlation_org_id - v2" duration=3.208992ms
policy-pap | [2024-04-26T08:54:00.804+00:00|WARN|NonInjectionManager|pool-2-thread-1] Falling back to injection-less client.
kafka | [2024-04-26 08:53:35,071] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-8 (state.change.logger)
policy-db-migrator | 
grafana | logger=migrator t=2024-04-26T08:52:56.182707948Z level=info msg="Executing migration" id="copy correlation v1 to v2"
policy-pap | [2024-04-26T08:54:00.852+00:00|INFO|GsonMessageBodyHandler|pool-2-thread-1] Using GSON for REST calls
kafka | [2024-04-26 08:53:35,071] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition policy-pdp-pap-0 (state.change.logger)
policy-db-migrator | --------------
grafana | logger=migrator t=2024-04-26T08:52:56.182996691Z level=info msg="Migration successfully executed" id="copy correlation v1 to v2" duration=290.322µs
policy-pap | [2024-04-26T08:54:00.862+00:00|INFO|GsonMessageBodyHandler|pool-2-thread-1] Using GSON for REST calls
kafka | [2024-04-26 08:53:35,071] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-45 (state.change.logger)
policy-db-migrator | TRUNCATE TABLE sequence
grafana | logger=migrator t=2024-04-26T08:52:56.190913462Z level=info msg="Executing migration" id="drop correlation_tmp_qwerty"
policy-pap | [2024-04-26T08:54:00.863+00:00|INFO|GsonMessageBodyHandler|pool-2-thread-1] Using GSON for REST calls
kafka | [2024-04-26 08:53:35,071] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-15 (state.change.logger)
policy-db-migrator | --------------
grafana | logger=migrator t=2024-04-26T08:52:56.192516073Z level=info msg="Migration successfully executed" id="drop correlation_tmp_qwerty" duration=1.608112ms
policy-pap | [2024-04-26T08:54:01.273+00:00|INFO|SessionData|http-nio-6969-exec-7] unknown group testGroup
kafka | [2024-04-26 08:53:35,071] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-30 (state.change.logger)
policy-db-migrator | 
grafana | logger=migrator t=2024-04-26T08:52:56.196659517Z level=info msg="Executing migration" id="add provisioning column"
policy-pap | [2024-04-26T08:54:01.794+00:00|INFO|SessionData|http-nio-6969-exec-7] create cached group testGroup
kafka | [2024-04-26 08:53:35,071] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-0 (state.change.logger)
policy-db-migrator | 
grafana | logger=migrator t=2024-04-26T08:52:56.205368083Z level=info msg="Migration successfully executed" id="add provisioning column" duration=8.708236ms
policy-pap | [2024-04-26T08:54:01.795+00:00|INFO|SessionData|http-nio-6969-exec-7] creating DB group testGroup
kafka | [2024-04-26 08:53:35,071] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-35 (state.change.logger)
policy-db-migrator | > upgrade 0100-pdpstatistics.sql
grafana | logger=migrator t=2024-04-26T08:52:56.211826969Z level=info msg="Executing migration" id="create entity_events table"
policy-pap | [2024-04-26T08:54:02.296+00:00|INFO|SessionData|http-nio-6969-exec-1] cache group testGroup
kafka | [2024-04-26 08:53:35,071] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-5 (state.change.logger)
policy-db-migrator | --------------
grafana | logger=migrator t=2024-04-26T08:52:56.212658636Z level=info msg="Migration successfully executed" id="create entity_events table"
duration=831.767µs policy-pap | [2024-04-26T08:54:02.482+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-1] Registering a deploy for policy onap.restart.tca 1.0.0 kafka | [2024-04-26 08:53:35,071] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-20 (state.change.logger) policy-db-migrator | DROP INDEX IDXTSIDX1 ON pdpstatistics grafana | logger=migrator t=2024-04-26T08:52:56.219941389Z level=info msg="Executing migration" id="create dashboard public config v1" policy-pap | [2024-04-26T08:54:02.581+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-1] Registering a deploy for policy operational.apex.decisionMaker 1.0.0 kafka | [2024-04-26 08:53:35,071] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-27 (state.change.logger) policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-26T08:52:56.221157923Z level=info msg="Migration successfully executed" id="create dashboard public config v1" duration=1.217695ms policy-pap | [2024-04-26T08:54:02.581+00:00|INFO|SessionData|http-nio-6969-exec-1] update cached group testGroup kafka | [2024-04-26 08:53:35,071] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-42 (state.change.logger) policy-db-migrator | grafana | logger=migrator t=2024-04-26T08:52:56.230742038Z level=info msg="Executing migration" id="drop index UQE_dashboard_public_config_uid - v1" policy-pap | [2024-04-26T08:54:02.582+00:00|INFO|SessionData|http-nio-6969-exec-1] updating DB group testGroup kafka | [2024-04-26 08:53:35,071] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-12 
(state.change.logger) policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-26T08:52:56.231444349Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop index UQE_dashboard_public_config_uid - v1" policy-pap | [2024-04-26T08:54:02.594+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-1] sending audit records to database: [PolicyAudit(auditId=null, pdpGroup=testGroup, pdpType=pdpTypeA, policy=onap.restart.tca 1.0.0, action=DEPLOYMENT, timestamp=2024-04-26T08:54:02Z, user=policyadmin), PolicyAudit(auditId=null, pdpGroup=testGroup, pdpType=pdpTypeC, policy=operational.apex.decisionMaker 1.0.0, action=DEPLOYMENT, timestamp=2024-04-26T08:54:02Z, user=policyadmin)] kafka | [2024-04-26 08:53:35,071] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-21 (state.change.logger) policy-db-migrator | DROP TABLE pdpstatistics grafana | logger=migrator t=2024-04-26T08:52:56.239547838Z level=info msg="Executing migration" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v1" policy-pap | [2024-04-26T08:54:03.285+00:00|INFO|SessionData|http-nio-6969-exec-3] cache group testGroup kafka | [2024-04-26 08:53:35,071] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-36 (state.change.logger) policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-26T08:52:56.240037611Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v1" policy-pap | [2024-04-26T08:54:03.286+00:00|INFO|PdpGroupDeleteProvider|http-nio-6969-exec-3] remove policy onap.restart.tca 1.0.0 from subgroup testGroup pdpTypeA count=0 kafka | [2024-04-26 08:53:35,071] TRACE [Broker id=1] Handling LeaderAndIsr request 
correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-6 (state.change.logger) policy-db-migrator | grafana | logger=migrator t=2024-04-26T08:52:56.244693726Z level=info msg="Executing migration" id="Drop old dashboard public config table" policy-pap | [2024-04-26T08:54:03.286+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-3] Registering an undeploy for policy onap.restart.tca 1.0.0 kafka | [2024-04-26 08:53:35,071] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-43 (state.change.logger) policy-db-migrator | grafana | logger=migrator t=2024-04-26T08:52:56.245586757Z level=info msg="Migration successfully executed" id="Drop old dashboard public config table" duration=892.401µs policy-pap | [2024-04-26T08:54:03.286+00:00|INFO|SessionData|http-nio-6969-exec-3] update cached group testGroup kafka | [2024-04-26 08:53:35,071] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-13 (state.change.logger) policy-db-migrator | > upgrade 0110-jpapdpstatistics_enginestats.sql policy-pap | [2024-04-26T08:54:03.287+00:00|INFO|SessionData|http-nio-6969-exec-3] updating DB group testGroup kafka | [2024-04-26 08:53:35,071] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-28 (state.change.logger) grafana | logger=migrator t=2024-04-26T08:52:56.249578203Z level=info msg="Executing migration" id="recreate dashboard public config v1" policy-db-migrator | -------------- kafka | [2024-04-26 08:53:35,073] INFO [ReplicaFetcherManager on broker 1] Removed fetcher for partitions HashSet(__consumer_offsets-22, __consumer_offsets-30, __consumer_offsets-25, __consumer_offsets-35, __consumer_offsets-38, 
__consumer_offsets-13, __consumer_offsets-8, __consumer_offsets-21, __consumer_offsets-4, __consumer_offsets-27, __consumer_offsets-7, __consumer_offsets-9, __consumer_offsets-46, __consumer_offsets-41, __consumer_offsets-33, __consumer_offsets-23, __consumer_offsets-49, __consumer_offsets-47, __consumer_offsets-16, __consumer_offsets-28, __consumer_offsets-31, __consumer_offsets-36, __consumer_offsets-42, __consumer_offsets-3, __consumer_offsets-18, __consumer_offsets-37, policy-pdp-pap-0, __consumer_offsets-15, __consumer_offsets-24, __consumer_offsets-17, __consumer_offsets-48, __consumer_offsets-19, __consumer_offsets-11, __consumer_offsets-2, __consumer_offsets-43, __consumer_offsets-6, __consumer_offsets-14, __consumer_offsets-20, __consumer_offsets-0, __consumer_offsets-44, __consumer_offsets-39, __consumer_offsets-12, __consumer_offsets-45, __consumer_offsets-1, __consumer_offsets-5, __consumer_offsets-26, __consumer_offsets-29, __consumer_offsets-34, __consumer_offsets-10, __consumer_offsets-32, __consumer_offsets-40) (kafka.server.ReplicaFetcherManager) grafana | logger=migrator t=2024-04-26T08:52:56.251398534Z level=info msg="Migration successfully executed" id="recreate dashboard public config v1" duration=1.824061ms policy-pap | [2024-04-26T08:54:03.362+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-3] sending audit records to database: [PolicyAudit(auditId=null, pdpGroup=testGroup, pdpType=pdpTypeA, policy=onap.restart.tca 1.0.0, action=UNDEPLOYMENT, timestamp=2024-04-26T08:54:03Z, user=policyadmin)] policy-db-migrator | DROP TABLE jpapdpstatistics_enginestats kafka | [2024-04-26 08:53:35,073] INFO [Broker id=1] Stopped fetchers as part of LeaderAndIsr request correlationId 1 from controller 1 epoch 1 as part of the become-leader transition for 51 partitions (state.change.logger) grafana | logger=migrator t=2024-04-26T08:52:56.259417479Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_uid - v1" policy-pap | 
[2024-04-26T08:54:03.716+00:00|INFO|SessionData|http-nio-6969-exec-6] cache group defaultGroup policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-26T08:52:56.260800991Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_uid - v1" duration=1.383712ms kafka | [2024-04-26 08:53:35,134] INFO [LogLoader partition=__consumer_offsets-3, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) policy-pap | [2024-04-26T08:54:03.717+00:00|INFO|SessionData|http-nio-6969-exec-6] cache group testGroup policy-db-migrator | grafana | logger=migrator t=2024-04-26T08:52:56.268039542Z level=info msg="Executing migration" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v1" kafka | [2024-04-26 08:53:35,145] INFO Created log for partition __consumer_offsets-3 in /var/lib/kafka/data/__consumer_offsets-3 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) policy-pap | [2024-04-26T08:54:03.717+00:00|INFO|PdpGroupDeleteProvider|http-nio-6969-exec-6] remove policy operational.apex.decisionMaker 1.0.0 from subgroup testGroup pdpTypeC count=0 policy-db-migrator | grafana | logger=migrator t=2024-04-26T08:52:56.269964607Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v1" duration=1.923845ms kafka | [2024-04-26 08:53:35,147] INFO [Partition __consumer_offsets-3 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-3 (kafka.cluster.Partition) policy-pap | [2024-04-26T08:54:03.717+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-6] Registering an undeploy for policy operational.apex.decisionMaker 1.0.0 policy-db-migrator | > upgrade 0120-statistics_sequence.sql grafana | logger=migrator t=2024-04-26T08:52:56.298611197Z level=info msg="Executing migration" id="drop index 
UQE_dashboard_public_config_uid - v2" kafka | [2024-04-26 08:53:35,148] INFO [Partition __consumer_offsets-3 broker=1] Log loaded for partition __consumer_offsets-3 with initial high watermark 0 (kafka.cluster.Partition) policy-pap | [2024-04-26T08:54:03.717+00:00|INFO|SessionData|http-nio-6969-exec-6] update cached group testGroup policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-26T08:52:56.300597235Z level=info msg="Migration successfully executed" id="drop index UQE_dashboard_public_config_uid - v2" duration=1.996819ms kafka | [2024-04-26 08:53:35,150] INFO [Broker id=1] Leader __consumer_offsets-3 with topic id Some(buULhLhhTIOJjtnuKy1oCQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) policy-pap | [2024-04-26T08:54:03.717+00:00|INFO|SessionData|http-nio-6969-exec-6] updating DB group testGroup policy-db-migrator | DROP TABLE statistics_sequence grafana | logger=migrator t=2024-04-26T08:52:56.305048453Z level=info msg="Executing migration" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v2" kafka | [2024-04-26 08:53:35,175] INFO [LogLoader partition=__consumer_offsets-18, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) policy-pap | [2024-04-26T08:54:03.748+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-6] sending audit records to database: [PolicyAudit(auditId=null, pdpGroup=testGroup, pdpType=pdpTypeC, policy=operational.apex.decisionMaker 1.0.0, action=UNDEPLOYMENT, timestamp=2024-04-26T08:54:03Z, user=policyadmin)] policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-26T08:52:56.306261867Z level=info msg="Migration successfully executed" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v2" duration=1.213813ms kafka | [2024-04-26 08:53:35,176] INFO 
Created log for partition __consumer_offsets-18 in /var/lib/kafka/data/__consumer_offsets-18 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) policy-pap | [2024-04-26T08:54:24.322+00:00|INFO|SessionData|http-nio-6969-exec-1] cache group testGroup policy-db-migrator | grafana | logger=migrator t=2024-04-26T08:52:56.310811949Z level=info msg="Executing migration" id="Drop public config table" kafka | [2024-04-26 08:53:35,176] INFO [Partition __consumer_offsets-18 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-18 (kafka.cluster.Partition) policy-pap | [2024-04-26T08:54:24.325+00:00|INFO|SessionData|http-nio-6969-exec-1] deleting DB group testGroup policy-db-migrator | policyadmin: OK: upgrade (1300) grafana | logger=migrator t=2024-04-26T08:52:56.312155838Z level=info msg="Migration successfully executed" id="Drop public config table" duration=1.343479ms kafka | [2024-04-26 08:53:35,176] INFO [Partition __consumer_offsets-18 broker=1] Log loaded for partition __consumer_offsets-18 with initial high watermark 0 (kafka.cluster.Partition) policy-pap | [2024-04-26T08:54:26.241+00:00|INFO|TimerManager|Thread-9] update timer discarded (expired) Timer [name=f5c6f62f-8e3e-4f65-89bc-4c3464718b45, expireMs=1714121666240] policy-db-migrator | name version grafana | logger=migrator t=2024-04-26T08:52:56.315754508Z level=info msg="Executing migration" id="Recreate dashboard public config v2" kafka | [2024-04-26 08:53:35,176] INFO [Broker id=1] Leader __consumer_offsets-18 with topic id Some(buULhLhhTIOJjtnuKy1oCQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) policy-pap | [2024-04-26T08:54:26.311+00:00|INFO|TimerManager|Thread-10] state-change timer discarded (expired) Timer [name=b3f5aba9-960f-4f53-9f6a-46c1f7b5d673, expireMs=1714121666311] policy-db-migrator | policyadmin 1300 grafana | logger=migrator t=2024-04-26T08:52:56.31762575Z level=info msg="Migration successfully executed" id="Recreate dashboard public config v2" duration=1.870992ms kafka | [2024-04-26 08:53:35,195] INFO [LogLoader partition=__consumer_offsets-41, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) policy-db-migrator | ID script operation from_version to_version tag success atTime grafana | logger=migrator t=2024-04-26T08:52:56.321765244Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_uid - v2" kafka | [2024-04-26 08:53:35,196] INFO Created log for partition __consumer_offsets-41 in /var/lib/kafka/data/__consumer_offsets-41 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) policy-db-migrator | 1 0100-jpapdpgroup_properties.sql upgrade 0 0800 2604240853010800u 1 2024-04-26 08:53:01 grafana | logger=migrator t=2024-04-26T08:52:56.322978888Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_uid - v2" duration=1.212964ms kafka | [2024-04-26 08:53:35,196] INFO [Partition __consumer_offsets-41 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-41 (kafka.cluster.Partition) policy-db-migrator | 2 0110-jpapdpstatistics_enginestats.sql upgrade 0 0800 2604240853010800u 1 2024-04-26 08:53:01 grafana | logger=migrator t=2024-04-26T08:52:56.326402689Z level=info msg="Executing migration" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v2" kafka | [2024-04-26 08:53:35,196] INFO [Partition __consumer_offsets-41 broker=1] Log loaded for partition __consumer_offsets-41 with initial 
high watermark 0 (kafka.cluster.Partition) policy-db-migrator | 3 0120-jpapdpsubgroup_policies.sql upgrade 0 0800 2604240853010800u 1 2024-04-26 08:53:01 grafana | logger=migrator t=2024-04-26T08:52:56.328117456Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v2" duration=1.713427ms kafka | [2024-04-26 08:53:35,196] INFO [Broker id=1] Leader __consumer_offsets-41 with topic id Some(buULhLhhTIOJjtnuKy1oCQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) policy-db-migrator | 4 0130-jpapdpsubgroup_properties.sql upgrade 0 0800 2604240853010800u 1 2024-04-26 08:53:02 grafana | logger=migrator t=2024-04-26T08:52:56.346487591Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_access_token - v2" kafka | [2024-04-26 08:53:35,212] INFO [LogLoader partition=__consumer_offsets-10, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) policy-db-migrator | 5 0140-jpapdpsubgroup_supportedpolicytypes.sql upgrade 0 0800 2604240853010800u 1 2024-04-26 08:53:02 kafka | [2024-04-26 08:53:35,213] INFO Created log for partition __consumer_offsets-10 in /var/lib/kafka/data/__consumer_offsets-10 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) policy-db-migrator | 6 0150-jpatoscacapabilityassignment_attributes.sql upgrade 0 0800 2604240853010800u 1 2024-04-26 08:53:02 grafana | logger=migrator t=2024-04-26T08:52:56.34783004Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_access_token - v2" duration=1.343989ms kafka | [2024-04-26 08:53:35,213] INFO [Partition __consumer_offsets-10 broker=1] No checkpointed highwatermark is found for partition 
__consumer_offsets-10 (kafka.cluster.Partition) policy-db-migrator | 7 0160-jpatoscacapabilityassignment_metadata.sql upgrade 0 0800 2604240853010800u 1 2024-04-26 08:53:02 grafana | logger=migrator t=2024-04-26T08:52:56.391465995Z level=info msg="Executing migration" id="Rename table dashboard_public_config to dashboard_public - v2" kafka | [2024-04-26 08:53:35,213] INFO [Partition __consumer_offsets-10 broker=1] Log loaded for partition __consumer_offsets-10 with initial high watermark 0 (kafka.cluster.Partition) policy-db-migrator | 8 0170-jpatoscacapabilityassignment_occurrences.sql upgrade 0 0800 2604240853010800u 1 2024-04-26 08:53:02 grafana | logger=migrator t=2024-04-26T08:52:56.417873736Z level=info msg="Migration successfully executed" id="Rename table dashboard_public_config to dashboard_public - v2" duration=26.410351ms kafka | [2024-04-26 08:53:35,214] INFO [Broker id=1] Leader __consumer_offsets-10 with topic id Some(buULhLhhTIOJjtnuKy1oCQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) policy-db-migrator | 9 0180-jpatoscacapabilityassignment_properties.sql upgrade 0 0800 2604240853010800u 1 2024-04-26 08:53:02 grafana | logger=migrator t=2024-04-26T08:52:56.422840236Z level=info msg="Executing migration" id="add annotations_enabled column" kafka | [2024-04-26 08:53:35,224] INFO [LogLoader partition=__consumer_offsets-33, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) policy-db-migrator | 10 0190-jpatoscacapabilitytype_metadata.sql upgrade 0 0800 2604240853010800u 1 2024-04-26 08:53:02 grafana | logger=migrator t=2024-04-26T08:52:56.431758012Z level=info msg="Migration successfully executed" id="add annotations_enabled column" duration=8.916985ms kafka | [2024-04-26 08:53:35,227] INFO Created log for partition __consumer_offsets-33 in /var/lib/kafka/data/__consumer_offsets-33 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) policy-db-migrator | 11 0200-jpatoscacapabilitytype_properties.sql upgrade 0 0800 2604240853010800u 1 2024-04-26 08:53:02 grafana | logger=migrator t=2024-04-26T08:52:56.437055817Z level=info msg="Executing migration" id="add time_selection_enabled column" kafka | [2024-04-26 08:53:35,227] INFO [Partition __consumer_offsets-33 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-33 (kafka.cluster.Partition) policy-db-migrator | 12 0210-jpatoscadatatype_constraints.sql upgrade 0 0800 2604240853010800u 1 2024-04-26 08:53:02 grafana | logger=migrator t=2024-04-26T08:52:56.443398968Z level=info msg="Migration successfully executed" id="add time_selection_enabled column" duration=6.343611ms kafka | [2024-04-26 08:53:35,227] INFO [Partition __consumer_offsets-33 broker=1] Log loaded for partition __consumer_offsets-33 with initial high watermark 0 (kafka.cluster.Partition) policy-db-migrator | 13 0220-jpatoscadatatype_metadata.sql upgrade 0 0800 
2604240853010800u 1 2024-04-26 08:53:02 grafana | logger=migrator t=2024-04-26T08:52:56.453451804Z level=info msg="Executing migration" id="delete orphaned public dashboards" kafka | [2024-04-26 08:53:35,227] INFO [Broker id=1] Leader __consumer_offsets-33 with topic id Some(buULhLhhTIOJjtnuKy1oCQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) policy-db-migrator | 14 0230-jpatoscadatatype_properties.sql upgrade 0 0800 2604240853010800u 1 2024-04-26 08:53:02 grafana | logger=migrator t=2024-04-26T08:52:56.454060821Z level=info msg="Migration successfully executed" id="delete orphaned public dashboards" duration=617.588µs kafka | [2024-04-26 08:53:35,239] INFO [LogLoader partition=__consumer_offsets-48, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) policy-db-migrator | 15 0240-jpatoscanodetemplate_metadata.sql upgrade 0 0800 2604240853010800u 1 2024-04-26 08:53:02 grafana | logger=migrator t=2024-04-26T08:52:56.457381708Z level=info msg="Executing migration" id="add share column" kafka | [2024-04-26 08:53:35,240] INFO Created log for partition __consumer_offsets-48 in /var/lib/kafka/data/__consumer_offsets-48 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) policy-db-migrator | 16 0250-jpatoscanodetemplate_properties.sql upgrade 0 0800 2604240853010800u 1 2024-04-26 08:53:02 grafana | logger=migrator t=2024-04-26T08:52:56.467671484Z level=info msg="Migration successfully executed" id="add share column" duration=10.290236ms kafka | [2024-04-26 08:53:35,240] INFO [Partition __consumer_offsets-48 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-48 (kafka.cluster.Partition) policy-db-migrator | 17 0260-jpatoscanodetype_metadata.sql 
upgrade 0 0800 2604240853010800u 1 2024-04-26 08:53:02 grafana | logger=migrator t=2024-04-26T08:52:56.483644762Z level=info msg="Executing migration" id="backfill empty share column fields with default of public" kafka | [2024-04-26 08:53:35,240] INFO [Partition __consumer_offsets-48 broker=1] Log loaded for partition __consumer_offsets-48 with initial high watermark 0 (kafka.cluster.Partition) policy-db-migrator | 18 0270-jpatoscanodetype_properties.sql upgrade 0 0800 2604240853010800u 1 2024-04-26 08:53:02 grafana | logger=migrator t=2024-04-26T08:52:56.483943856Z level=info msg="Migration successfully executed" id="backfill empty share column fields with default of public" duration=303.654µs kafka | [2024-04-26 08:53:35,240] INFO [Broker id=1] Leader __consumer_offsets-48 with topic id Some(buULhLhhTIOJjtnuKy1oCQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) policy-db-migrator | 19 0280-jpatoscapolicy_metadata.sql upgrade 0 0800 2604240853010800u 1 2024-04-26 08:53:02 grafana | logger=migrator t=2024-04-26T08:52:56.5417778Z level=info msg="Executing migration" id="create file table" kafka | [2024-04-26 08:53:35,248] INFO [LogLoader partition=__consumer_offsets-19, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) policy-db-migrator | 20 0290-jpatoscapolicy_properties.sql upgrade 0 0800 2604240853010800u 1 2024-04-26 08:53:02 grafana | logger=migrator t=2024-04-26T08:52:56.543488856Z level=info msg="Migration successfully executed" id="create file table" duration=1.713866ms kafka | [2024-04-26 08:53:35,248] INFO Created log for partition __consumer_offsets-19 in /var/lib/kafka/data/__consumer_offsets-19 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) policy-db-migrator | 21 0300-jpatoscapolicy_targets.sql upgrade 0 0800 2604240853010800u 1 2024-04-26 08:53:02 grafana | logger=migrator t=2024-04-26T08:52:56.553117863Z level=info msg="Executing migration" id="file table idx: path natural pk" kafka | [2024-04-26 08:53:35,248] INFO [Partition __consumer_offsets-19 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-19 (kafka.cluster.Partition) policy-db-migrator | 22 0310-jpatoscapolicytype_metadata.sql upgrade 0 0800 2604240853010800u 1 2024-04-26 08:53:02 grafana | logger=migrator t=2024-04-26T08:52:56.554314746Z level=info msg="Migration successfully executed" id="file table idx: path natural pk" duration=1.195833ms kafka | [2024-04-26 08:53:35,249] INFO [Partition __consumer_offsets-19 broker=1] Log loaded for partition __consumer_offsets-19 with initial high watermark 0 (kafka.cluster.Partition) policy-db-migrator | 23 0320-jpatoscapolicytype_properties.sql upgrade 0 0800 2604240853010800u 1 2024-04-26 08:53:02 grafana | 
logger=migrator t=2024-04-26T08:52:56.56520682Z level=info msg="Executing migration" id="file table idx: parent_folder_path_hash fast folder retrieval"
kafka | [2024-04-26 08:53:35,249] INFO [Broker id=1] Leader __consumer_offsets-19 with topic id Some(buULhLhhTIOJjtnuKy1oCQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
policy-db-migrator | 24 0330-jpatoscapolicytype_targets.sql upgrade 0 0800 2604240853010800u 1 2024-04-26 08:53:03
grafana | logger=migrator t=2024-04-26T08:52:56.613924359Z level=info msg="Migration successfully executed" id="file table idx: parent_folder_path_hash fast folder retrieval" duration=48.71823ms
policy-db-migrator | 25 0340-jpatoscapolicytype_triggers.sql upgrade 0 0800 2604240853010800u 1 2024-04-26 08:53:03
grafana | logger=migrator t=2024-04-26T08:52:56.627343775Z level=info msg="Executing migration" id="create file_meta table"
kafka | [2024-04-26 08:53:35,256] INFO [LogLoader partition=__consumer_offsets-34, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
policy-db-migrator | 26 0350-jpatoscaproperty_constraints.sql upgrade 0 0800 2604240853010800u 1 2024-04-26 08:53:03
grafana | logger=migrator t=2024-04-26T08:52:56.629017438Z level=info msg="Migration successfully executed" id="create file_meta table" duration=1.665923ms
kafka | [2024-04-26 08:53:35,257] INFO Created log for partition __consumer_offsets-34 in /var/lib/kafka/data/__consumer_offsets-34 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
policy-db-migrator | 27 0360-jpatoscaproperty_metadata.sql upgrade 0 0800 2604240853010800u 1 2024-04-26 08:53:03
grafana | logger=migrator t=2024-04-26T08:52:56.640974529Z level=info msg="Executing migration" id="file table idx: path key"
kafka | [2024-04-26 08:53:35,257] INFO [Partition __consumer_offsets-34 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-34 (kafka.cluster.Partition)
policy-db-migrator | 28 0370-jpatoscarelationshiptype_metadata.sql upgrade 0 0800 2604240853010800u 1 2024-04-26 08:53:03
grafana | logger=migrator t=2024-04-26T08:52:56.642115209Z level=info msg="Migration successfully executed" id="file table idx: path key" duration=1.14024ms
kafka | [2024-04-26 08:53:35,257] INFO [Partition __consumer_offsets-34 broker=1] Log loaded for partition __consumer_offsets-34 with initial high watermark 0 (kafka.cluster.Partition)
policy-db-migrator | 29 0380-jpatoscarelationshiptype_properties.sql upgrade 0 0800 2604240853010800u 1 2024-04-26 08:53:03
grafana | logger=migrator t=2024-04-26T08:52:56.661648696Z level=info msg="Executing migration" id="set path collation in file table"
kafka | [2024-04-26 08:53:35,257] INFO [Broker id=1] Leader __consumer_offsets-34 with topic id Some(buULhLhhTIOJjtnuKy1oCQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
policy-db-migrator | 30 0390-jpatoscarequirement_metadata.sql upgrade 0 0800 2604240853010800u 1 2024-04-26 08:53:03
grafana | logger=migrator t=2024-04-26T08:52:56.661822753Z level=info msg="Migration successfully executed" id="set path collation in file table" duration=180.418µs
kafka | [2024-04-26 08:53:35,266] INFO [LogLoader partition=__consumer_offsets-4, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
policy-db-migrator | 31 0400-jpatoscarequirement_occurrences.sql upgrade 0 0800 2604240853010800u 1 2024-04-26 08:53:03
grafana | logger=migrator t=2024-04-26T08:52:56.674854962Z level=info msg="Executing migration" id="migrate contents column to mediumblob for MySQL"
kafka | [2024-04-26 08:53:35,266] INFO Created log for partition __consumer_offsets-4 in /var/lib/kafka/data/__consumer_offsets-4 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
policy-db-migrator | 32 0410-jpatoscarequirement_properties.sql upgrade 0 0800 2604240853010800u 1 2024-04-26 08:53:03
grafana | logger=migrator t=2024-04-26T08:52:56.674922385Z level=info msg="Migration successfully executed" id="migrate contents column to mediumblob for MySQL" duration=69.333µs
kafka | [2024-04-26 08:53:35,266] INFO [Partition __consumer_offsets-4 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-4 (kafka.cluster.Partition)
policy-db-migrator | 33 0420-jpatoscaservicetemplate_metadata.sql upgrade 0 0800 2604240853010800u 1 2024-04-26 08:53:03
grafana | logger=migrator t=2024-04-26T08:52:56.686473797Z level=info msg="Executing migration" id="managed permissions migration"
kafka | [2024-04-26 08:53:35,266] INFO [Partition __consumer_offsets-4 broker=1] Log loaded for partition __consumer_offsets-4 with initial high watermark 0 (kafka.cluster.Partition)
policy-db-migrator | 34 0430-jpatoscatopologytemplate_inputs.sql upgrade 0 0800 2604240853010800u 1 2024-04-26 08:53:03
grafana | logger=migrator t=2024-04-26T08:52:56.687176128Z level=info msg="Migration successfully executed" id="managed permissions migration" duration=699.74µs
kafka | [2024-04-26 08:53:35,266] INFO [Broker id=1] Leader __consumer_offsets-4 with topic id Some(buULhLhhTIOJjtnuKy1oCQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
policy-db-migrator | 35 0440-pdpgroup_pdpsubgroup.sql upgrade 0 0800 2604240853010800u 1 2024-04-26 08:53:03
grafana | logger=migrator t=2024-04-26T08:52:56.697759837Z level=info msg="Executing migration" id="managed folder permissions alert actions migration"
kafka | [2024-04-26 08:53:35,278] INFO [LogLoader partition=__consumer_offsets-11, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
policy-db-migrator | 36 0450-pdpgroup.sql upgrade 0 0800 2604240853010800u 1 2024-04-26 08:53:03
grafana | logger=migrator t=2024-04-26T08:52:56.698101602Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions migration" duration=341.085µs
kafka | [2024-04-26 08:53:35,278] INFO Created log for partition __consumer_offsets-11 in /var/lib/kafka/data/__consumer_offsets-11 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
policy-db-migrator | 37 0460-pdppolicystatus.sql upgrade 0 0800 2604240853010800u 1 2024-04-26 08:53:03
grafana | logger=migrator t=2024-04-26T08:52:56.705512361Z level=info msg="Executing migration" id="RBAC action name migrator"
kafka | [2024-04-26 08:53:35,278] INFO [Partition __consumer_offsets-11 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-11 (kafka.cluster.Partition)
policy-db-migrator | 38 0470-pdp.sql upgrade 0 0800 2604240853010800u 1 2024-04-26 08:53:03
grafana | logger=migrator t=2024-04-26T08:52:56.706514435Z level=info msg="Migration successfully executed" id="RBAC action name migrator" duration=1.001564ms
kafka | [2024-04-26 08:53:35,278] INFO [Partition __consumer_offsets-11 broker=1] Log loaded for partition __consumer_offsets-11 with initial high watermark 0 (kafka.cluster.Partition)
policy-db-migrator | 39 0480-pdpstatistics.sql upgrade 0 0800 2604240853010800u 1 2024-04-26 08:53:03
grafana | logger=migrator t=2024-04-26T08:52:56.715995225Z level=info msg="Executing migration" id="Add UID column to playlist"
kafka | [2024-04-26 08:53:35,278] INFO [Broker id=1] Leader __consumer_offsets-11 with topic id Some(buULhLhhTIOJjtnuKy1oCQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
policy-db-migrator | 40 0490-pdpsubgroup_pdp.sql upgrade 0 0800 2604240853010800u 1 2024-04-26 08:53:03
grafana | logger=migrator t=2024-04-26T08:52:56.724368266Z level=info msg="Migration successfully executed" id="Add UID column to playlist" duration=8.373571ms
kafka | [2024-04-26 08:53:35,286] INFO [LogLoader partition=__consumer_offsets-26, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
policy-db-migrator | 41 0500-pdpsubgroup.sql upgrade 0 0800 2604240853010800u 1 2024-04-26 08:53:03
grafana | logger=migrator t=2024-04-26T08:52:56.729043714Z level=info msg="Executing migration" id="Update uid column values in playlist"
kafka | [2024-04-26 08:53:35,286] INFO Created log for partition __consumer_offsets-26 in /var/lib/kafka/data/__consumer_offsets-26 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
policy-db-migrator | 42 0510-toscacapabilityassignment.sql upgrade 0 0800 2604240853010800u 1 2024-04-26 08:53:03
grafana | logger=migrator t=2024-04-26T08:52:56.72916877Z level=info msg="Migration successfully executed" id="Update uid column values in playlist" duration=125.306µs
kafka | [2024-04-26 08:53:35,286] INFO [Partition __consumer_offsets-26 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-26 (kafka.cluster.Partition)
policy-db-migrator | 43 0520-toscacapabilityassignments.sql upgrade 0 0800 2604240853010800u 1 2024-04-26 08:53:03
grafana | logger=migrator t=2024-04-26T08:52:56.733996464Z level=info msg="Executing migration" id="Add index for uid in playlist"
kafka | [2024-04-26 08:53:35,286] INFO [Partition __consumer_offsets-26 broker=1] Log loaded for partition __consumer_offsets-26 with initial high watermark 0 (kafka.cluster.Partition)
policy-db-migrator | 44 0530-toscacapabilityassignments_toscacapabilityassignment.sql upgrade 0 0800 2604240853010800u 1 2024-04-26 08:53:04
grafana | logger=migrator t=2024-04-26T08:52:56.735378365Z level=info msg="Migration successfully executed" id="Add index for uid in playlist" duration=1.380401ms
kafka | [2024-04-26 08:53:35,286] INFO [Broker id=1] Leader __consumer_offsets-26 with topic id Some(buULhLhhTIOJjtnuKy1oCQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
policy-db-migrator | 45 0540-toscacapabilitytype.sql upgrade 0 0800 2604240853010800u 1 2024-04-26 08:53:04
grafana | logger=migrator t=2024-04-26T08:52:56.742405457Z level=info msg="Executing migration" id="update group index for alert rules"
kafka | [2024-04-26 08:53:35,294] INFO [LogLoader partition=__consumer_offsets-49, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
policy-db-migrator | 46 0550-toscacapabilitytypes.sql upgrade 0 0800 2604240853010800u 1 2024-04-26 08:53:04
grafana | logger=migrator t=2024-04-26T08:52:56.743176561Z level=info msg="Migration successfully executed" id="update group index for alert rules" duration=779.665µs
kafka | [2024-04-26 08:53:35,295] INFO Created log for partition __consumer_offsets-49 in /var/lib/kafka/data/__consumer_offsets-49 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
policy-db-migrator | 47 0560-toscacapabilitytypes_toscacapabilitytype.sql upgrade 0 0800 2604240853010800u 1 2024-04-26 08:53:04
grafana | logger=migrator t=2024-04-26T08:52:56.747931282Z level=info msg="Executing migration" id="managed folder permissions alert actions repeated migration"
kafka | [2024-04-26 08:53:35,295] INFO [Partition __consumer_offsets-49 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-49 (kafka.cluster.Partition)
policy-db-migrator | 48 0570-toscadatatype.sql upgrade 0 0800 2604240853010800u 1 2024-04-26 08:53:04
grafana | logger=migrator t=2024-04-26T08:52:56.748152392Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions repeated migration" duration=220.76µs
kafka | [2024-04-26 08:53:35,295] INFO [Partition __consumer_offsets-49 broker=1] Log loaded for partition __consumer_offsets-49 with initial high watermark 0 (kafka.cluster.Partition)
policy-db-migrator | 49 0580-toscadatatypes.sql upgrade 0 0800 2604240853010800u 1 2024-04-26 08:53:04
grafana | logger=migrator t=2024-04-26T08:52:56.751833285Z level=info msg="Executing migration" id="admin only folder/dashboard permission"
kafka | [2024-04-26 08:53:35,295] INFO [Broker id=1] Leader __consumer_offsets-49 with topic id Some(buULhLhhTIOJjtnuKy1oCQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
policy-db-migrator | 50 0590-toscadatatypes_toscadatatype.sql upgrade 0 0800 2604240853010800u 1 2024-04-26 08:53:04
grafana | logger=migrator t=2024-04-26T08:52:56.752267804Z level=info msg="Migration successfully executed" id="admin only folder/dashboard permission" duration=441.78µs
kafka | [2024-04-26 08:53:35,303] INFO [LogLoader partition=__consumer_offsets-39, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
policy-db-migrator | 51 0600-toscanodetemplate.sql upgrade 0 0800 2604240853010800u 1 2024-04-26 08:53:04
grafana | logger=migrator t=2024-04-26T08:52:56.754914011Z level=info msg="Executing migration" id="add action column to seed_assignment"
kafka | [2024-04-26 08:53:35,303] INFO Created log for partition __consumer_offsets-39 in /var/lib/kafka/data/__consumer_offsets-39 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
policy-db-migrator | 52 0610-toscanodetemplates.sql upgrade 0 0800 2604240853010800u 1 2024-04-26 08:53:04
grafana | logger=migrator t=2024-04-26T08:52:56.762307149Z level=info msg="Migration successfully executed" id="add action column to seed_assignment" duration=7.392288ms
kafka | [2024-04-26 08:53:35,303] INFO [Partition __consumer_offsets-39 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-39 (kafka.cluster.Partition)
policy-db-migrator | 53 0620-toscanodetemplates_toscanodetemplate.sql upgrade 0 0800 2604240853010800u 1 2024-04-26 08:53:04
grafana | logger=migrator t=2024-04-26T08:52:56.765278241Z level=info msg="Executing migration" id="add scope column to seed_assignment"
kafka | [2024-04-26 08:53:35,303] INFO [Partition __consumer_offsets-39 broker=1] Log loaded for partition __consumer_offsets-39 with initial high watermark 0 (kafka.cluster.Partition)
policy-db-migrator | 54 0630-toscanodetype.sql upgrade 0 0800 2604240853010800u 1 2024-04-26 08:53:04
grafana | logger=migrator t=2024-04-26T08:52:56.771940676Z level=info msg="Migration successfully executed" id="add scope column to seed_assignment" duration=6.661925ms
kafka | [2024-04-26 08:53:35,304] INFO [Broker id=1] Leader __consumer_offsets-39 with topic id Some(buULhLhhTIOJjtnuKy1oCQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
policy-db-migrator | 55 0640-toscanodetypes.sql upgrade 0 0800 2604240853010800u 1 2024-04-26 08:53:04
grafana | logger=migrator t=2024-04-26T08:52:56.7790116Z level=info msg="Executing migration" id="remove unique index builtin_role_role_name before nullable update"
kafka | [2024-04-26 08:53:35,310] INFO [LogLoader partition=__consumer_offsets-9, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
policy-db-migrator | 56 0650-toscanodetypes_toscanodetype.sql upgrade 0 0800 2604240853010800u 1 2024-04-26 08:53:04
grafana | logger=migrator t=2024-04-26T08:52:56.780231764Z level=info msg="Migration successfully executed" id="remove unique index builtin_role_role_name before nullable update" duration=1.220694ms
kafka | [2024-04-26 08:53:35,311] INFO Created log for partition __consumer_offsets-9 in /var/lib/kafka/data/__consumer_offsets-9 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
policy-db-migrator | 57 0660-toscaparameter.sql upgrade 0 0800 2604240853010800u 1 2024-04-26 08:53:04
grafana | logger=migrator t=2024-04-26T08:52:56.783107641Z level=info msg="Executing migration" id="update seed_assignment role_name column to nullable"
kafka | [2024-04-26 08:53:35,311] INFO [Partition __consumer_offsets-9 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-9 (kafka.cluster.Partition)
policy-db-migrator | 58 0670-toscapolicies.sql upgrade 0 0800 2604240853010800u 1 2024-04-26 08:53:04
grafana | logger=migrator t=2024-04-26T08:52:56.86064869Z level=info msg="Migration successfully executed" id="update seed_assignment role_name column to nullable" duration=77.520828ms
kafka | [2024-04-26 08:53:35,311] INFO [Partition __consumer_offsets-9 broker=1] Log loaded for partition __consumer_offsets-9 with initial high watermark 0 (kafka.cluster.Partition)
policy-db-migrator | 59 0680-toscapolicies_toscapolicy.sql upgrade 0 0800 2604240853010800u 1 2024-04-26 08:53:04
policy-db-migrator | 60 0690-toscapolicy.sql upgrade 0 0800 2604240853010800u 1 2024-04-26 08:53:04
policy-db-migrator | 61 0700-toscapolicytype.sql upgrade 0 0800 2604240853010800u 1 2024-04-26 08:53:04
grafana | logger=migrator t=2024-04-26T08:52:56.886184002Z level=info msg="Executing migration" id="add unique index builtin_role_name back"
kafka | [2024-04-26 08:53:35,311] INFO [Broker id=1] Leader __consumer_offsets-9 with topic id Some(buULhLhhTIOJjtnuKy1oCQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
policy-db-migrator | 62 0710-toscapolicytypes.sql upgrade 0 0800 2604240853010800u 1 2024-04-26 08:53:04
grafana | logger=migrator t=2024-04-26T08:52:56.887890668Z level=info msg="Migration successfully executed" id="add unique index builtin_role_name back" duration=1.710946ms
kafka | [2024-04-26 08:53:35,320] INFO [LogLoader partition=__consumer_offsets-24, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
policy-db-migrator | 63 0720-toscapolicytypes_toscapolicytype.sql upgrade 0 0800 2604240853010800u 1 2024-04-26 08:53:04
grafana | logger=migrator t=2024-04-26T08:52:56.892120065Z level=info msg="Executing migration" id="add unique index builtin_role_action_scope"
kafka | [2024-04-26 08:53:35,320] INFO Created log for partition __consumer_offsets-24 in /var/lib/kafka/data/__consumer_offsets-24 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
policy-db-migrator | 64 0730-toscaproperty.sql upgrade 0 0800 2604240853010800u 1 2024-04-26 08:53:04
grafana | logger=migrator t=2024-04-26T08:52:56.893479486Z level=info msg="Migration successfully executed" id="add unique index builtin_role_action_scope" duration=1.357671ms
kafka | [2024-04-26 08:53:35,320] INFO [Partition __consumer_offsets-24 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-24 (kafka.cluster.Partition)
policy-db-migrator | 65 0740-toscarelationshiptype.sql upgrade 0 0800 2604240853010800u 1 2024-04-26 08:53:05
grafana | logger=migrator t=2024-04-26T08:52:56.901728372Z level=info msg="Executing migration" id="add primary key to seed_assigment"
kafka | [2024-04-26 08:53:35,320] INFO [Partition __consumer_offsets-24 broker=1] Log loaded for partition __consumer_offsets-24 with initial high watermark 0 (kafka.cluster.Partition)
policy-db-migrator | 66 0750-toscarelationshiptypes.sql upgrade 0 0800 2604240853010800u 1 2024-04-26 08:53:05
grafana | logger=migrator t=2024-04-26T08:52:56.930447795Z level=info msg="Migration successfully executed" id="add primary key to seed_assigment" duration=28.714583ms
kafka | [2024-04-26 08:53:35,321] INFO [Broker id=1] Leader __consumer_offsets-24 with topic id Some(buULhLhhTIOJjtnuKy1oCQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
policy-db-migrator | 67 0760-toscarelationshiptypes_toscarelationshiptype.sql upgrade 0 0800 2604240853010800u 1 2024-04-26 08:53:05
grafana | logger=migrator t=2024-04-26T08:52:56.938352196Z level=info msg="Executing migration" id="add origin column to seed_assignment"
kafka | [2024-04-26 08:53:35,329] INFO [LogLoader partition=__consumer_offsets-31, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
policy-db-migrator | 68 0770-toscarequirement.sql upgrade 0 0800 2604240853010800u 1 2024-04-26 08:53:05
kafka | [2024-04-26 08:53:35,330] INFO Created log for partition __consumer_offsets-31 in /var/lib/kafka/data/__consumer_offsets-31 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
policy-db-migrator | 69 0780-toscarequirements.sql upgrade 0 0800 2604240853010800u 1 2024-04-26 08:53:05
grafana | logger=migrator t=2024-04-26T08:52:56.945070353Z level=info msg="Migration successfully executed" id="add origin column to seed_assignment" duration=6.721457ms
kafka | [2024-04-26 08:53:35,331] INFO [Partition __consumer_offsets-31 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-31 (kafka.cluster.Partition)
policy-db-migrator | 70 0790-toscarequirements_toscarequirement.sql upgrade 0 0800 2604240853010800u 1 2024-04-26 08:53:05
grafana | logger=migrator t=2024-04-26T08:52:56.950126848Z level=info msg="Executing migration" id="add origin to plugin seed_assignment"
kafka | [2024-04-26 08:53:35,331] INFO [Partition __consumer_offsets-31 broker=1] Log loaded for partition __consumer_offsets-31 with initial high watermark 0 (kafka.cluster.Partition)
policy-db-migrator | 71 0800-toscaservicetemplate.sql upgrade 0 0800 2604240853010800u 1 2024-04-26 08:53:05
grafana | logger=migrator t=2024-04-26T08:52:56.950485794Z level=info msg="Migration successfully executed" id="add origin to plugin seed_assignment" duration=358.945µs
kafka | [2024-04-26 08:53:35,331] INFO [Broker id=1] Leader __consumer_offsets-31 with topic id Some(buULhLhhTIOJjtnuKy1oCQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
policy-db-migrator | 72 0810-toscatopologytemplate.sql upgrade 0 0800 2604240853010800u 1 2024-04-26 08:53:05
grafana | logger=migrator t=2024-04-26T08:52:56.955845421Z level=info msg="Executing migration" id="prevent seeding OnCall access"
kafka | [2024-04-26 08:53:35,339] INFO [LogLoader partition=__consumer_offsets-46, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
policy-db-migrator | 73 0820-toscatrigger.sql upgrade 0 0800 2604240853010800u 1 2024-04-26 08:53:05
grafana | logger=migrator t=2024-04-26T08:52:56.956190446Z level=info msg="Migration successfully executed" id="prevent seeding OnCall access" duration=343.995µs
kafka | [2024-04-26 08:53:35,340] INFO Created log for partition __consumer_offsets-46 in /var/lib/kafka/data/__consumer_offsets-46 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
policy-db-migrator | 74 0830-FK_ToscaNodeTemplate_capabilitiesName.sql upgrade 0 0800 2604240853010800u 1 2024-04-26 08:53:05
grafana | logger=migrator t=2024-04-26T08:52:56.962494366Z level=info msg="Executing migration" id="managed folder permissions alert actions repeated fixed migration"
kafka | [2024-04-26 08:53:35,340] INFO [Partition __consumer_offsets-46 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-46 (kafka.cluster.Partition)
policy-db-migrator | 75 0840-FK_ToscaNodeTemplate_requirementsName.sql upgrade 0 0800 2604240853010800u 1 2024-04-26 08:53:05
grafana | logger=migrator t=2024-04-26T08:52:56.962779599Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions repeated fixed migration" duration=286.743µs
kafka | [2024-04-26 08:53:35,340] INFO [Partition __consumer_offsets-46 broker=1] Log loaded for partition __consumer_offsets-46 with initial high watermark 0 (kafka.cluster.Partition)
policy-db-migrator | 76 0850-FK_ToscaNodeType_requirementsName.sql upgrade 0 0800 2604240853010800u 1 2024-04-26 08:53:05
grafana | logger=migrator t=2024-04-26T08:52:56.96801221Z level=info msg="Executing migration" id="managed folder permissions library panel actions migration"
kafka | [2024-04-26 08:53:35,340] INFO [Broker id=1] Leader __consumer_offsets-46 with topic id Some(buULhLhhTIOJjtnuKy1oCQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
policy-db-migrator | 77 0860-FK_ToscaServiceTemplate_capabilityTypesName.sql upgrade 0 0800 2604240853010800u 1 2024-04-26 08:53:05
grafana | logger=migrator t=2024-04-26T08:52:56.968270912Z level=info msg="Migration successfully executed" id="managed folder permissions library panel actions migration" duration=259.142µs
kafka | [2024-04-26 08:53:35,345] INFO [LogLoader partition=__consumer_offsets-1, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
policy-db-migrator | 78 0870-FK_ToscaServiceTemplate_dataTypesName.sql upgrade 0 0800 2604240853010800u 1 2024-04-26 08:53:05
grafana | logger=migrator t=2024-04-26T08:52:56.972018879Z level=info msg="Executing migration" id="migrate external alertmanagers to datsourcse"
kafka | [2024-04-26 08:53:35,345] INFO Created log for partition __consumer_offsets-1 in /var/lib/kafka/data/__consumer_offsets-1 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
policy-db-migrator | 79 0880-FK_ToscaServiceTemplate_nodeTypesName.sql upgrade 0 0800 2604240853010800u 1 2024-04-26 08:53:05
grafana | logger=migrator t=2024-04-26T08:52:56.972291411Z level=info msg="Migration successfully executed" id="migrate external alertmanagers to datsourcse" duration=273.062µs
kafka | [2024-04-26 08:53:35,345] INFO [Partition __consumer_offsets-1 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-1 (kafka.cluster.Partition)
policy-db-migrator | 80 0890-FK_ToscaServiceTemplate_policyTypesName.sql upgrade 0 0800 2604240853010800u 1 2024-04-26 08:53:05
grafana | logger=migrator t=2024-04-26T08:52:56.976319739Z level=info msg="Executing migration" id="create folder table"
kafka | [2024-04-26 08:53:35,345] INFO [Partition __consumer_offsets-1 broker=1] Log loaded for partition __consumer_offsets-1 with initial high watermark 0 (kafka.cluster.Partition)
policy-db-migrator | 81 0900-FK_ToscaServiceTemplate_relationshipTypesName.sql upgrade 0 0800 2604240853010800u 1 2024-04-26 08:53:05
grafana | logger=migrator t=2024-04-26T08:52:56.977557854Z level=info msg="Migration successfully executed" id="create folder table" duration=1.237475ms
kafka | [2024-04-26 08:53:35,345] INFO [Broker id=1] Leader __consumer_offsets-1 with topic id Some(buULhLhhTIOJjtnuKy1oCQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
policy-db-migrator | 82 0910-FK_ToscaTopologyTemplate_nodeTemplatesName.sql upgrade 0 0800 2604240853010800u 1 2024-04-26 08:53:05
grafana | logger=migrator t=2024-04-26T08:52:56.982426301Z level=info msg="Executing migration" id="Add index for parent_uid"
kafka | [2024-04-26 08:53:35,391] INFO [LogLoader partition=__consumer_offsets-16, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
policy-db-migrator | 83 0920-FK_ToscaTopologyTemplate_policyName.sql upgrade 0 0800 2604240853010800u 1 2024-04-26 08:53:05
grafana | logger=migrator t=2024-04-26T08:52:56.983863704Z level=info msg="Migration successfully executed" id="Add index for parent_uid" duration=1.436624ms
kafka | [2024-04-26 08:53:35,392] INFO Created log for partition __consumer_offsets-16 in /var/lib/kafka/data/__consumer_offsets-16 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
policy-db-migrator | 84 0940-PdpPolicyStatus_PdpGroup.sql upgrade 0 0800 2604240853010800u 1 2024-04-26 08:53:05
grafana | logger=migrator t=2024-04-26T08:52:56.989015322Z level=info msg="Executing migration" id="Add unique index for folder.uid and folder.org_id"
kafka | [2024-04-26 08:53:35,392] INFO [Partition __consumer_offsets-16 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-16 (kafka.cluster.Partition)
policy-db-migrator | 85 0950-TscaServiceTemplatetopologyTemplateParentLocalName.sql upgrade 0 0800 2604240853010800u 1 2024-04-26 08:53:05
grafana | logger=migrator t=2024-04-26T08:52:56.991837467Z level=info msg="Migration successfully executed" id="Add unique index for folder.uid and folder.org_id" duration=2.822495ms
kafka | [2024-04-26 08:53:35,392] INFO [Partition __consumer_offsets-16 broker=1] Log loaded for partition __consumer_offsets-16 with initial high watermark 0 (kafka.cluster.Partition)
policy-db-migrator | 86 0960-FK_ToscaNodeTemplate_capabilitiesName.sql upgrade 0 0800 2604240853010800u 1 2024-04-26 08:53:06
grafana | logger=migrator t=2024-04-26T08:52:56.999185144Z level=info msg="Executing migration" id="Update folder title length"
kafka | [2024-04-26 08:53:35,392] INFO [Broker id=1] Leader __consumer_offsets-16 with topic id Some(buULhLhhTIOJjtnuKy1oCQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
policy-db-migrator | 87 0970-FK_ToscaNodeTemplate_requirementsName.sql upgrade 0 0800 2604240853010800u 1 2024-04-26 08:53:06
grafana | logger=migrator t=2024-04-26T08:52:56.999219815Z level=info msg="Migration successfully executed" id="Update folder title length" duration=36.291µs
kafka | [2024-04-26 08:53:35,505] INFO [LogLoader partition=__consumer_offsets-2, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
policy-db-migrator | 88 0980-FK_ToscaNodeType_requirementsName.sql upgrade 0 0800 2604240853010800u 1 2024-04-26 08:53:06
grafana | logger=migrator t=2024-04-26T08:52:57.003001713Z level=info msg="Executing migration" id="Add unique index for folder.title and folder.parent_uid"
kafka | [2024-04-26 08:53:35,506] INFO Created log for partition __consumer_offsets-2 in /var/lib/kafka/data/__consumer_offsets-2 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
policy-db-migrator | 89 0990-FK_ToscaServiceTemplate_capabilityTypesName.sql upgrade 0 0800 2604240853010800u 1 2024-04-26 08:53:06
grafana | logger=migrator t=2024-04-26T08:52:57.004469867Z level=info msg="Migration successfully executed" id="Add unique index for folder.title and folder.parent_uid" duration=1.467225ms
kafka | [2024-04-26 08:53:35,506] INFO [Partition __consumer_offsets-2 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-2 (kafka.cluster.Partition)
policy-db-migrator | 90 1000-FK_ToscaServiceTemplate_dataTypesName.sql upgrade 0 0800 2604240853010800u 1 2024-04-26 08:53:06
grafana | logger=migrator t=2024-04-26T08:52:57.009242616Z level=info msg="Executing migration" id="Remove unique index for folder.title and folder.parent_uid"
kafka | [2024-04-26 08:53:35,506] INFO [Partition __consumer_offsets-2 broker=1] Log loaded for partition __consumer_offsets-2 with initial high watermark 0 (kafka.cluster.Partition)
policy-db-migrator | 91 1010-FK_ToscaServiceTemplate_nodeTypesName.sql upgrade 0 0800 2604240853010800u 1 2024-04-26 08:53:06
grafana | logger=migrator t=2024-04-26T08:52:57.010398102Z level=info msg="Migration successfully executed" id="Remove unique index for folder.title and folder.parent_uid" duration=1.161717ms
kafka | [2024-04-26 08:53:35,506] INFO [Broker id=1] Leader __consumer_offsets-2 with topic id Some(buULhLhhTIOJjtnuKy1oCQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
policy-db-migrator | 92 1020-FK_ToscaServiceTemplate_policyTypesName.sql upgrade 0 0800 2604240853010800u 1 2024-04-26 08:53:06
grafana | logger=migrator t=2024-04-26T08:52:57.013856826Z level=info msg="Executing migration" id="Add unique index for title, parent_uid, and org_id"
kafka | [2024-04-26 08:53:35,513] INFO [LogLoader partition=__consumer_offsets-25, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
policy-db-migrator | 93 1030-FK_ToscaServiceTemplate_relationshipTypesName.sql upgrade 0 0800 2604240853010800u 1 2024-04-26 08:53:06
grafana | logger=migrator t=2024-04-26T08:52:57.014771589Z level=info msg="Migration successfully executed" id="Add unique index for title, parent_uid, and org_id" duration=913.283µs
kafka | [2024-04-26 08:53:35,514] INFO Created log for partition __consumer_offsets-25 in /var/lib/kafka/data/__consumer_offsets-25 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
policy-db-migrator | 94 1040-FK_ToscaTopologyTemplate_nodeTemplatesName.sql upgrade 0 0800 2604240853010800u 1 2024-04-26 08:53:06
grafana | logger=migrator t=2024-04-26T08:52:57.01961296Z level=info msg="Executing migration" id="Sync dashboard and folder table"
kafka | [2024-04-26 08:53:35,514] INFO [Partition __consumer_offsets-25 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-25 (kafka.cluster.Partition)
policy-db-migrator | 95 1050-FK_ToscaTopologyTemplate_policyName.sql upgrade 0 0800 2604240853010800u 1 2024-04-26 08:53:06
grafana | logger=migrator t=2024-04-26T08:52:57.020009118Z level=info msg="Migration successfully executed" id="Sync dashboard and folder table" duration=396.389µs
kafka | [2024-04-26 08:53:35,514] INFO [Partition __consumer_offsets-25 broker=1] Log loaded for partition __consumer_offsets-25 with initial high watermark 0 (kafka.cluster.Partition)
policy-db-migrator | 96 1060-TscaServiceTemplatetopologyTemplateParentLocalName.sql upgrade 0 0800 2604240853010800u 1 2024-04-26 08:53:06
grafana | logger=migrator t=2024-04-26T08:52:57.023157498Z level=info msg="Executing migration" id="Remove ghost folders from the folder table"
policy-db-migrator | 97 0100-pdp.sql upgrade 0800 0900 2604240853010900u 1 2024-04-26 08:53:06
kafka | [2024-04-26 08:53:35,514] INFO [Broker id=1] Leader __consumer_offsets-25 with topic id Some(buULhLhhTIOJjtnuKy1oCQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
grafana | logger=migrator t=2024-04-26T08:52:57.023373588Z level=info msg="Migration successfully executed" id="Remove ghost folders from the folder table" duration=216.411µs
policy-db-migrator | 98 0110-idx_tsidx1.sql upgrade 0800 0900 2604240853010900u 1 2024-04-26 08:53:06
grafana | logger=migrator t=2024-04-26T08:52:57.02617199Z level=info msg="Executing migration" id="Remove unique index UQE_folder_uid_org_id"
policy-db-migrator | 99 0120-pk_pdpstatistics.sql upgrade 0800 0900 2604240853010900u 1 2024-04-26 08:53:07
kafka | [2024-04-26 08:53:35,551] INFO [LogLoader partition=__consumer_offsets-40, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
grafana | logger=migrator t=2024-04-26T08:52:57.027349697Z level=info msg="Migration successfully executed" id="Remove unique index UQE_folder_uid_org_id" duration=1.177897ms
kafka | [2024-04-26 08:53:35,552] INFO Created log for partition __consumer_offsets-40 in /var/lib/kafka/data/__consumer_offsets-40 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
policy-db-migrator | 100 0130-pdpstatistics.sql upgrade 0800 0900 2604240853010900u 1 2024-04-26 08:53:07
grafana | logger=migrator
t=2024-04-26T08:52:57.035282482Z level=info msg="Executing migration" id="Add unique index UQE_folder_org_id_uid" kafka | [2024-04-26 08:53:35,552] INFO [Partition __consumer_offsets-40 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-40 (kafka.cluster.Partition) policy-db-migrator | 101 0140-pk_pdpstatistics.sql upgrade 0800 0900 2604240853010900u 1 2024-04-26 08:53:07 grafana | logger=migrator t=2024-04-26T08:52:57.036919901Z level=info msg="Migration successfully executed" id="Add unique index UQE_folder_org_id_uid" duration=1.637078ms kafka | [2024-04-26 08:53:35,552] INFO [Partition __consumer_offsets-40 broker=1] Log loaded for partition __consumer_offsets-40 with initial high watermark 0 (kafka.cluster.Partition) policy-db-migrator | 102 0150-pdpstatistics.sql upgrade 0800 0900 2604240853010900u 1 2024-04-26 08:53:07 grafana | logger=migrator t=2024-04-26T08:52:57.041820463Z level=info msg="Executing migration" id="Remove unique index UQE_folder_title_parent_uid_org_id" kafka | [2024-04-26 08:53:35,553] INFO [Broker id=1] Leader __consumer_offsets-40 with topic id Some(buULhLhhTIOJjtnuKy1oCQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) policy-db-migrator | 103 0160-jpapdpstatistics_enginestats.sql upgrade 0800 0900 2604240853010900u 1 2024-04-26 08:53:07 grafana | logger=migrator t=2024-04-26T08:52:57.042958537Z level=info msg="Migration successfully executed" id="Remove unique index UQE_folder_title_parent_uid_org_id" duration=1.134044ms kafka | [2024-04-26 08:53:35,560] INFO [LogLoader partition=__consumer_offsets-47, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) policy-db-migrator | 104 0170-jpapdpstatistics_enginestats.sql upgrade 0800 0900 2604240853010900u 1 2024-04-26 08:53:07 grafana | logger=migrator t=2024-04-26T08:52:57.046710575Z level=info msg="Executing migration" id="Add unique index UQE_folder_org_id_parent_uid_title" kafka | [2024-04-26 08:53:35,561] INFO Created log for partition __consumer_offsets-47 in /var/lib/kafka/data/__consumer_offsets-47 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) policy-db-migrator | 105 0180-jpapdpstatistics_enginestats.sql upgrade 0800 0900 2604240853010900u 1 2024-04-26 08:53:07 grafana | logger=migrator t=2024-04-26T08:52:57.047940903Z level=info msg="Migration successfully executed" id="Add unique index UQE_folder_org_id_parent_uid_title" duration=1.234689ms kafka | [2024-04-26 08:53:35,561] INFO [Partition __consumer_offsets-47 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-47 (kafka.cluster.Partition) policy-db-migrator | 106 0190-jpapolicyaudit.sql upgrade 0800 0900 2604240853010900u 1 2024-04-26 08:53:07 grafana | logger=migrator t=2024-04-26T08:52:57.051550024Z level=info msg="Executing migration" id="Remove index IDX_folder_parent_uid_org_id" kafka | [2024-04-26 08:53:35,561] INFO [Partition __consumer_offsets-47 broker=1] Log loaded for partition __consumer_offsets-47 with initial high watermark 0 (kafka.cluster.Partition) policy-db-migrator | 
107 0200-JpaPolicyAuditIndex_timestamp.sql upgrade 0800 0900 2604240853010900u 1 2024-04-26 08:53:07 grafana | logger=migrator t=2024-04-26T08:52:57.055258041Z level=info msg="Migration successfully executed" id="Remove index IDX_folder_parent_uid_org_id" duration=3.708647ms kafka | [2024-04-26 08:53:35,561] INFO [Broker id=1] Leader __consumer_offsets-47 with topic id Some(buULhLhhTIOJjtnuKy1oCQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) policy-db-migrator | 108 0210-sequence.sql upgrade 0800 0900 2604240853010900u 1 2024-04-26 08:53:07 grafana | logger=migrator t=2024-04-26T08:52:57.05839196Z level=info msg="Executing migration" id="create anon_device table" kafka | [2024-04-26 08:53:35,569] INFO [LogLoader partition=__consumer_offsets-17, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) policy-db-migrator | 109 0220-sequence.sql upgrade 0800 0900 2604240853010900u 1 2024-04-26 08:53:07 grafana | logger=migrator t=2024-04-26T08:52:57.059404747Z level=info msg="Migration successfully executed" id="create anon_device table" duration=1.011748ms kafka | [2024-04-26 08:53:35,569] INFO Created log for partition __consumer_offsets-17 in /var/lib/kafka/data/__consumer_offsets-17 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) policy-db-migrator | 110 0100-jpatoscapolicy_targets.sql upgrade 0900 1000 2604240853011000u 1 2024-04-26 08:53:07 kafka | [2024-04-26 08:53:35,569] INFO [Partition __consumer_offsets-17 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-17 (kafka.cluster.Partition) grafana | logger=migrator t=2024-04-26T08:52:57.063289051Z level=info msg="Executing migration" id="add unique index anon_device.device_id" 
policy-db-migrator | 111 0110-jpatoscapolicytype_targets.sql upgrade 0900 1000 2604240853011000u 1 2024-04-26 08:53:07 kafka | [2024-04-26 08:53:35,569] INFO [Partition __consumer_offsets-17 broker=1] Log loaded for partition __consumer_offsets-17 with initial high watermark 0 (kafka.cluster.Partition) grafana | logger=migrator t=2024-04-26T08:52:57.064606704Z level=info msg="Migration successfully executed" id="add unique index anon_device.device_id" duration=1.317443ms policy-db-migrator | 112 0120-toscatrigger.sql upgrade 0900 1000 2604240853011000u 1 2024-04-26 08:53:07 kafka | [2024-04-26 08:53:35,569] INFO [Broker id=1] Leader __consumer_offsets-17 with topic id Some(buULhLhhTIOJjtnuKy1oCQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) grafana | logger=migrator t=2024-04-26T08:52:57.067164666Z level=info msg="Executing migration" id="add index anon_device.updated_at" policy-db-migrator | 113 0130-jpatoscapolicytype_triggers.sql upgrade 0900 1000 2604240853011000u 1 2024-04-26 08:53:07 kafka | [2024-04-26 08:53:35,576] INFO [LogLoader partition=__consumer_offsets-32, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) grafana | logger=migrator t=2024-04-26T08:52:57.068094399Z level=info msg="Migration successfully executed" id="add index anon_device.updated_at" duration=929.003µs policy-db-migrator | 114 0140-toscaparameter.sql upgrade 0900 1000 2604240853011000u 1 2024-04-26 08:53:07 kafka | [2024-04-26 08:53:35,577] INFO Created log for partition __consumer_offsets-32 in /var/lib/kafka/data/__consumer_offsets-32 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) grafana | logger=migrator t=2024-04-26T08:52:57.070764616Z level=info msg="Executing migration" 
id="create signing_key table" policy-db-migrator | 115 0150-toscaproperty.sql upgrade 0900 1000 2604240853011000u 1 2024-04-26 08:53:07 kafka | [2024-04-26 08:53:35,577] INFO [Partition __consumer_offsets-32 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-32 (kafka.cluster.Partition) grafana | logger=migrator t=2024-04-26T08:52:57.07148462Z level=info msg="Migration successfully executed" id="create signing_key table" duration=719.764µs policy-db-migrator | 116 0160-jpapolicyaudit_pk.sql upgrade 0900 1000 2604240853011000u 1 2024-04-26 08:53:07 kafka | [2024-04-26 08:53:35,577] INFO [Partition __consumer_offsets-32 broker=1] Log loaded for partition __consumer_offsets-32 with initial high watermark 0 (kafka.cluster.Partition) grafana | logger=migrator t=2024-04-26T08:52:57.074091014Z level=info msg="Executing migration" id="add unique index signing_key.key_id" policy-db-migrator | 117 0170-pdpstatistics_pk.sql upgrade 0900 1000 2604240853011000u 1 2024-04-26 08:53:08 kafka | [2024-04-26 08:53:35,577] INFO [Broker id=1] Leader __consumer_offsets-32 with topic id Some(buULhLhhTIOJjtnuKy1oCQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) grafana | logger=migrator t=2024-04-26T08:52:57.075223618Z level=info msg="Migration successfully executed" id="add unique index signing_key.key_id" duration=1.131604ms policy-db-migrator | 118 0180-jpatoscanodetemplate_metadata.sql upgrade 0900 1000 2604240853011000u 1 2024-04-26 08:53:08 kafka | [2024-04-26 08:53:35,583] INFO [LogLoader partition=__consumer_offsets-37, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) grafana | logger=migrator t=2024-04-26T08:52:57.078379878Z level=info msg="Executing migration" id="set legacy alert migration status in kvstore" policy-db-migrator | 119 0100-upgrade.sql upgrade 1000 1100 2604240853011100u 1 2024-04-26 08:53:08 kafka | [2024-04-26 08:53:35,584] INFO Created log for partition __consumer_offsets-37 in /var/lib/kafka/data/__consumer_offsets-37 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) grafana | logger=migrator t=2024-04-26T08:52:57.079714591Z level=info msg="Migration successfully executed" id="set legacy alert migration status in kvstore" duration=1.334272ms policy-db-migrator | 120 0100-jpapolicyaudit_renameuser.sql upgrade 1100 1200 2604240853011200u 1 2024-04-26 08:53:08 kafka | [2024-04-26 08:53:35,584] INFO [Partition __consumer_offsets-37 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-37 (kafka.cluster.Partition) grafana | logger=migrator t=2024-04-26T08:52:57.08582273Z level=info msg="Executing migration" id="migrate record of created folders during legacy migration to kvstore" policy-db-migrator | 121 0110-idx_tsidx1.sql upgrade 1100 1200 2604240853011200u 1 2024-04-26 08:53:08 kafka | [2024-04-26 08:53:35,584] INFO [Partition __consumer_offsets-37 broker=1] Log loaded for partition __consumer_offsets-37 with initial high watermark 0 (kafka.cluster.Partition) grafana | logger=migrator 
t=2024-04-26T08:52:57.086722684Z level=info msg="Migration successfully executed" id="migrate record of created folders during legacy migration to kvstore" duration=900.343µs policy-db-migrator | 122 0120-audit_sequence.sql upgrade 1100 1200 2604240853011200u 1 2024-04-26 08:53:08 kafka | [2024-04-26 08:53:35,584] INFO [Broker id=1] Leader __consumer_offsets-37 with topic id Some(buULhLhhTIOJjtnuKy1oCQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) grafana | logger=migrator t=2024-04-26T08:52:57.089895524Z level=info msg="Executing migration" id="Add folder_uid for dashboard" policy-db-migrator | 123 0130-statistics_sequence.sql upgrade 1100 1200 2604240853011200u 1 2024-04-26 08:53:08 kafka | [2024-04-26 08:53:35,592] INFO [LogLoader partition=__consumer_offsets-7, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) grafana | logger=migrator t=2024-04-26T08:52:57.099851966Z level=info msg="Migration successfully executed" id="Add folder_uid for dashboard" duration=9.948921ms policy-db-migrator | 124 0100-pdpstatistics.sql upgrade 1200 1300 2604240853011300u 1 2024-04-26 08:53:08 kafka | [2024-04-26 08:53:35,592] INFO Created log for partition __consumer_offsets-7 in /var/lib/kafka/data/__consumer_offsets-7 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) grafana | logger=migrator t=2024-04-26T08:52:57.104317728Z level=info msg="Executing migration" id="Populate dashboard folder_uid column" policy-db-migrator | 125 0110-jpapdpstatistics_enginestats.sql upgrade 1200 1300 2604240853011300u 1 2024-04-26 08:53:08 kafka | [2024-04-26 08:53:35,593] INFO [Partition __consumer_offsets-7 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-7 
(kafka.cluster.Partition) grafana | logger=migrator t=2024-04-26T08:52:57.105046982Z level=info msg="Migration successfully executed" id="Populate dashboard folder_uid column" duration=736.644µs policy-db-migrator | 126 0120-statistics_sequence.sql upgrade 1200 1300 2604240853011300u 1 2024-04-26 08:53:08 kafka | [2024-04-26 08:53:35,593] INFO [Partition __consumer_offsets-7 broker=1] Log loaded for partition __consumer_offsets-7 with initial high watermark 0 (kafka.cluster.Partition) grafana | logger=migrator t=2024-04-26T08:52:57.108305777Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_folder_uid_title" policy-db-migrator | policyadmin: OK @ 1300 kafka | [2024-04-26 08:53:35,593] INFO [Broker id=1] Leader __consumer_offsets-7 with topic id Some(buULhLhhTIOJjtnuKy1oCQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) grafana | logger=migrator t=2024-04-26T08:52:57.109496834Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_folder_uid_title" duration=1.193287ms kafka | [2024-04-26 08:53:35,598] INFO [LogLoader partition=__consumer_offsets-22, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) grafana | logger=migrator t=2024-04-26T08:52:57.113741465Z level=info msg="Executing migration" id="Delete unique index for dashboard_org_id_folder_id_title" kafka | [2024-04-26 08:53:35,599] INFO Created log for partition __consumer_offsets-22 in /var/lib/kafka/data/__consumer_offsets-22 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) grafana | logger=migrator t=2024-04-26T08:52:57.114800636Z level=info msg="Migration successfully executed" id="Delete unique index for dashboard_org_id_folder_id_title" duration=1.05849ms kafka | [2024-04-26 08:53:35,599] INFO [Partition __consumer_offsets-22 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-22 (kafka.cluster.Partition) grafana | logger=migrator t=2024-04-26T08:52:57.117762616Z level=info msg="Executing migration" id="Delete unique index for dashboard_org_id_folder_uid_title" kafka | [2024-04-26 08:53:35,599] INFO [Partition __consumer_offsets-22 broker=1] Log loaded for partition __consumer_offsets-22 with initial high watermark 0 (kafka.cluster.Partition) grafana | logger=migrator t=2024-04-26T08:52:57.118795775Z level=info msg="Migration successfully executed" id="Delete unique index for dashboard_org_id_folder_uid_title" duration=1.032819ms grafana | logger=migrator t=2024-04-26T08:52:57.124113727Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_folder_uid_title_is_folder" kafka | [2024-04-26 08:53:35,599] INFO [Broker id=1] Leader __consumer_offsets-22 with topic id 
Some(buULhLhhTIOJjtnuKy1oCQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) grafana | logger=migrator t=2024-04-26T08:52:57.125272363Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_folder_uid_title_is_folder" duration=1.157495ms kafka | [2024-04-26 08:53:35,609] INFO [LogLoader partition=__consumer_offsets-29, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) grafana | logger=migrator t=2024-04-26T08:52:57.129198519Z level=info msg="Executing migration" id="Restore index for dashboard_org_id_folder_id_title" kafka | [2024-04-26 08:53:35,610] INFO Created log for partition __consumer_offsets-29 in /var/lib/kafka/data/__consumer_offsets-29 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) grafana | logger=migrator t=2024-04-26T08:52:57.130752942Z level=info msg="Migration successfully executed" id="Restore index for dashboard_org_id_folder_id_title" duration=1.553384ms kafka | [2024-04-26 08:53:35,610] INFO [Partition __consumer_offsets-29 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-29 (kafka.cluster.Partition) grafana | logger=migrator t=2024-04-26T08:52:57.134031848Z level=info msg="Executing migration" id="create sso_setting table" kafka | [2024-04-26 08:53:35,610] INFO [Partition __consumer_offsets-29 broker=1] Log loaded for partition __consumer_offsets-29 with initial high watermark 0 (kafka.cluster.Partition) grafana | logger=migrator t=2024-04-26T08:52:57.135190283Z level=info msg="Migration successfully executed" id="create sso_setting table" duration=1.158014ms kafka | [2024-04-26 08:53:35,610] INFO [Broker id=1] Leader __consumer_offsets-29 with topic id 
Some(buULhLhhTIOJjtnuKy1oCQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) grafana | logger=migrator t=2024-04-26T08:52:57.144503554Z level=info msg="Executing migration" id="copy kvstore migration status to each org" kafka | [2024-04-26 08:53:35,617] INFO [LogLoader partition=__consumer_offsets-44, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) grafana | logger=migrator t=2024-04-26T08:52:57.145391787Z level=info msg="Migration successfully executed" id="copy kvstore migration status to each org" duration=889.923µs kafka | [2024-04-26 08:53:35,618] INFO Created log for partition __consumer_offsets-44 in /var/lib/kafka/data/__consumer_offsets-44 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) grafana | logger=migrator t=2024-04-26T08:52:57.148391659Z level=info msg="Executing migration" id="add back entry for orgid=0 migrated status" kafka | [2024-04-26 08:53:35,618] INFO [Partition __consumer_offsets-44 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-44 (kafka.cluster.Partition) grafana | logger=migrator t=2024-04-26T08:52:57.148597959Z level=info msg="Migration successfully executed" id="add back entry for orgid=0 migrated status" duration=206.08µs kafka | [2024-04-26 08:53:35,618] INFO [Partition __consumer_offsets-44 broker=1] Log loaded for partition __consumer_offsets-44 with initial high watermark 0 (kafka.cluster.Partition) grafana | logger=migrator t=2024-04-26T08:52:57.151654134Z level=info msg="Executing migration" id="alter kv_store.value to longtext" kafka | [2024-04-26 08:53:35,618] INFO [Broker id=1] Leader __consumer_offsets-44 with topic id Some(buULhLhhTIOJjtnuKy1oCQ) starts at leader epoch 0 from offset 0 with 
partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) grafana | logger=migrator t=2024-04-26T08:52:57.151707916Z level=info msg="Migration successfully executed" id="alter kv_store.value to longtext" duration=53.882µs kafka | [2024-04-26 08:53:35,628] INFO [LogLoader partition=__consumer_offsets-14, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) grafana | logger=migrator t=2024-04-26T08:52:57.154546661Z level=info msg="Executing migration" id="add notification_settings column to alert_rule table" kafka | [2024-04-26 08:53:35,629] INFO Created log for partition __consumer_offsets-14 in /var/lib/kafka/data/__consumer_offsets-14 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) grafana | logger=migrator t=2024-04-26T08:52:57.161422087Z level=info msg="Migration successfully executed" id="add notification_settings column to alert_rule table" duration=6.875376ms kafka | [2024-04-26 08:53:35,629] INFO [Partition __consumer_offsets-14 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-14 (kafka.cluster.Partition) grafana | logger=migrator t=2024-04-26T08:52:57.16463325Z level=info msg="Executing migration" id="add notification_settings column to alert_rule_version table" kafka | [2024-04-26 08:53:35,629] INFO [Partition __consumer_offsets-14 broker=1] Log loaded for partition __consumer_offsets-14 with initial high watermark 0 (kafka.cluster.Partition) grafana | logger=migrator t=2024-04-26T08:52:57.17370651Z level=info msg="Migration successfully executed" id="add notification_settings column to alert_rule_version table" duration=9.0718ms kafka | [2024-04-26 08:53:35,629] INFO [Broker id=1] Leader __consumer_offsets-14 with topic id Some(buULhLhhTIOJjtnuKy1oCQ) starts at leader epoch 0 from 
offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) grafana | logger=migrator t=2024-04-26T08:52:57.176537254Z level=info msg="Executing migration" id="removing scope from alert.instances:read action migration" kafka | [2024-04-26 08:53:35,639] INFO [LogLoader partition=__consumer_offsets-23, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) grafana | logger=migrator t=2024-04-26T08:52:57.176761735Z level=info msg="Migration successfully executed" id="removing scope from alert.instances:read action migration" duration=224.001µs kafka | [2024-04-26 08:53:35,639] INFO Created log for partition __consumer_offsets-23 in /var/lib/kafka/data/__consumer_offsets-23 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) grafana | logger=migrator t=2024-04-26T08:52:57.179532927Z level=info msg="migrations completed" performed=548 skipped=0 duration=5.143714788s grafana | logger=sqlstore t=2024-04-26T08:52:57.18615852Z level=info msg="Created default admin" user=admin kafka | [2024-04-26 08:53:35,640] INFO [Partition __consumer_offsets-23 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-23 (kafka.cluster.Partition) grafana | logger=sqlstore t=2024-04-26T08:52:57.186398762Z level=info msg="Created default organization" kafka | [2024-04-26 08:53:35,640] INFO [Partition __consumer_offsets-23 broker=1] Log loaded for partition __consumer_offsets-23 with initial high watermark 0 (kafka.cluster.Partition) grafana | logger=secrets t=2024-04-26T08:52:57.18993684Z level=info msg="Envelope encryption state" enabled=true currentprovider=secretKey.v1 kafka | [2024-04-26 08:53:35,640] INFO [Broker id=1] Leader __consumer_offsets-23 with topic id Some(buULhLhhTIOJjtnuKy1oCQ) starts at leader epoch 0 
from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) grafana | logger=plugin.store t=2024-04-26T08:52:57.212682969Z level=info msg="Loading plugins..." kafka | [2024-04-26 08:53:35,648] INFO [LogLoader partition=__consumer_offsets-38, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) grafana | logger=local.finder t=2024-04-26T08:52:57.262659251Z level=warn msg="Skipping finding plugins as directory does not exist" path=/usr/share/grafana/plugins-bundled kafka | [2024-04-26 08:53:35,648] INFO Created log for partition __consumer_offsets-38 in /var/lib/kafka/data/__consumer_offsets-38 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) grafana | logger=plugin.store t=2024-04-26T08:52:57.262696292Z level=info msg="Plugins loaded" count=55 duration=50.013753ms kafka | [2024-04-26 08:53:35,648] INFO [Partition __consumer_offsets-38 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-38 (kafka.cluster.Partition) grafana | logger=query_data t=2024-04-26T08:52:57.26538125Z level=info msg="Query Service initialization" kafka | [2024-04-26 08:53:35,649] INFO [Partition __consumer_offsets-38 broker=1] Log loaded for partition __consumer_offsets-38 with initial high watermark 0 (kafka.cluster.Partition) grafana | logger=live.push_http t=2024-04-26T08:52:57.269377719Z level=info msg="Live Push Gateway initialization" kafka | [2024-04-26 08:53:35,649] INFO [Broker id=1] Leader __consumer_offsets-38 with topic id Some(buULhLhhTIOJjtnuKy1oCQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) grafana | logger=ngalert.migration t=2024-04-26T08:52:57.274974144Z level=info msg=Starting kafka | [2024-04-26 08:53:35,655] INFO [LogLoader partition=__consumer_offsets-8, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) grafana | logger=ngalert.migration t=2024-04-26T08:52:57.275630935Z level=info msg="Applying transition" currentType=Legacy desiredType=UnifiedAlerting cleanOnDowngrade=false cleanOnUpgrade=false kafka | [2024-04-26 08:53:35,655] INFO Created log for partition __consumer_offsets-8 in /var/lib/kafka/data/__consumer_offsets-8 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) grafana | logger=ngalert.migration orgID=1 t=2024-04-26T08:52:57.276310437Z level=info msg="Migrating alerts for organisation" kafka | [2024-04-26 08:53:35,655] INFO [Partition __consumer_offsets-8 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-8 (kafka.cluster.Partition) kafka | [2024-04-26 08:53:35,655] INFO [Partition __consumer_offsets-8 broker=1] Log loaded for partition __consumer_offsets-8 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-04-26 08:53:35,655] INFO [Broker id=1] Leader __consumer_offsets-8 with topic id Some(buULhLhhTIOJjtnuKy1oCQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) kafka | [2024-04-26 08:53:35,660] INFO [LogLoader partition=policy-pdp-pap-0, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-04-26 08:53:35,661] INFO Created log for partition policy-pdp-pap-0 in /var/lib/kafka/data/policy-pdp-pap-0 with properties {} (kafka.log.LogManager) kafka | [2024-04-26 08:53:35,661] INFO [Partition policy-pdp-pap-0 broker=1] No checkpointed highwatermark is found for partition policy-pdp-pap-0 (kafka.cluster.Partition) kafka | [2024-04-26 08:53:35,661] INFO [Partition policy-pdp-pap-0 broker=1] Log loaded for partition policy-pdp-pap-0 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-04-26 08:53:35,661] INFO [Broker id=1] Leader policy-pdp-pap-0 with topic id Some(LsF0wvMZRGucbuSK9bj6lg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) kafka | [2024-04-26 08:53:35,668] INFO [LogLoader partition=__consumer_offsets-45, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-04-26 08:53:35,668] INFO Created log for partition __consumer_offsets-45 in /var/lib/kafka/data/__consumer_offsets-45 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-04-26 08:53:35,668] INFO [Partition __consumer_offsets-45 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-45 (kafka.cluster.Partition) kafka | [2024-04-26 08:53:35,668] INFO [Partition __consumer_offsets-45 broker=1] Log loaded for partition __consumer_offsets-45 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-04-26 08:53:35,668] INFO [Broker id=1] Leader __consumer_offsets-45 with topic id Some(buULhLhhTIOJjtnuKy1oCQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) kafka | [2024-04-26 08:53:35,675] INFO [LogLoader partition=__consumer_offsets-15, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-04-26 08:53:35,675] INFO Created log for partition __consumer_offsets-15 in /var/lib/kafka/data/__consumer_offsets-15 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-04-26 08:53:35,675] INFO [Partition __consumer_offsets-15 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-15 (kafka.cluster.Partition) kafka | [2024-04-26 08:53:35,675] INFO [Partition __consumer_offsets-15 broker=1] Log loaded for partition __consumer_offsets-15 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-04-26 08:53:35,675] INFO [Broker id=1] Leader __consumer_offsets-15 with topic id Some(buULhLhhTIOJjtnuKy1oCQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) kafka | [2024-04-26 08:53:35,682] INFO [LogLoader partition=__consumer_offsets-30, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-04-26 08:53:35,682] INFO Created log for partition __consumer_offsets-30 in /var/lib/kafka/data/__consumer_offsets-30 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-04-26 08:53:35,682] INFO [Partition __consumer_offsets-30 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-30 (kafka.cluster.Partition) kafka | [2024-04-26 08:53:35,682] INFO [Partition __consumer_offsets-30 broker=1] Log loaded for partition __consumer_offsets-30 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-04-26 08:53:35,682] INFO [Broker id=1] Leader __consumer_offsets-30 with topic id Some(buULhLhhTIOJjtnuKy1oCQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) kafka | [2024-04-26 08:53:35,687] INFO [LogLoader partition=__consumer_offsets-0, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-04-26 08:53:35,688] INFO Created log for partition __consumer_offsets-0 in /var/lib/kafka/data/__consumer_offsets-0 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-04-26 08:53:35,688] INFO [Partition __consumer_offsets-0 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-0 (kafka.cluster.Partition) kafka | [2024-04-26 08:53:35,688] INFO [Partition __consumer_offsets-0 broker=1] Log loaded for partition __consumer_offsets-0 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-04-26 08:53:35,688] INFO [Broker id=1] Leader __consumer_offsets-0 with topic id Some(buULhLhhTIOJjtnuKy1oCQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) kafka | [2024-04-26 08:53:35,693] INFO [LogLoader partition=__consumer_offsets-35, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-04-26 08:53:35,693] INFO Created log for partition __consumer_offsets-35 in /var/lib/kafka/data/__consumer_offsets-35 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-04-26 08:53:35,693] INFO [Partition __consumer_offsets-35 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-35 (kafka.cluster.Partition) kafka | [2024-04-26 08:53:35,693] INFO [Partition __consumer_offsets-35 broker=1] Log loaded for partition __consumer_offsets-35 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-04-26 08:53:35,693] INFO [Broker id=1] Leader __consumer_offsets-35 with topic id Some(buULhLhhTIOJjtnuKy1oCQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) kafka | [2024-04-26 08:53:35,700] INFO [LogLoader partition=__consumer_offsets-5, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-04-26 08:53:35,700] INFO Created log for partition __consumer_offsets-5 in /var/lib/kafka/data/__consumer_offsets-5 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-04-26 08:53:35,700] INFO [Partition __consumer_offsets-5 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-5 (kafka.cluster.Partition) kafka | [2024-04-26 08:53:35,700] INFO [Partition __consumer_offsets-5 broker=1] Log loaded for partition __consumer_offsets-5 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-04-26 08:53:35,700] INFO [Broker id=1] Leader __consumer_offsets-5 with topic id Some(buULhLhhTIOJjtnuKy1oCQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger)
kafka | [2024-04-26 08:53:35,708] INFO [LogLoader partition=__consumer_offsets-20, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2024-04-26 08:53:35,709] INFO Created log for partition __consumer_offsets-20 in /var/lib/kafka/data/__consumer_offsets-20 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2024-04-26 08:53:35,710] INFO [Partition __consumer_offsets-20 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-20 (kafka.cluster.Partition)
kafka | [2024-04-26 08:53:35,710] INFO [Partition __consumer_offsets-20 broker=1] Log loaded for partition __consumer_offsets-20 with initial high watermark 0 (kafka.cluster.Partition)
grafana | logger=ngalert.migration orgID=1 t=2024-04-26T08:52:57.277365958Z level=info msg="Alerts found to migrate" alerts=0
grafana | logger=ngalert.migration t=2024-04-26T08:52:57.280277546Z level=info msg="Completed alerting migration"
grafana | logger=ngalert.state.manager t=2024-04-26T08:52:57.30651126Z level=info msg="Running in alternative execution of Error/NoData mode"
grafana | logger=infra.usagestats.collector t=2024-04-26T08:52:57.308570958Z level=info msg="registering usage stat providers" usageStatsProvidersLen=2
grafana | logger=provisioning.datasources t=2024-04-26T08:52:57.310507491Z level=info msg="inserting datasource from configuration" name=PolicyPrometheus uid=dkSf71fnz
grafana | logger=provisioning.alerting t=2024-04-26T08:52:57.386952797Z level=info msg="starting to provision alerting"
grafana | logger=provisioning.alerting t=2024-04-26T08:52:57.386992849Z level=info msg="finished to provision alerting"
grafana | logger=ngalert.state.manager t=2024-04-26T08:52:57.387472911Z level=info msg="Warming state cache for startup"
grafana | logger=grafanaStorageLogger t=2024-04-26T08:52:57.388032378Z level=info msg="Storage starting"
grafana | logger=ngalert.multiorg.alertmanager t=2024-04-26T08:52:57.38829207Z level=info msg="Starting MultiOrg Alertmanager"
grafana | logger=ngalert.state.manager t=2024-04-26T08:52:57.388547703Z level=info msg="State cache has been initialized" states=0 duration=1.071842ms
grafana | logger=ngalert.scheduler t=2024-04-26T08:52:57.388643768Z level=info msg="Starting scheduler" tickInterval=10s maxAttempts=1
grafana | logger=ticker t=2024-04-26T08:52:57.388778954Z level=info msg=starting first_tick=2024-04-26T08:53:00Z
grafana | logger=http.server t=2024-04-26T08:52:57.400037228Z level=info msg="HTTP Server Listen" address=[::]:3000 protocol=http subUrl= socket=
grafana | logger=grafana.update.checker t=2024-04-26T08:52:57.5185448Z level=info msg="Update check succeeded" duration=129.581017ms
grafana | logger=plugins.update.checker t=2024-04-26T08:52:57.528013669Z level=info msg="Update check succeeded" duration=140.656963ms
grafana | logger=provisioning.dashboard t=2024-04-26T08:52:57.616632944Z level=info msg="starting to provision dashboards"
grafana | logger=sqlstore.transactions t=2024-04-26T08:52:57.702447745Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=0 code="database is locked"
grafana | logger=grafana-apiserver t=2024-04-26T08:52:57.780680166Z level=info msg="Adding GroupVersion playlist.grafana.app v0alpha1 to ResourceManager"
grafana | logger=grafana-apiserver t=2024-04-26T08:52:57.781195821Z level=info msg="Adding GroupVersion featuretoggle.grafana.app v0alpha1 to ResourceManager"
grafana | logger=provisioning.dashboard t=2024-04-26T08:52:57.915376497Z level=info msg="finished to provision dashboards"
grafana | logger=infra.usagestats t=2024-04-26T08:53:42.402810702Z level=info msg="Usage stats are ready to report"
kafka | [2024-04-26 08:53:35,710] INFO [Broker id=1] Leader __consumer_offsets-20 with topic id Some(buULhLhhTIOJjtnuKy1oCQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
kafka | [2024-04-26 08:53:35,724] INFO [LogLoader partition=__consumer_offsets-27, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2024-04-26 08:53:35,725] INFO Created log for partition __consumer_offsets-27 in /var/lib/kafka/data/__consumer_offsets-27 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2024-04-26 08:53:35,725] INFO [Partition __consumer_offsets-27 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-27 (kafka.cluster.Partition)
kafka | [2024-04-26 08:53:35,725] INFO [Partition __consumer_offsets-27 broker=1] Log loaded for partition __consumer_offsets-27 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2024-04-26 08:53:35,725] INFO [Broker id=1] Leader __consumer_offsets-27 with topic id Some(buULhLhhTIOJjtnuKy1oCQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1.
(state.change.logger) kafka | [2024-04-26 08:53:35,731] INFO [LogLoader partition=__consumer_offsets-42, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-04-26 08:53:35,732] INFO Created log for partition __consumer_offsets-42 in /var/lib/kafka/data/__consumer_offsets-42 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-04-26 08:53:35,732] INFO [Partition __consumer_offsets-42 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-42 (kafka.cluster.Partition) kafka | [2024-04-26 08:53:35,732] INFO [Partition __consumer_offsets-42 broker=1] Log loaded for partition __consumer_offsets-42 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-04-26 08:53:35,732] INFO [Broker id=1] Leader __consumer_offsets-42 with topic id Some(buULhLhhTIOJjtnuKy1oCQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) kafka | [2024-04-26 08:53:35,741] INFO [LogLoader partition=__consumer_offsets-12, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-04-26 08:53:35,742] INFO Created log for partition __consumer_offsets-12 in /var/lib/kafka/data/__consumer_offsets-12 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-04-26 08:53:35,742] INFO [Partition __consumer_offsets-12 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-12 (kafka.cluster.Partition) kafka | [2024-04-26 08:53:35,742] INFO [Partition __consumer_offsets-12 broker=1] Log loaded for partition __consumer_offsets-12 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-04-26 08:53:35,742] INFO [Broker id=1] Leader __consumer_offsets-12 with topic id Some(buULhLhhTIOJjtnuKy1oCQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) kafka | [2024-04-26 08:53:35,749] INFO [LogLoader partition=__consumer_offsets-21, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-04-26 08:53:35,750] INFO Created log for partition __consumer_offsets-21 in /var/lib/kafka/data/__consumer_offsets-21 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-04-26 08:53:35,750] INFO [Partition __consumer_offsets-21 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-21 (kafka.cluster.Partition) kafka | [2024-04-26 08:53:35,750] INFO [Partition __consumer_offsets-21 broker=1] Log loaded for partition __consumer_offsets-21 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-04-26 08:53:35,750] INFO [Broker id=1] Leader __consumer_offsets-21 with topic id Some(buULhLhhTIOJjtnuKy1oCQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) kafka | [2024-04-26 08:53:35,758] INFO [LogLoader partition=__consumer_offsets-36, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-04-26 08:53:35,759] INFO Created log for partition __consumer_offsets-36 in /var/lib/kafka/data/__consumer_offsets-36 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-04-26 08:53:35,759] INFO [Partition __consumer_offsets-36 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-36 (kafka.cluster.Partition) kafka | [2024-04-26 08:53:35,759] INFO [Partition __consumer_offsets-36 broker=1] Log loaded for partition __consumer_offsets-36 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-04-26 08:53:35,759] INFO [Broker id=1] Leader __consumer_offsets-36 with topic id Some(buULhLhhTIOJjtnuKy1oCQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) kafka | [2024-04-26 08:53:35,766] INFO [LogLoader partition=__consumer_offsets-6, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-04-26 08:53:35,766] INFO Created log for partition __consumer_offsets-6 in /var/lib/kafka/data/__consumer_offsets-6 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-04-26 08:53:35,766] INFO [Partition __consumer_offsets-6 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-6 (kafka.cluster.Partition) kafka | [2024-04-26 08:53:35,766] INFO [Partition __consumer_offsets-6 broker=1] Log loaded for partition __consumer_offsets-6 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-04-26 08:53:35,766] INFO [Broker id=1] Leader __consumer_offsets-6 with topic id Some(buULhLhhTIOJjtnuKy1oCQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) kafka | [2024-04-26 08:53:35,771] INFO [LogLoader partition=__consumer_offsets-43, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-04-26 08:53:35,772] INFO Created log for partition __consumer_offsets-43 in /var/lib/kafka/data/__consumer_offsets-43 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-04-26 08:53:35,772] INFO [Partition __consumer_offsets-43 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-43 (kafka.cluster.Partition) kafka | [2024-04-26 08:53:35,772] INFO [Partition __consumer_offsets-43 broker=1] Log loaded for partition __consumer_offsets-43 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-04-26 08:53:35,772] INFO [Broker id=1] Leader __consumer_offsets-43 with topic id Some(buULhLhhTIOJjtnuKy1oCQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) kafka | [2024-04-26 08:53:35,778] INFO [LogLoader partition=__consumer_offsets-13, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-04-26 08:53:35,779] INFO Created log for partition __consumer_offsets-13 in /var/lib/kafka/data/__consumer_offsets-13 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-04-26 08:53:35,779] INFO [Partition __consumer_offsets-13 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-13 (kafka.cluster.Partition) kafka | [2024-04-26 08:53:35,779] INFO [Partition __consumer_offsets-13 broker=1] Log loaded for partition __consumer_offsets-13 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-04-26 08:53:35,779] INFO [Broker id=1] Leader __consumer_offsets-13 with topic id Some(buULhLhhTIOJjtnuKy1oCQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) kafka | [2024-04-26 08:53:35,784] INFO [LogLoader partition=__consumer_offsets-28, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-04-26 08:53:35,784] INFO Created log for partition __consumer_offsets-28 in /var/lib/kafka/data/__consumer_offsets-28 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-04-26 08:53:35,784] INFO [Partition __consumer_offsets-28 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-28 (kafka.cluster.Partition) kafka | [2024-04-26 08:53:35,784] INFO [Partition __consumer_offsets-28 broker=1] Log loaded for partition __consumer_offsets-28 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-04-26 08:53:35,784] INFO [Broker id=1] Leader __consumer_offsets-28 with topic id Some(buULhLhhTIOJjtnuKy1oCQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) kafka | [2024-04-26 08:53:35,788] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-3 (state.change.logger) kafka | [2024-04-26 08:53:35,788] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-18 (state.change.logger) kafka | [2024-04-26 08:53:35,788] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-41 (state.change.logger) kafka | [2024-04-26 08:53:35,788] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-10 (state.change.logger) kafka | [2024-04-26 08:53:35,789] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-33 (state.change.logger) kafka | [2024-04-26 08:53:35,789] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-48 (state.change.logger) kafka | [2024-04-26 08:53:35,789] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-19 (state.change.logger) kafka | [2024-04-26 08:53:35,789] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-34 (state.change.logger) kafka | [2024-04-26 08:53:35,789] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-4 (state.change.logger) kafka | 
[2024-04-26 08:53:35,789] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-11 (state.change.logger) kafka | [2024-04-26 08:53:35,789] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-26 (state.change.logger) kafka | [2024-04-26 08:53:35,789] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-49 (state.change.logger) kafka | [2024-04-26 08:53:35,789] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-39 (state.change.logger) kafka | [2024-04-26 08:53:35,789] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-9 (state.change.logger) kafka | [2024-04-26 08:53:35,789] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-24 (state.change.logger) kafka | [2024-04-26 08:53:35,789] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-31 (state.change.logger) kafka | [2024-04-26 08:53:35,789] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-46 (state.change.logger) kafka | [2024-04-26 08:53:35,789] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-1 (state.change.logger) kafka | [2024-04-26 08:53:35,789] TRACE [Broker 
id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-16 (state.change.logger) kafka | [2024-04-26 08:53:35,789] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-2 (state.change.logger) kafka | [2024-04-26 08:53:35,789] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-25 (state.change.logger) kafka | [2024-04-26 08:53:35,789] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-40 (state.change.logger) kafka | [2024-04-26 08:53:35,789] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-47 (state.change.logger) kafka | [2024-04-26 08:53:35,789] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-17 (state.change.logger) kafka | [2024-04-26 08:53:35,789] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-32 (state.change.logger) kafka | [2024-04-26 08:53:35,789] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-37 (state.change.logger) kafka | [2024-04-26 08:53:35,789] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-7 (state.change.logger) kafka | [2024-04-26 08:53:35,789] TRACE [Broker id=1] Completed LeaderAndIsr request 
correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-22 (state.change.logger) kafka | [2024-04-26 08:53:35,789] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-29 (state.change.logger) kafka | [2024-04-26 08:53:35,789] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-44 (state.change.logger) kafka | [2024-04-26 08:53:35,789] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-14 (state.change.logger) kafka | [2024-04-26 08:53:35,789] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-23 (state.change.logger) kafka | [2024-04-26 08:53:35,789] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-38 (state.change.logger) kafka | [2024-04-26 08:53:35,789] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-8 (state.change.logger) kafka | [2024-04-26 08:53:35,789] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition policy-pdp-pap-0 (state.change.logger) kafka | [2024-04-26 08:53:35,789] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-45 (state.change.logger) kafka | [2024-04-26 08:53:35,789] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 
for the become-leader transition for partition __consumer_offsets-15 (state.change.logger) kafka | [2024-04-26 08:53:35,789] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-30 (state.change.logger) kafka | [2024-04-26 08:53:35,789] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-0 (state.change.logger) kafka | [2024-04-26 08:53:35,789] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-35 (state.change.logger) kafka | [2024-04-26 08:53:35,789] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-5 (state.change.logger) kafka | [2024-04-26 08:53:35,789] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-20 (state.change.logger) kafka | [2024-04-26 08:53:35,789] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-27 (state.change.logger) kafka | [2024-04-26 08:53:35,789] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-42 (state.change.logger) kafka | [2024-04-26 08:53:35,789] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-12 (state.change.logger) kafka | [2024-04-26 08:53:35,789] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for 
partition __consumer_offsets-21 (state.change.logger)
kafka | [2024-04-26 08:53:35,789] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-36 (state.change.logger)
kafka | [2024-04-26 08:53:35,789] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-6 (state.change.logger)
kafka | [2024-04-26 08:53:35,789] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-43 (state.change.logger)
kafka | [2024-04-26 08:53:35,789] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-13 (state.change.logger)
kafka | [2024-04-26 08:53:35,789] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-28 (state.change.logger)
kafka | [2024-04-26 08:53:35,795] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 3 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-04-26 08:53:35,797] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-3 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-04-26 08:53:35,798] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 18 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-04-26 08:53:35,798] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-18 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-04-26 08:53:35,798] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 41 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-04-26 08:53:35,798] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-41 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-04-26 08:53:35,798] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 10 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-04-26 08:53:35,798] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-10 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-04-26 08:53:35,798] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 33 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-04-26 08:53:35,798] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-33 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-04-26 08:53:35,798] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 48 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-04-26 08:53:35,798] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-48 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-04-26 08:53:35,798] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 19 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-04-26 08:53:35,798] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-19 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-04-26 08:53:35,798] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 34 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-04-26 08:53:35,798] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-34 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-04-26 08:53:35,798] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 4 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-04-26 08:53:35,798] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-4 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-04-26 08:53:35,798] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 11 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-04-26 08:53:35,798] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-11 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-04-26 08:53:35,798] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 26 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-04-26 08:53:35,798] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-26 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-04-26 08:53:35,798] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 49 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-04-26 08:53:35,798] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-49 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-04-26 08:53:35,798] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 39 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-04-26 08:53:35,798] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-39 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-04-26 08:53:35,798] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 9 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-04-26 08:53:35,798] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-9 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-04-26 08:53:35,798] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 24 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-04-26 08:53:35,798] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-24 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-04-26 08:53:35,798] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 31 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-04-26 08:53:35,798] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-31 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-04-26 08:53:35,798] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 46 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-04-26 08:53:35,798] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-46 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-04-26 08:53:35,798] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 1 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-04-26 08:53:35,798] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-1 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-04-26 08:53:35,798] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 16 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-04-26 08:53:35,798] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-16 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-04-26 08:53:35,798] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 2 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-04-26 08:53:35,798] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-2 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-04-26 08:53:35,798] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 25 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-04-26 08:53:35,798] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-25 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-04-26 08:53:35,798] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 40 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-04-26 08:53:35,798] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-40 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-04-26 08:53:35,798] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 47 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-04-26 08:53:35,798] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-47 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-04-26 08:53:35,799] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 17 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-04-26 08:53:35,799] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-17 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-04-26 08:53:35,799] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 32 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-04-26 08:53:35,799] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-32 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-04-26 08:53:35,799] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 37 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-04-26 08:53:35,799] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-37 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-04-26 08:53:35,799] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 7 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-04-26 08:53:35,799] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-7 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-04-26 08:53:35,799] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 22 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-04-26 08:53:35,799] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-22 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-04-26 08:53:35,799] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 29 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-04-26 08:53:35,799] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-29 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-04-26 08:53:35,799] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 44 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-04-26 08:53:35,799] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-44 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-04-26 08:53:35,799] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 14 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-04-26 08:53:35,799] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-14 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-04-26 08:53:35,799] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 23 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-04-26 08:53:35,799] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-23 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-04-26 08:53:35,799] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 38 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-04-26 08:53:35,799] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-38 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-04-26 08:53:35,799] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 8 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-04-26 08:53:35,799] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-8 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-04-26 08:53:35,799] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 45 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-04-26 08:53:35,799] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-45 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-04-26 08:53:35,799] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 15 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-04-26 08:53:35,799] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-15 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-04-26 08:53:35,799] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 30 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-04-26 08:53:35,799] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-30 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-04-26 08:53:35,799] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 0 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-04-26 08:53:35,799] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-0 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-04-26 08:53:35,799] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 35 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-04-26 08:53:35,799] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-35 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-04-26 08:53:35,799] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 5 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-04-26 08:53:35,799] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-5 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-04-26 08:53:35,799] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 20 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-04-26 08:53:35,799] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-20 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-04-26 08:53:35,799] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 27 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-04-26 08:53:35,799] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-27 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-04-26 08:53:35,799] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 42 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-04-26 08:53:35,799] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-42 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-04-26 08:53:35,799] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 12 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-04-26 08:53:35,799] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-12 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-04-26 08:53:35,799] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 21 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-04-26 08:53:35,799] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-21 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-04-26 08:53:35,799] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 36 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-04-26 08:53:35,799] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-36 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-04-26 08:53:35,799] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 6 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-04-26 08:53:35,799] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-6 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-04-26 08:53:35,799] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 43 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-04-26 08:53:35,799] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-43 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-04-26 08:53:35,799] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 13 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-04-26 08:53:35,799] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-13 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-04-26 08:53:35,799] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 28 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-04-26 08:53:35,799] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-28 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-04-26 08:53:35,802] INFO [Broker id=1] Finished LeaderAndIsr request in 775ms correlationId 1 from controller 1 for 51 partitions (state.change.logger)
kafka | [2024-04-26 08:53:35,808] TRACE [Controller id=1 epoch=1] Received response LeaderAndIsrResponseData(errorCode=0, partitionErrors=[], topics=[LeaderAndIsrTopicError(topicId=buULhLhhTIOJjtnuKy1oCQ, partitionErrors=[LeaderAndIsrPartitionError(topicName='', partitionIndex=13, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=46, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=9, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=42, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=21, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=17, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=30, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=26, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=5, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=38, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=1, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=34, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=16, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=45, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=12, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=41, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=24, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=20, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=49, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=0, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=29, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=25, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=8, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=37, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=4, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=33, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=15, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=48, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=11, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=44, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=23, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=19, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=32, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=28, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=7, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=40, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=3, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=36, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=47, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=14, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=43, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=10, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=22, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=18, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=31, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=27, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=39, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=6, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=35, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=2, errorCode=0)]), LeaderAndIsrTopicError(topicId=LsF0wvMZRGucbuSK9bj6lg, partitionErrors=[LeaderAndIsrPartitionError(topicName='', partitionIndex=0, errorCode=0)])]) for request LEADER_AND_ISR with correlation id 1 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger)
kafka | [2024-04-26 08:53:35,804] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-3 in 6 milliseconds for epoch 0, of which 3 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-04-26 08:53:35,810] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-18 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-04-26 08:53:35,811] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-41 in 13 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-04-26 08:53:35,811] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-10 in 13 milliseconds for epoch 0, of which 13 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-04-26 08:53:35,811] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-33 in 13 milliseconds for epoch 0, of which 13 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-04-26 08:53:35,815] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='policy-pdp-pap', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition policy-pdp-pap-0 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2024-04-26 08:53:35,815] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-48 in 17 milliseconds for epoch 0, of which 13 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-04-26 08:53:35,816] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=13, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-13 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2024-04-26 08:53:35,816] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=46, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-46 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2024-04-26 08:53:35,816] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=9, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-9 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2024-04-26 08:53:35,816] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=42, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-42 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2024-04-26 08:53:35,817] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=21, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-21 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2024-04-26 08:53:35,817] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=17, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-17 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2024-04-26 08:53:35,817] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=30, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-30 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2024-04-26 08:53:35,817] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=26, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-26 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2024-04-26 08:53:35,817] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=5, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-5 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2024-04-26 08:53:35,817] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=38, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-38 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2024-04-26 08:53:35,817] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=1, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-1 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2024-04-26 08:53:35,817] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=34, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-34 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2024-04-26 08:53:35,817] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-19 in 19 milliseconds for epoch 0, of which 19 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-04-26 08:53:35,817] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=16, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-16 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2024-04-26 08:53:35,817] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=45, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-45 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2024-04-26 08:53:35,817] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=12, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-12 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2024-04-26 08:53:35,817] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=41, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-41 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2024-04-26 08:53:35,817] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=24, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-24 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2024-04-26 08:53:35,817] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=20, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-20 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2024-04-26 08:53:35,817] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-34 in 19 milliseconds for epoch 0, of which 19 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-04-26 08:53:35,817] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=49, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-49 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2024-04-26 08:53:35,817] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-0 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2024-04-26 08:53:35,817] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=29, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-29 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2024-04-26 08:53:35,818] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=25, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-25 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2024-04-26 08:53:35,818] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=8, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-8 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2024-04-26 08:53:35,818] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-4 in 20 milliseconds for epoch 0, of which 19 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-04-26 08:53:35,818] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=37, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-37 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2024-04-26 08:53:35,818] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=4, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-4 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2024-04-26 08:53:35,818] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=33, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-33 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2024-04-26 08:53:35,818] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-11 in 20 milliseconds for epoch 0, of which 20 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-04-26 08:53:35,818] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=15, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-15 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2024-04-26 08:53:35,818] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=48, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-48 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2024-04-26 08:53:35,818] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=11, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-11 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2024-04-26 08:53:35,819] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=44, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-44 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2024-04-26 08:53:35,819] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-26 in 21 milliseconds for epoch 0, of which 20 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-04-26 08:53:35,819] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=23, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-23 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2024-04-26 08:53:35,819] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=19, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-19 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2024-04-26 08:53:35,819] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=32, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-32 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2024-04-26 08:53:35,819] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=28, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-28 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2024-04-26 08:53:35,819] TRACE
[Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=7, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-7 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-04-26 08:53:35,819] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=40, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-40 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-04-26 08:53:35,819] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=3, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-3 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-04-26 08:53:35,819] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=36, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-36 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-04-26 08:53:35,819] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-49 in 21 milliseconds for epoch 0, of which 21 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) kafka | [2024-04-26 08:53:35,819] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=47, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-47 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-04-26 08:53:35,819] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=14, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-14 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-04-26 08:53:35,819] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-39 in 21 milliseconds for epoch 0, of which 21 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) kafka | [2024-04-26 08:53:35,820] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=43, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-43 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-04-26 08:53:35,820] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=10, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-10 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-04-26 08:53:35,820] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=22, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-22 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-04-26 08:53:35,820] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=18, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-18 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-04-26 08:53:35,821] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=31, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-31 in response to UpdateMetadata request sent by controller 1 
epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-04-26 08:53:35,821] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-9 in 23 milliseconds for epoch 0, of which 22 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-04-26 08:53:35,821] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=27, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-27 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-04-26 08:53:35,821] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=39, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-39 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-04-26 08:53:35,821] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=6, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-6 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-04-26 08:53:35,821] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=35, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-35 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-04-26 08:53:35,821] INFO [GroupMetadataManager brokerId=1] 
Finished loading offsets and group metadata from __consumer_offsets-24 in 23 milliseconds for epoch 0, of which 23 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-04-26 08:53:35,821] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=2, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-2 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-04-26 08:53:35,821] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-31 in 23 milliseconds for epoch 0, of which 23 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-04-26 08:53:35,822] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-46 in 24 milliseconds for epoch 0, of which 23 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-04-26 08:53:35,822] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-1 in 24 milliseconds for epoch 0, of which 24 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-04-26 08:53:35,822] INFO [Broker id=1] Add 51 partitions and deleted 0 partitions from metadata cache in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-04-26 08:53:35,823] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-16 in 25 milliseconds for epoch 0, of which 24 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) kafka | [2024-04-26 08:53:35,823] TRACE [Controller id=1 epoch=1] Received response UpdateMetadataResponseData(errorCode=0) for request UPDATE_METADATA with correlation id 2 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger) kafka | [2024-04-26 08:53:35,827] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-2 in 29 milliseconds for epoch 0, of which 25 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-04-26 08:53:35,830] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-25 in 32 milliseconds for epoch 0, of which 29 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-04-26 08:53:35,830] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-40 in 32 milliseconds for epoch 0, of which 32 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-04-26 08:53:35,831] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-47 in 33 milliseconds for epoch 0, of which 32 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-04-26 08:53:35,836] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-17 in 37 milliseconds for epoch 0, of which 37 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-04-26 08:53:35,837] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-32 in 38 milliseconds for epoch 0, of which 38 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) kafka | [2024-04-26 08:53:35,837] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-37 in 38 milliseconds for epoch 0, of which 38 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-04-26 08:53:35,841] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-7 in 42 milliseconds for epoch 0, of which 42 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-04-26 08:53:35,841] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-22 in 42 milliseconds for epoch 0, of which 42 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-04-26 08:53:35,841] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-29 in 42 milliseconds for epoch 0, of which 42 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-04-26 08:53:35,841] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-44 in 42 milliseconds for epoch 0, of which 42 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-04-26 08:53:35,841] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-14 in 42 milliseconds for epoch 0, of which 42 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-04-26 08:53:35,846] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-23 in 47 milliseconds for epoch 0, of which 47 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) kafka | [2024-04-26 08:53:35,847] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-38 in 48 milliseconds for epoch 0, of which 48 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-04-26 08:53:35,847] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-8 in 48 milliseconds for epoch 0, of which 48 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-04-26 08:53:35,847] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-45 in 48 milliseconds for epoch 0, of which 48 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-04-26 08:53:35,847] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-15 in 48 milliseconds for epoch 0, of which 48 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-04-26 08:53:35,848] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-30 in 48 milliseconds for epoch 0, of which 48 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-04-26 08:53:35,848] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-0 in 49 milliseconds for epoch 0, of which 49 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-04-26 08:53:35,848] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-35 in 49 milliseconds for epoch 0, of which 49 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) kafka | [2024-04-26 08:53:35,848] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-5 in 49 milliseconds for epoch 0, of which 49 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-04-26 08:53:35,848] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-20 in 49 milliseconds for epoch 0, of which 49 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-04-26 08:53:35,848] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-27 in 49 milliseconds for epoch 0, of which 49 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-04-26 08:53:35,849] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-42 in 50 milliseconds for epoch 0, of which 49 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-04-26 08:53:35,849] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-12 in 50 milliseconds for epoch 0, of which 50 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-04-26 08:53:35,849] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-21 in 50 milliseconds for epoch 0, of which 50 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-04-26 08:53:35,849] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-36 in 50 milliseconds for epoch 0, of which 50 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) kafka | [2024-04-26 08:53:35,849] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-6 in 50 milliseconds for epoch 0, of which 50 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-04-26 08:53:35,849] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-43 in 50 milliseconds for epoch 0, of which 50 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-04-26 08:53:35,850] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-13 in 51 milliseconds for epoch 0, of which 51 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-04-26 08:53:35,850] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-28 in 51 milliseconds for epoch 0, of which 51 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-04-26 08:53:35,870] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group c2be2c80-205d-4227-951f-9a7c12c2d5ee in Empty state. Created a new member id consumer-c2be2c80-205d-4227-951f-9a7c12c2d5ee-3-9d406649-fe3d-4ae6-beec-b09399ffbd74 and request the member to rejoin with this id. (kafka.coordinator.group.GroupCoordinator) kafka | [2024-04-26 08:53:35,883] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group policy-pap in Empty state. Created a new member id consumer-policy-pap-4-d5fc21b9-d736-4544-aa25-242859a1ee13 and request the member to rejoin with this id. 
(kafka.coordinator.group.GroupCoordinator) kafka | [2024-04-26 08:53:35,892] INFO [GroupCoordinator 1]: Preparing to rebalance group c2be2c80-205d-4227-951f-9a7c12c2d5ee in state PreparingRebalance with old generation 0 (__consumer_offsets-37) (reason: Adding new member consumer-c2be2c80-205d-4227-951f-9a7c12c2d5ee-3-9d406649-fe3d-4ae6-beec-b09399ffbd74 with group instance id None; client reason: rebalance failed due to MemberIdRequiredException) (kafka.coordinator.group.GroupCoordinator) kafka | [2024-04-26 08:53:35,900] INFO [GroupCoordinator 1]: Preparing to rebalance group policy-pap in state PreparingRebalance with old generation 0 (__consumer_offsets-24) (reason: Adding new member consumer-policy-pap-4-d5fc21b9-d736-4544-aa25-242859a1ee13 with group instance id None; client reason: rebalance failed due to MemberIdRequiredException) (kafka.coordinator.group.GroupCoordinator) kafka | [2024-04-26 08:53:36,492] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group 47b1e3a1-a4a9-4bf2-95ae-f10384287681 in Empty state. Created a new member id consumer-47b1e3a1-a4a9-4bf2-95ae-f10384287681-2-8831a781-330f-42a7-8b90-ae48ec91c5ff and request the member to rejoin with this id. 
(kafka.coordinator.group.GroupCoordinator) kafka | [2024-04-26 08:53:36,496] INFO [GroupCoordinator 1]: Preparing to rebalance group 47b1e3a1-a4a9-4bf2-95ae-f10384287681 in state PreparingRebalance with old generation 0 (__consumer_offsets-27) (reason: Adding new member consumer-47b1e3a1-a4a9-4bf2-95ae-f10384287681-2-8831a781-330f-42a7-8b90-ae48ec91c5ff with group instance id None; client reason: rebalance failed due to MemberIdRequiredException) (kafka.coordinator.group.GroupCoordinator) kafka | [2024-04-26 08:53:38,905] INFO [GroupCoordinator 1]: Stabilized group c2be2c80-205d-4227-951f-9a7c12c2d5ee generation 1 (__consumer_offsets-37) with 1 members (kafka.coordinator.group.GroupCoordinator) kafka | [2024-04-26 08:53:38,919] INFO [GroupCoordinator 1]: Stabilized group policy-pap generation 1 (__consumer_offsets-24) with 1 members (kafka.coordinator.group.GroupCoordinator) kafka | [2024-04-26 08:53:38,931] INFO [GroupCoordinator 1]: Assignment received from leader consumer-c2be2c80-205d-4227-951f-9a7c12c2d5ee-3-9d406649-fe3d-4ae6-beec-b09399ffbd74 for group c2be2c80-205d-4227-951f-9a7c12c2d5ee for generation 1. The group has 1 members, 0 of which are static. (kafka.coordinator.group.GroupCoordinator) kafka | [2024-04-26 08:53:38,931] INFO [GroupCoordinator 1]: Assignment received from leader consumer-policy-pap-4-d5fc21b9-d736-4544-aa25-242859a1ee13 for group policy-pap for generation 1. The group has 1 members, 0 of which are static. (kafka.coordinator.group.GroupCoordinator) kafka | [2024-04-26 08:53:39,498] INFO [GroupCoordinator 1]: Stabilized group 47b1e3a1-a4a9-4bf2-95ae-f10384287681 generation 1 (__consumer_offsets-27) with 1 members (kafka.coordinator.group.GroupCoordinator) kafka | [2024-04-26 08:53:39,515] INFO [GroupCoordinator 1]: Assignment received from leader consumer-47b1e3a1-a4a9-4bf2-95ae-f10384287681-2-8831a781-330f-42a7-8b90-ae48ec91c5ff for group 47b1e3a1-a4a9-4bf2-95ae-f10384287681 for generation 1. 
The group has 1 members, 0 of which are static. (kafka.coordinator.group.GroupCoordinator) ++ echo 'Tearing down containers...' Tearing down containers... ++ docker-compose down -v --remove-orphans Stopping policy-apex-pdp ... Stopping policy-pap ... Stopping policy-api ... Stopping grafana ... Stopping kafka ... Stopping simulator ... Stopping mariadb ... Stopping prometheus ... Stopping zookeeper ... Stopping grafana ... done Stopping prometheus ... done Stopping policy-apex-pdp ... done Stopping simulator ... done Stopping policy-pap ... done Stopping mariadb ... done Stopping kafka ... done Stopping zookeeper ... done Stopping policy-api ... done Removing policy-apex-pdp ... Removing policy-pap ... Removing policy-api ... Removing policy-db-migrator ... Removing grafana ... Removing kafka ... Removing simulator ... Removing mariadb ... Removing prometheus ... Removing zookeeper ... Removing simulator ... done Removing policy-api ... done Removing grafana ... done Removing policy-apex-pdp ... done Removing mariadb ... done Removing prometheus ... done Removing policy-db-migrator ... done Removing kafka ... done Removing zookeeper ... done Removing policy-pap ... 
done Removing network compose_default ++ cd /w/workspace/policy-pap-master-project-csit-pap + load_set + _setopts=hxB ++ echo braceexpand:hashall:interactive-comments:xtrace ++ tr : ' ' + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') + set +o braceexpand + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') + set +o hashall + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') + set +o interactive-comments + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') + set +o xtrace ++ echo hxB ++ sed 's/./& /g' + for i in $(echo "$_setopts" | sed 's/./& /g') + set +h + for i in $(echo "$_setopts" | sed 's/./& /g') + set +x + rsync /w/workspace/policy-pap-master-project-csit-pap/compose/docker_compose.log /w/workspace/policy-pap-master-project-csit-pap/csit/archives/pap + [[ -n /tmp/tmp.L7L2qXgREO ]] + rsync -av /tmp/tmp.L7L2qXgREO/ /w/workspace/policy-pap-master-project-csit-pap/csit/archives/pap sending incremental file list ./ log.html output.xml report.html testplan.txt sent 919,127 bytes received 95 bytes 1,838,444.00 bytes/sec total size is 918,582 speedup is 1.00 + rm -rf /w/workspace/policy-pap-master-project-csit-pap/models + exit 0 $ ssh-agent -k unset SSH_AUTH_SOCK; unset SSH_AGENT_PID; echo Agent pid 2109 killed; [ssh-agent] Stopped. Robot results publisher started... INFO: Checking test criticality is deprecated and will be dropped in a future release! -Parsing output xml: Done! WARNING! Could not find file: **/log.html WARNING! Could not find file: **/report.html -Copying log files to build dir: Done! -Assigning results to build: Done! -Checking thresholds: Done! Done publishing Robot results. [PostBuildScript] - [INFO] Executing post build scripts. 
[policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins13889641374949625813.sh ---> sysstat.sh [policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins4701393863249499054.sh ---> package-listing.sh ++ facter osfamily ++ tr '[:upper:]' '[:lower:]' + OS_FAMILY=debian + workspace=/w/workspace/policy-pap-master-project-csit-pap + START_PACKAGES=/tmp/packages_start.txt + END_PACKAGES=/tmp/packages_end.txt + DIFF_PACKAGES=/tmp/packages_diff.txt + PACKAGES=/tmp/packages_start.txt + '[' /w/workspace/policy-pap-master-project-csit-pap ']' + PACKAGES=/tmp/packages_end.txt + case "${OS_FAMILY}" in + dpkg -l + grep '^ii' + '[' -f /tmp/packages_start.txt ']' + '[' -f /tmp/packages_end.txt ']' + diff /tmp/packages_start.txt /tmp/packages_end.txt + '[' /w/workspace/policy-pap-master-project-csit-pap ']' + mkdir -p /w/workspace/policy-pap-master-project-csit-pap/archives/ + cp -f /tmp/packages_diff.txt /tmp/packages_end.txt /tmp/packages_start.txt /w/workspace/policy-pap-master-project-csit-pap/archives/ [policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins5458218136361325092.sh ---> capture-instance-metadata.sh Setup pyenv: system 3.8.13 3.9.13 * 3.10.6 (set by /w/workspace/policy-pap-master-project-csit-pap/.python-version) lf-activate-venv(): INFO: Reuse venv:/tmp/venv-yR5Q from file:/tmp/.os_lf_venv lf-activate-venv(): INFO: Installing: lftools lf-activate-venv(): INFO: Adding /tmp/venv-yR5Q/bin to PATH INFO: Running in OpenStack, capturing instance metadata [policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins9052119006295990805.sh provisioning config files... copy managed file [jenkins-log-archives-settings] to file:/w/workspace/policy-pap-master-project-csit-pap@tmp/config15748407496748745424tmp Regular expression run condition: Expression=[^.*logs-s3.*], Label=[] Run condition [Regular expression match] preventing perform for step [Provide Configuration files] [EnvInject] - Injecting environment variables from a build step. 
[EnvInject] - Injecting as environment variables the properties content SERVER_ID=logs [EnvInject] - Variables injected successfully. [policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins14076395221765903824.sh ---> create-netrc.sh [policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins9604653815919876902.sh ---> python-tools-install.sh Setup pyenv: system 3.8.13 3.9.13 * 3.10.6 (set by /w/workspace/policy-pap-master-project-csit-pap/.python-version) lf-activate-venv(): INFO: Reuse venv:/tmp/venv-yR5Q from file:/tmp/.os_lf_venv lf-activate-venv(): INFO: Installing: lftools lf-activate-venv(): INFO: Adding /tmp/venv-yR5Q/bin to PATH [policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins1571760498166254717.sh ---> sudo-logs.sh Archiving 'sudo' log.. [policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins17334307905692237403.sh ---> job-cost.sh Setup pyenv: system 3.8.13 3.9.13 * 3.10.6 (set by /w/workspace/policy-pap-master-project-csit-pap/.python-version) lf-activate-venv(): INFO: Reuse venv:/tmp/venv-yR5Q from file:/tmp/.os_lf_venv lf-activate-venv(): INFO: Installing: zipp==1.1.0 python-openstackclient urllib3~=1.26.15 lf-activate-venv(): INFO: Adding /tmp/venv-yR5Q/bin to PATH INFO: No Stack... INFO: Retrieving Pricing Info for: v3-standard-8 INFO: Archiving Costs [policy-pap-master-project-csit-pap] $ /bin/bash -l /tmp/jenkins15787464538181187500.sh ---> logs-deploy.sh Setup pyenv: system 3.8.13 3.9.13 * 3.10.6 (set by /w/workspace/policy-pap-master-project-csit-pap/.python-version) lf-activate-venv(): INFO: Reuse venv:/tmp/venv-yR5Q from file:/tmp/.os_lf_venv lf-activate-venv(): INFO: Installing: lftools lf-activate-venv(): INFO: Adding /tmp/venv-yR5Q/bin to PATH INFO: Nexus URL https://nexus.onap.org path production/vex-yul-ecomp-jenkins-1/policy-pap-master-project-csit-pap/1666 INFO: archiving workspace using pattern(s): -p **/target/surefire-reports/*-output.txt Archives upload complete. 
INFO: archiving logs to Nexus
---> uname -a:
Linux prd-ubuntu1804-docker-8c-8g-35298 4.15.0-192-generic #203-Ubuntu SMP Wed Aug 10 17:40:03 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
---> lscpu:
Architecture:        x86_64
CPU op-mode(s):      32-bit, 64-bit
Byte Order:          Little Endian
CPU(s):              8
On-line CPU(s) list: 0-7
Thread(s) per core:  1
Core(s) per socket:  1
Socket(s):           8
NUMA node(s):        1
Vendor ID:           AuthenticAMD
CPU family:          23
Model:               49
Model name:          AMD EPYC-Rome Processor
Stepping:            0
CPU MHz:             2799.998
BogoMIPS:            5599.99
Virtualization:      AMD-V
Hypervisor vendor:   KVM
Virtualization type: full
L1d cache:           32K
L1i cache:           32K
L2 cache:            512K
L3 cache:            16384K
NUMA node0 CPU(s):   0-7
Flags:               fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm rep_good nopl xtopology cpuid extd_apicid tsc_known_freq pni pclmulqdq ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm cmp_legacy svm cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw topoext perfctr_core ssbd ibrs ibpb stibp vmmcall fsgsbase tsc_adjust bmi1 avx2 smep bmi2 rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves clzero xsaveerptr arat npt nrip_save umip rdpid arch_capabilities
---> nproc:
8
---> df -h:
Filesystem      Size  Used Avail Use% Mounted on
udev             16G     0   16G   0% /dev
tmpfs           3.2G  708K  3.2G   1% /run
/dev/vda1       155G   14G  142G   9% /
tmpfs            16G     0   16G   0% /dev/shm
tmpfs           5.0M     0  5.0M   0% /run/lock
tmpfs            16G     0   16G   0% /sys/fs/cgroup
/dev/vda15      105M  4.4M  100M   5% /boot/efi
tmpfs           3.2G     0  3.2G   0% /run/user/1001
---> free -m:
              total        used        free      shared  buff/cache   available
Mem:          32167         847       25376           0        5942       30863
Swap:          1023           0        1023
---> ip addr:
1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: ens3: mtu 1458 qdisc mq state UP group default qlen 1000
    link/ether fa:16:3e:90:be:f8 brd ff:ff:ff:ff:ff:ff
    inet 10.30.106.178/23 brd 10.30.107.255 scope global dynamic ens3
       valid_lft 85888sec preferred_lft 85888sec
    inet6 fe80::f816:3eff:fe90:bef8/64 scope link
       valid_lft forever preferred_lft forever
3: docker0: mtu 1500 qdisc noqueue state DOWN group default
    link/ether 02:42:ed:8f:a7:0d brd ff:ff:ff:ff:ff:ff
    inet 10.250.0.254/24 brd 10.250.0.255 scope global docker0
       valid_lft forever preferred_lft forever
---> sar -b -r -n DEV:
Linux 4.15.0-192-generic (prd-ubuntu1804-docker-8c-8g-35298)  04/26/24  _x86_64_  (8 CPU)

08:48:16     LINUX RESTART  (8 CPU)

08:49:01          tps      rtps      wtps   bread/s   bwrtn/s
08:50:01       112.16     44.68     67.49   1995.93  15748.84
08:51:01       121.45     23.20     98.25   2805.80  19875.75
08:52:01       131.26      0.35    130.91     44.13  70606.50
08:53:01       339.31     11.65    327.66    773.44  74962.46
08:54:01        89.09      0.38     88.70     25.46  12872.52
08:55:01         9.65      0.00      9.65      0.00   8421.28
08:56:01        49.78      0.03     49.74      1.73  10015.73
Average:       121.81     11.47    110.34    806.64  30357.58

08:49:01    kbmemfree   kbavail kbmemused  %memused kbbuffers  kbcached  kbcommit   %commit  kbactive   kbinact   kbdirty
08:50:01     30140072  31718768   2799148      8.50     68236   1821656   1448236      4.26    858472   1654092    154376
08:51:01     29564076  31686336   3375144     10.25     90300   2322680   1570944      4.62    972452   2073968    303848
08:52:01     27240300  31666180   5698920     17.30    128904   4484044   1397092      4.11   1018412   4220768    644440
08:53:01     25221220  30879048   7718000     23.43    151872   5627612   7216616     21.23   1909140   5229704      1572
08:54:01     23532296  29306144   9406924     28.56    155376   5736012   9184584     27.02   3566164   5249120       144
08:55:01     23704288  29478580   9234932     28.04    155560   5736172   8991812     26.46   3392004   5249152       352
08:56:01     25233000  31024944   7706220     23.40    156452   5763096   2516940      7.41   1916656   5245168       252
Average:     26376465  30822857   6562755     19.92    129529   4498753   4618032     13.59   1947614   4131710    157855

08:49:01        IFACE   rxpck/s   txpck/s    rxkB/s    txkB/s   rxcmp/s   txcmp/s  rxmcst/s   %ifutil
08:50:01           lo      1.47      1.47      0.17      0.17      0.00      0.00      0.00      0.00
08:50:01      docker0      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
08:50:01         ens3    379.15    244.26   1489.43     60.55      0.00      0.00      0.00      0.00
08:51:01  br-0415a66454dd    0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
08:51:01           lo      5.20      5.20      0.50      0.50      0.00      0.00      0.00      0.00
08:51:01      docker0      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
08:51:01         ens3    117.61     83.22   2540.71     11.81      0.00      0.00      0.00      0.00
08:52:01  br-0415a66454dd    0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
08:52:01           lo      4.47      4.47      0.46      0.46      0.00      0.00      0.00      0.00
08:52:01      docker0      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
08:52:01         ens3    634.73    312.83  15284.34     22.19      0.00      0.00      0.00      0.00
08:53:01  vethf79d0c9      0.05      0.10      0.00      0.01      0.00      0.00      0.00      0.00
08:53:01  br-0415a66454dd    0.57      0.53      0.05      0.29      0.00      0.00      0.00      0.00
08:53:01  veth1413776      0.08      0.25      0.01      0.02      0.00      0.00      0.00      0.00
08:53:01  veth66a2b9b      1.47      1.50      0.16      0.16      0.00      0.00      0.00      0.00
08:54:01  vethf79d0c9      3.27      3.88      0.62      0.40      0.00      0.00      0.00      0.00
08:54:01  br-0415a66454dd    1.27      1.27      1.51      1.58      0.00      0.00      0.00      0.00
08:54:01  veth1413776     45.93     39.78     17.31     39.89      0.00      0.00      0.00      0.00
08:54:01  veth66a2b9b     13.98     12.55      1.87      1.85      0.00      0.00      0.00      0.00
08:55:01  vethf79d0c9      3.20      4.67      0.66      0.36      0.00      0.00      0.00      0.00
08:55:01  br-0415a66454dd    1.30      1.47      0.33      0.18      0.00      0.00      0.00      0.00
08:55:01  veth1413776      0.32      0.32      0.58      0.02      0.00      0.00      0.00      0.00
08:55:01  veth66a2b9b     13.83      9.33      1.05      1.34      0.00      0.00      0.00      0.00
08:56:01  br-0415a66454dd    1.10      1.42      0.10      0.14      0.00      0.00      0.00      0.00
08:56:01  veth1413776      0.38      0.43      0.02      0.02      0.00      0.00      0.00      0.00
08:56:01           lo     34.89     34.89      6.21      6.21      0.00      0.00      0.00      0.00
08:56:01      docker0      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
Average:  br-0415a66454dd    0.60      0.67      0.28      0.31      0.00      0.00      0.00      0.00
Average:  veth1413776      6.67      5.83      2.56      5.71      0.00      0.00      0.00      0.00
Average:           lo      4.43      4.43      0.84      0.84      0.00      0.00      0.00      0.00
Average:      docker0      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00

---> sar -P ALL:
Linux 4.15.0-192-generic (prd-ubuntu1804-docker-8c-8g-35298)  04/26/24  _x86_64_  (8 CPU)

08:48:16     LINUX RESTART  (8 CPU)

08:49:01     CPU     %user     %nice   %system   %iowait    %steal     %idle
08:50:01     all     11.52      0.00      0.88      2.75      0.04     84.81
08:50:01       0      4.15      0.00      0.43      0.32      0.02     95.08
08:50:01       1      3.48      0.00      0.46      7.27      0.05     88.75
08:50:01       2     12.53      0.00      0.80      0.54      0.03     86.10
08:50:01       3     33.98      0.00      2.01      6.87      0.07     57.08
08:50:01       4     18.36      0.00      1.32      0.67      0.07     79.58
08:50:01       5      6.79      0.00      0.65      0.62      0.03     91.91
08:50:01       6      7.94      0.00      0.80      5.14      0.05     86.07
08:50:01       7      5.09      0.00      0.57      0.50      0.02     93.82
08:51:01     all      9.59      0.00      0.85      3.16      0.04     86.36
08:51:01       0      0.67      0.00      0.33      1.58      0.00     97.41
08:51:01       1      1.23      0.00      0.38      1.23      0.08     97.07
08:51:01       2     27.04      0.00      1.79      2.39      0.08     68.70
08:51:01       3     32.13      0.00      1.71      3.95      0.07     62.14
08:51:01       4      9.03      0.00      1.10      6.40      0.02     83.45
08:51:01       5      4.27      0.00      0.62      0.17      0.02     94.93
08:51:01       6      0.57      0.00      0.57      9.58      0.03     89.25
08:51:01       7      1.92      0.00      0.28      0.02      0.02     97.76
08:52:01     all      7.69      0.00      3.17     10.71      0.06     78.37
08:52:01       0      8.79      0.00      3.06      0.12      0.12     87.91
08:52:01       1      6.57      0.00      3.44     24.08      0.08     65.82
08:52:01       2      7.61      0.00      3.69     34.08      0.07     54.54
08:52:01       3      8.06      0.00      3.09      1.28      0.05     87.51
08:52:01       4      7.36      0.00      2.82      3.63      0.03     86.15
08:52:01       5      9.62      0.00      2.27      0.76      0.03     87.31
08:52:01       6      7.21      0.00      2.72     21.57      0.05     68.45
08:52:01       7      6.24      0.00      4.18      0.44      0.05     89.09
08:53:01     all     10.86      0.00      3.56      9.03      0.05     76.48
08:53:01       0     10.88      0.00      4.15     28.84      0.07     56.06
08:53:01       1      9.08      0.00      3.50      4.28      0.08     83.06
08:53:01       2      9.87      0.00      4.04     20.99      0.05     65.04
08:53:01       3     12.48      0.00      3.21     11.91      0.05     72.35
08:53:01       4     10.48      0.00      3.18      1.36      0.05     84.92
08:53:01       5     10.04      0.00      3.01      3.28      0.05     83.62
08:53:01       6     12.64      0.00      3.83      0.70      0.05     82.78
08:53:01       7     11.44      0.00      3.60      1.19      0.05     83.72
08:54:01     all     24.31      0.00      2.71      1.29      0.08     71.61
08:54:01       0     16.47      0.00      2.24      0.67      0.07     80.56
08:54:01       1     21.59      0.00      2.47      0.05      0.12     75.78
08:54:01       2     32.54      0.00      3.43      0.05      0.08     63.89
08:54:01       3     27.93      0.00      3.45      1.05      0.08     67.49
08:54:01       4     19.69      0.00      2.02      1.68      0.08     76.52
08:54:01       5     25.23      0.00      2.65      5.27      0.05     66.80
08:54:01       6     25.50      0.00      2.64      0.23      0.07     71.56
08:54:01       7     25.52      0.00      2.80      1.29      0.07     70.33
08:55:01     all      2.91      0.00      0.33      0.76      0.06     95.94
08:55:01       0      3.37      0.00      0.32      0.00      0.03     96.28
08:55:01       1      3.67      0.00      0.43      0.00      0.05     95.85
08:55:01       2      2.94      0.00      0.22      0.00      0.03     96.81
08:55:01       3      2.69      0.00      0.33      0.00      0.05     96.92
08:55:01       4      2.03      0.00      0.22      0.10      0.05     97.60
08:55:01       5      2.57      0.00      0.35      5.94      0.07     91.07
08:55:01       6      3.02      0.00      0.28      0.07      0.08     96.55
08:55:01       7      2.92      0.00      0.52      0.00      0.10     96.46
08:56:01     all      1.55      0.00      0.44      0.77      0.05     97.19
08:56:01       0      1.42      0.00      0.37      0.15      0.03     98.03
08:56:01       1      1.96      0.00      0.40      0.52      0.05     97.07
08:56:01       2      1.12      0.00      0.47      0.18      0.05     98.18
08:56:01       3      0.87      0.00      0.43      0.12      0.05     98.53
08:56:01       4      2.35      0.00      0.36      0.12      0.05     97.12
08:56:01       5      1.59      0.00      0.45      5.01      0.03     92.92
08:56:01       6      2.12      0.00      0.48      0.08      0.08     97.22
08:56:01       7      0.98      0.00      0.53      0.03      0.03     98.42
Average:     all      9.77      0.00      1.70      4.05      0.05     84.42
Average:       0      6.52      0.00      1.55      4.45      0.05     87.44
Average:       1      6.77      0.00      1.57      5.33      0.07     86.25
Average:       2     13.39      0.00      2.06      8.25      0.06     76.24
Average:       3     16.89      0.00      2.03      3.59      0.06     77.42
Average:       4      9.88      0.00      1.57      1.99      0.05     86.51
Average:       5      8.57      0.00      1.42      3.01      0.04     86.96
Average:       6      8.42      0.00      1.61      5.31      0.06     84.59
Average:       7      7.72      0.00      1.78      0.50      0.05     89.96