Started by upstream project "policy-pap-master-merge-java" build number 352
originally caused by:
 Triggered by Gerrit: https://gerrit.onap.org/r/c/policy/pap/+/137774
Running as SYSTEM
[EnvInject] - Loading node environment variables.
Building remotely on prd-ubuntu1804-docker-8c-8g-35271 (ubuntu1804-docker-8c-8g) in workspace /w/workspace/policy-pap-master-project-csit-pap
[ssh-agent] Looking for ssh-agent implementation...
[ssh-agent] Exec ssh-agent (binary ssh-agent on a remote machine)
$ ssh-agent
SSH_AUTH_SOCK=/tmp/ssh-rM8zFjTDPBtL/agent.2140
SSH_AGENT_PID=2142
[ssh-agent] Started.
Running ssh-add (command line suppressed)
Identity added: /w/workspace/policy-pap-master-project-csit-pap@tmp/private_key_14149749062329732618.key (/w/workspace/policy-pap-master-project-csit-pap@tmp/private_key_14149749062329732618.key)
[ssh-agent] Using credentials onap-jobbuiler (Gerrit user)
The recommended git tool is: NONE
using credential onap-jenkins-ssh
Wiping out workspace first.
Cloning the remote Git repository
Cloning repository git://cloud.onap.org/mirror/policy/docker.git
 > git init /w/workspace/policy-pap-master-project-csit-pap # timeout=10
Fetching upstream changes from git://cloud.onap.org/mirror/policy/docker.git
 > git --version # timeout=10
 > git --version # 'git version 2.17.1'
using GIT_SSH to set credentials Gerrit user
Verifying host key using manually-configured host key entries
 > git fetch --tags --progress -- git://cloud.onap.org/mirror/policy/docker.git +refs/heads/*:refs/remotes/origin/* # timeout=30
 > git config remote.origin.url git://cloud.onap.org/mirror/policy/docker.git # timeout=10
 > git config --add remote.origin.fetch +refs/heads/*:refs/remotes/origin/* # timeout=10
Avoid second fetch
 > git rev-parse refs/remotes/origin/master^{commit} # timeout=10
Checking out Revision 0d7c8284756c9a15d526c2d282cfc1dfd1595ffb (refs/remotes/origin/master)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f 0d7c8284756c9a15d526c2d282cfc1dfd1595ffb # timeout=30
Commit message: "Update snapshot and/or references of policy/docker to latest snapshots"
 > git rev-list --no-walk 0d7c8284756c9a15d526c2d282cfc1dfd1595ffb # timeout=10
provisioning config files...
copy managed file [npmrc] to file:/home/jenkins/.npmrc
copy managed file [pipconf] to file:/home/jenkins/.config/pip/pip.conf
[policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins13866659602178692332.sh
---> python-tools-install.sh
Setup pyenv:
* system (set by /opt/pyenv/version)
* 3.8.13 (set by /opt/pyenv/version)
* 3.9.13 (set by /opt/pyenv/version)
* 3.10.6 (set by /opt/pyenv/version)
lf-activate-venv(): INFO: Creating python3 venv at /tmp/venv-WerH
lf-activate-venv(): INFO: Save venv in file: /tmp/.os_lf_venv
lf-activate-venv(): INFO: Installing: lftools
lf-activate-venv(): INFO: Adding /tmp/venv-WerH/bin to PATH
Generating Requirements File
Python 3.10.6
pip 24.0 from /tmp/venv-WerH/lib/python3.10/site-packages/pip (python 3.10)
appdirs==1.4.4 argcomplete==3.3.0 aspy.yaml==1.3.0 attrs==23.2.0 autopage==0.5.2 beautifulsoup4==4.12.3 boto3==1.34.92 botocore==1.34.92 bs4==0.0.2 cachetools==5.3.3 certifi==2024.2.2 cffi==1.16.0 cfgv==3.4.0 chardet==5.2.0 charset-normalizer==3.3.2 click==8.1.7 cliff==4.6.0 cmd2==2.4.3 cryptography==3.3.2 debtcollector==3.0.0 decorator==5.1.1 defusedxml==0.7.1 Deprecated==1.2.14 distlib==0.3.8 dnspython==2.6.1 docker==4.2.2 dogpile.cache==1.3.2 email_validator==2.1.1 filelock==3.13.4 future==1.0.0 gitdb==4.0.11 GitPython==3.1.43 google-auth==2.29.0 httplib2==0.22.0 identify==2.5.36 idna==3.7 importlib-resources==1.5.0 iso8601==2.1.0 Jinja2==3.1.3 jmespath==1.0.1 jsonpatch==1.33 jsonpointer==2.4 jsonschema==4.21.1 jsonschema-specifications==2023.12.1 keystoneauth1==5.6.0 kubernetes==29.0.0 lftools==0.37.10 lxml==5.2.1 MarkupSafe==2.1.5 msgpack==1.0.8 multi_key_dict==2.0.3 munch==4.0.0 netaddr==1.2.1 netifaces==0.11.0 niet==1.4.2 nodeenv==1.8.0 oauth2client==4.1.3 oauthlib==3.2.2 openstacksdk==3.1.0 os-client-config==2.1.0 os-service-types==1.7.0 osc-lib==3.0.1 oslo.config==9.4.0 oslo.context==5.5.0 oslo.i18n==6.3.0 oslo.log==5.5.1 oslo.serialization==5.4.0 oslo.utils==7.1.0 packaging==24.0 pbr==6.0.0 platformdirs==4.2.1 prettytable==3.10.0 pyasn1==0.6.0 pyasn1_modules==0.4.0 pycparser==2.22 pygerrit2==2.0.15 PyGithub==2.3.0 pyinotify==0.9.6 PyJWT==2.8.0 PyNaCl==1.5.0 pyparsing==2.4.7 pyperclip==1.8.2 pyrsistent==0.20.0 python-cinderclient==9.5.0 python-dateutil==2.9.0.post0 python-heatclient==3.5.0 python-jenkins==1.8.2 python-keystoneclient==5.4.0 python-magnumclient==4.4.0 python-novaclient==18.6.0 python-openstackclient==6.6.0 python-swiftclient==4.5.0 PyYAML==6.0.1 referencing==0.35.0 requests==2.31.0 requests-oauthlib==2.0.0 requestsexceptions==1.4.0 rfc3986==2.0.0 rpds-py==0.18.0 rsa==4.9 ruamel.yaml==0.18.6 ruamel.yaml.clib==0.2.8 s3transfer==0.10.1 simplejson==3.19.2 six==1.16.0 smmap==5.0.1 soupsieve==2.5 stevedore==5.2.0 tabulate==0.9.0 toml==0.10.2 tomlkit==0.12.4 tqdm==4.66.2 typing_extensions==4.11.0 tzdata==2024.1 urllib3==1.26.18 virtualenv==20.26.0 wcwidth==0.2.13 websocket-client==1.8.0 wrapt==1.16.0 xdg==6.0.0 xmltodict==0.13.0 yq==3.4.1
[EnvInject] - Injecting environment variables from a build step.
[EnvInject] - Injecting as environment variables the properties content
SET_JDK_VERSION=openjdk17
GIT_URL="git://cloud.onap.org/mirror"
[EnvInject] - Variables injected successfully.
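The python-tools-install step above creates a venv at /tmp/venv-WerH, installs lftools into it, and then "Generating Requirements File" freezes the environment. The lf-activate-venv implementation itself is not shown in the log, so the following is only a minimal sketch of that create-and-freeze pattern; the venv path and output file name are illustrative.

```shell
# Sketch of the venv create-and-freeze pattern (not the actual
# lf-activate-venv script). Creates a throwaway venv and records
# its installed packages, as the "Generating Requirements File"
# step above does.
set -eu
venv_dir="$(mktemp -d /tmp/venv-XXXXXX)"
python3 -m venv "$venv_dir"
# Call the venv's interpreter directly instead of sourcing activate.
"$venv_dir/bin/python" -m pip freeze > "$venv_dir/requirements.txt"
echo "Requirements written to $venv_dir/requirements.txt"
```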
[policy-pap-master-project-csit-pap] $ /bin/sh /tmp/jenkins2165708490221253392.sh ---> update-java-alternatives.sh ---> Updating Java version ---> Ubuntu/Debian system detected update-alternatives: using /usr/lib/jvm/java-17-openjdk-amd64/bin/java to provide /usr/bin/java (java) in manual mode update-alternatives: using /usr/lib/jvm/java-17-openjdk-amd64/bin/javac to provide /usr/bin/javac (javac) in manual mode update-alternatives: using /usr/lib/jvm/java-17-openjdk-amd64 to provide /usr/lib/jvm/java-openjdk (java_sdk_openjdk) in manual mode openjdk version "17.0.4" 2022-07-19 OpenJDK Runtime Environment (build 17.0.4+8-Ubuntu-118.04) OpenJDK 64-Bit Server VM (build 17.0.4+8-Ubuntu-118.04, mixed mode, sharing) JAVA_HOME=/usr/lib/jvm/java-17-openjdk-amd64 [EnvInject] - Injecting environment variables from a build step. [EnvInject] - Injecting as environment variables the properties file path '/tmp/java.env' [EnvInject] - Variables injected successfully. [policy-pap-master-project-csit-pap] $ /bin/sh -xe /tmp/jenkins8569520744217647814.sh + /w/workspace/policy-pap-master-project-csit-pap/csit/run-project-csit.sh pap + set +u + save_set + RUN_CSIT_SAVE_SET=ehxB + RUN_CSIT_SHELLOPTS=braceexpand:errexit:hashall:interactive-comments:pipefail:xtrace + '[' 1 -eq 0 ']' + '[' -z /w/workspace/policy-pap-master-project-csit-pap ']' + export PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin + export SCRIPTS=/w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts + SCRIPTS=/w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts + export ROBOT_VARIABLES= + 
ROBOT_VARIABLES= + export PROJECT=pap + PROJECT=pap + cd /w/workspace/policy-pap-master-project-csit-pap + rm -rf /w/workspace/policy-pap-master-project-csit-pap/csit/archives/pap + mkdir -p /w/workspace/policy-pap-master-project-csit-pap/csit/archives/pap + source_safely /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/prepare-robot-env.sh + '[' -z /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/prepare-robot-env.sh ']' + relax_set + set +e + set +o pipefail + . /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/prepare-robot-env.sh ++ '[' -z /w/workspace/policy-pap-master-project-csit-pap ']' +++ mktemp -d ++ ROBOT_VENV=/tmp/tmp.FET4DHaDjP ++ echo ROBOT_VENV=/tmp/tmp.FET4DHaDjP +++ python3 --version ++ echo 'Python version is: Python 3.6.9' Python version is: Python 3.6.9 ++ python3 -m venv --clear /tmp/tmp.FET4DHaDjP ++ source /tmp/tmp.FET4DHaDjP/bin/activate +++ deactivate nondestructive +++ '[' -n '' ']' +++ '[' -n '' ']' +++ '[' -n /bin/bash -o -n '' ']' +++ hash -r +++ '[' -n '' ']' +++ unset VIRTUAL_ENV +++ '[' '!' 
nondestructive = nondestructive ']' +++ VIRTUAL_ENV=/tmp/tmp.FET4DHaDjP +++ export VIRTUAL_ENV +++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin +++ PATH=/tmp/tmp.FET4DHaDjP/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin +++ export PATH +++ '[' -n '' ']' +++ '[' -z '' ']' +++ _OLD_VIRTUAL_PS1= +++ '[' 'x(tmp.FET4DHaDjP) ' '!=' x ']' +++ PS1='(tmp.FET4DHaDjP) ' +++ export PS1 +++ '[' -n /bin/bash -o -n '' ']' +++ hash -r ++ set -exu ++ python3 -m pip install -qq --upgrade 'pip<=23.0' 'setuptools<=66.1.1' ++ echo 'Installing Python Requirements' Installing Python Requirements ++ python3 -m pip install -qq -r /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/pylibs.txt ++ python3 -m pip -qq freeze bcrypt==4.0.1 beautifulsoup4==4.12.3 bitarray==2.9.2 certifi==2024.2.2 cffi==1.15.1 charset-normalizer==2.0.12 cryptography==40.0.2 decorator==5.1.1 elasticsearch==7.17.9 elasticsearch-dsl==7.4.1 enum34==1.1.10 idna==3.7 importlib-resources==5.4.0 ipaddr==2.2.0 isodate==0.6.1 jmespath==0.10.0 jsonpatch==1.32 jsonpath-rw==1.4.0 jsonpointer==2.3 lxml==5.2.1 netaddr==0.8.0 netifaces==0.11.0 odltools==0.1.28 paramiko==3.4.0 pkg_resources==0.0.0 ply==3.11 pyang==2.6.0 pyangbind==0.8.1 pycparser==2.21 pyhocon==0.3.60 PyNaCl==1.5.0 pyparsing==3.1.2 python-dateutil==2.9.0.post0 regex==2023.8.8 requests==2.27.1 robotframework==6.1.1 robotframework-httplibrary==0.4.2 robotframework-pythonlibcore==3.0.0 robotframework-requests==0.9.4 robotframework-selenium2library==3.0.0 robotframework-seleniumlibrary==5.1.3 robotframework-sshlibrary==3.8.0 scapy==2.5.0 scp==0.14.5 selenium==3.141.0 six==1.16.0 
soupsieve==2.3.2.post1 urllib3==1.26.18 waitress==2.0.0 WebOb==1.8.7 WebTest==3.0.0 zipp==3.6.0 ++ mkdir -p /tmp/tmp.FET4DHaDjP/src/onap ++ rm -rf /tmp/tmp.FET4DHaDjP/src/onap/testsuite ++ python3 -m pip install -qq --upgrade --extra-index-url=https://nexus3.onap.org/repository/PyPi.staging/simple 'robotframework-onap==0.6.0.*' --pre ++ echo 'Installing python confluent-kafka library' Installing python confluent-kafka library ++ python3 -m pip install -qq confluent-kafka ++ echo 'Uninstall docker-py and reinstall docker.' Uninstall docker-py and reinstall docker. ++ python3 -m pip uninstall -y -qq docker ++ python3 -m pip install -U -qq docker ++ python3 -m pip -qq freeze bcrypt==4.0.1 beautifulsoup4==4.12.3 bitarray==2.9.2 certifi==2024.2.2 cffi==1.15.1 charset-normalizer==2.0.12 confluent-kafka==2.3.0 cryptography==40.0.2 decorator==5.1.1 deepdiff==5.7.0 dnspython==2.2.1 docker==5.0.3 elasticsearch==7.17.9 elasticsearch-dsl==7.4.1 enum34==1.1.10 future==1.0.0 idna==3.7 importlib-resources==5.4.0 ipaddr==2.2.0 isodate==0.6.1 Jinja2==3.0.3 jmespath==0.10.0 jsonpatch==1.32 jsonpath-rw==1.4.0 jsonpointer==2.3 kafka-python==2.0.2 lxml==5.2.1 MarkupSafe==2.0.1 more-itertools==5.0.0 netaddr==0.8.0 netifaces==0.11.0 odltools==0.1.28 ordered-set==4.0.2 paramiko==3.4.0 pbr==6.0.0 pkg_resources==0.0.0 ply==3.11 protobuf==3.19.6 pyang==2.6.0 pyangbind==0.8.1 pycparser==2.21 pyhocon==0.3.60 PyNaCl==1.5.0 pyparsing==3.1.2 python-dateutil==2.9.0.post0 PyYAML==6.0.1 regex==2023.8.8 requests==2.27.1 robotframework==6.1.1 robotframework-httplibrary==0.4.2 robotframework-onap==0.6.0.dev105 robotframework-pythonlibcore==3.0.0 robotframework-requests==0.9.4 robotframework-selenium2library==3.0.0 robotframework-seleniumlibrary==5.1.3 robotframework-sshlibrary==3.8.0 robotlibcore-temp==1.0.2 scapy==2.5.0 scp==0.14.5 selenium==3.141.0 six==1.16.0 soupsieve==2.3.2.post1 urllib3==1.26.18 waitress==2.0.0 WebOb==1.8.7 websocket-client==1.3.1 WebTest==3.0.0 zipp==3.6.0 ++ uname ++ grep -q 
Linux ++ sudo apt-get -y -qq install libxml2-utils + load_set + _setopts=ehuxB ++ echo braceexpand:hashall:interactive-comments:nounset:xtrace ++ tr : ' ' + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') + set +o braceexpand + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') + set +o hashall + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') + set +o interactive-comments + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') + set +o nounset + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') + set +o xtrace ++ sed 's/./& /g' ++ echo ehuxB + for i in $(echo "$_setopts" | sed 's/./& /g') + set +e + for i in $(echo "$_setopts" | sed 's/./& /g') + set +h + for i in $(echo "$_setopts" | sed 's/./& /g') + set +u + for i in $(echo "$_setopts" | sed 's/./& /g') + set +x + source_safely /tmp/tmp.FET4DHaDjP/bin/activate + '[' -z /tmp/tmp.FET4DHaDjP/bin/activate ']' + relax_set + set +e + set +o pipefail + . /tmp/tmp.FET4DHaDjP/bin/activate ++ deactivate nondestructive ++ '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin ']' ++ PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin ++ export PATH ++ unset _OLD_VIRTUAL_PATH ++ '[' -n '' ']' ++ '[' -n /bin/bash -o -n '' ']' ++ hash -r ++ '[' -n '' ']' ++ unset VIRTUAL_ENV ++ '[' '!' 
nondestructive = nondestructive ']' ++ VIRTUAL_ENV=/tmp/tmp.FET4DHaDjP ++ export VIRTUAL_ENV ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin ++ PATH=/tmp/tmp.FET4DHaDjP/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin ++ export PATH ++ '[' -n '' ']' ++ '[' -z '' ']' ++ _OLD_VIRTUAL_PS1='(tmp.FET4DHaDjP) ' ++ '[' 'x(tmp.FET4DHaDjP) ' '!=' x ']' ++ PS1='(tmp.FET4DHaDjP) (tmp.FET4DHaDjP) ' ++ export PS1 ++ '[' -n /bin/bash -o -n '' ']' ++ hash -r + load_set + _setopts=hxB ++ echo braceexpand:hashall:interactive-comments:xtrace ++ tr : ' ' + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') + set +o braceexpand + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') + set +o hashall + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') + set +o interactive-comments + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') + set +o xtrace ++ echo hxB ++ sed 's/./& /g' + for i in $(echo "$_setopts" | sed 's/./& /g') + set +h + for i in $(echo "$_setopts" | sed 's/./& /g') + set +x + export TEST_PLAN_DIR=/w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests + TEST_PLAN_DIR=/w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests + export TEST_OPTIONS= + TEST_OPTIONS= ++ mktemp -d + WORKDIR=/tmp/tmp.2rpNlazw2W + cd /tmp/tmp.2rpNlazw2W + docker login -u docker -p docker nexus3.onap.org:10001 WARNING! Using --password via the CLI is insecure. Use --password-stdin. WARNING! Your password will be stored unencrypted in /home/jenkins/.docker/config.json. Configure a credential helper to remove this warning. 
See https://docs.docker.com/engine/reference/commandline/login/#credentials-store Login Succeeded + SETUP=/w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/setup-pap.sh + '[' -f /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/setup-pap.sh ']' + echo 'Running setup script /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/setup-pap.sh' Running setup script /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/setup-pap.sh + source_safely /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/setup-pap.sh + '[' -z /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/setup-pap.sh ']' + relax_set + set +e + set +o pipefail + . /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/setup-pap.sh ++ source /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/node-templates.sh +++ '[' -z /w/workspace/policy-pap-master-project-csit-pap ']' ++++ awk -F= '$1 == "defaultbranch" { print $2 }' /w/workspace/policy-pap-master-project-csit-pap/.gitreview +++ GERRIT_BRANCH=master +++ echo GERRIT_BRANCH=master GERRIT_BRANCH=master +++ rm -rf /w/workspace/policy-pap-master-project-csit-pap/models +++ mkdir /w/workspace/policy-pap-master-project-csit-pap/models +++ git clone -b master --single-branch https://github.com/onap/policy-models.git /w/workspace/policy-pap-master-project-csit-pap/models Cloning into '/w/workspace/policy-pap-master-project-csit-pap/models'... 
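The `docker login -u docker -p docker` call above is what triggers the insecurity warning: the password appears in argv and shell history. The form Docker recommends reads it from stdin instead. A sketch, using the registry and credentials shown in the log; the command is composed and printed rather than executed, so it does not depend on a reachable registry.

```shell
# Recommended variant of the login step above: pass the password via
# --password-stdin so it never appears on the command line.
registry="nexus3.onap.org:10001"
user="docker"
# The password would normally come from a secret store; it is inline here
# only because the log itself uses it on the command line.
pass="docker"
# Compose the recommended invocation; printed, not run, in this sketch.
login_cmd="printf '%s' \"\$pass\" | docker login -u $user --password-stdin $registry"
echo "Recommended: $login_cmd"
```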
+++ export DATA=/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies
+++ DATA=/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies
+++ export NODETEMPLATES=/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/nodetemplates
+++ NODETEMPLATES=/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/nodetemplates
+++ sed -e 's!Measurement_vGMUX!ADifferentValue!' /w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies/vCPE.policy.monitoring.input.tosca.json
+++ sed -e 's!"version": "1.0.0"!"version": "2.0.0"!' -e 's!"policy-version": 1!"policy-version": 2!' /w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies/vCPE.policy.monitoring.input.tosca.json
++ source /w/workspace/policy-pap-master-project-csit-pap/compose/start-compose.sh apex-pdp --grafana
+++ '[' -z /w/workspace/policy-pap-master-project-csit-pap ']'
+++ COMPOSE_FOLDER=/w/workspace/policy-pap-master-project-csit-pap/compose
+++ grafana=false
+++ gui=false
+++ [[ 2 -gt 0 ]]
+++ key=apex-pdp
+++ case $key in
+++ echo apex-pdp
apex-pdp
+++ component=apex-pdp
+++ shift
+++ [[ 1 -gt 0 ]]
+++ key=--grafana
+++ case $key in
+++ grafana=true
+++ shift
+++ [[ 0 -gt 0 ]]
+++ cd /w/workspace/policy-pap-master-project-csit-pap/compose
+++ echo 'Configuring docker compose...'
Configuring docker compose...
+++ source export-ports.sh
+++ source get-versions.sh
+++ '[' -z pap ']'
+++ '[' -n apex-pdp ']'
+++ '[' apex-pdp == logs ']'
+++ '[' true = true ']'
+++ echo 'Starting apex-pdp application with Grafana'
Starting apex-pdp application with Grafana
+++ docker-compose up -d apex-pdp grafana
Creating network "compose_default" with the default driver
Pulling prometheus (nexus3.onap.org:10001/prom/prometheus:latest)...
latest: Pulling from prom/prometheus
Digest: sha256:4f6c47e39a9064028766e8c95890ed15690c30f00c4ba14e7ce6ae1ded0295b1
Status: Downloaded newer image for nexus3.onap.org:10001/prom/prometheus:latest
Pulling grafana (nexus3.onap.org:10001/grafana/grafana:latest)...
latest: Pulling from grafana/grafana
Digest: sha256:7d5faae481a4c6f436c99e98af11534f7fd5e8d3e35213552dd1dd02bc393d2e
Status: Downloaded newer image for nexus3.onap.org:10001/grafana/grafana:latest
Pulling mariadb (nexus3.onap.org:10001/mariadb:10.10.2)...
10.10.2: Pulling from mariadb
Digest: sha256:bfc25a68e113de43d0d112f5a7126df8e278579c3224e3923359e1c1d8d5ce6e
Status: Downloaded newer image for nexus3.onap.org:10001/mariadb:10.10.2
Pulling simulator (nexus3.onap.org:10001/onap/policy-models-simulator:3.1.2-SNAPSHOT)...
3.1.2-SNAPSHOT: Pulling from onap/policy-models-simulator
Digest: sha256:8c393534de923b51cd2c2937210a65f4f06f457c0dff40569dd547e5429385c8
Status: Downloaded newer image for nexus3.onap.org:10001/onap/policy-models-simulator:3.1.2-SNAPSHOT
Pulling zookeeper (confluentinc/cp-zookeeper:latest)...
latest: Pulling from confluentinc/cp-zookeeper
Digest: sha256:4dc780642bfc5ec3a2d4901e2ff1f9ddef7f7c5c0b793e1e2911cbfb4e3a3214
Status: Downloaded newer image for confluentinc/cp-zookeeper:latest
Pulling kafka (confluentinc/cp-kafka:latest)...
latest: Pulling from confluentinc/cp-kafka
Digest: sha256:620734d9fc0bb1f9886932e5baf33806074469f40e3fe246a3fdbb59309535fa
Status: Downloaded newer image for confluentinc/cp-kafka:latest
Pulling policy-db-migrator (nexus3.onap.org:10001/onap/policy-db-migrator:3.1.2-SNAPSHOT)...
3.1.2-SNAPSHOT: Pulling from onap/policy-db-migrator
Digest: sha256:6c43c624b12507ad4db9e9629273366fa843a4406dbb129d263c111145911791
Status: Downloaded newer image for nexus3.onap.org:10001/onap/policy-db-migrator:3.1.2-SNAPSHOT
Pulling api (nexus3.onap.org:10001/onap/policy-api:3.1.2-SNAPSHOT)...
3.1.2-SNAPSHOT: Pulling from onap/policy-api
Digest: sha256:1dd97a95f6bcae15ec35d9d2c6a96d034d97ff5ce2273cf42b1c2549092a92a2
Status: Downloaded newer image for nexus3.onap.org:10001/onap/policy-api:3.1.2-SNAPSHOT
Pulling pap (nexus3.onap.org:10001/onap/policy-pap:3.1.2-SNAPSHOT)...
3.1.2-SNAPSHOT: Pulling from onap/policy-pap
Digest: sha256:eb3daea3b81a46c89d44f314f21edba0e1d1b0915fd599185530e673a4f3e30f
Status: Downloaded newer image for nexus3.onap.org:10001/onap/policy-pap:3.1.2-SNAPSHOT
Pulling apex-pdp (nexus3.onap.org:10001/onap/policy-apex-pdp:3.1.2-SNAPSHOT)...
3.1.2-SNAPSHOT: Pulling from onap/policy-apex-pdp
Digest: sha256:15db3ed25bc2c5fcac7635cebf8ee909afbd4fd846efff231410c6f1346614e7
Status: Downloaded newer image for nexus3.onap.org:10001/onap/policy-apex-pdp:3.1.2-SNAPSHOT
Creating prometheus ...
Creating mariadb ...
Creating simulator ...
Creating zookeeper ...
Creating mariadb ... done
Creating policy-db-migrator ...
Creating policy-db-migrator ... done
Creating policy-api ...
Creating policy-api ... done
Creating zookeeper ... done
Creating kafka ...
Creating kafka ... done
Creating policy-pap ...
Creating policy-pap ... done
Creating prometheus ... done
Creating grafana ...
Creating grafana ... done
Creating simulator ... done
Creating policy-apex-pdp ...
Creating policy-apex-pdp ... done
+++ echo 'Prometheus server: http://localhost:30259'
Prometheus server: http://localhost:30259
+++ echo 'Grafana server: http://localhost:30269'
Grafana server: http://localhost:30269
+++ cd /w/workspace/policy-pap-master-project-csit-pap
++ sleep 10
++ unset http_proxy https_proxy
++ bash /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/wait_for_rest.sh localhost 30003
Waiting for REST to come up on localhost port 30003...
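The wait_for_rest.sh call above polls the PAP REST port (localhost 30003) until it answers, printing container status while it waits. The script's contents are not shown in the log, so the loop below is a hypothetical sketch of that behaviour, not the actual ONAP script; the probe uses python3 since it is already available in this job environment.

```shell
# Hypothetical sketch of a wait_for_rest-style poll loop: probe a TCP
# port once per second until it answers or the retry budget runs out.
probe() {
  # Return 0 when host:port accepts a TCP connection.
  python3 -c "import socket,sys; s=socket.socket(); s.settimeout(1); sys.exit(0 if s.connect_ex(('$1', int('$2')))==0 else 1)"
}
wait_for_rest() {
  host="$1"; port="$2"; tries="${3:-60}"
  echo "Waiting for REST to come up on $host port $port..."
  i=0
  while [ "$i" -lt "$tries" ]; do
    if probe "$host" "$port"; then
      echo "REST is up on $host:$port"
      return 0
    fi
    i=$((i + 1))
    sleep 1
  done
  echo "Timed out waiting for $host:$port" >&2
  return 1
}
```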
NAMES             STATUS
policy-apex-pdp   Up 10 seconds
grafana           Up 12 seconds
policy-pap        Up 14 seconds
kafka             Up 15 seconds
policy-api        Up 16 seconds
zookeeper         Up 16 seconds
simulator         Up 11 seconds
prometheus        Up 13 seconds
mariadb           Up 18 seconds
NAMES             STATUS
policy-apex-pdp   Up 15 seconds
grafana           Up 17 seconds
policy-pap        Up 19 seconds
kafka             Up 20 seconds
policy-api        Up 21 seconds
zookeeper         Up 21 seconds
simulator         Up 16 seconds
prometheus        Up 18 seconds
mariadb           Up 23 seconds
NAMES             STATUS
policy-apex-pdp   Up 20 seconds
grafana           Up 22 seconds
policy-pap        Up 24 seconds
kafka             Up 25 seconds
policy-api        Up 26 seconds
zookeeper         Up 26 seconds
simulator         Up 21 seconds
prometheus        Up 23 seconds
mariadb           Up 28 seconds
NAMES             STATUS
policy-apex-pdp   Up 25 seconds
grafana           Up 27 seconds
policy-pap        Up 29 seconds
kafka             Up 30 seconds
policy-api        Up 31 seconds
zookeeper         Up 31 seconds
simulator         Up 26 seconds
prometheus        Up 28 seconds
mariadb           Up 33 seconds
NAMES             STATUS
policy-apex-pdp   Up 30 seconds
grafana           Up 32 seconds
policy-pap        Up 34 seconds
kafka             Up 35 seconds
policy-api        Up 36 seconds
zookeeper         Up 36 seconds
simulator         Up 31 seconds
prometheus        Up 33 seconds
mariadb           Up 38 seconds
++ export 'SUITES=pap-test.robot pap-slas.robot'
++ SUITES='pap-test.robot pap-slas.robot'
++ ROBOT_VARIABLES='-v POLICY_PAP_IP:localhost:30003 -v POLICY_API_IP:localhost:30002 -v PROMETHEUS_IP:localhost:30259 -v DATA:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies -v NODETEMPLATES:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/nodetemplates'
+ load_set
+ _setopts=hxB
++ echo braceexpand:hashall:interactive-comments:xtrace
++ tr : ' '
+ for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
+ set +o braceexpand
+ for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
+ set +o hashall
+ for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
+ set +o interactive-comments
+ for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
+ set +o xtrace
++ echo hxB
++ sed 's/./& /g'
+ for i in $(echo "$_setopts" | sed 's/./& /g')
+ set +h
+ for i in $(echo "$_setopts" | sed 's/./& /g')
+ set +x
+ docker_stats
+ tee /w/workspace/policy-pap-master-project-csit-pap/csit/archives/pap/_sysinfo-1-after-setup.txt
++ uname -s
+ '[' Linux == Darwin ']'
+ sh -c 'top -bn1 | head -3'
top - 08:22:05 up 4 min,  0 users,  load average: 3.19, 1.36, 0.54
Tasks: 210 total,   1 running, 131 sleeping,   0 stopped,   0 zombie
%Cpu(s): 13.5 us,  2.8 sy,  0.0 ni, 79.4 id,  4.1 wa,  0.0 hi,  0.1 si,  0.1 st
+ echo
+ sh -c 'free -h'
              total        used        free      shared  buff/cache   available
Mem:            31G        2.6G         22G        1.3M        6.0G         28G
Swap:          1.0G          0B        1.0G
+ echo
+ docker ps --format 'table {{ .Names }}\t{{ .Status }}'
NAMES             STATUS
policy-apex-pdp   Up 30 seconds
grafana           Up 32 seconds
policy-pap        Up 34 seconds
kafka             Up 35 seconds
policy-api        Up 36 seconds
zookeeper         Up 36 seconds
simulator         Up 31 seconds
prometheus        Up 33 seconds
mariadb           Up 38 seconds
+ echo
+ docker stats --no-stream
CONTAINER ID   NAME              CPU %     MEM USAGE / LIMIT     MEM %     NET I/O           BLOCK I/O       PIDS
46b83cfe537e   policy-apex-pdp   1.88%     185.4MiB / 31.41GiB   0.58%     7.14kB / 6.7kB    0B / 0B         48
63ef1719939e   grafana           0.06%     53.22MiB / 31.41GiB   0.17%     18.8kB / 3.31kB   0B / 24.9MB     18
4332c31c3362   policy-pap        18.22%    489MiB / 31.41GiB     1.52%     33.7kB / 35.6kB   0B / 149MB      64
7c7374bf05f8   kafka             57.81%    383.8MiB / 31.41GiB   1.19%     68.6kB / 71.8kB   0B / 475kB      84
c589a517bbf1   policy-api        0.10%     451.5MiB / 31.41GiB   1.40%     988kB / 646kB     0B / 0B         52
09fae81f821c   zookeeper         0.10%     95.71MiB / 31.41GiB   0.30%     52.3kB / 45.6kB   0B / 414kB      60
4a3fd8a3bc78   simulator         0.08%     119.9MiB / 31.41GiB   0.37%     1.15kB / 0B       0B / 0B         76
353097dba0ba   prometheus        0.02%     18.37MiB / 31.41GiB   0.06%     1.52kB / 474B     225kB / 0B      13
7d1448dfd828   mariadb           0.02%     102.3MiB / 31.41GiB   0.32%     935kB / 1.18MB    11MB / 60.8MB   37
+ echo
+ cd /tmp/tmp.2rpNlazw2W
+ echo 'Reading the testplan:'
Reading the testplan:
+ echo 'pap-test.robot pap-slas.robot'
+ egrep -v '(^[[:space:]]*#|^[[:space:]]*$)'
+ sed 's|^|/w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/|'
+ cat testplan.txt
/w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-test.robot
/w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-slas.robot
++ xargs
+ SUITES='/w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-test.robot /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-slas.robot'
+ echo 'ROBOT_VARIABLES=-v POLICY_PAP_IP:localhost:30003 -v POLICY_API_IP:localhost:30002 -v PROMETHEUS_IP:localhost:30259 -v DATA:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies -v NODETEMPLATES:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/nodetemplates'
ROBOT_VARIABLES=-v POLICY_PAP_IP:localhost:30003 -v POLICY_API_IP:localhost:30002 -v PROMETHEUS_IP:localhost:30259 -v DATA:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies -v NODETEMPLATES:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/nodetemplates
+ echo 'Starting Robot test suites /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-test.robot /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-slas.robot ...'
Starting Robot test suites /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-test.robot /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-slas.robot ...
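The testplan-reading steps above turn suite names into absolute paths: egrep drops comment and blank lines, sed prefixes each remaining name with the tests directory, and xargs joins the result into one space-separated SUITES value. A sketch of the same pipeline, rebuilt with a scratch testplan (the hypothetical comment/blank lines are added to exercise the egrep filter; the tests directory mirrors the log):

```shell
# Rebuild of the testplan-to-SUITES pipeline shown above, run against a
# scratch testplan.txt in a temp directory.
TEST_PLAN_DIR=/w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests
workdir="$(mktemp -d)"
cd "$workdir"
printf '%s\n' '# a comment line' '' 'pap-test.robot' 'pap-slas.robot' > testplan.txt
# Drop comments/blank lines, prefix each suite with the tests dir...
egrep -v '(^[[:space:]]*#|^[[:space:]]*$)' testplan.txt \
  | sed "s|^|${TEST_PLAN_DIR}/|" > testplan.resolved
# ...then let xargs collapse the lines into one space-separated value.
SUITES="$(xargs < testplan.resolved)"
echo "SUITES=$SUITES"
```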
+ relax_set
+ set +e
+ set +o pipefail
+ python3 -m robot.run -N pap -v WORKSPACE:/tmp -v POLICY_PAP_IP:localhost:30003 -v POLICY_API_IP:localhost:30002 -v PROMETHEUS_IP:localhost:30259 -v DATA:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies -v NODETEMPLATES:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/nodetemplates /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-test.robot /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-slas.robot
==============================================================================
pap
==============================================================================
pap.Pap-Test
==============================================================================
LoadPolicy :: Create a policy named 'onap.restart.tca' and version... | PASS |
------------------------------------------------------------------------------
LoadPolicyWithMetadataSet :: Create a policy named 'operational.ap... | PASS |
------------------------------------------------------------------------------
LoadNodeTemplates :: Create node templates in database using speci... | PASS |
------------------------------------------------------------------------------
Healthcheck :: Verify policy pap health check | PASS |
------------------------------------------------------------------------------
Consolidated Healthcheck :: Verify policy consolidated health check | PASS |
------------------------------------------------------------------------------
Metrics :: Verify policy pap is exporting prometheus metrics | PASS |
------------------------------------------------------------------------------
AddPdpGroup :: Add a new PdpGroup named 'testGroup' in the policy ... | PASS |
------------------------------------------------------------------------------
QueryPdpGroupsBeforeActivation :: Verify PdpGroups before activation | PASS |
------------------------------------------------------------------------------
ActivatePdpGroup :: Change the state of PdpGroup named 'testGroup'... | PASS |
------------------------------------------------------------------------------
QueryPdpGroupsAfterActivation :: Verify PdpGroups after activation | PASS |
------------------------------------------------------------------------------
DeployPdpGroups :: Deploy policies in PdpGroups | PASS |
------------------------------------------------------------------------------
QueryPdpGroupsAfterDeploy :: Verify PdpGroups after undeploy | PASS |
------------------------------------------------------------------------------
QueryPolicyAuditAfterDeploy :: Verify policy audit record after de... | PASS |
------------------------------------------------------------------------------
QueryPolicyAuditWithMetadataSetAfterDeploy :: Verify policy audit ... | PASS |
------------------------------------------------------------------------------
UndeployPolicy :: Undeploy a policy named 'onap.restart.tca' from ... | PASS |
------------------------------------------------------------------------------
UndeployPolicyWithMetadataSet :: Undeploy a policy named 'operatio... | PASS |
------------------------------------------------------------------------------
QueryPdpGroupsAfterUndeploy :: Verify PdpGroups after undeploy | PASS |
------------------------------------------------------------------------------
QueryPolicyAuditAfterUnDeploy :: Verify policy audit record after ... | FAIL |
pdpTypeC != pdpTypeA
------------------------------------------------------------------------------
QueryPolicyAuditWithMetadataSetAfterUnDeploy :: Verify policy audi... | PASS |
------------------------------------------------------------------------------
DeactivatePdpGroup :: Change the state of PdpGroup named 'testGrou... | PASS |
------------------------------------------------------------------------------
DeletePdpGroups :: Delete the PdpGroup named 'testGroup' from poli... | PASS |
------------------------------------------------------------------------------
QueryPdpGroupsAfterDelete :: Verify PdpGroups after delete | PASS |
------------------------------------------------------------------------------
pap.Pap-Test | FAIL |
22 tests, 21 passed, 1 failed
==============================================================================
pap.Pap-Slas
==============================================================================
WaitForPrometheusServer :: Wait for Prometheus server to gather al... | PASS |
------------------------------------------------------------------------------
ValidateResponseTimeForHealthcheck :: Validate component healthche... | PASS |
------------------------------------------------------------------------------
ValidateResponseTimeForSystemHealthcheck :: Validate if system hea... | PASS |
------------------------------------------------------------------------------
ValidateResponseTimeQueryPolicyAudit :: Validate query audits resp... | PASS |
------------------------------------------------------------------------------
ValidateResponseTimeUpdateGroup :: Validate pdps/group response time | PASS |
------------------------------------------------------------------------------
ValidatePolicyDeploymentTime :: Check if deployment of policy is u... | PASS |
------------------------------------------------------------------------------
ValidateResponseTimeDeletePolicy :: Check if undeployment of polic... | PASS |
------------------------------------------------------------------------------
ValidateResponseTimeDeleteGroup :: Validate delete group response ... | PASS |
------------------------------------------------------------------------------
pap.Pap-Slas | PASS |
8 tests, 8 passed, 0 failed
==============================================================================
pap | FAIL |
30 tests, 29 passed, 1 failed
==============================================================================
Output:  /tmp/tmp.2rpNlazw2W/output.xml
Log:     /tmp/tmp.2rpNlazw2W/log.html
Report:  /tmp/tmp.2rpNlazw2W/report.html
+ RESULT=1
+ load_set
+ _setopts=hxB
++ echo braceexpand:hashall:interactive-comments:xtrace
++ tr : ' '
+ for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
+ set +o braceexpand
+ for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
+ set +o hashall
+ for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
+ set +o interactive-comments
+ for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
+ set +o xtrace
++ echo hxB
++ sed 's/./& /g'
+ for i in $(echo "$_setopts" | sed 's/./& /g')
+ set +h
+ for i in $(echo "$_setopts" | sed 's/./& /g')
+ set +x
+ echo 'RESULT: 1'
RESULT: 1
+ exit 1
+ on_exit
+ rc=1
+ [[ -n /w/workspace/policy-pap-master-project-csit-pap ]]
+ docker ps --format 'table {{ .Names }}\t{{ .Status }}'
NAMES             STATUS
policy-apex-pdp   Up 2 minutes
grafana           Up 2 minutes
policy-pap        Up 2 minutes
kafka             Up 2 minutes
policy-api        Up 2 minutes
zookeeper         Up 2 minutes
simulator         Up 2 minutes
prometheus        Up 2 minutes
mariadb           Up 2 minutes
+ docker_stats
++ uname -s
+ '[' Linux == Darwin ']'
+ sh -c 'top -bn1 | head -3'
top - 08:23:55 up 6 min,  0 users,  load average: 0.63, 1.02, 0.51
Tasks: 197 total,   1 running, 129 sleeping,   0 stopped,   0 zombie
%Cpu(s): 10.7 us,  2.1 sy,  0.0 ni, 83.9 id,  3.1 wa,  0.0 hi,  0.1 si,  0.1 st
+ echo
+ sh -c 'free -h'
              total        used        free      shared  buff/cache   available
Mem:            31G        2.6G         22G        1.3M        6.0G         28G
Swap:          1.0G          0B        1.0G
+ echo
+ docker ps --format 'table {{ .Names }}\t{{ .Status }}'
NAMES             STATUS
policy-apex-pdp   Up 2 minutes
grafana           Up 2 minutes
policy-pap        Up 2 minutes
kafka             Up 2 minutes
policy-api        Up 2 minutes
zookeeper         Up 2 minutes
simulator         Up 
2 minutes prometheus Up 2 minutes mariadb Up 2 minutes + echo + docker stats --no-stream CONTAINER ID NAME CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O PIDS 46b83cfe537e policy-apex-pdp 0.79% 180.2MiB / 31.41GiB 0.56% 56.7kB / 91kB 0B / 0B 52 63ef1719939e grafana 0.03% 56.63MiB / 31.41GiB 0.18% 19.9kB / 4.5kB 0B / 24.9MB 18 4332c31c3362 policy-pap 0.52% 472.2MiB / 31.41GiB 1.47% 2.47MB / 1.05MB 0B / 149MB 66 7c7374bf05f8 kafka 9.77% 391.7MiB / 31.41GiB 1.22% 237kB / 213kB 0B / 573kB 85 c589a517bbf1 policy-api 0.09% 454.9MiB / 31.41GiB 1.41% 2.45MB / 1.1MB 0B / 0B 55 09fae81f821c zookeeper 0.10% 96.68MiB / 31.41GiB 0.30% 55.2kB / 47.2kB 0B / 414kB 60 4a3fd8a3bc78 simulator 0.07% 120MiB / 31.41GiB 0.37% 1.37kB / 0B 0B / 0B 78 353097dba0ba prometheus 0.02% 24.3MiB / 31.41GiB 0.08% 191kB / 11.1kB 225kB / 0B 13 7d1448dfd828 mariadb 0.01% 103.5MiB / 31.41GiB 0.32% 2.02MB / 4.87MB 11MB / 61MB 28 + echo + source_safely /w/workspace/policy-pap-master-project-csit-pap/compose/stop-compose.sh + '[' -z /w/workspace/policy-pap-master-project-csit-pap/compose/stop-compose.sh ']' + relax_set + set +e + set +o pipefail + . /w/workspace/policy-pap-master-project-csit-pap/compose/stop-compose.sh ++ echo 'Shut down started!' Shut down started! ++ '[' -z /w/workspace/policy-pap-master-project-csit-pap ']' ++ COMPOSE_FOLDER=/w/workspace/policy-pap-master-project-csit-pap/compose ++ cd /w/workspace/policy-pap-master-project-csit-pap/compose ++ source export-ports.sh ++ source get-versions.sh ++ echo 'Collecting logs from docker compose containers...' Collecting logs from docker compose containers... 
++ docker-compose logs
++ cat docker_compose.log
Attaching to policy-apex-pdp, grafana, policy-pap, kafka, policy-api, policy-db-migrator, zookeeper, simulator, prometheus, mariadb
grafana | logger=settings t=2024-04-26T08:21:33.556913742Z level=info msg="Starting Grafana" version=10.4.2 commit=701c851be7a930e04fbc6ebb1cd4254da80edd4c branch=v10.4.x compiled=2024-04-26T08:21:33Z
grafana | logger=settings t=2024-04-26T08:21:33.557173965Z level=info msg="Config loaded from" file=/usr/share/grafana/conf/defaults.ini
grafana | logger=settings t=2024-04-26T08:21:33.557187436Z level=info msg="Config loaded from" file=/etc/grafana/grafana.ini
grafana | logger=settings t=2024-04-26T08:21:33.557191106Z level=info msg="Config overridden from command line" arg="default.paths.data=/var/lib/grafana"
grafana | logger=settings t=2024-04-26T08:21:33.557194436Z level=info msg="Config overridden from command line" arg="default.paths.logs=/var/log/grafana"
grafana | logger=settings t=2024-04-26T08:21:33.557208387Z level=info msg="Config overridden from command line" arg="default.paths.plugins=/var/lib/grafana/plugins"
grafana | logger=settings t=2024-04-26T08:21:33.557212687Z level=info msg="Config overridden from command line" arg="default.paths.provisioning=/etc/grafana/provisioning"
grafana | logger=settings t=2024-04-26T08:21:33.557215797Z level=info msg="Config overridden from command line" arg="default.log.mode=console"
grafana | logger=settings t=2024-04-26T08:21:33.557219157Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_DATA=/var/lib/grafana"
grafana | logger=settings t=2024-04-26T08:21:33.557222297Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_LOGS=/var/log/grafana"
grafana | logger=settings t=2024-04-26T08:21:33.557226697Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_PLUGINS=/var/lib/grafana/plugins"
grafana | logger=settings t=2024-04-26T08:21:33.557229728Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_PROVISIONING=/etc/grafana/provisioning"
grafana | logger=settings t=2024-04-26T08:21:33.557232828Z level=info msg=Target target=[all]
grafana | logger=settings t=2024-04-26T08:21:33.557239218Z level=info msg="Path Home" path=/usr/share/grafana
grafana | logger=settings t=2024-04-26T08:21:33.557242148Z level=info msg="Path Data" path=/var/lib/grafana
grafana | logger=settings t=2024-04-26T08:21:33.557247148Z level=info msg="Path Logs" path=/var/log/grafana
grafana | logger=settings t=2024-04-26T08:21:33.557250139Z level=info msg="Path Plugins" path=/var/lib/grafana/plugins
grafana | logger=settings t=2024-04-26T08:21:33.557253079Z level=info msg="Path Provisioning" path=/etc/grafana/provisioning
grafana | logger=settings t=2024-04-26T08:21:33.557256069Z level=info msg="App mode production"
grafana | logger=sqlstore t=2024-04-26T08:21:33.557566995Z level=info msg="Connecting to DB" dbtype=sqlite3
grafana | logger=sqlstore t=2024-04-26T08:21:33.557588487Z level=info msg="Creating SQLite database file" path=/var/lib/grafana/grafana.db
grafana | logger=migrator t=2024-04-26T08:21:33.558356656Z level=info msg="Starting DB migrations"
grafana | logger=migrator t=2024-04-26T08:21:33.559359177Z level=info msg="Executing migration" id="create migration_log table"
grafana | logger=migrator t=2024-04-26T08:21:33.560169068Z level=info msg="Migration successfully executed" id="create migration_log table" duration=809.821µs
grafana | logger=migrator t=2024-04-26T08:21:33.563470568Z level=info msg="Executing migration" id="create user table"
grafana | logger=migrator t=2024-04-26T08:21:33.564072279Z level=info msg="Migration successfully executed" id="create user table" duration=601.5µs
grafana | logger=migrator t=2024-04-26T08:21:33.568235552Z level=info msg="Executing migration" id="add unique index user.login"
grafana | logger=migrator t=2024-04-26T08:21:33.56897602Z level=info msg="Migration successfully executed" id="add unique index user.login" duration=740.518µs
grafana | logger=migrator t=2024-04-26T08:21:33.574769267Z level=info msg="Executing migration" id="add unique index user.email"
grafana | logger=migrator t=2024-04-26T08:21:33.575849342Z level=info msg="Migration successfully executed" id="add unique index user.email" duration=1.073176ms
grafana | logger=migrator t=2024-04-26T08:21:33.579186273Z level=info msg="Executing migration" id="drop index UQE_user_login - v1"
grafana | logger=migrator t=2024-04-26T08:21:33.580183294Z level=info msg="Migration successfully executed" id="drop index UQE_user_login - v1" duration=997.041µs
grafana | logger=migrator t=2024-04-26T08:21:33.583559848Z level=info msg="Executing migration" id="drop index UQE_user_email - v1"
grafana | logger=migrator t=2024-04-26T08:21:33.584212431Z level=info msg="Migration successfully executed" id="drop index UQE_user_email - v1" duration=653.684µs
grafana | logger=migrator t=2024-04-26T08:21:33.59493205Z level=info msg="Executing migration" id="Rename table user to user_v1 - v1"
grafana | logger=migrator t=2024-04-26T08:21:33.598611929Z level=info msg="Migration successfully executed" id="Rename table user to user_v1 - v1" duration=3.679318ms
grafana | logger=migrator t=2024-04-26T08:21:33.60294325Z level=info msg="Executing migration" id="create user table v2"
grafana | logger=migrator t=2024-04-26T08:21:33.603820086Z level=info msg="Migration successfully executed" id="create user table v2" duration=873.846µs
grafana | logger=migrator t=2024-04-26T08:21:33.607079713Z level=info msg="Executing migration" id="create index UQE_user_login - v2"
grafana | logger=migrator t=2024-04-26T08:21:33.608204951Z level=info msg="Migration successfully executed" id="create index UQE_user_login - v2" duration=1.125058ms
grafana | logger=migrator t=2024-04-26T08:21:33.613886902Z level=info msg="Executing migration" id="create index UQE_user_email - v2"
grafana | logger=migrator t=2024-04-26T08:21:33.615160867Z level=info msg="Migration successfully executed" id="create index UQE_user_email - v2" duration=1.268596ms
grafana | logger=migrator t=2024-04-26T08:21:33.61988684Z level=info msg="Executing migration" id="copy data_source v1 to v2"
grafana | logger=migrator t=2024-04-26T08:21:33.620377844Z level=info msg="Migration successfully executed" id="copy data_source v1 to v2" duration=493.425µs
grafana | logger=migrator t=2024-04-26T08:21:33.623752467Z level=info msg="Executing migration" id="Drop old table user_v1"
grafana | logger=migrator t=2024-04-26T08:21:33.624326317Z level=info msg="Migration successfully executed" id="Drop old table user_v1" duration=575.45µs
grafana | logger=migrator t=2024-04-26T08:21:33.631442911Z level=info msg="Executing migration" id="Add column help_flags1 to user table"
grafana | logger=migrator t=2024-04-26T08:21:33.633291717Z level=info msg="Migration successfully executed" id="Add column help_flags1 to user table" duration=1.848385ms
grafana | logger=migrator t=2024-04-26T08:21:33.637897142Z level=info msg="Executing migration" id="Update user table charset"
grafana | logger=migrator t=2024-04-26T08:21:33.637939684Z level=info msg="Migration successfully executed" id="Update user table charset" duration=43.812µs
grafana | logger=migrator t=2024-04-26T08:21:33.641242534Z level=info msg="Executing migration" id="Add last_seen_at column to user"
grafana | logger=migrator t=2024-04-26T08:21:33.642997894Z level=info msg="Migration successfully executed" id="Add last_seen_at column to user" duration=1.75509ms
grafana | logger=migrator t=2024-04-26T08:21:33.646287682Z level=info msg="Executing migration" id="Add missing user data"
grafana | logger=migrator t=2024-04-26T08:21:33.646650961Z level=info msg="Migration successfully executed" id="Add missing user data" duration=363.209µs
grafana | logger=migrator t=2024-04-26T08:21:33.651477719Z level=info msg="Executing migration" id="Add is_disabled column to user"
policy-apex-pdp | Waiting for mariadb port 3306...
policy-apex-pdp | Waiting for kafka port 9092...
policy-apex-pdp | mariadb (172.17.0.2:3306) open
policy-apex-pdp | kafka (172.17.0.8:9092) open
policy-apex-pdp | Waiting for pap port 6969...
policy-apex-pdp | pap (172.17.0.9:6969) open
policy-apex-pdp | apexApps.sh: running application 'onappf' with command 'java -Dlogback.configurationFile=/opt/app/policy/apex-pdp/etc/logback.xml -cp /opt/app/policy/apex-pdp/etc:/opt/app/policy/apex-pdp/etc/hazelcast:/opt/app/policy/apex-pdp/etc/infinispan:/opt/app/policy/apex-pdp/lib/* -Djavax.net.ssl.keyStore=/opt/app/policy/apex-pdp/etc/ssl/policy-keystore -Djavax.net.ssl.keyStorePassword=Pol1cy_0nap -Djavax.net.ssl.trustStore=/opt/app/policy/apex-pdp/etc/ssl/policy-truststore -Djavax.net.ssl.trustStorePassword=Pol1cy_0nap -Dlogback.configurationFile=/opt/app/policy/apex-pdp/etc/logback.xml -Dhazelcast.config=/opt/app/policy/apex-pdp/etc/hazelcast.xml -Dhazelcast.mancenter.enabled=false org.onap.policy.apex.services.onappf.ApexStarterMain -c /opt/app/policy/apex-pdp/etc/onappf/config/OnapPfConfig.json'
policy-apex-pdp | [2024-04-26T08:22:04.239+00:00|INFO|ApexStarterMain|main] In ApexStarter with parameters [-c, /opt/app/policy/apex-pdp/etc/onappf/config/OnapPfConfig.json]
policy-apex-pdp | [2024-04-26T08:22:04.410+00:00|INFO|ConsumerConfig|main] ConsumerConfig values:
policy-apex-pdp | 	allow.auto.create.topics = true
policy-apex-pdp | 	auto.commit.interval.ms = 5000
policy-apex-pdp | 	auto.include.jmx.reporter = true
policy-apex-pdp | 	auto.offset.reset = latest
policy-apex-pdp | 	bootstrap.servers = [kafka:9092]
policy-apex-pdp | 	check.crcs = true
policy-apex-pdp | 	client.dns.lookup = use_all_dns_ips
policy-apex-pdp | 	client.id = consumer-385d2de3-e329-4c2e-8254-58c110e4f277-1
policy-apex-pdp | 	client.rack =
policy-apex-pdp | 	connections.max.idle.ms = 540000
policy-apex-pdp | 	default.api.timeout.ms = 60000
policy-apex-pdp | 	enable.auto.commit = true
policy-apex-pdp | 	exclude.internal.topics = true
policy-apex-pdp | 	fetch.max.bytes = 52428800
policy-apex-pdp | 	fetch.max.wait.ms = 500
policy-apex-pdp | 	fetch.min.bytes = 1
policy-apex-pdp | 	group.id = 385d2de3-e329-4c2e-8254-58c110e4f277
policy-apex-pdp | 	group.instance.id = null
policy-apex-pdp | 	heartbeat.interval.ms = 3000
policy-apex-pdp | 	interceptor.classes = []
policy-apex-pdp | 	internal.leave.group.on.close = true
policy-apex-pdp | 	internal.throw.on.fetch.stable.offset.unsupported = false
policy-apex-pdp | 	isolation.level = read_uncommitted
policy-apex-pdp | 	key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
policy-apex-pdp | 	max.partition.fetch.bytes = 1048576
policy-apex-pdp | 	max.poll.interval.ms = 300000
policy-apex-pdp | 	max.poll.records = 500
policy-api | Waiting for mariadb port 3306...
policy-api | Waiting for policy-db-migrator port 6824...
policy-api | mariadb (172.17.0.2:3306) open
policy-api | policy-db-migrator (172.17.0.6:6824) open
policy-api | Policy api config file: /opt/app/policy/api/etc/apiParameters.yaml
policy-api |
policy-api |   .   ____          _            __ _ _
policy-api |  /\\ / ___'_ __ _ _(_)_ __  __ _ \ \ \ \
policy-api | ( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \
policy-api |  \\/  ___)| |_)| | | | | || (_| |  ) ) ) )
policy-api |   '  |____| .__|_| |_|_| |_\__, | / / / /
policy-api |  =========|_|==============|___/=/_/_/_/
policy-api |  :: Spring Boot ::               (v3.1.10)
policy-api |
policy-api | [2024-04-26T08:21:41.401+00:00|INFO|Version|background-preinit] HV000001: Hibernate Validator 8.0.1.Final
policy-api | [2024-04-26T08:21:41.461+00:00|INFO|PolicyApiApplication|main] Starting PolicyApiApplication using Java 17.0.11 with PID 21 (/app/api.jar started by policy in /opt/app/policy/api/bin)
policy-api | [2024-04-26T08:21:41.462+00:00|INFO|PolicyApiApplication|main] No active profile set, falling back to 1 default profile: "default"
policy-api | [2024-04-26T08:21:43.447+00:00|INFO|RepositoryConfigurationDelegate|main] Bootstrapping Spring Data JPA repositories in DEFAULT mode.
policy-api | [2024-04-26T08:21:43.530+00:00|INFO|RepositoryConfigurationDelegate|main] Finished Spring Data repository scanning in 73 ms. Found 6 JPA repository interfaces.
policy-api | [2024-04-26T08:21:43.936+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.api.main.exception.ServiceExceptionHandler
policy-api | [2024-04-26T08:21:43.937+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.api.main.exception.ServiceExceptionHandler
policy-api | [2024-04-26T08:21:44.635+00:00|INFO|TomcatWebServer|main] Tomcat initialized with port(s): 6969 (http)
policy-api | [2024-04-26T08:21:44.645+00:00|INFO|Http11NioProtocol|main] Initializing ProtocolHandler ["http-nio-6969"]
policy-api | [2024-04-26T08:21:44.648+00:00|INFO|StandardService|main] Starting service [Tomcat]
policy-api | [2024-04-26T08:21:44.648+00:00|INFO|StandardEngine|main] Starting Servlet engine: [Apache Tomcat/10.1.19]
policy-api | [2024-04-26T08:21:44.740+00:00|INFO|[/policy/api/v1]|main] Initializing Spring embedded WebApplicationContext
policy-api | [2024-04-26T08:21:44.740+00:00|INFO|ServletWebServerApplicationContext|main] Root WebApplicationContext: initialization completed in 3210 ms
policy-api | [2024-04-26T08:21:45.184+00:00|INFO|LogHelper|main] HHH000204: Processing PersistenceUnitInfo [name: default]
policy-api | [2024-04-26T08:21:45.267+00:00|INFO|Version|main] HHH000412: Hibernate ORM core version 6.3.2.Final
policy-api | [2024-04-26T08:21:45.318+00:00|INFO|RegionFactoryInitiator|main] HHH000026: Second-level cache disabled
policy-api | [2024-04-26T08:21:45.639+00:00|INFO|SpringPersistenceUnitInfo|main] No LoadTimeWeaver setup: ignoring JPA class transformer
policy-api | [2024-04-26T08:21:45.669+00:00|INFO|HikariDataSource|main] HikariPool-1 - Starting...
policy-api | [2024-04-26T08:21:45.760+00:00|INFO|HikariPool|main] HikariPool-1 - Added connection org.mariadb.jdbc.Connection@26844abb
policy-api | [2024-04-26T08:21:45.762+00:00|INFO|HikariDataSource|main] HikariPool-1 - Start completed.
policy-api | [2024-04-26T08:21:47.668+00:00|INFO|JtaPlatformInitiator|main] HHH000489: No JTA platform available (set 'hibernate.transaction.jta.platform' to enable JTA platform integration)
policy-api | [2024-04-26T08:21:47.671+00:00|INFO|LocalContainerEntityManagerFactoryBean|main] Initialized JPA EntityManagerFactory for persistence unit 'default'
policy-api | [2024-04-26T08:21:48.662+00:00|WARN|ApiDatabaseInitializer|main] Detected multi-versioned type: policytypes/onap.policies.monitoring.tcagen2.v2.yaml
policy-api | [2024-04-26T08:21:49.508+00:00|INFO|ApiDatabaseInitializer|main] Multi-versioned Service Template [onap.policies.Monitoring, onap.policies.monitoring.tcagen2]
policy-api | [2024-04-26T08:21:50.663+00:00|WARN|JpaBaseConfiguration$JpaWebConfiguration|main] spring.jpa.open-in-view is enabled by default. Therefore, database queries may be performed during view rendering. Explicitly configure spring.jpa.open-in-view to disable this warning
policy-api | [2024-04-26T08:21:50.863+00:00|INFO|DefaultSecurityFilterChain|main] Will secure any request with [org.springframework.security.web.session.DisableEncodeUrlFilter@2fcc32ae, org.springframework.security.web.context.request.async.WebAsyncManagerIntegrationFilter@5ef53e42, org.springframework.security.web.context.SecurityContextHolderFilter@54ce2da8, org.springframework.security.web.header.HeaderWriterFilter@5da1f9b9, org.springframework.security.web.authentication.logout.LogoutFilter@29726180, org.springframework.security.web.authentication.www.BasicAuthenticationFilter@1ef46efc, org.springframework.security.web.savedrequest.RequestCacheAwareFilter@1b48c142, org.springframework.security.web.servletapi.SecurityContextHolderAwareRequestFilter@3dc238ae, org.springframework.security.web.authentication.AnonymousAuthenticationFilter@43ec61f0, org.springframework.security.web.access.ExceptionTranslationFilter@2929ef51, org.springframework.security.web.access.intercept.AuthorizationFilter@3405202c]
policy-api | [2024-04-26T08:21:51.718+00:00|INFO|EndpointLinksResolver|main] Exposing 3 endpoint(s) beneath base path ''
policy-api | [2024-04-26T08:21:51.813+00:00|INFO|Http11NioProtocol|main] Starting ProtocolHandler ["http-nio-6969"]
policy-api | [2024-04-26T08:21:51.839+00:00|INFO|TomcatWebServer|main] Tomcat started on port(s): 6969 (http) with context path '/policy/api/v1'
policy-api | [2024-04-26T08:21:51.858+00:00|INFO|PolicyApiApplication|main] Started PolicyApiApplication in 11.19 seconds (process running for 11.864)
policy-api | [2024-04-26T08:22:08.892+00:00|INFO|[/policy/api/v1]|http-nio-6969-exec-2] Initializing Spring DispatcherServlet 'dispatcherServlet'
policy-api | [2024-04-26T08:22:08.892+00:00|INFO|DispatcherServlet|http-nio-6969-exec-2] Initializing Servlet 'dispatcherServlet'
policy-api | [2024-04-26T08:22:08.893+00:00|INFO|DispatcherServlet|http-nio-6969-exec-2] Completed initialization in 1 ms
policy-api | [2024-04-26T08:22:09.214+00:00|INFO|OrderedServiceImpl|http-nio-6969-exec-2] ***** OrderedServiceImpl implementers:
policy-api | []
grafana | logger=migrator t=2024-04-26T08:21:33.653119803Z level=info msg="Migration successfully executed" id="Add is_disabled column to user" duration=1.641143ms
grafana | logger=migrator t=2024-04-26T08:21:33.656952929Z level=info msg="Executing migration" id="Add index user.login/user.email"
grafana | logger=migrator t=2024-04-26T08:21:33.658046035Z level=info msg="Migration successfully executed" id="Add index user.login/user.email" duration=1.092287ms
grafana | logger=migrator t=2024-04-26T08:21:33.661113842Z level=info msg="Executing migration" id="Add is_service_account column to user"
grafana | logger=migrator t=2024-04-26T08:21:33.662935856Z level=info msg="Migration successfully executed" id="Add is_service_account column to user" duration=1.820313ms
grafana | logger=migrator t=2024-04-26T08:21:33.666272597Z level=info msg="Executing migration" id="Update is_service_account column to nullable"
grafana | logger=migrator t=2024-04-26T08:21:33.678966927Z level=info msg="Migration successfully executed" id="Update is_service_account column to nullable" duration=12.68514ms
grafana | logger=migrator t=2024-04-26T08:21:33.683593775Z level=info msg="Executing migration" id="Add uid column to user"
grafana | logger=migrator t=2024-04-26T08:21:33.684433648Z level=info msg="Migration successfully executed" id="Add uid column to user" duration=841.333µs
grafana | logger=migrator t=2024-04-26T08:21:33.68702563Z level=info msg="Executing migration" id="Update uid column values for users"
grafana | logger=migrator t=2024-04-26T08:21:33.687161598Z level=info msg="Migration successfully executed" id="Update uid column values for users" duration=137.408µs
grafana | logger=migrator t=2024-04-26T08:21:33.689198132Z level=info msg="Executing migration" id="Add unique index user_uid"
grafana | logger=migrator t=2024-04-26T08:21:33.690279207Z level=info msg="Migration successfully executed" id="Add unique index user_uid" duration=1.075815ms
grafana | logger=migrator t=2024-04-26T08:21:33.693509673Z level=info msg="Executing migration" id="update login field with orgid to allow for multiple service accounts with same name across orgs"
grafana | logger=migrator t=2024-04-26T08:21:33.694006308Z level=info msg="Migration successfully executed" id="update login field with orgid to allow for multiple service accounts with same name across orgs" duration=496.465µs
grafana | logger=migrator t=2024-04-26T08:21:33.698684828Z level=info msg="Executing migration" id="create temp user table v1-7"
grafana | logger=migrator t=2024-04-26T08:21:33.699523341Z level=info msg="Migration successfully executed" id="create temp user table v1-7" duration=838.013µs
grafana | logger=migrator t=2024-04-26T08:21:33.702146595Z level=info msg="Executing migration" id="create index IDX_temp_user_email - v1-7"
grafana | logger=migrator t=2024-04-26T08:21:33.702875443Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_email - v1-7" duration=728.188µs
grafana | logger=migrator t=2024-04-26T08:21:33.711196709Z level=info msg="Executing migration" id="create index IDX_temp_user_org_id - v1-7"
grafana | logger=migrator t=2024-04-26T08:21:33.712261243Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_org_id - v1-7" duration=1.064274ms
grafana | logger=migrator t=2024-04-26T08:21:33.717799767Z level=info msg="Executing migration" id="create index IDX_temp_user_code - v1-7"
grafana | logger=migrator t=2024-04-26T08:21:33.718930416Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_code - v1-7" duration=1.130049ms
grafana | logger=migrator t=2024-04-26T08:21:33.721883557Z level=info msg="Executing migration" id="create index IDX_temp_user_status - v1-7"
grafana | logger=migrator t=2024-04-26T08:21:33.722998904Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_status - v1-7" duration=1.115157ms
grafana | logger=migrator t=2024-04-26T08:21:33.725944745Z level=info msg="Executing migration" id="Update temp_user table charset"
grafana | logger=migrator t=2024-04-26T08:21:33.725973287Z level=info msg="Migration successfully executed" id="Update temp_user table charset" duration=27.211µs
grafana | logger=migrator t=2024-04-26T08:21:33.7312952Z level=info msg="Executing migration" id="drop index IDX_temp_user_email - v1"
grafana | logger=migrator t=2024-04-26T08:21:33.731961734Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_email - v1" duration=666.214µs
grafana | logger=migrator t=2024-04-26T08:21:33.735376988Z level=info msg="Executing migration" id="drop index IDX_temp_user_org_id - v1"
grafana | logger=migrator t=2024-04-26T08:21:33.736399491Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_org_id - v1" duration=1.022613ms
grafana | logger=migrator t=2024-04-26T08:21:33.739646488Z level=info msg="Executing migration" id="drop index IDX_temp_user_code - v1"
grafana | logger=migrator t=2024-04-26T08:21:33.740440928Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_code - v1" duration=795.22µs
grafana | logger=migrator t=2024-04-26T08:21:33.744942089Z level=info msg="Executing migration" id="drop index IDX_temp_user_status - v1"
grafana | logger=migrator t=2024-04-26T08:21:33.745589702Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_status - v1" duration=647.583µs
grafana | logger=migrator t=2024-04-26T08:21:33.750515264Z level=info msg="Executing migration" id="Rename table temp_user to temp_user_tmp_qwerty - v1"
grafana | logger=migrator t=2024-04-26T08:21:33.755287129Z level=info msg="Migration successfully executed" id="Rename table temp_user to temp_user_tmp_qwerty - v1" duration=4.768195ms
grafana | logger=migrator t=2024-04-26T08:21:33.759703675Z level=info msg="Executing migration" id="create temp_user v2"
grafana | logger=migrator t=2024-04-26T08:21:33.760569349Z level=info msg="Migration successfully executed" id="create temp_user v2" duration=867.094µs
grafana | logger=migrator t=2024-04-26T08:21:33.765008278Z level=info msg="Executing migration" id="create index IDX_temp_user_email - v2"
grafana | logger=migrator t=2024-04-26T08:21:33.765768306Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_email - v2" duration=759.698µs
grafana | logger=migrator t=2024-04-26T08:21:33.768437243Z level=info msg="Executing migration" id="create index IDX_temp_user_org_id - v2"
grafana | logger=migrator t=2024-04-26T08:21:33.769520758Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_org_id - v2" duration=1.088445ms
policy-apex-pdp | 	metadata.max.age.ms = 300000
policy-apex-pdp | 	metric.reporters = []
policy-apex-pdp | 	metrics.num.samples = 2
policy-apex-pdp | 	metrics.recording.level = INFO
policy-apex-pdp | 	metrics.sample.window.ms = 30000
policy-apex-pdp | 	partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor]
policy-apex-pdp | 	receive.buffer.bytes = 65536
policy-apex-pdp | 	reconnect.backoff.max.ms = 1000
policy-apex-pdp | 	reconnect.backoff.ms = 50
policy-apex-pdp | 	request.timeout.ms = 30000
policy-apex-pdp | 	retry.backoff.ms = 100
policy-apex-pdp | 	sasl.client.callback.handler.class = null
policy-apex-pdp | 	sasl.jaas.config = null
policy-apex-pdp | 	sasl.kerberos.kinit.cmd = /usr/bin/kinit
policy-apex-pdp | 	sasl.kerberos.min.time.before.relogin = 60000
policy-apex-pdp | 	sasl.kerberos.service.name = null
policy-apex-pdp | 	sasl.kerberos.ticket.renew.jitter = 0.05
policy-apex-pdp | 	sasl.kerberos.ticket.renew.window.factor = 0.8
policy-apex-pdp | 	sasl.login.callback.handler.class = null
policy-apex-pdp | 	sasl.login.class = null
policy-apex-pdp | 	sasl.login.connect.timeout.ms = null
policy-apex-pdp | 	sasl.login.read.timeout.ms = null
policy-apex-pdp | 	sasl.login.refresh.buffer.seconds = 300
policy-apex-pdp | 	sasl.login.refresh.min.period.seconds = 60
policy-apex-pdp | 	sasl.login.refresh.window.factor = 0.8
policy-apex-pdp | 	sasl.login.refresh.window.jitter = 0.05
policy-apex-pdp | 	sasl.login.retry.backoff.max.ms = 10000
policy-apex-pdp | 	sasl.login.retry.backoff.ms = 100
policy-apex-pdp | 	sasl.mechanism = GSSAPI
policy-apex-pdp | 	sasl.oauthbearer.clock.skew.seconds = 30
policy-apex-pdp | 	sasl.oauthbearer.expected.audience = null
policy-apex-pdp | 	sasl.oauthbearer.expected.issuer = null
policy-apex-pdp | 	sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
policy-apex-pdp | 	sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
policy-apex-pdp | 	sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
policy-apex-pdp | 	sasl.oauthbearer.jwks.endpoint.url = null
policy-apex-pdp | 	sasl.oauthbearer.scope.claim.name = scope
policy-apex-pdp | 	sasl.oauthbearer.sub.claim.name = sub
policy-apex-pdp | 	sasl.oauthbearer.token.endpoint.url = null
policy-apex-pdp | 	security.protocol = PLAINTEXT
policy-apex-pdp | 	security.providers = null
policy-apex-pdp | 	send.buffer.bytes = 131072
policy-apex-pdp | 	session.timeout.ms = 45000
policy-apex-pdp | 	socket.connection.setup.timeout.max.ms = 30000
policy-apex-pdp | 	socket.connection.setup.timeout.ms = 10000
grafana | logger=migrator t=2024-04-26T08:21:33.772562385Z level=info msg="Executing migration" id="create index IDX_temp_user_code - v2"
grafana | logger=migrator t=2024-04-26T08:21:33.773629179Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_code - v2" duration=1.066484ms
grafana | logger=migrator t=2024-04-26T08:21:33.778075637Z level=info msg="Executing migration" id="create index IDX_temp_user_status - v2"
grafana | logger=migrator t=2024-04-26T08:21:33.778831836Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_status - v2" duration=755.969µs
grafana | logger=migrator t=2024-04-26T08:21:33.782131125Z level=info msg="Executing migration" id="copy temp_user v1 to v2"
grafana | logger=migrator t=2024-04-26T08:21:33.782548797Z level=info msg="Migration successfully executed" id="copy temp_user v1 to v2" duration=417.802µs
grafana | logger=migrator t=2024-04-26T08:21:33.785567481Z level=info msg="Executing migration" id="drop temp_user_tmp_qwerty"
grafana | logger=migrator t=2024-04-26T08:21:33.787026026Z level=info msg="Migration successfully executed" id="drop temp_user_tmp_qwerty" duration=1.453764ms
mariadb | 2024-04-26 08:21:27+00:00 [Note] [Entrypoint]: Entrypoint script for MariaDB Server 1:10.10.2+maria~ubu2204 started.
mariadb | 2024-04-26 08:21:27+00:00 [Note] [Entrypoint]: Switching to dedicated user 'mysql'
mariadb | 2024-04-26 08:21:27+00:00 [Note] [Entrypoint]: Entrypoint script for MariaDB Server 1:10.10.2+maria~ubu2204 started.
mariadb | 2024-04-26 08:21:27+00:00 [Note] [Entrypoint]: Initializing database files
mariadb | 2024-04-26  8:21:27 0 [Warning] mariadbd: io_uring_queue_init() failed with ENOSYS: check seccomp filters, and the kernel version (newer than 5.1 required)
mariadb | 2024-04-26  8:21:27 0 [Warning] InnoDB: liburing disabled: falling back to innodb_use_native_aio=OFF
mariadb | 2024-04-26  8:21:27 0 [Warning] 'default-authentication-plugin' is MySQL 5.6 / 5.7 compatible option. To be implemented in later versions.
mariadb |
mariadb |
mariadb | PLEASE REMEMBER TO SET A PASSWORD FOR THE MariaDB root USER !
mariadb | To do so, start the server, then issue the following command:
mariadb |
mariadb | '/usr/bin/mysql_secure_installation'
mariadb |
mariadb | which will also give you the option of removing the test
mariadb | databases and anonymous user created by default. This is
mariadb | strongly recommended for production servers.
mariadb |
mariadb | See the MariaDB Knowledgebase at https://mariadb.com/kb
mariadb |
mariadb | Please report any problems at https://mariadb.org/jira
mariadb |
mariadb | The latest information about MariaDB is available at https://mariadb.org/.
mariadb |
mariadb | Consider joining MariaDB's strong and vibrant community:
mariadb | https://mariadb.org/get-involved/
grafana | logger=migrator t=2024-04-26T08:21:33.793460415Z level=info msg="Executing migration" id="Set created for temp users that will otherwise prematurely expire"
grafana | logger=migrator t=2024-04-26T08:21:33.794002703Z level=info msg="Migration successfully executed" id="Set created for temp users that will otherwise prematurely expire" duration=542.298µs
grafana | logger=migrator t=2024-04-26T08:21:33.800056274Z level=info msg="Executing migration" id="create star table"
grafana | logger=migrator t=2024-04-26T08:21:33.801007762Z level=info msg="Migration successfully executed" id="create star table" duration=951.338µs
grafana | logger=migrator t=2024-04-26T08:21:33.804201426Z level=info msg="Executing migration" id="add unique index star.user_id_dashboard_id"
grafana | logger=migrator t=2024-04-26T08:21:33.805317663Z level=info msg="Migration successfully executed" id="add unique index star.user_id_dashboard_id" duration=1.115877ms
grafana | logger=migrator t=2024-04-26T08:21:33.809776062Z level=info msg="Executing migration" id="create org table v1"
grafana | logger=migrator t=2024-04-26T08:21:33.810651297Z level=info msg="Migration successfully executed" id="create org table v1" duration=875.855µs
grafana | logger=migrator t=2024-04-26T08:21:33.815229632Z level=info msg="Executing migration" id="create index UQE_org_name - v1"
grafana | logger=migrator t=2024-04-26T08:21:33.816004812Z level=info msg="Migration successfully executed" id="create index UQE_org_name - v1" duration=774.679µs
grafana | logger=migrator t=2024-04-26T08:21:33.819424437Z level=info msg="Executing migration" id="create org_user table v1"
grafana | logger=migrator t=2024-04-26T08:21:33.820149944Z level=info msg="Migration successfully executed" id="create org_user table v1" duration=724.477µs
grafana | logger=migrator t=2024-04-26T08:21:33.823336507Z level=info
msg="Executing migration" id="create index IDX_org_user_org_id - v1" grafana | logger=migrator t=2024-04-26T08:21:33.824096206Z level=info msg="Migration successfully executed" id="create index IDX_org_user_org_id - v1" duration=759.349µs grafana | logger=migrator t=2024-04-26T08:21:33.827424557Z level=info msg="Executing migration" id="create index UQE_org_user_org_id_user_id - v1" grafana | logger=migrator t=2024-04-26T08:21:33.828188586Z level=info msg="Migration successfully executed" id="create index UQE_org_user_org_id_user_id - v1" duration=762.929µs grafana | logger=migrator t=2024-04-26T08:21:33.834215585Z level=info msg="Executing migration" id="create index IDX_org_user_user_id - v1" grafana | logger=migrator t=2024-04-26T08:21:33.835012016Z level=info msg="Migration successfully executed" id="create index IDX_org_user_user_id - v1" duration=796.281µs grafana | logger=migrator t=2024-04-26T08:21:33.842010574Z level=info msg="Executing migration" id="Update org table charset" grafana | logger=migrator t=2024-04-26T08:21:33.842048856Z level=info msg="Migration successfully executed" id="Update org table charset" duration=39.082µs grafana | logger=migrator t=2024-04-26T08:21:33.845278671Z level=info msg="Executing migration" id="Update org_user table charset" grafana | logger=migrator t=2024-04-26T08:21:33.845316404Z level=info msg="Migration successfully executed" id="Update org_user table charset" duration=38.143µs grafana | logger=migrator t=2024-04-26T08:21:33.848486426Z level=info msg="Executing migration" id="Migrate all Read Only Viewers to Viewers" grafana | logger=migrator t=2024-04-26T08:21:33.848738229Z level=info msg="Migration successfully executed" id="Migrate all Read Only Viewers to Viewers" duration=256.902µs grafana | logger=migrator t=2024-04-26T08:21:33.853408218Z level=info msg="Executing migration" id="create dashboard table" grafana | logger=migrator t=2024-04-26T08:21:33.85460325Z level=info msg="Migration successfully executed" 
id="create dashboard table" duration=1.194602ms grafana | logger=migrator t=2024-04-26T08:21:33.85948368Z level=info msg="Executing migration" id="add index dashboard.account_id" grafana | logger=migrator t=2024-04-26T08:21:33.8606709Z level=info msg="Migration successfully executed" id="add index dashboard.account_id" duration=1.1863ms grafana | logger=migrator t=2024-04-26T08:21:33.8639836Z level=info msg="Executing migration" id="add unique index dashboard_account_id_slug" grafana | logger=migrator t=2024-04-26T08:21:33.864856775Z level=info msg="Migration successfully executed" id="add unique index dashboard_account_id_slug" duration=873.495µs grafana | logger=migrator t=2024-04-26T08:21:33.867891651Z level=info msg="Executing migration" id="create dashboard_tag table" grafana | logger=migrator t=2024-04-26T08:21:33.868546165Z level=info msg="Migration successfully executed" id="create dashboard_tag table" duration=654.133µs grafana | logger=migrator t=2024-04-26T08:21:33.872514687Z level=info msg="Executing migration" id="add unique index dashboard_tag.dasboard_id_term" grafana | logger=migrator t=2024-04-26T08:21:33.873715829Z level=info msg="Migration successfully executed" id="add unique index dashboard_tag.dasboard_id_term" duration=1.200772ms grafana | logger=migrator t=2024-04-26T08:21:33.880148349Z level=info msg="Executing migration" id="drop index UQE_dashboard_tag_dashboard_id_term - v1" grafana | logger=migrator t=2024-04-26T08:21:33.881792244Z level=info msg="Migration successfully executed" id="drop index UQE_dashboard_tag_dashboard_id_term - v1" duration=1.642574ms grafana | logger=migrator t=2024-04-26T08:21:33.891939693Z level=info msg="Executing migration" id="Rename table dashboard to dashboard_v1 - v1" grafana | logger=migrator t=2024-04-26T08:21:33.899978746Z level=info msg="Migration successfully executed" id="Rename table dashboard to dashboard_v1 - v1" duration=8.040903ms grafana | logger=migrator t=2024-04-26T08:21:33.906148111Z 
level=info msg="Executing migration" id="create dashboard v2" grafana | logger=migrator t=2024-04-26T08:21:33.906722451Z level=info msg="Migration successfully executed" id="create dashboard v2" duration=574.44µs grafana | logger=migrator t=2024-04-26T08:21:33.910100134Z level=info msg="Executing migration" id="create index IDX_dashboard_org_id - v2" grafana | logger=migrator t=2024-04-26T08:21:33.911506486Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_org_id - v2" duration=1.410032ms grafana | logger=migrator t=2024-04-26T08:21:33.916202887Z level=info msg="Executing migration" id="create index UQE_dashboard_org_id_slug - v2" grafana | logger=migrator t=2024-04-26T08:21:33.917519285Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_org_id_slug - v2" duration=1.316278ms policy-db-migrator | Waiting for mariadb port 3306... policy-db-migrator | nc: connect to mariadb (172.17.0.2) port 3306 (tcp) failed: Connection refused policy-db-migrator | nc: connect to mariadb (172.17.0.2) port 3306 (tcp) failed: Connection refused policy-db-migrator | nc: connect to mariadb (172.17.0.2) port 3306 (tcp) failed: Connection refused policy-db-migrator | nc: connect to mariadb (172.17.0.2) port 3306 (tcp) failed: Connection refused policy-db-migrator | nc: connect to mariadb (172.17.0.2) port 3306 (tcp) failed: Connection refused policy-db-migrator | Connection to mariadb (172.17.0.2) 3306 port [tcp/mysql] succeeded! 
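The migrator's "Waiting for mariadb port 3306..." lines show a retry loop around `nc` that keeps probing until the TCP connect succeeds. A self-contained sketch of the same wait-for-port pattern using Python's socket module; the function name and the retry/timeout values are illustrative, not taken from the migrator's actual script:

```python
# Retry a TCP connect until it succeeds, mirroring the nc-based wait loop
# in the policy-db-migrator log. Illustrative only; parameter defaults
# are assumptions, not the migrator's real settings.
import socket
import time

def wait_for_port(host, port, retries=5, delay=1.0, timeout=2.0):
    """Return True once a TCP connection to host:port succeeds,
    False after exhausting all retries."""
    for _ in range(retries):
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            time.sleep(delay)
    return False
```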
policy-db-migrator | 321 blocks policy-db-migrator | Preparing upgrade release version: 0800 policy-db-migrator | Preparing upgrade release version: 0900 policy-db-migrator | Preparing upgrade release version: 1000 policy-db-migrator | Preparing upgrade release version: 1100 policy-db-migrator | Preparing upgrade release version: 1200 policy-db-migrator | Preparing upgrade release version: 1300 policy-db-migrator | Done policy-db-migrator | name version policy-db-migrator | policyadmin 0 policy-db-migrator | policyadmin: upgrade available: 0 -> 1300 policy-db-migrator | upgrade: 0 -> 1300 policy-db-migrator | policy-db-migrator | > upgrade 0100-jpapdpgroup_properties.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpgroup_properties (name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, PROPERTIES VARCHAR(255) DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) DEFAULT NULL) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0110-jpapdpstatistics_enginestats.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpstatistics_enginestats (AVERAGEEXECUTIONTIME DOUBLE DEFAULT NULL, ENGINEID VARCHAR(255) DEFAULT NULL, ENGINETIMESTAMP BIGINT DEFAULT NULL, ENGINEWORKERSTATE INT DEFAULT NULL, EVENTCOUNT BIGINT DEFAULT NULL, LASTENTERTIME BIGINT DEFAULT NULL, LASTEXECUTIONTIME BIGINT DEFAULT NULL, LASTSTART BIGINT DEFAULT NULL, UPTIME BIGINT DEFAULT NULL, timeStamp datetime DEFAULT NULL, name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0120-jpapdpsubgroup_policies.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpsubgroup_policies (name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, parentLocalName VARCHAR(120) DEFAULT NULL, localName VARCHAR(120) 
DEFAULT NULL, parentKeyVersion VARCHAR(15) DEFAULT NULL, parentKeyName VARCHAR(120) DEFAULT NULL) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0130-jpapdpsubgroup_properties.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpsubgroup_properties (parentLocalName VARCHAR(120) DEFAULT NULL, localName VARCHAR(120) DEFAULT NULL, parentKeyVersion VARCHAR(15) DEFAULT NULL, parentKeyName VARCHAR(120) DEFAULT NULL, PROPERTIES VARCHAR(255) DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) DEFAULT NULL) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0140-jpapdpsubgroup_supportedpolicytypes.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpsubgroup_supportedpolicytypes (name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, parentLocalName VARCHAR(120) DEFAULT NULL, localName VARCHAR(120) DEFAULT NULL, parentKeyVersion VARCHAR(15) DEFAULT NULL, parentKeyName VARCHAR(120) DEFAULT NULL) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0150-jpatoscacapabilityassignment_attributes.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilityassignment_attributes (name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, ATTRIBUTES LONGTEXT DEFAULT NULL, ATTRIBUTES_KEY VARCHAR(255) DEFAULT NULL) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0160-jpatoscacapabilityassignment_metadata.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilityassignment_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | 
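Each upgrade step above issues an idempotent `CREATE TABLE IF NOT EXISTS` statement, so re-running a migration is a no-op. A sketch of that property using the DDL from the `0100-jpapdpgroup_properties.sql` step; sqlite3 stands in for MariaDB here so the snippet is self-contained (the real migrator runs against MariaDB on port 3306):

```python
# Demonstrate the idempotent CREATE TABLE IF NOT EXISTS pattern from the
# migrator log. sqlite3 is a stand-in for MariaDB; column types are
# copied from the 0100-jpapdpgroup_properties.sql step above.
import sqlite3

ddl = """
CREATE TABLE IF NOT EXISTS jpapdpgroup_properties (
    name VARCHAR(120) DEFAULT NULL,
    version VARCHAR(20) DEFAULT NULL,
    PROPERTIES VARCHAR(255) DEFAULT NULL,
    PROPERTIES_KEY VARCHAR(255) DEFAULT NULL
)
"""

conn = sqlite3.connect(":memory:")
conn.execute(ddl)
conn.execute(ddl)  # second run is a no-op thanks to IF NOT EXISTS
tables = [r[0] for r in conn.execute(
    "SELECT name FROM sqlite_master WHERE type='table'")]
print(tables)  # ['jpapdpgroup_properties']
```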
policy-db-migrator | > upgrade 0170-jpatoscacapabilityassignment_occurrences.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilityassignment_occurrences (name VARCHAR(120) NULL, version VARCHAR(20) NULL, OCCURRENCES INT DEFAULT NULL) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | grafana | logger=migrator t=2024-04-26T08:21:33.924300762Z level=info msg="Executing migration" id="copy dashboard v1 to v2" grafana | logger=migrator t=2024-04-26T08:21:33.924807318Z level=info msg="Migration successfully executed" id="copy dashboard v1 to v2" duration=506.226µs grafana | logger=migrator t=2024-04-26T08:21:33.930141182Z level=info msg="Executing migration" id="drop table dashboard_v1" grafana | logger=migrator t=2024-04-26T08:21:33.931497881Z level=info msg="Migration successfully executed" id="drop table dashboard_v1" duration=1.35871ms grafana | logger=migrator t=2024-04-26T08:21:33.936707008Z level=info msg="Executing migration" id="alter dashboard.data to mediumtext v1" grafana | logger=migrator t=2024-04-26T08:21:33.936773591Z level=info msg="Migration successfully executed" id="alter dashboard.data to mediumtext v1" duration=67.003µs grafana | logger=migrator t=2024-04-26T08:21:34.004694572Z level=info msg="Executing migration" id="Add column updated_by in dashboard - v2" grafana | logger=migrator t=2024-04-26T08:21:34.007615953Z level=info msg="Migration successfully executed" id="Add column updated_by in dashboard - v2" duration=2.921891ms grafana | logger=migrator t=2024-04-26T08:21:34.011052011Z level=info msg="Executing migration" id="Add column created_by in dashboard - v2" policy-apex-pdp | ssl.cipher.suites = null policy-apex-pdp | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] policy-apex-pdp | ssl.endpoint.identification.algorithm = https policy-apex-pdp | ssl.engine.factory.class = null policy-apex-pdp | ssl.key.password = null policy-apex-pdp | ssl.keymanager.algorithm = 
SunX509 policy-apex-pdp | ssl.keystore.certificate.chain = null policy-apex-pdp | ssl.keystore.key = null policy-apex-pdp | ssl.keystore.location = null policy-apex-pdp | ssl.keystore.password = null policy-apex-pdp | ssl.keystore.type = JKS policy-apex-pdp | ssl.protocol = TLSv1.3 policy-apex-pdp | ssl.provider = null policy-apex-pdp | ssl.secure.random.implementation = null policy-apex-pdp | ssl.trustmanager.algorithm = PKIX policy-apex-pdp | ssl.truststore.certificates = null policy-apex-pdp | ssl.truststore.location = null policy-apex-pdp | ssl.truststore.password = null policy-apex-pdp | ssl.truststore.type = JKS policy-apex-pdp | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-apex-pdp | policy-apex-pdp | [2024-04-26T08:22:04.603+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 policy-apex-pdp | [2024-04-26T08:22:04.603+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 policy-apex-pdp | [2024-04-26T08:22:04.603+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1714119724602 policy-apex-pdp | [2024-04-26T08:22:04.605+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-385d2de3-e329-4c2e-8254-58c110e4f277-1, groupId=385d2de3-e329-4c2e-8254-58c110e4f277] Subscribed to topic(s): policy-pdp-pap policy-apex-pdp | [2024-04-26T08:22:04.616+00:00|INFO|ServiceManager|main] service manager starting policy-apex-pdp | [2024-04-26T08:22:04.616+00:00|INFO|ServiceManager|main] service manager starting topics policy-apex-pdp | [2024-04-26T08:22:04.617+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=385d2de3-e329-4c2e-8254-58c110e4f277, consumerInstance=policy-apex-pdp, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, 
allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: starting policy-apex-pdp | [2024-04-26T08:22:04.636+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: policy-apex-pdp | allow.auto.create.topics = true policy-apex-pdp | auto.commit.interval.ms = 5000 policy-apex-pdp | auto.include.jmx.reporter = true policy-apex-pdp | auto.offset.reset = latest policy-apex-pdp | bootstrap.servers = [kafka:9092] policy-apex-pdp | check.crcs = true policy-apex-pdp | client.dns.lookup = use_all_dns_ips policy-apex-pdp | client.id = consumer-385d2de3-e329-4c2e-8254-58c110e4f277-2 policy-apex-pdp | client.rack = grafana | logger=migrator t=2024-04-26T08:21:34.01297608Z level=info msg="Migration successfully executed" id="Add column created_by in dashboard - v2" duration=1.92774ms grafana | logger=migrator t=2024-04-26T08:21:34.017275132Z level=info msg="Executing migration" id="Add column gnetId in dashboard" grafana | logger=migrator t=2024-04-26T08:21:34.019037623Z level=info msg="Migration successfully executed" id="Add column gnetId in dashboard" duration=1.762061ms grafana | logger=migrator t=2024-04-26T08:21:34.022640129Z level=info msg="Executing migration" id="Add index for gnetId in dashboard" grafana | logger=migrator t=2024-04-26T08:21:34.023492513Z level=info msg="Migration successfully executed" id="Add index for gnetId in dashboard" duration=850.704µs grafana | logger=migrator t=2024-04-26T08:21:34.030282874Z level=info msg="Executing migration" id="Add column plugin_id in dashboard" grafana | logger=migrator t=2024-04-26T08:21:34.033974084Z level=info msg="Migration successfully executed" id="Add column plugin_id in dashboard" duration=3.68868ms grafana | logger=migrator t=2024-04-26T08:21:34.038881847Z level=info msg="Executing migration" id="Add index for plugin_id in dashboard" grafana | logger=migrator t=2024-04-26T08:21:34.03970398Z 
level=info msg="Migration successfully executed" id="Add index for plugin_id in dashboard" duration=821.932µs grafana | logger=migrator t=2024-04-26T08:21:34.042600499Z level=info msg="Executing migration" id="Add index for dashboard_id in dashboard_tag" grafana | logger=migrator t=2024-04-26T08:21:34.043441633Z level=info msg="Migration successfully executed" id="Add index for dashboard_id in dashboard_tag" duration=840.964µs grafana | logger=migrator t=2024-04-26T08:21:34.04842461Z level=info msg="Executing migration" id="Update dashboard table charset" grafana | logger=migrator t=2024-04-26T08:21:34.048453242Z level=info msg="Migration successfully executed" id="Update dashboard table charset" duration=29.372µs grafana | logger=migrator t=2024-04-26T08:21:34.056054174Z level=info msg="Executing migration" id="Update dashboard_tag table charset" grafana | logger=migrator t=2024-04-26T08:21:34.05618193Z level=info msg="Migration successfully executed" id="Update dashboard_tag table charset" duration=133.896µs grafana | logger=migrator t=2024-04-26T08:21:34.059691392Z level=info msg="Executing migration" id="Add column folder_id in dashboard" grafana | logger=migrator t=2024-04-26T08:21:34.062005581Z level=info msg="Migration successfully executed" id="Add column folder_id in dashboard" duration=2.3137ms grafana | logger=migrator t=2024-04-26T08:21:34.066917194Z level=info msg="Executing migration" id="Add column isFolder in dashboard" grafana | logger=migrator t=2024-04-26T08:21:34.068437032Z level=info msg="Migration successfully executed" id="Add column isFolder in dashboard" duration=1.519588ms grafana | logger=migrator t=2024-04-26T08:21:34.076500409Z level=info msg="Executing migration" id="Add column has_acl in dashboard" grafana | logger=migrator t=2024-04-26T08:21:34.079194658Z level=info msg="Migration successfully executed" id="Add column has_acl in dashboard" duration=2.693009ms grafana | logger=migrator t=2024-04-26T08:21:34.083616926Z level=info 
msg="Executing migration" id="Add column uid in dashboard" grafana | logger=migrator t=2024-04-26T08:21:34.086929528Z level=info msg="Migration successfully executed" id="Add column uid in dashboard" duration=3.311871ms grafana | logger=migrator t=2024-04-26T08:21:34.091144975Z level=info msg="Executing migration" id="Update uid column values in dashboard" grafana | logger=migrator t=2024-04-26T08:21:34.091695963Z level=info msg="Migration successfully executed" id="Update uid column values in dashboard" duration=550.188µs grafana | logger=migrator t=2024-04-26T08:21:34.09531286Z level=info msg="Executing migration" id="Add unique index dashboard_org_id_uid" grafana | logger=migrator t=2024-04-26T08:21:34.096616967Z level=info msg="Migration successfully executed" id="Add unique index dashboard_org_id_uid" duration=1.307496ms grafana | logger=migrator t=2024-04-26T08:21:34.10092096Z level=info msg="Executing migration" id="Remove unique index org_id_slug" grafana | logger=migrator t=2024-04-26T08:21:34.10209845Z level=info msg="Migration successfully executed" id="Remove unique index org_id_slug" duration=1.179941ms grafana | logger=migrator t=2024-04-26T08:21:34.106004792Z level=info msg="Executing migration" id="Update dashboard title length" grafana | logger=migrator t=2024-04-26T08:21:34.106038473Z level=info msg="Migration successfully executed" id="Update dashboard title length" duration=31.621µs grafana | logger=migrator t=2024-04-26T08:21:34.109528834Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_title_folder_id" grafana | logger=migrator t=2024-04-26T08:21:34.110325515Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_title_folder_id" duration=796.511µs grafana | logger=migrator t=2024-04-26T08:21:34.115688272Z level=info msg="Executing migration" id="create dashboard_provisioning" grafana | logger=migrator t=2024-04-26T08:21:34.116286243Z level=info msg="Migration successfully 
executed" id="create dashboard_provisioning" duration=597.851µs grafana | logger=migrator t=2024-04-26T08:21:34.120223035Z level=info msg="Executing migration" id="Rename table dashboard_provisioning to dashboard_provisioning_tmp_qwerty - v1" grafana | logger=migrator t=2024-04-26T08:21:34.126724011Z level=info msg="Migration successfully executed" id="Rename table dashboard_provisioning to dashboard_provisioning_tmp_qwerty - v1" duration=6.495075ms grafana | logger=migrator t=2024-04-26T08:21:34.129896805Z level=info msg="Executing migration" id="create dashboard_provisioning v2" grafana | logger=migrator t=2024-04-26T08:21:34.131492507Z level=info msg="Migration successfully executed" id="create dashboard_provisioning v2" duration=1.594902ms grafana | logger=migrator t=2024-04-26T08:21:34.136517696Z level=info msg="Executing migration" id="create index IDX_dashboard_provisioning_dashboard_id - v2" grafana | logger=migrator t=2024-04-26T08:21:34.137590772Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_provisioning_dashboard_id - v2" duration=1.074276ms grafana | logger=migrator t=2024-04-26T08:21:34.14201432Z level=info msg="Executing migration" id="create index IDX_dashboard_provisioning_dashboard_id_name - v2" grafana | logger=migrator t=2024-04-26T08:21:34.143199801Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_provisioning_dashboard_id_name - v2" duration=1.184991ms grafana | logger=migrator t=2024-04-26T08:21:34.146367435Z level=info msg="Executing migration" id="copy dashboard_provisioning v1 to v2" grafana | logger=migrator t=2024-04-26T08:21:34.146829289Z level=info msg="Migration successfully executed" id="copy dashboard_provisioning v1 to v2" duration=461.923µs grafana | logger=migrator t=2024-04-26T08:21:34.150181952Z level=info msg="Executing migration" id="drop dashboard_provisioning_tmp_qwerty" grafana | logger=migrator t=2024-04-26T08:21:34.151054937Z level=info msg="Migration 
successfully executed" id="drop dashboard_provisioning_tmp_qwerty" duration=872.494µs grafana | logger=migrator t=2024-04-26T08:21:34.160071642Z level=info msg="Executing migration" id="Add check_sum column" grafana | logger=migrator t=2024-04-26T08:21:34.162802563Z level=info msg="Migration successfully executed" id="Add check_sum column" duration=2.730751ms grafana | logger=migrator t=2024-04-26T08:21:34.166854142Z level=info msg="Executing migration" id="Add index for dashboard_title" kafka | ===> User kafka | uid=1000(appuser) gid=1000(appuser) groups=1000(appuser) kafka | ===> Configuring ... kafka | Running in Zookeeper mode... kafka | ===> Running preflight checks ... kafka | ===> Check if /var/lib/kafka/data is writable ... kafka | ===> Check if Zookeeper is healthy ... kafka | [2024-04-26 08:21:34,285] INFO Client environment:zookeeper.version=3.8.3-6ad6d364c7c0bcf0de452d54ebefa3058098ab56, built on 2023-10-05 10:34 UTC (org.apache.zookeeper.ZooKeeper) kafka | [2024-04-26 08:21:34,286] INFO Client environment:host.name=7c7374bf05f8 (org.apache.zookeeper.ZooKeeper) kafka | [2024-04-26 08:21:34,286] INFO Client environment:java.version=11.0.22 (org.apache.zookeeper.ZooKeeper) kafka | [2024-04-26 08:21:34,286] INFO Client environment:java.vendor=Azul Systems, Inc. 
(org.apache.zookeeper.ZooKeeper) kafka | [2024-04-26 08:21:34,286] INFO Client environment:java.home=/usr/lib/jvm/java-11-zulu-openjdk-ca (org.apache.zookeeper.ZooKeeper) kafka | [2024-04-26 08:21:34,286] INFO Client environment:java.class.path=/usr/share/java/cp-base-new/scala-logging_2.13-3.9.4.jar:/usr/share/java/cp-base-new/jackson-datatype-jdk8-2.14.2.jar:/usr/share/java/cp-base-new/kafka-storage-api-7.6.1-ccs.jar:/usr/share/java/cp-base-new/commons-validator-1.7.jar:/usr/share/java/cp-base-new/logredactor-1.0.12.jar:/usr/share/java/cp-base-new/jolokia-core-1.7.1.jar:/usr/share/java/cp-base-new/re2j-1.6.jar:/usr/share/java/cp-base-new/scala-library-2.13.10.jar:/usr/share/java/cp-base-new/commons-io-2.11.0.jar:/usr/share/java/cp-base-new/kafka-clients-7.6.1-ccs.jar:/usr/share/java/cp-base-new/utility-belt-7.6.1.jar:/usr/share/java/cp-base-new/commons-cli-1.4.jar:/usr/share/java/cp-base-new/slf4j-reload4j-1.7.36.jar:/usr/share/java/cp-base-new/kafka-server-common-7.6.1-ccs.jar:/usr/share/java/cp-base-new/kafka-group-coordinator-7.6.1-ccs.jar:/usr/share/java/cp-base-new/common-utils-7.6.1.jar:/usr/share/java/cp-base-new/jackson-annotations-2.14.2.jar:/usr/share/java/cp-base-new/json-simple-1.1.1.jar:/usr/share/java/cp-base-new/jackson-module-scala_2.13-2.14.2.jar:/usr/share/java/cp-base-new/scala-java8-compat_2.13-1.0.2.jar:/usr/share/java/cp-base-new/minimal-json-0.9.5.jar:/usr/share/java/cp-base-new/kafka-storage-7.6.1-ccs.jar:/usr/share/java/cp-base-new/zstd-jni-1.5.5-1.jar:/usr/share/java/cp-base-new/jackson-dataformat-yaml-2.14.2.jar:/usr/share/java/cp-base-new/slf4j-api-1.7.36.jar:/usr/share/java/cp-base-new/paranamer-2.8.jar:/usr/share/java/cp-base-new/commons-beanutils-1.9.4.jar:/usr/share/java/cp-base-new/jmx_prometheus_javaagent-0.18.0.jar:/usr/share/java/cp-base-new/reload4j-1.2.25.jar:/usr/share/java/cp-base-new/jackson-core-2.14.2.jar:/usr/share/java/cp-base-new/disk-usage-agent-7.6.1.jar:/usr/share/java/cp-base-new/commons-digester-2.1.jar:/usr/share
/java/cp-base-new/argparse4j-0.7.0.jar:/usr/share/java/cp-base-new/audience-annotations-0.12.0.jar:/usr/share/java/cp-base-new/kafka-raft-7.6.1-ccs.jar:/usr/share/java/cp-base-new/gson-2.9.0.jar:/usr/share/java/cp-base-new/snakeyaml-2.0.jar:/usr/share/java/cp-base-new/jopt-simple-5.0.4.jar:/usr/share/java/cp-base-new/error_prone_annotations-2.10.0.jar:/usr/share/java/cp-base-new/lz4-java-1.8.0.jar:/usr/share/java/cp-base-new/logredactor-metrics-1.0.12.jar:/usr/share/java/cp-base-new/checker-qual-3.19.0.jar:/usr/share/java/cp-base-new/kafka-metadata-7.6.1-ccs.jar:/usr/share/java/cp-base-new/pcollections-4.0.1.jar:/usr/share/java/cp-base-new/commons-logging-1.2.jar:/usr/share/java/cp-base-new/scala-reflect-2.13.10.jar:/usr/share/java/cp-base-new/scala-collection-compat_2.13-2.10.0.jar:/usr/share/java/cp-base-new/metrics-core-2.2.0.jar:/usr/share/java/cp-base-new/zookeeper-jute-3.8.3.jar:/usr/share/java/cp-base-new/jackson-dataformat-csv-2.14.2.jar:/usr/share/java/cp-base-new/commons-collections-3.2.2.jar:/usr/share/java/cp-base-new/caffeine-2.9.3.jar:/usr/share/java/cp-base-new/jolokia-jvm-1.7.1.jar:/usr/share/java/cp-base-new/kafka-tools-api-7.6.1-ccs.jar:/usr/share/java/cp-base-new/zookeeper-3.8.3.jar:/usr/share/java/cp-base-new/metrics-core-4.1.12.1.jar:/usr/share/java/cp-base-new/kafka_2.13-7.6.1-ccs.jar:/usr/share/java/cp-base-new/jackson-databind-2.14.2.jar:/usr/share/java/cp-base-new/snappy-java-1.1.10.5.jar:/usr/share/java/cp-base-new/jose4j-0.9.5.jar (org.apache.zookeeper.ZooKeeper) kafka | [2024-04-26 08:21:34,286] INFO Client environment:java.library.path=/usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.ZooKeeper) kafka | [2024-04-26 08:21:34,286] INFO Client environment:java.io.tmpdir=/tmp (org.apache.zookeeper.ZooKeeper) kafka | [2024-04-26 08:21:34,287] INFO Client environment:java.compiler= (org.apache.zookeeper.ZooKeeper) kafka | [2024-04-26 08:21:34,287] INFO Client environment:os.name=Linux 
(org.apache.zookeeper.ZooKeeper) kafka | [2024-04-26 08:21:34,287] INFO Client environment:os.arch=amd64 (org.apache.zookeeper.ZooKeeper) kafka | [2024-04-26 08:21:34,287] INFO Client environment:os.version=4.15.0-192-generic (org.apache.zookeeper.ZooKeeper) kafka | [2024-04-26 08:21:34,287] INFO Client environment:user.name=appuser (org.apache.zookeeper.ZooKeeper) kafka | [2024-04-26 08:21:34,287] INFO Client environment:user.home=/home/appuser (org.apache.zookeeper.ZooKeeper) kafka | [2024-04-26 08:21:34,287] INFO Client environment:user.dir=/home/appuser (org.apache.zookeeper.ZooKeeper) kafka | [2024-04-26 08:21:34,287] INFO Client environment:os.memory.free=493MB (org.apache.zookeeper.ZooKeeper) kafka | [2024-04-26 08:21:34,287] INFO Client environment:os.memory.max=8042MB (org.apache.zookeeper.ZooKeeper) kafka | [2024-04-26 08:21:34,287] INFO Client environment:os.memory.total=504MB (org.apache.zookeeper.ZooKeeper) kafka | [2024-04-26 08:21:34,290] INFO Initiating client connection, connectString=zookeeper:2181 sessionTimeout=40000 watcher=io.confluent.admin.utils.ZookeeperConnectionWatcher@b7f23d9 (org.apache.zookeeper.ZooKeeper) kafka | [2024-04-26 08:21:34,293] INFO Setting -D jdk.tls.rejectClientInitiatedRenegotiation=true to disable client-initiated TLS renegotiation (org.apache.zookeeper.common.X509Util) kafka | [2024-04-26 08:21:34,298] INFO jute.maxbuffer value is 1048575 Bytes (org.apache.zookeeper.ClientCnxnSocket) kafka | [2024-04-26 08:21:34,307] INFO zookeeper.request.timeout value is 0. feature enabled=false (org.apache.zookeeper.ClientCnxn) kafka | [2024-04-26 08:21:34,334] INFO Opening socket connection to server zookeeper/172.17.0.4:2181. 
(org.apache.zookeeper.ClientCnxn) kafka | [2024-04-26 08:21:34,334] INFO SASL config status: Will not attempt to authenticate using SASL (unknown error) (org.apache.zookeeper.ClientCnxn) kafka | [2024-04-26 08:21:34,343] INFO Socket connection established, initiating session, client: /172.17.0.8:54350, server: zookeeper/172.17.0.4:2181 (org.apache.zookeeper.ClientCnxn) kafka | [2024-04-26 08:21:34,378] INFO Session establishment complete on server zookeeper/172.17.0.4:2181, session id = 0x1000003a6a90000, negotiated timeout = 40000 (org.apache.zookeeper.ClientCnxn) kafka | [2024-04-26 08:21:34,501] INFO Session: 0x1000003a6a90000 closed (org.apache.zookeeper.ZooKeeper) kafka | [2024-04-26 08:21:34,502] INFO EventThread shut down for session: 0x1000003a6a90000 (org.apache.zookeeper.ClientCnxn) kafka | Using log4j config /etc/kafka/log4j.properties policy-apex-pdp | connections.max.idle.ms = 540000 policy-apex-pdp | default.api.timeout.ms = 60000 policy-apex-pdp | enable.auto.commit = true policy-apex-pdp | exclude.internal.topics = true policy-apex-pdp | fetch.max.bytes = 52428800 policy-apex-pdp | fetch.max.wait.ms = 500 policy-apex-pdp | fetch.min.bytes = 1 policy-apex-pdp | group.id = 385d2de3-e329-4c2e-8254-58c110e4f277 policy-apex-pdp | group.instance.id = null policy-apex-pdp | heartbeat.interval.ms = 3000 policy-apex-pdp | interceptor.classes = [] policy-apex-pdp | internal.leave.group.on.close = true policy-apex-pdp | internal.throw.on.fetch.stable.offset.unsupported = false policy-apex-pdp | isolation.level = read_uncommitted policy-apex-pdp | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-apex-pdp | max.partition.fetch.bytes = 1048576 policy-apex-pdp | max.poll.interval.ms = 300000 policy-apex-pdp | max.poll.records = 500 policy-apex-pdp | metadata.max.age.ms = 300000 policy-apex-pdp | metric.reporters = [] policy-apex-pdp | metrics.num.samples = 2 policy-apex-pdp | metrics.recording.level = INFO policy-apex-pdp | 
metrics.sample.window.ms = 30000 policy-apex-pdp | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] policy-apex-pdp | receive.buffer.bytes = 65536 policy-apex-pdp | reconnect.backoff.max.ms = 1000 policy-apex-pdp | reconnect.backoff.ms = 50 policy-apex-pdp | request.timeout.ms = 30000 policy-apex-pdp | retry.backoff.ms = 100 policy-apex-pdp | sasl.client.callback.handler.class = null policy-apex-pdp | sasl.jaas.config = null policy-apex-pdp | sasl.kerberos.kinit.cmd = /usr/bin/kinit policy-apex-pdp | sasl.kerberos.min.time.before.relogin = 60000 policy-apex-pdp | sasl.kerberos.service.name = null policy-apex-pdp | sasl.kerberos.ticket.renew.jitter = 0.05 policy-apex-pdp | sasl.kerberos.ticket.renew.window.factor = 0.8 policy-apex-pdp | sasl.login.callback.handler.class = null policy-apex-pdp | sasl.login.class = null policy-apex-pdp | sasl.login.connect.timeout.ms = null policy-apex-pdp | sasl.login.read.timeout.ms = null policy-apex-pdp | sasl.login.refresh.buffer.seconds = 300 policy-apex-pdp | sasl.login.refresh.min.period.seconds = 60 policy-apex-pdp | sasl.login.refresh.window.factor = 0.8 policy-apex-pdp | sasl.login.refresh.window.jitter = 0.05 policy-db-migrator | > upgrade 0180-jpatoscacapabilityassignment_properties.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilityassignment_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGTEXT NULL, PROPERTIES_KEY VARCHAR(255) NULL) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0190-jpatoscacapabilitytype_metadata.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilitytype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) policy-db-migrator | 
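The `policy-apex-pdp` lines above dump the Kafka consumer configuration one `key = value` pair per log line. A minimal sketch (the `policy-apex-pdp | ` prefix and spacing are assumed from this log; this is not an official Kafka or ONAP parsing API) that collects those pairs into a dict for inspection:

```python
# Sketch: extract "key = value" consumer-config pairs from log lines
# shaped like the policy-apex-pdp output above (prefix format assumed).
import re

CONFIG_RE = re.compile(r"^policy-apex-pdp \| ([\w.]+) = (.*)$")

def parse_config(lines):
    """Return a dict of config-name -> raw string value."""
    config = {}
    for line in lines:
        m = CONFIG_RE.match(line.strip())
        if m:
            config[m.group(1)] = m.group(2).strip()
    return config

# Sample lines copied from the log above.
sample = [
    "policy-apex-pdp | fetch.max.bytes = 52428800",
    "policy-apex-pdp | group.id = 385d2de3-e329-4c2e-8254-58c110e4f277",
    "policy-apex-pdp | sasl.jaas.config = null",
]
cfg = parse_config(sample)
print(cfg["fetch.max.bytes"])  # -> 52428800
```

Values stay as raw strings (including the literal `null`), which is usually enough for diffing the effective config between two CSIT runs.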
-------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0200-jpatoscacapabilitytype_properties.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilitytype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0210-jpatoscadatatype_constraints.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscadatatype_constraints (name VARCHAR(120) NULL, version VARCHAR(20) NULL, CONSTRAINTS VARCHAR(255) NULL) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0220-jpatoscadatatype_metadata.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscadatatype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0230-jpatoscadatatype_properties.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscadatatype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0240-jpatoscanodetemplate_metadata.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscanodetemplate_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0250-jpatoscanodetemplate_properties.sql policy-db-migrator | 
-------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscanodetemplate_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGTEXT NULL, PROPERTIES_KEY VARCHAR(255) NULL) policy-db-migrator | -------------- policy-db-migrator | kafka | ===> Launching ... kafka | ===> Launching kafka ... grafana | logger=migrator t=2024-04-26T08:21:34.167835003Z level=info msg="Migration successfully executed" id="Add index for dashboard_title" duration=980.521µs policy-db-migrator | policy-apex-pdp | sasl.login.retry.backoff.max.ms = 10000 kafka | [2024-04-26 08:21:35,224] INFO Registered kafka:type=kafka.Log4jController MBean (kafka.utils.Log4jControllerRegistration$) mariadb | mariadb | 2024-04-26 08:21:28+00:00 [Note] [Entrypoint]: Database files initialized mariadb | 2024-04-26 08:21:28+00:00 [Note] [Entrypoint]: Starting temporary server prometheus | ts=2024-04-26T08:21:32.336Z caller=main.go:573 level=info msg="No time or size retention was set so using the default time retention" duration=15d policy-db-migrator | > upgrade 0260-jpatoscanodetype_metadata.sql grafana | logger=migrator t=2024-04-26T08:21:34.171196206Z level=info msg="Executing migration" id="delete tags for deleted dashboards" policy-apex-pdp | sasl.login.retry.backoff.ms = 100 kafka | [2024-04-26 08:21:35,533] INFO Setting -D jdk.tls.rejectClientInitiatedRenegotiation=true to disable client-initiated TLS renegotiation (org.apache.zookeeper.common.X509Util) zookeeper | ===> User mariadb | 2024-04-26 08:21:28+00:00 [Note] [Entrypoint]: Waiting for server startup mariadb | 2024-04-26 8:21:28 0 [Note] mariadbd (server 10.10.2-MariaDB-1:10.10.2+maria~ubu2204) starting as process 98 ... 
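Each numbered `policy-db-migrator` upgrade script above uses `CREATE TABLE IF NOT EXISTS`, so re-running a script is a no-op rather than an error. A small sketch of that idempotent pattern, using SQLite in place of the MariaDB instance from this run (one table borrowed from the log, column types simplified for the illustration):

```python
# Sketch of the idempotent DDL pattern in the db-migrator output above:
# executing the same CREATE TABLE IF NOT EXISTS twice must not fail.
import sqlite3

DDL = """CREATE TABLE IF NOT EXISTS jpatoscadatatype_metadata (
    name VARCHAR(120) NULL,
    version VARCHAR(20) NULL,
    METADATA VARCHAR(255) NULL,
    METADATA_KEY VARCHAR(255) NULL
)"""

conn = sqlite3.connect(":memory:")
conn.execute(DDL)
conn.execute(DDL)  # second run is a no-op, not an error
tables = [r[0] for r in conn.execute(
    "SELECT name FROM sqlite_master WHERE type='table'")]
print(tables)  # -> ['jpatoscadatatype_metadata']
```

The same property is what lets the migrator apply its whole ordered script sequence on every container start without tracking which DDL already ran.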
mariadb | 2024-04-26 8:21:28 0 [Note] InnoDB: Compressed tables use zlib 1.2.11 prometheus | ts=2024-04-26T08:21:32.336Z caller=main.go:617 level=info msg="Starting Prometheus Server" mode=server version="(version=2.51.2, branch=HEAD, revision=b4c0ab52c3e9b940ab803581ddae9b3d9a452337)" policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-26T08:21:34.171555964Z level=info msg="Migration successfully executed" id="delete tags for deleted dashboards" duration=359.388µs policy-apex-pdp | sasl.mechanism = GSSAPI kafka | [2024-04-26 08:21:35,635] INFO Registered signal handlers for TERM, INT, HUP (org.apache.kafka.common.utils.LoggingSignalHandler) zookeeper | uid=1000(appuser) gid=1000(appuser) groups=1000(appuser) mariadb | 2024-04-26 8:21:28 0 [Note] InnoDB: Number of transaction pools: 1 mariadb | 2024-04-26 8:21:28 0 [Note] InnoDB: Using crc32 + pclmulqdq instructions mariadb | 2024-04-26 8:21:28 0 [Note] mariadbd: O_TMPFILE is not supported on /tmp (disabling future attempts) prometheus | ts=2024-04-26T08:21:32.336Z caller=main.go:622 level=info build_context="(go=go1.22.2, platform=linux/amd64, user=root@b63f02a423d9, date=20240410-14:05:54, tags=netgo,builtinassets,stringlabels)" policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscanodetype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) grafana | logger=migrator t=2024-04-26T08:21:34.176987155Z level=info msg="Executing migration" id="delete stars for deleted dashboards" policy-apex-pdp | sasl.oauthbearer.clock.skew.seconds = 30 kafka | [2024-04-26 08:21:35,637] INFO starting (kafka.server.KafkaServer) zookeeper | ===> Configuring ... 
mariadb | 2024-04-26 8:21:28 0 [Warning] mariadbd: io_uring_queue_init() failed with ENOSYS: check seccomp filters, and the kernel version (newer than 5.1 required) mariadb | 2024-04-26 8:21:28 0 [Warning] InnoDB: liburing disabled: falling back to innodb_use_native_aio=OFF mariadb | 2024-04-26 8:21:28 0 [Note] InnoDB: Initializing buffer pool, total size = 128.000MiB, chunk size = 2.000MiB prometheus | ts=2024-04-26T08:21:32.336Z caller=main.go:623 level=info host_details="(Linux 4.15.0-192-generic #203-Ubuntu SMP Wed Aug 10 17:40:03 UTC 2022 x86_64 prometheus (none))" policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-26T08:21:34.17726839Z level=info msg="Migration successfully executed" id="delete stars for deleted dashboards" duration=280.605µs policy-apex-pdp | sasl.oauthbearer.expected.audience = null kafka | [2024-04-26 08:21:35,637] INFO Connecting to zookeeper on zookeeper:2181 (kafka.server.KafkaServer) zookeeper | ===> Running preflight checks ... mariadb | 2024-04-26 8:21:28 0 [Note] InnoDB: Completed initialization of buffer pool mariadb | 2024-04-26 8:21:28 0 [Note] InnoDB: File system buffers for log disabled (block size=512 bytes) mariadb | 2024-04-26 8:21:28 0 [Note] InnoDB: 128 rollback segments are active. prometheus | ts=2024-04-26T08:21:32.336Z caller=main.go:624 level=info fd_limits="(soft=1048576, hard=1048576)" policy-db-migrator | grafana | logger=migrator t=2024-04-26T08:21:34.18153631Z level=info msg="Executing migration" id="Add index for dashboard_is_folder" policy-apex-pdp | sasl.oauthbearer.expected.issuer = null kafka | [2024-04-26 08:21:35,655] INFO [ZooKeeperClient Kafka server] Initializing a new session to zookeeper:2181. (kafka.zookeeper.ZooKeeperClient) zookeeper | ===> Check if /var/lib/zookeeper/data is writable ... mariadb | 2024-04-26 8:21:28 0 [Note] InnoDB: Setting file './ibtmp1' size to 12.000MiB. Physically writing the file full; Please wait ... 
mariadb | 2024-04-26 8:21:28 0 [Note] InnoDB: File './ibtmp1' size is now 12.000MiB. mariadb | 2024-04-26 8:21:28 0 [Note] InnoDB: log sequence number 45452; transaction id 14 prometheus | ts=2024-04-26T08:21:32.336Z caller=main.go:625 level=info vm_limits="(soft=unlimited, hard=unlimited)" policy-db-migrator | grafana | logger=migrator t=2024-04-26T08:21:34.18269694Z level=info msg="Migration successfully executed" id="Add index for dashboard_is_folder" duration=1.156669ms policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 kafka | [2024-04-26 08:21:35,660] INFO Client environment:zookeeper.version=3.8.4-9316c2a7a97e1666d8f4593f34dd6fc36ecc436c, built on 2024-02-12 22:16 UTC (org.apache.zookeeper.ZooKeeper) zookeeper | ===> Check if /var/lib/zookeeper/log is writable ... mariadb | 2024-04-26 8:21:28 0 [Note] Plugin 'FEEDBACK' is disabled. mariadb | 2024-04-26 8:21:28 0 [Warning] 'default-authentication-plugin' is MySQL 5.6 / 5.7 compatible option. To be implemented in later versions. mariadb | 2024-04-26 8:21:28 0 [Warning] 'user' entry 'root@mariadb' ignored in --skip-name-resolve mode. prometheus | ts=2024-04-26T08:21:32.341Z caller=web.go:568 level=info component=web msg="Start listening for connections" address=0.0.0.0:9090 policy-db-migrator | > upgrade 0270-jpatoscanodetype_properties.sql grafana | logger=migrator t=2024-04-26T08:21:34.186407412Z level=info msg="Executing migration" id="Add isPublic for dashboard" policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 kafka | [2024-04-26 08:21:35,660] INFO Client environment:host.name=7c7374bf05f8 (org.apache.zookeeper.ZooKeeper) zookeeper | ===> Launching ... mariadb | 2024-04-26 8:21:28 0 [Warning] 'proxies_priv' entry '@% root@mariadb' ignored in --skip-name-resolve mode. mariadb | 2024-04-26 8:21:28 0 [Note] mariadbd: ready for connections. 
mariadb | Version: '10.10.2-MariaDB-1:10.10.2+maria~ubu2204' socket: '/run/mysqld/mysqld.sock' port: 0 mariadb.org binary distribution prometheus | ts=2024-04-26T08:21:32.342Z caller=main.go:1129 level=info msg="Starting TSDB ..." policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-26T08:21:34.191278393Z level=info msg="Migration successfully executed" id="Add isPublic for dashboard" duration=4.869141ms policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 kafka | [2024-04-26 08:21:35,661] INFO Client environment:java.version=11.0.22 (org.apache.zookeeper.ZooKeeper) zookeeper | ===> Launching zookeeper ... simulator | Policy simulator config file: /opt/app/policy/simulators/etc/mounted/simParameters.json mariadb | 2024-04-26 08:21:29+00:00 [Note] [Entrypoint]: Temporary server started. mariadb | 2024-04-26 08:21:31+00:00 [Note] [Entrypoint]: Creating user policy_user prometheus | ts=2024-04-26T08:21:32.343Z caller=tls_config.go:313 level=info component=web msg="Listening on" address=[::]:9090 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscanodetype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL) grafana | logger=migrator t=2024-04-26T08:21:34.202957715Z level=info msg="Executing migration" id="create data_source table" policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.url = null kafka | [2024-04-26 08:21:35,661] INFO Client environment:java.vendor=Azul Systems, Inc. 
(org.apache.zookeeper.ZooKeeper) zookeeper | [2024-04-26 08:21:32,740] INFO Reading configuration from: /etc/kafka/zookeeper.properties (org.apache.zookeeper.server.quorum.QuorumPeerConfig) simulator | overriding logback.xml simulator | 2024-04-26 08:21:34,931 INFO replacing 'HOST_NAME' with simulator in /opt/app/policy/simulators/etc/mounted/simParameters.json mariadb | 2024-04-26 08:21:31+00:00 [Note] [Entrypoint]: Securing system users (equivalent to running mysql_secure_installation) prometheus | ts=2024-04-26T08:21:32.343Z caller=tls_config.go:316 level=info component=web msg="TLS is disabled." http2=false address=[::]:9090 policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-26T08:21:34.20458319Z level=info msg="Migration successfully executed" id="create data_source table" duration=1.623205ms policy-apex-pdp | sasl.oauthbearer.scope.claim.name = scope kafka | [2024-04-26 08:21:35,661] INFO Client environment:java.home=/usr/lib/jvm/java-11-zulu-openjdk-ca (org.apache.zookeeper.ZooKeeper) zookeeper | [2024-04-26 08:21:32,746] INFO clientPortAddress is 0.0.0.0:2181 (org.apache.zookeeper.server.quorum.QuorumPeerConfig) policy-pap | Waiting for mariadb port 3306... 
simulator | 2024-04-26 08:21:35,035 INFO org.onap.policy.models.simulators starting mariadb | prometheus | ts=2024-04-26T08:21:32.346Z caller=head.go:616 level=info component=tsdb msg="Replaying on-disk memory mappable chunks if any" policy-db-migrator | grafana | logger=migrator t=2024-04-26T08:21:34.208950455Z level=info msg="Executing migration" id="add index data_source.account_id" policy-apex-pdp | sasl.oauthbearer.sub.claim.name = sub kafka | [2024-04-26 08:21:35,661] INFO Client environment:java.class.path=/usr/bin/../share/java/kafka/scala-logging_2.13-3.9.4.jar:/usr/bin/../share/java/kafka/jersey-common-2.39.1.jar:/usr/bin/../share/java/kafka/swagger-annotations-2.2.8.jar:/usr/bin/../share/java/kafka/connect-json-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/kafka-storage-api-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/commons-validator-1.7.jar:/usr/bin/../share/java/kafka/javax.servlet-api-3.1.0.jar:/usr/bin/../share/java/kafka/aopalliance-repackaged-2.6.1.jar:/usr/bin/../share/java/kafka/netty-transport-4.1.100.Final.jar:/usr/bin/../share/java/kafka/rocksdbjni-7.9.2.jar:/usr/bin/../share/java/kafka/jetty-http-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/kafka-log4j-appender-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jackson-annotations-2.13.5.jar:/usr/bin/../share/java/kafka/commons-io-2.11.0.jar:/usr/bin/../share/java/kafka/kafka-clients-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/javax.activation-api-1.2.0.jar:/usr/bin/../share/java/kafka/jetty-security-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/commons-cli-1.4.jar:/usr/bin/../share/java/kafka/jetty-server-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/slf4j-reload4j-1.7.36.jar:/usr/bin/../share/java/kafka/scala-reflect-2.13.11.jar:/usr/bin/../share/java/kafka/kafka-server-common-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/kafka-group-coordinator-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/netty-buffer-4.1.100.Final.jar:/usr/bin/../share/java/kafka/connect-runtime-7.6.1-ccs.jar:/usr/bin/../share
/java/kafka/trogdor-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/connect-api-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jakarta.ws.rs-api-2.1.6.jar:/usr/bin/../share/java/kafka/jakarta.annotation-api-1.3.5.jar:/usr/bin/../share/java/kafka/kafka-streams-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/scala-java8-compat_2.13-1.0.2.jar:/usr/bin/../share/java/kafka/javax.ws.rs-api-2.1.1.jar:/usr/bin/../share/java/kafka/zookeeper-jute-3.8.4.jar:/usr/bin/../share/java/kafka/netty-resolver-4.1.100.Final.jar:/usr/bin/../share/java/kafka/hk2-api-2.6.1.jar:/usr/bin/../share/java/kafka/kafka-storage-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/zstd-jni-1.5.5-1.jar:/usr/bin/../share/java/kafka/jackson-dataformat-csv-2.13.5.jar:/usr/bin/../share/java/kafka/kafka.jar:/usr/bin/../share/java/kafka/scala-library-2.13.11.jar:/usr/bin/../share/java/kafka/jakarta.inject-2.6.1.jar:/usr/bin/../share/java/kafka/jakarta.xml.bind-api-2.3.3.jar:/usr/bin/../share/java/kafka/jetty-continuation-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/connect-transforms-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jose4j-0.9.4.jar:/usr/bin/../share/java/kafka/hk2-locator-2.6.1.jar:/usr/bin/../share/java/kafka/reflections-0.10.2.jar:/usr/bin/../share/java/kafka/slf4j-api-1.7.36.jar:/usr/bin/../share/java/kafka/paranamer-2.8.jar:/usr/bin/../share/java/kafka/commons-beanutils-1.9.4.jar:/usr/bin/../share/java/kafka/jaxb-api-2.3.1.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-2.39.1.jar:/usr/bin/../share/java/kafka/hk2-utils-2.6.1.jar:/usr/bin/../share/java/kafka/jackson-module-scala_2.13-2.13.5.jar:/usr/bin/../share/java/kafka/reload4j-1.2.25.jar:/usr/bin/../share/java/kafka/connect-mirror-client-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jetty-servlet-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/jackson-core-2.13.5.jar:/usr/bin/../share/java/kafka/jersey-hk2-2.39.1.jar:/usr/bin/../share/java/kafka/jackson-databind-2.13.5.jar:/usr/bin/../share/java/kafka/jetty-servlets-9.4.54.v20240208.jar:/usr/bin/.
./share/java/kafka/jersey-client-2.39.1.jar:/usr/bin/../share/java/kafka/osgi-resource-locator-1.0.3.jar:/usr/bin/../share/java/kafka/commons-digester-2.1.jar:/usr/bin/../share/java/kafka/netty-transport-native-epoll-4.1.100.Final.jar:/usr/bin/../share/java/kafka/argparse4j-0.7.0.jar:/usr/bin/../share/java/kafka/connect-mirror-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jackson-datatype-jdk8-2.13.5.jar:/usr/bin/../share/java/kafka/audience-annotations-0.12.0.jar:/usr/bin/../share/java/kafka/jackson-module-jaxb-annotations-2.13.5.jar:/usr/bin/../share/java/kafka/kafka-raft-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/javax.annotation-api-1.3.2.jar:/usr/bin/../share/java/kafka/maven-artifact-3.8.8.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-json-provider-2.13.5.jar:/usr/bin/../share/java/kafka/jakarta.validation-api-2.0.2.jar:/usr/bin/../share/java/kafka/zookeeper-3.8.4.jar:/usr/bin/../share/java/kafka/jersey-server-2.39.1.jar:/usr/bin/../share/java/kafka/commons-lang3-3.8.1.jar:/usr/bin/../share/java/kafka/jopt-simple-5.0.4.jar:/usr/bin/../share/java/kafka/error_prone_annotations-2.10.0.jar:/usr/bin/../share/java/kafka/lz4-java-1.8.0.jar:/usr/bin/../share/java/kafka/jetty-util-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/kafka-tools-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jakarta.activation-api-1.2.2.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-core-2.39.1.jar:/usr/bin/../share/java/kafka/checker-qual-3.19.0.jar:/usr/bin/../share/java/kafka/kafka-metadata-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/pcollections-4.0.1.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-base-2.13.5.jar:/usr/bin/../share/java/kafka/commons-logging-1.2.jar:/usr/bin/../share/java/kafka/jsr305-3.0.2.jar:/usr/bin/../share/java/kafka/jetty-io-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/netty-codec-4.1.100.Final.jar:/usr/bin/../share/java/kafka/netty-transport-native-unix-common-4.1.100.Final.jar:/usr/bin/../share/java/kafka/scala-collection-compat_2.13-2.10.0.jar:/usr/bi
n/../share/java/kafka/metrics-core-2.2.0.jar:/usr/bin/../share/java/kafka/kafka-streams-test-utils-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/kafka-streams-examples-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/netty-handler-4.1.100.Final.jar:/usr/bin/../share/java/kafka/commons-collections-3.2.2.jar:/usr/bin/../share/java/kafka/javassist-3.29.2-GA.jar:/usr/bin/../share/java/kafka/caffeine-2.9.3.jar:/usr/bin/../share/java/kafka/plexus-utils-3.3.1.jar:/usr/bin/../share/java/kafka/kafka-tools-api-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/activation-1.1.1.jar:/usr/bin/../share/java/kafka/netty-common-4.1.100.Final.jar:/usr/bin/../share/java/kafka/kafka-streams-scala_2.13-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/metrics-core-4.1.12.1.jar:/usr/bin/../share/java/kafka/jline-3.25.1.jar:/usr/bin/../share/java/kafka/jetty-client-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/kafka_2.13-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/connect-basic-auth-extension-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jetty-util-ajax-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/snappy-java-1.1.10.5.jar:/usr/bin/../share/java/kafka/kafka-shell-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/netty-transport-classes-epoll-4.1.100.Final.jar:/usr/bin/../share/java/confluent-telemetry/* (org.apache.zookeeper.ZooKeeper) zookeeper | [2024-04-26 08:21:32,746] INFO secureClientPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig) policy-pap | mariadb (172.17.0.2:3306) open simulator | 2024-04-26 08:21:35,036 INFO org.onap.policy.models.simulators starting CDS gRPC Server Properties mariadb | 2024-04-26 08:21:31+00:00 [Warn] [Entrypoint]: /usr/local/bin/docker-entrypoint.sh: ignoring /docker-entrypoint-initdb.d/db.conf prometheus | ts=2024-04-26T08:21:32.346Z caller=head.go:698 level=info component=tsdb msg="On-disk memory mappable chunks replay completed" duration=2.2µs policy-db-migrator | grafana | logger=migrator t=2024-04-26T08:21:34.210523186Z level=info msg="Migration 
successfully executed" id="add index data_source.account_id" duration=1.571111ms policy-apex-pdp | sasl.oauthbearer.token.endpoint.url = null kafka | [2024-04-26 08:21:35,661] INFO Client environment:java.library.path=/usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.ZooKeeper) zookeeper | [2024-04-26 08:21:32,746] INFO observerMasterPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig) policy-pap | Waiting for kafka port 9092... simulator | 2024-04-26 08:21:35,310 INFO org.onap.policy.models.simulators starting org.onap.policy.simulators.AaiSimulatorJaxRs_RESOURCE_LOCATION mariadb | prometheus | ts=2024-04-26T08:21:32.346Z caller=head.go:706 level=info component=tsdb msg="Replaying WAL, this may take a while" policy-db-migrator | > upgrade 0280-jpatoscapolicy_metadata.sql grafana | logger=migrator t=2024-04-26T08:21:34.214199566Z level=info msg="Executing migration" id="add unique index data_source.account_id_name" policy-apex-pdp | security.protocol = PLAINTEXT zookeeper | [2024-04-26 08:21:32,746] INFO metricsProvider.className is org.apache.zookeeper.metrics.impl.DefaultMetricsProvider (org.apache.zookeeper.server.quorum.QuorumPeerConfig) policy-pap | kafka (172.17.0.8:9092) open kafka | [2024-04-26 08:21:35,661] INFO Client environment:java.io.tmpdir=/tmp (org.apache.zookeeper.ZooKeeper) simulator | 2024-04-26 08:21:35,315 INFO org.onap.policy.models.simulators starting A&AI simulator mariadb | 2024-04-26 08:21:31+00:00 [Note] [Entrypoint]: /usr/local/bin/docker-entrypoint.sh: running /docker-entrypoint-initdb.d/db.sh prometheus | ts=2024-04-26T08:21:32.346Z caller=head.go:778 level=info component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0 policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-26T08:21:34.215160316Z level=info msg="Migration successfully executed" id="add unique index data_source.account_id_name" duration=960.471µs policy-apex-pdp | security.providers = null zookeeper | 
[2024-04-26 08:21:32,748] INFO autopurge.snapRetainCount set to 3 (org.apache.zookeeper.server.DatadirCleanupManager) policy-pap | Waiting for api port 6969... kafka | [2024-04-26 08:21:35,662] INFO Client environment:java.compiler= (org.apache.zookeeper.ZooKeeper) simulator | 2024-04-26 08:21:35,436 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-2e61d218==org.glassfish.jersey.servlet.ServletContainer@60d118f1{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=A&AI simulator, host=0.0.0.0, port=6666, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@64a8c844{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@3f6db3fb{/,null,STOPPED}, connector=A&AI simulator@6f152006{HTTP/1.1, (http/1.1)}{0.0.0.0:6666}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-2e61d218==org.glassfish.jersey.servlet.ServletContainer@60d118f1{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: WAITED-START mariadb | #!/bin/bash -xv prometheus | ts=2024-04-26T08:21:32.346Z caller=head.go:815 level=info component=tsdb msg="WAL replay completed" checkpoint_replay_duration=27.722µs wal_replay_duration=292.284µs wbl_replay_duration=250ns total_replay_duration=350.117µs policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicy_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) grafana | logger=migrator t=2024-04-26T08:21:34.220641619Z level=info msg="Executing migration" id="drop index IDX_data_source_account_id - v1" policy-apex-pdp | send.buffer.bytes = 131072 zookeeper | [2024-04-26 08:21:32,748] INFO autopurge.purgeInterval set to 0 (org.apache.zookeeper.server.DatadirCleanupManager) policy-pap | api (172.17.0.7:6969) open kafka | [2024-04-26 08:21:35,662] INFO Client environment:os.name=Linux (org.apache.zookeeper.ZooKeeper) 
simulator | 2024-04-26 08:21:35,448 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-2e61d218==org.glassfish.jersey.servlet.ServletContainer@60d118f1{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=A&AI simulator, host=0.0.0.0, port=6666, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@64a8c844{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@3f6db3fb{/,null,STOPPED}, connector=A&AI simulator@6f152006{HTTP/1.1, (http/1.1)}{0.0.0.0:6666}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-2e61d218==org.glassfish.jersey.servlet.ServletContainer@60d118f1{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING mariadb | # Copyright 2019,2021 AT&T Intellectual Property. All rights reserved prometheus | ts=2024-04-26T08:21:32.351Z caller=main.go:1150 level=info fs_type=EXT4_SUPER_MAGIC policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-26T08:21:34.221514864Z level=info msg="Migration successfully executed" id="drop index IDX_data_source_account_id - v1" duration=873.755µs policy-apex-pdp | session.timeout.ms = 45000 zookeeper | [2024-04-26 08:21:32,748] INFO Purge task is not scheduled. 
(org.apache.zookeeper.server.DatadirCleanupManager) policy-pap | Policy pap config file: /opt/app/policy/pap/etc/papParameters.yaml kafka | [2024-04-26 08:21:35,662] INFO Client environment:os.arch=amd64 (org.apache.zookeeper.ZooKeeper) simulator | 2024-04-26 08:21:35,451 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-2e61d218==org.glassfish.jersey.servlet.ServletContainer@60d118f1{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=A&AI simulator, host=0.0.0.0, port=6666, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@64a8c844{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@3f6db3fb{/,null,STOPPED}, connector=A&AI simulator@6f152006{HTTP/1.1, (http/1.1)}{0.0.0.0:6666}, jettyThread=Thread[A&AI simulator-6666,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-2e61d218==org.glassfish.jersey.servlet.ServletContainer@60d118f1{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING mariadb | # Modifications Copyright (c) 2022 Nordix Foundation. 
prometheus | ts=2024-04-26T08:21:32.351Z caller=main.go:1153 level=info msg="TSDB started" policy-db-migrator | grafana | logger=migrator t=2024-04-26T08:21:34.225524751Z level=info msg="Executing migration" id="drop index UQE_data_source_account_id_name - v1" policy-apex-pdp | socket.connection.setup.timeout.max.ms = 30000 zookeeper | [2024-04-26 08:21:32,748] WARN Either no config or no quorum defined in config, running in standalone mode (org.apache.zookeeper.server.quorum.QuorumPeerMain) policy-pap | PDP group configuration file: /opt/app/policy/pap/etc/mounted/groups.json kafka | [2024-04-26 08:21:35,662] INFO Client environment:os.version=4.15.0-192-generic (org.apache.zookeeper.ZooKeeper) simulator | 2024-04-26 08:21:35,458 INFO jetty-11.0.20; built: 2024-01-29T21:04:22.394Z; git: 922f8dc188f7011e60d0361de585fd4ac4d63064; jvm 17.0.11+9-alpine-r0 mariadb | # prometheus | ts=2024-04-26T08:21:32.352Z caller=main.go:1335 level=info msg="Loading configuration file" filename=/etc/prometheus/prometheus.yml policy-db-migrator | policy-db-migrator | > upgrade 0290-jpatoscapolicy_properties.sql policy-apex-pdp | socket.connection.setup.timeout.ms = 10000 policy-apex-pdp | ssl.cipher.suites = null policy-pap | kafka | [2024-04-26 08:21:35,662] INFO Client environment:user.name=appuser (org.apache.zookeeper.ZooKeeper) simulator | 2024-04-26 08:21:35,551 INFO Session workerName=node0 mariadb | # Licensed under the Apache License, Version 2.0 (the "License"); prometheus | ts=2024-04-26T08:21:32.354Z caller=main.go:1372 level=info msg="Completed loading of configuration file" filename=/etc/prometheus/prometheus.yml totalDuration=1.83986ms db_storage=1.79µs remote_storage=2.35µs web_handler=870ns query_engine=1.3µs scrape=516.026µs scrape_sd=261.273µs notify=41.052µs notify_sd=21.161µs rules=2.4µs tracing=9.5µs policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicy_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, 
PROPERTIES LONGTEXT NULL, PROPERTIES_KEY VARCHAR(255) NULL) policy-pap | . ____ _ __ _ _ grafana | logger=migrator t=2024-04-26T08:21:34.226405806Z level=info msg="Migration successfully executed" id="drop index UQE_data_source_account_id_name - v1" duration=876.455µs grafana | logger=migrator t=2024-04-26T08:21:34.229338907Z level=info msg="Executing migration" id="Rename table data_source to data_source_v1 - v1" kafka | [2024-04-26 08:21:35,662] INFO Client environment:user.home=/home/appuser (org.apache.zookeeper.ZooKeeper) simulator | 2024-04-26 08:21:36,115 INFO Using GSON for REST calls mariadb | # you may not use this file except in compliance with the License. prometheus | ts=2024-04-26T08:21:32.354Z caller=manager.go:163 level=info component="rule manager" msg="Starting rule manager..." policy-db-migrator | -------------- policy-apex-pdp | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] policy-pap | /\\ / ___'_ __ _ _(_)_ __ __ _ \ \ \ \ grafana | logger=migrator t=2024-04-26T08:21:34.236112347Z level=info msg="Migration successfully executed" id="Rename table data_source to data_source_v1 - v1" duration=6.77258ms grafana | logger=migrator t=2024-04-26T08:21:34.242213361Z level=info msg="Executing migration" id="create data_source table v2" kafka | [2024-04-26 08:21:35,662] INFO Client environment:user.dir=/home/appuser (org.apache.zookeeper.ZooKeeper) simulator | 2024-04-26 08:21:36,210 INFO Started o.e.j.s.ServletContextHandler@3f6db3fb{/,null,AVAILABLE} mariadb | # You may obtain a copy of the License at prometheus | ts=2024-04-26T08:21:32.354Z caller=main.go:1114 level=info msg="Server is ready to receive web requests." 
policy-db-migrator |
policy-apex-pdp | ssl.endpoint.identification.algorithm = https
policy-pap | ( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \
grafana | logger=migrator t=2024-04-26T08:21:34.243786753Z level=info msg="Migration successfully executed" id="create data_source table v2" duration=1.573672ms
grafana | logger=migrator t=2024-04-26T08:21:34.249036284Z level=info msg="Executing migration" id="create index IDX_data_source_org_id - v2"
kafka | [2024-04-26 08:21:35,663] INFO Client environment:os.memory.free=1008MB (org.apache.zookeeper.ZooKeeper)
simulator | 2024-04-26 08:21:36,218 INFO Started A&AI simulator@6f152006{HTTP/1.1, (http/1.1)}{0.0.0.0:6666}
mariadb | #
policy-db-migrator |
policy-apex-pdp | ssl.engine.factory.class = null
policy-pap | \\/ ___)| |_)| | | | | || (_| | ) ) ) )
grafana | logger=migrator t=2024-04-26T08:21:34.250052637Z level=info msg="Migration successfully executed" id="create index IDX_data_source_org_id - v2" duration=1.017153ms
grafana | logger=migrator t=2024-04-26T08:21:34.253984739Z level=info msg="Executing migration" id="create index UQE_data_source_org_id_name - v2"
kafka | [2024-04-26 08:21:35,663] INFO Client environment:os.memory.max=1024MB (org.apache.zookeeper.ZooKeeper)
simulator | 2024-04-26 08:21:36,227 INFO Started Server@64a8c844{STARTING}[11.0.20,sto=0] @1945ms
mariadb | # http://www.apache.org/licenses/LICENSE-2.0
policy-db-migrator | > upgrade 0300-jpatoscapolicy_targets.sql
policy-apex-pdp | ssl.key.password = null
policy-pap | ' |____| .__|_| |_|_| |_\__, | / / / /
grafana | logger=migrator t=2024-04-26T08:21:34.259253532Z level=info msg="Migration successfully executed" id="create index UQE_data_source_org_id_name - v2" duration=5.261732ms
grafana | logger=migrator t=2024-04-26T08:21:34.262320039Z level=info msg="Executing migration" id="Drop old table data_source_v1 #2"
kafka | [2024-04-26 08:21:35,663] INFO Client environment:os.memory.total=1024MB (org.apache.zookeeper.ZooKeeper)
simulator | 2024-04-26 08:21:36,228 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-2e61d218==org.glassfish.jersey.servlet.ServletContainer@60d118f1{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}}, swaggerId=null, toString()=JettyServletServer(name=A&AI simulator, host=0.0.0.0, port=6666, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@64a8c844{STARTED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@3f6db3fb{/,null,AVAILABLE}, connector=A&AI simulator@6f152006{HTTP/1.1, (http/1.1)}{0.0.0.0:6666}, jettyThread=Thread[A&AI simulator-6666,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-2e61d218==org.glassfish.jersey.servlet.ServletContainer@60d118f1{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}})]: pending time is 4223 ms.
mariadb | #
policy-db-migrator | --------------
policy-apex-pdp | ssl.keymanager.algorithm = SunX509
policy-pap | =========|_|==============|___/=/_/_/_/
grafana | logger=migrator t=2024-04-26T08:21:34.262860758Z level=info msg="Migration successfully executed" id="Drop old table data_source_v1 #2" duration=540.648µs
grafana | logger=migrator t=2024-04-26T08:21:34.265923785Z level=info msg="Executing migration" id="Add column with_credentials"
kafka | [2024-04-26 08:21:35,666] INFO Initiating client connection, connectString=zookeeper:2181 sessionTimeout=18000 watcher=kafka.zookeeper.ZooKeeperClient$ZooKeeperClientWatcher$@447a020 (org.apache.zookeeper.ZooKeeper)
simulator | 2024-04-26 08:21:36,241 INFO org.onap.policy.models.simulators starting SDNC simulator
mariadb | # Unless required by applicable law or agreed to in writing, software
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicy_targets (name VARCHAR(120) NULL, version VARCHAR(20) NULL)
policy-apex-pdp | ssl.keystore.certificate.chain = null
policy-pap | :: Spring Boot :: (v3.1.10)
grafana | logger=migrator t=2024-04-26T08:21:34.267772421Z level=info msg="Migration successfully executed" id="Add column with_credentials" duration=1.843866ms
grafana | logger=migrator t=2024-04-26T08:21:34.271871703Z level=info msg="Executing migration" id="Add secure json data column"
kafka | [2024-04-26 08:21:35,670] INFO jute.maxbuffer value is 4194304 Bytes (org.apache.zookeeper.ClientCnxnSocket)
simulator | 2024-04-26 08:21:36,249 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-bf1ec20==org.glassfish.jersey.servlet.ServletContainer@636d8afb{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SDNC simulator, host=0.0.0.0, port=6668, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@70efb718{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@b70da4c{/,null,STOPPED}, connector=SDNC simulator@c5ee75e{HTTP/1.1, (http/1.1)}{0.0.0.0:6668}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-bf1ec20==org.glassfish.jersey.servlet.ServletContainer@636d8afb{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: WAITED-START
mariadb | # distributed under the License is distributed on an "AS IS" BASIS,
policy-db-migrator | --------------
policy-apex-pdp | ssl.keystore.key = null
policy-pap |
grafana | logger=migrator t=2024-04-26T08:21:34.273538628Z level=info msg="Migration successfully executed" id="Add secure json data column" duration=1.666105ms
grafana | logger=migrator t=2024-04-26T08:21:34.280012132Z level=info msg="Executing migration" id="Update data_source table charset"
kafka | [2024-04-26 08:21:35,676] INFO zookeeper.request.timeout value is 0. feature enabled=false (org.apache.zookeeper.ClientCnxn)
simulator | 2024-04-26 08:21:36,250 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-bf1ec20==org.glassfish.jersey.servlet.ServletContainer@636d8afb{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SDNC simulator, host=0.0.0.0, port=6668, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@70efb718{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@b70da4c{/,null,STOPPED}, connector=SDNC simulator@c5ee75e{HTTP/1.1, (http/1.1)}{0.0.0.0:6668}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-bf1ec20==org.glassfish.jersey.servlet.ServletContainer@636d8afb{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING
mariadb | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
policy-db-migrator |
policy-apex-pdp | ssl.keystore.location = null
policy-pap | [2024-04-26T08:21:53.911+00:00|INFO|Version|background-preinit] HV000001: Hibernate Validator 8.0.1.Final
grafana | logger=migrator t=2024-04-26T08:21:34.280049084Z level=info msg="Migration successfully executed" id="Update data_source table charset" duration=38.592µs
grafana | logger=migrator t=2024-04-26T08:21:34.283169876Z level=info msg="Executing migration" id="Update initial version to 1"
kafka | [2024-04-26 08:21:35,681] INFO [ZooKeeperClient Kafka server] Waiting until connected. (kafka.zookeeper.ZooKeeperClient)
simulator | 2024-04-26 08:21:36,251 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-bf1ec20==org.glassfish.jersey.servlet.ServletContainer@636d8afb{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SDNC simulator, host=0.0.0.0, port=6668, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@70efb718{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@b70da4c{/,null,STOPPED}, connector=SDNC simulator@c5ee75e{HTTP/1.1, (http/1.1)}{0.0.0.0:6668}, jettyThread=Thread[SDNC simulator-6668,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-bf1ec20==org.glassfish.jersey.servlet.ServletContainer@636d8afb{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING
mariadb | # See the License for the specific language governing permissions and
policy-db-migrator |
policy-apex-pdp | ssl.keystore.password = null
policy-pap | [2024-04-26T08:21:53.970+00:00|INFO|PolicyPapApplication|main] Starting PolicyPapApplication using Java 17.0.11 with PID 34 (/app/pap.jar started by policy in /opt/app/policy/pap/bin)
grafana | logger=migrator t=2024-04-26T08:21:34.283343754Z level=info msg="Migration successfully executed" id="Update initial version to 1" duration=174.678µs
grafana | logger=migrator t=2024-04-26T08:21:34.285563779Z level=info msg="Executing migration" id="Add read_only data column"
kafka | [2024-04-26 08:21:35,685] INFO Opening socket connection to server zookeeper/172.17.0.4:2181. (org.apache.zookeeper.ClientCnxn)
simulator | 2024-04-26 08:21:36,252 INFO jetty-11.0.20; built: 2024-01-29T21:04:22.394Z; git: 922f8dc188f7011e60d0361de585fd4ac4d63064; jvm 17.0.11+9-alpine-r0
mariadb | # limitations under the License.
policy-db-migrator | > upgrade 0310-jpatoscapolicytype_metadata.sql
policy-apex-pdp | ssl.keystore.type = JKS
policy-pap | [2024-04-26T08:21:53.971+00:00|INFO|PolicyPapApplication|main] No active profile set, falling back to 1 default profile: "default"
grafana | logger=migrator t=2024-04-26T08:21:34.288730453Z level=info msg="Migration successfully executed" id="Add read_only data column" duration=3.162574ms
grafana | logger=migrator t=2024-04-26T08:21:34.297110915Z level=info msg="Executing migration" id="Migrate logging ds to loki ds"
kafka | [2024-04-26 08:21:35,691] INFO Socket connection established, initiating session, client: /172.17.0.8:53476, server: zookeeper/172.17.0.4:2181 (org.apache.zookeeper.ClientCnxn)
simulator | 2024-04-26 08:21:36,256 INFO Session workerName=node0
mariadb |
policy-db-migrator | --------------
policy-apex-pdp | ssl.protocol = TLSv1.3
policy-pap | [2024-04-26T08:21:55.909+00:00|INFO|RepositoryConfigurationDelegate|main] Bootstrapping Spring Data JPA repositories in DEFAULT mode.
grafana | logger=migrator t=2024-04-26T08:21:34.297446972Z level=info msg="Migration successfully executed" id="Migrate logging ds to loki ds" duration=334.567µs
grafana | logger=migrator t=2024-04-26T08:21:34.303423131Z level=info msg="Executing migration" id="Update json_data with nulls"
kafka | [2024-04-26 08:21:35,700] INFO Session establishment complete on server zookeeper/172.17.0.4:2181, session id = 0x1000003a6a90001, negotiated timeout = 18000 (org.apache.zookeeper.ClientCnxn)
simulator | 2024-04-26 08:21:36,329 INFO Using GSON for REST calls
mariadb | for db in migration pooling policyadmin operationshistory clampacm policyclamp
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicytype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL)
policy-apex-pdp | ssl.provider = null
policy-pap | [2024-04-26T08:21:55.997+00:00|INFO|RepositoryConfigurationDelegate|main] Finished Spring Data repository scanning in 79 ms. Found 7 JPA repository interfaces.
grafana | logger=migrator t=2024-04-26T08:21:34.303659173Z level=info msg="Migration successfully executed" id="Update json_data with nulls" duration=235.522µs
grafana | logger=migrator t=2024-04-26T08:21:34.306145942Z level=info msg="Executing migration" id="Add uid column"
kafka | [2024-04-26 08:21:35,704] INFO [ZooKeeperClient Kafka server] Connected. (kafka.zookeeper.ZooKeeperClient)
mariadb | do
policy-db-migrator | --------------
policy-apex-pdp | ssl.secure.random.implementation = null
policy-pap | [2024-04-26T08:21:56.437+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.exception.ServiceExceptionHandler
grafana | logger=migrator t=2024-04-26T08:21:34.31058208Z level=info msg="Migration successfully executed" id="Add uid column" duration=4.435728ms
grafana | logger=migrator t=2024-04-26T08:21:34.316632893Z level=info msg="Executing migration" id="Update uid value"
kafka | [2024-04-26 08:21:36,015] INFO Cluster ID = qUquThiHQAKlsircSK68zw (kafka.server.KafkaServer)
simulator | 2024-04-26 08:21:36,339 INFO Started o.e.j.s.ServletContextHandler@b70da4c{/,null,AVAILABLE}
mariadb | mysql -uroot -p"${MYSQL_ROOT_PASSWORD}" --execute "CREATE DATABASE IF NOT EXISTS ${db};"
policy-db-migrator |
policy-apex-pdp | ssl.trustmanager.algorithm = PKIX
policy-pap | [2024-04-26T08:21:56.438+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.exception.ServiceExceptionHandler
grafana | logger=migrator t=2024-04-26T08:21:34.316837503Z level=info msg="Migration successfully executed" id="Update uid value" duration=204.86µs
grafana | logger=migrator t=2024-04-26T08:21:34.319691021Z level=info msg="Executing migration" id="Add unique index datasource_org_id_uid"
kafka | [2024-04-26 08:21:36,019] WARN No meta.properties file under dir /var/lib/kafka/data/meta.properties (kafka.server.BrokerMetadataCheckpoint)
simulator | 2024-04-26 08:21:36,342 INFO Started SDNC simulator@c5ee75e{HTTP/1.1, (http/1.1)}{0.0.0.0:6668}
mariadb | mysql -uroot -p"${MYSQL_ROOT_PASSWORD}" --execute "GRANT ALL PRIVILEGES ON \`${db}\`.* TO '${MYSQL_USER}'@'%' ;"
policy-db-migrator |
policy-apex-pdp | ssl.truststore.certificates = null
policy-pap | [2024-04-26T08:21:57.039+00:00|INFO|TomcatWebServer|main] Tomcat initialized with port(s): 6969 (http)
grafana | logger=migrator t=2024-04-26T08:21:34.321247441Z level=info msg="Migration successfully executed" id="Add unique index datasource_org_id_uid" duration=1.55435ms
grafana | logger=migrator t=2024-04-26T08:21:34.325785135Z level=info msg="Executing migration" id="add unique index datasource_org_id_is_default"
kafka | [2024-04-26 08:21:36,091] INFO KafkaConfig values:
simulator | 2024-04-26 08:21:36,342 INFO Started Server@70efb718{STARTING}[11.0.20,sto=0] @2060ms
mariadb | done
policy-db-migrator | > upgrade 0320-jpatoscapolicytype_properties.sql
policy-apex-pdp | ssl.truststore.location = null
policy-pap | [2024-04-26T08:21:57.048+00:00|INFO|Http11NioProtocol|main] Initializing ProtocolHandler ["http-nio-6969"]
grafana | logger=migrator t=2024-04-26T08:21:34.327766708Z level=info msg="Migration successfully executed" id="add unique index datasource_org_id_is_default" duration=1.984863ms
grafana | logger=migrator t=2024-04-26T08:21:34.342762741Z level=info msg="Executing migration" id="create api_key table"
kafka | advertised.listeners = PLAINTEXT://kafka:9092,PLAINTEXT_HOST://localhost:29092
mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp
simulator | 2024-04-26 08:21:36,342 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-bf1ec20==org.glassfish.jersey.servlet.ServletContainer@636d8afb{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}}, swaggerId=null, toString()=JettyServletServer(name=SDNC simulator, host=0.0.0.0, port=6668, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@70efb718{STARTED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@b70da4c{/,null,AVAILABLE}, connector=SDNC simulator@c5ee75e{HTTP/1.1, (http/1.1)}{0.0.0.0:6668}, jettyThread=Thread[SDNC simulator-6668,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-bf1ec20==org.glassfish.jersey.servlet.ServletContainer@636d8afb{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}})]: pending time is 4909 ms.
simulator | 2024-04-26 08:21:36,366 INFO org.onap.policy.models.simulators starting SO simulator
policy-db-migrator | --------------
policy-apex-pdp | ssl.truststore.password = null
policy-pap | [2024-04-26T08:21:57.051+00:00|INFO|StandardService|main] Starting service [Tomcat]
grafana | logger=migrator t=2024-04-26T08:21:34.344224647Z level=info msg="Migration successfully executed" id="create api_key table" duration=1.460446ms
grafana | logger=migrator t=2024-04-26T08:21:34.348641924Z level=info msg="Executing migration" id="add index api_key.account_id"
kafka | alter.config.policy.class.name = null
mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS migration;'
simulator | 2024-04-26 08:21:36,368 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-4e70a728==org.glassfish.jersey.servlet.ServletContainer@3238e994{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SO simulator, host=0.0.0.0, port=6669, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@b7838a9{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@696f0212{/,null,STOPPED}, connector=SO simulator@4e858e0a{HTTP/1.1, (http/1.1)}{0.0.0.0:6669}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-4e70a728==org.glassfish.jersey.servlet.ServletContainer@3238e994{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: WAITED-START
simulator | 2024-04-26 08:21:36,369 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-4e70a728==org.glassfish.jersey.servlet.ServletContainer@3238e994{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SO simulator, host=0.0.0.0, port=6669, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@b7838a9{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@696f0212{/,null,STOPPED}, connector=SO simulator@4e858e0a{HTTP/1.1, (http/1.1)}{0.0.0.0:6669}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-4e70a728==org.glassfish.jersey.servlet.ServletContainer@3238e994{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicytype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL)
policy-apex-pdp | ssl.truststore.type = JKS
policy-pap | [2024-04-26T08:21:57.051+00:00|INFO|StandardEngine|main] Starting Servlet engine: [Apache Tomcat/10.1.19]
grafana | logger=migrator t=2024-04-26T08:21:34.350395295Z level=info msg="Migration successfully executed" id="add index api_key.account_id" duration=1.752171ms
grafana | logger=migrator t=2024-04-26T08:21:34.356694311Z level=info msg="Executing migration" id="add index api_key.key"
kafka | alter.log.dirs.replication.quota.window.num = 11
mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `migration`.* TO '\''policy_user'\''@'\''%'\'' ;'
simulator | 2024-04-26 08:21:36,371 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-4e70a728==org.glassfish.jersey.servlet.ServletContainer@3238e994{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SO simulator, host=0.0.0.0, port=6669, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@b7838a9{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@696f0212{/,null,STOPPED}, connector=SO simulator@4e858e0a{HTTP/1.1, (http/1.1)}{0.0.0.0:6669}, jettyThread=Thread[SO simulator-6669,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-4e70a728==org.glassfish.jersey.servlet.ServletContainer@3238e994{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING
simulator | 2024-04-26 08:21:36,371 INFO jetty-11.0.20; built: 2024-01-29T21:04:22.394Z; git: 922f8dc188f7011e60d0361de585fd4ac4d63064; jvm 17.0.11+9-alpine-r0
policy-db-migrator | --------------
policy-apex-pdp | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
policy-pap | [2024-04-26T08:21:57.148+00:00|INFO|[/policy/pap/v1]|main] Initializing Spring embedded WebApplicationContext
grafana | logger=migrator t=2024-04-26T08:21:34.35766925Z level=info msg="Migration successfully executed" id="add index api_key.key" duration=975.01µs
grafana | logger=migrator t=2024-04-26T08:21:34.368199124Z level=info msg="Executing migration" id="add index api_key.account_id_name"
kafka | alter.log.dirs.replication.quota.window.size.seconds = 1
mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp
simulator | 2024-04-26 08:21:36,379 INFO Session workerName=node0
policy-db-migrator |
policy-apex-pdp |
policy-pap | [2024-04-26T08:21:57.148+00:00|INFO|ServletWebServerApplicationContext|main] Root WebApplicationContext: initialization completed in 3105 ms
grafana | logger=migrator t=2024-04-26T08:21:34.369257419Z level=info msg="Migration successfully executed" id="add index api_key.account_id_name" duration=1.057955ms
grafana | logger=migrator t=2024-04-26T08:21:34.376324344Z level=info msg="Executing migration" id="drop index IDX_api_key_account_id - v1"
kafka | authorizer.class.name =
mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS pooling;'
policy-db-migrator |
policy-apex-pdp | [2024-04-26T08:22:04.643+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1
policy-pap | [2024-04-26T08:21:57.537+00:00|INFO|LogHelper|main] HHH000204: Processing PersistenceUnitInfo [name: default]
grafana | logger=migrator t=2024-04-26T08:21:34.377927306Z level=info msg="Migration successfully executed" id="drop index IDX_api_key_account_id - v1" duration=1.602472ms
grafana | logger=migrator t=2024-04-26T08:21:34.383371267Z level=info msg="Executing migration" id="drop index UQE_api_key_key - v1"
kafka | auto.create.topics.enable = true
mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `pooling`.* TO '\''policy_user'\''@'\''%'\'' ;'
policy-db-migrator | > upgrade 0330-jpatoscapolicytype_targets.sql
policy-apex-pdp | [2024-04-26T08:22:04.644+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5
policy-pap | [2024-04-26T08:21:57.588+00:00|INFO|Version|main] HHH000412: Hibernate ORM core version 5.6.15.Final
grafana | logger=migrator t=2024-04-26T08:21:34.383938407Z level=info msg="Migration successfully executed" id="drop index UQE_api_key_key - v1" duration=571.04µs
grafana | logger=migrator t=2024-04-26T08:21:34.387712931Z level=info msg="Executing migration" id="drop index UQE_api_key_account_id_name - v1"
kafka | auto.include.jmx.reporter = true
mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp
policy-db-migrator | --------------
policy-apex-pdp | [2024-04-26T08:22:04.644+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1714119724643
policy-pap | [2024-04-26T08:21:57.928+00:00|INFO|HikariDataSource|main] HikariPool-1 - Starting...
grafana | logger=migrator t=2024-04-26T08:21:34.388565095Z level=info msg="Migration successfully executed" id="drop index UQE_api_key_account_id_name - v1" duration=851.434µs
grafana | logger=migrator t=2024-04-26T08:21:34.391441474Z level=info msg="Executing migration" id="Rename table api_key to api_key_v1 - v1"
kafka | auto.leader.rebalance.enable = true
mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS policyadmin;'
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicytype_targets (name VARCHAR(120) NULL, version VARCHAR(20) NULL)
policy-apex-pdp | [2024-04-26T08:22:04.644+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-385d2de3-e329-4c2e-8254-58c110e4f277-2, groupId=385d2de3-e329-4c2e-8254-58c110e4f277] Subscribed to topic(s): policy-pdp-pap
policy-pap | [2024-04-26T08:21:58.024+00:00|INFO|HikariPool|main] HikariPool-1 - Added connection org.mariadb.jdbc.Connection@51288417
grafana | logger=migrator t=2024-04-26T08:21:34.402531646Z level=info msg="Migration successfully executed" id="Rename table api_key to api_key_v1 - v1" duration=11.089822ms
grafana | logger=migrator t=2024-04-26T08:21:34.40513428Z level=info msg="Executing migration" id="create api_key table v2"
kafka | background.threads = 10
mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `policyadmin`.* TO '\''policy_user'\''@'\''%'\'' ;'
policy-db-migrator | --------------
policy-apex-pdp | [2024-04-26T08:22:04.644+00:00|INFO|InlineBusTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=41465092-4801-404b-834e-cb5739a089eb, alive=false, publisher=null]]: starting
policy-pap | [2024-04-26T08:21:58.025+00:00|INFO|HikariDataSource|main] HikariPool-1 - Start completed.
grafana | logger=migrator t=2024-04-26T08:21:34.405772874Z level=info msg="Migration successfully executed" id="create api_key table v2" duration=638.013µs
grafana | logger=migrator t=2024-04-26T08:21:34.416522648Z level=info msg="Executing migration" id="create index IDX_api_key_org_id - v2"
kafka | broker.heartbeat.interval.ms = 2000
mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp
policy-db-migrator |
policy-apex-pdp | [2024-04-26T08:22:04.655+00:00|INFO|ProducerConfig|main] ProducerConfig values:
policy-pap | [2024-04-26T08:21:58.052+00:00|INFO|Dialect|main] HHH000400: Using dialect: org.hibernate.dialect.MariaDB106Dialect
grafana | logger=migrator t=2024-04-26T08:21:34.418572744Z level=info msg="Migration successfully executed" id="create index IDX_api_key_org_id - v2" duration=2.048776ms
grafana | logger=migrator t=2024-04-26T08:21:34.42313171Z level=info msg="Executing migration" id="create index UQE_api_key_key - v2"
kafka | broker.id = 1
mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS operationshistory;'
simulator | 2024-04-26 08:21:36,455 INFO Using GSON for REST calls
policy-db-migrator |
policy-apex-pdp | acks = -1
policy-pap | [2024-04-26T08:21:59.507+00:00|INFO|JtaPlatformInitiator|main] HHH000490: Using JtaPlatform implementation: [org.hibernate.engine.transaction.jta.platform.internal.NoJtaPlatform]
grafana | logger=migrator t=2024-04-26T08:21:34.423777743Z level=info msg="Migration successfully executed" id="create index UQE_api_key_key - v2" duration=646.203µs
grafana | logger=migrator t=2024-04-26T08:21:34.428517768Z level=info msg="Executing migration" id="create index UQE_api_key_org_id_name - v2"
kafka | broker.id.generation.enable = true
mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `operationshistory`.* TO '\''policy_user'\''@'\''%'\'' ;'
simulator | 2024-04-26 08:21:36,468 INFO Started o.e.j.s.ServletContextHandler@696f0212{/,null,AVAILABLE}
policy-db-migrator | > upgrade 0340-jpatoscapolicytype_triggers.sql
policy-apex-pdp | auto.include.jmx.reporter = true
policy-pap | [2024-04-26T08:21:59.517+00:00|INFO|LocalContainerEntityManagerFactoryBean|main] Initialized JPA EntityManagerFactory for persistence unit 'default'
grafana | logger=migrator t=2024-04-26T08:21:34.429842365Z level=info msg="Migration successfully executed" id="create index UQE_api_key_org_id_name - v2" duration=1.327167ms
grafana | logger=migrator t=2024-04-26T08:21:34.433357647Z level=info msg="Executing migration" id="copy api_key v1 to v2"
kafka | broker.rack = null
mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp
simulator | 2024-04-26 08:21:36,470 INFO Started SO simulator@4e858e0a{HTTP/1.1, (http/1.1)}{0.0.0.0:6669}
policy-db-migrator | --------------
policy-apex-pdp | batch.size = 16384
policy-pap | [2024-04-26T08:22:00.029+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.repository.PdpGroupRepository
grafana | logger=migrator t=2024-04-26T08:21:34.43380459Z level=info msg="Migration successfully executed" id="copy api_key v1 to v2" duration=446.613µs
grafana | logger=migrator t=2024-04-26T08:21:34.440085735Z level=info msg="Executing migration" id="Drop old table api_key_v1"
kafka | broker.session.timeout.ms = 9000
mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS clampacm;'
simulator | 2024-04-26 08:21:36,471 INFO Started Server@b7838a9{STARTING}[11.0.20,sto=0] @2189ms
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicytype_triggers (name VARCHAR(120) NULL, version VARCHAR(20) NULL, TRIGGERS VARCHAR(255) NULL)
policy-apex-pdp | bootstrap.servers = [kafka:9092]
policy-pap | [2024-04-26T08:22:00.423+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.repository.PolicyStatusRepository
grafana | logger=migrator t=2024-04-26T08:21:34.440633083Z level=info msg="Migration successfully executed" id="Drop old table api_key_v1" duration=545.119µs
grafana | logger=migrator t=2024-04-26T08:21:34.44445086Z level=info msg="Executing migration" id="Update api_key table charset"
kafka | client.quota.callback.class = null
mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `clampacm`.* TO '\''policy_user'\''@'\''%'\'' ;'
simulator | 2024-04-26 08:21:36,471 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-4e70a728==org.glassfish.jersey.servlet.ServletContainer@3238e994{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}}, swaggerId=null, toString()=JettyServletServer(name=SO simulator, host=0.0.0.0, port=6669, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@b7838a9{STARTED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@696f0212{/,null,AVAILABLE}, connector=SO simulator@4e858e0a{HTTP/1.1, (http/1.1)}{0.0.0.0:6669}, jettyThread=Thread[SO simulator-6669,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-4e70a728==org.glassfish.jersey.servlet.ServletContainer@3238e994{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}})]: pending time is 4899 ms.
policy-db-migrator | --------------
policy-apex-pdp | buffer.memory = 33554432
policy-pap | [2024-04-26T08:22:00.536+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.repository.PolicyAuditRepository
grafana | logger=migrator t=2024-04-26T08:21:34.444482551Z level=info msg="Migration successfully executed" id="Update api_key table charset" duration=32.612µs
grafana | logger=migrator t=2024-04-26T08:21:34.447615043Z level=info msg="Executing migration" id="Add expires to api_key table"
kafka | compression.type = producer
mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp
simulator | 2024-04-26 08:21:36,472 INFO org.onap.policy.models.simulators starting VFC simulator
policy-db-migrator |
policy-apex-pdp | client.dns.lookup = use_all_dns_ips
policy-pap | [2024-04-26T08:22:00.853+00:00|INFO|ConsumerConfig|main] ConsumerConfig values:
grafana | logger=migrator t=2024-04-26T08:21:34.453051744Z level=info msg="Migration successfully executed" id="Add expires to api_key table" duration=5.436061ms
grafana | logger=migrator t=2024-04-26T08:21:34.457159615Z level=info msg="Executing migration" id="Add service account foreign key"
kafka | connection.failed.authentication.delay.ms = 100
mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS policyclamp;'
simulator | 2024-04-26 08:21:36,475 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-47a86fbb==org.glassfish.jersey.servlet.ServletContainer@f8e3b478{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=VFC simulator, host=0.0.0.0, port=6670, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@f478a81{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@19553973{/,null,STOPPED}, connector=VFC simulator@1c3146bc{HTTP/1.1, (http/1.1)}{0.0.0.0:6670}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-47a86fbb==org.glassfish.jersey.servlet.ServletContainer@f8e3b478{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: WAITED-START
policy-db-migrator |
policy-apex-pdp | client.id = producer-1
policy-pap | allow.auto.create.topics = true
grafana | logger=migrator t=2024-04-26T08:21:34.459843104Z level=info msg="Migration successfully executed" id="Add service account foreign key" duration=2.683039ms
grafana | logger=migrator t=2024-04-26T08:21:34.463091472Z level=info msg="Executing migration" id="set service account foreign key to nil if 0"
kafka | connections.max.idle.ms = 600000
mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `policyclamp`.* TO '\''policy_user'\''@'\''%'\'' ;'
policy-db-migrator | > upgrade 0350-jpatoscaproperty_constraints.sql
policy-apex-pdp | compression.type = none
policy-pap | auto.commit.interval.ms = 5000
grafana | logger=migrator t=2024-04-26T08:21:34.463265241Z level=info msg="Migration successfully executed" id="set service account foreign key to nil if 0" duration=173.789µs
grafana | logger=migrator t=2024-04-26T08:21:34.466498278Z level=info msg="Executing migration" id="Add last_used_at to api_key table"
kafka | connections.max.reauth.ms = 0
mariadb |
policy-db-migrator | --------------
policy-apex-pdp | connections.max.idle.ms = 540000
policy-pap | auto.include.jmx.reporter = true
grafana | logger=migrator t=2024-04-26T08:21:34.468938713Z level=info msg="Migration successfully executed" id="Add last_used_at to api_key table" duration=2.440335ms
grafana | logger=migrator t=2024-04-26T08:21:34.477165398Z level=info msg="Executing migration" id="Add is_revoked column to api_key table"
kafka | control.plane.listener.name = null
simulator | 2024-04-26 08:21:36,476 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-47a86fbb==org.glassfish.jersey.servlet.ServletContainer@f8e3b478{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=VFC simulator, host=0.0.0.0, port=6670, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@f478a81{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@19553973{/,null,STOPPED}, connector=VFC simulator@1c3146bc{HTTP/1.1, (http/1.1)}{0.0.0.0:6670}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-47a86fbb==org.glassfish.jersey.servlet.ServletContainer@f8e3b478{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscaproperty_constraints (parentLocalName VARCHAR(120) NULL, localName VARCHAR(120) NULL, parentKeyVersion VARCHAR(15) NULL, parentKeyName VARCHAR(120) NULL, CONSTRAINTS VARCHAR(255) NULL)
mariadb | mysql -uroot -p"${MYSQL_ROOT_PASSWORD}" --execute "FLUSH PRIVILEGES;"
policy-apex-pdp | delivery.timeout.ms = 120000
policy-pap | auto.offset.reset = latest
grafana | logger=migrator t=2024-04-26T08:21:34.479746152Z level=info msg="Migration successfully executed" id="Add is_revoked column to api_key table" duration=2.578324ms
grafana | logger=migrator t=2024-04-26T08:21:34.483866705Z level=info msg="Executing migration" id="create dashboard_snapshot table v4"
kafka | controlled.shutdown.enable = true
policy-db-migrator | --------------
mariadb | + mysql -uroot -psecret --execute 'FLUSH PRIVILEGES;'
policy-apex-pdp | enable.idempotence = true
policy-pap | bootstrap.servers = [kafka:9092]
grafana | logger=migrator t=2024-04-26T08:21:34.484640044Z level=info msg="Migration successfully executed" id="create dashboard_snapshot table v4" duration=773.649µs
grafana | logger=migrator t=2024-04-26T08:21:34.488396878Z level=info msg="Executing migration" id="drop table
dashboard_snapshot_v4 #1" kafka | controlled.shutdown.max.retries = 3 policy-db-migrator | mariadb | mysql -u"${MYSQL_USER}" -p"${MYSQL_PASSWORD}" -f policyclamp < /tmp/policy-clamp-create-tables.sql policy-apex-pdp | interceptor.classes = [] policy-pap | check.crcs = true grafana | logger=migrator t=2024-04-26T08:21:34.488896344Z level=info msg="Migration successfully executed" id="drop table dashboard_snapshot_v4 #1" duration=499.326µs grafana | logger=migrator t=2024-04-26T08:21:34.495097844Z level=info msg="Executing migration" id="create dashboard_snapshot table v5 #2" kafka | controlled.shutdown.retry.backoff.ms = 5000 simulator | 2024-04-26 08:21:36,481 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-47a86fbb==org.glassfish.jersey.servlet.ServletContainer@f8e3b478{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=VFC simulator, host=0.0.0.0, port=6670, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@f478a81{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@19553973{/,null,STOPPED}, connector=VFC simulator@1c3146bc{HTTP/1.1, (http/1.1)}{0.0.0.0:6670}, jettyThread=Thread[VFC simulator-6670,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-47a86fbb==org.glassfish.jersey.servlet.ServletContainer@f8e3b478{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING policy-db-migrator | policy-apex-pdp | key.serializer = class org.apache.kafka.common.serialization.StringSerializer policy-pap | client.dns.lookup = use_all_dns_ips grafana | logger=migrator t=2024-04-26T08:21:34.496449014Z level=info msg="Migration successfully executed" id="create dashboard_snapshot table v5 #2" duration=1.35839ms grafana | logger=migrator t=2024-04-26T08:21:34.501961568Z level=info msg="Executing migration" id="create index UQE_dashboard_snapshot_key - v5" kafka | controller.listener.names = 
null simulator | 2024-04-26 08:21:36,482 INFO jetty-11.0.20; built: 2024-01-29T21:04:22.394Z; git: 922f8dc188f7011e60d0361de585fd4ac4d63064; jvm 17.0.11+9-alpine-r0 policy-db-migrator | > upgrade 0360-jpatoscaproperty_metadata.sql mariadb | + mysql -upolicy_user -ppolicy_user -f policyclamp policy-apex-pdp | linger.ms = 0 policy-pap | client.id = consumer-db954cd2-8764-4a44-90af-3bb7f2069f83-1 grafana | logger=migrator t=2024-04-26T08:21:34.502778051Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_snapshot_key - v5" duration=817.303µs grafana | logger=migrator t=2024-04-26T08:21:34.506779577Z level=info msg="Executing migration" id="create index UQE_dashboard_snapshot_delete_key - v5" kafka | controller.quorum.append.linger.ms = 25 simulator | 2024-04-26 08:21:36,510 INFO Session workerName=node0 policy-db-migrator | -------------- mariadb | policy-apex-pdp | max.block.ms = 60000 policy-pap | client.rack = grafana | logger=migrator t=2024-04-26T08:21:34.507388758Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_snapshot_delete_key - v5" duration=609.352µs grafana | logger=migrator t=2024-04-26T08:21:34.515311037Z level=info msg="Executing migration" id="create index IDX_dashboard_snapshot_user_id - v5" kafka | controller.quorum.election.backoff.max.ms = 1000 simulator | 2024-04-26 08:21:36,575 INFO Using GSON for REST calls policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscaproperty_metadata (parentLocalName VARCHAR(120) NULL, localName VARCHAR(120) NULL, parentKeyVersion VARCHAR(15) NULL, parentKeyName VARCHAR(120) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) mariadb | 2024-04-26 08:21:32+00:00 [Note] [Entrypoint]: Stopping temporary server policy-apex-pdp | max.in.flight.requests.per.connection = 5 policy-pap | connections.max.idle.ms = 540000 grafana | logger=migrator t=2024-04-26T08:21:34.515887107Z level=info msg="Migration successfully executed" id="create index 
IDX_dashboard_snapshot_user_id - v5" duration=577.449µs grafana | logger=migrator t=2024-04-26T08:21:34.52564842Z level=info msg="Executing migration" id="alter dashboard_snapshot to mediumtext v2" kafka | controller.quorum.election.timeout.ms = 1000 simulator | 2024-04-26 08:21:36,588 INFO Started o.e.j.s.ServletContextHandler@19553973{/,null,AVAILABLE} policy-db-migrator | -------------- mariadb | 2024-04-26 8:21:32 0 [Note] mariadbd (initiated by: unknown): Normal shutdown policy-apex-pdp | max.request.size = 1048576 policy-pap | default.api.timeout.ms = 60000 grafana | logger=migrator t=2024-04-26T08:21:34.525729565Z level=info msg="Migration successfully executed" id="alter dashboard_snapshot to mediumtext v2" duration=79.244µs zookeeper | [2024-04-26 08:21:32,749] INFO Log4j 1.2 jmx support not found; jmx disabled. (org.apache.zookeeper.jmx.ManagedUtil) kafka | controller.quorum.fetch.timeout.ms = 2000 simulator | 2024-04-26 08:21:36,590 INFO Started VFC simulator@1c3146bc{HTTP/1.1, (http/1.1)}{0.0.0.0:6670} policy-db-migrator | mariadb | 2024-04-26 8:21:32 0 [Note] InnoDB: FTS optimize thread exiting. policy-apex-pdp | metadata.max.age.ms = 300000 policy-pap | enable.auto.commit = true zookeeper | [2024-04-26 08:21:32,749] INFO Reading configuration from: /etc/kafka/zookeeper.properties (org.apache.zookeeper.server.quorum.QuorumPeerConfig) grafana | logger=migrator t=2024-04-26T08:21:34.529023065Z level=info msg="Executing migration" id="Update dashboard_snapshot table charset" kafka | controller.quorum.request.timeout.ms = 2000 simulator | 2024-04-26 08:21:36,591 INFO Started Server@f478a81{STARTING}[11.0.20,sto=0] @2309ms policy-db-migrator | mariadb | 2024-04-26 8:21:32 0 [Note] InnoDB: Starting shutdown... 
policy-apex-pdp | metadata.max.idle.ms = 300000 policy-pap | exclude.internal.topics = true zookeeper | [2024-04-26 08:21:32,750] INFO clientPortAddress is 0.0.0.0:2181 (org.apache.zookeeper.server.quorum.QuorumPeerConfig) grafana | logger=migrator t=2024-04-26T08:21:34.529043506Z level=info msg="Migration successfully executed" id="Update dashboard_snapshot table charset" duration=21.031µs kafka | controller.quorum.retry.backoff.ms = 20 policy-db-migrator | > upgrade 0370-jpatoscarelationshiptype_metadata.sql simulator | 2024-04-26 08:21:36,592 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-47a86fbb==org.glassfish.jersey.servlet.ServletContainer@f8e3b478{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}}, swaggerId=null, toString()=JettyServletServer(name=VFC simulator, host=0.0.0.0, port=6670, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@f478a81{STARTED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@19553973{/,null,AVAILABLE}, connector=VFC simulator@1c3146bc{HTTP/1.1, (http/1.1)}{0.0.0.0:6670}, jettyThread=Thread[VFC simulator-6670,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-47a86fbb==org.glassfish.jersey.servlet.ServletContainer@f8e3b478{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}})]: pending time is 4885 ms. 
mariadb | 2024-04-26 8:21:32 0 [Note] InnoDB: Dumping buffer pool(s) to /var/lib/mysql/ib_buffer_pool policy-apex-pdp | metric.reporters = [] policy-pap | fetch.max.bytes = 52428800 zookeeper | [2024-04-26 08:21:32,750] INFO secureClientPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig) kafka | controller.quorum.voters = [] grafana | logger=migrator t=2024-04-26T08:21:34.532235291Z level=info msg="Executing migration" id="Add column external_delete_url to dashboard_snapshots table" grafana | logger=migrator t=2024-04-26T08:21:34.534981963Z level=info msg="Migration successfully executed" id="Add column external_delete_url to dashboard_snapshots table" duration=2.746302ms simulator | 2024-04-26 08:21:36,594 INFO org.onap.policy.models.simulators started mariadb | 2024-04-26 8:21:32 0 [Note] InnoDB: Buffer pool(s) dump completed at 240426 8:21:32 policy-apex-pdp | metrics.num.samples = 2 policy-pap | fetch.max.wait.ms = 500 zookeeper | [2024-04-26 08:21:32,750] INFO observerMasterPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig) kafka | controller.quota.window.num = 11 policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-26T08:21:34.539742858Z level=info msg="Executing migration" id="Add encrypted dashboard json column" mariadb | 2024-04-26 8:21:32 0 [Note] InnoDB: Removed temporary tablespace data file: "./ibtmp1" policy-apex-pdp | metrics.recording.level = INFO policy-pap | fetch.min.bytes = 1 zookeeper | [2024-04-26 08:21:32,750] INFO metricsProvider.className is org.apache.zookeeper.metrics.impl.DefaultMetricsProvider (org.apache.zookeeper.server.quorum.QuorumPeerConfig) kafka | controller.quota.window.size.seconds = 1 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarelationshiptype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) grafana | logger=migrator t=2024-04-26T08:21:34.545040212Z level=info msg="Migration successfully 
executed" id="Add encrypted dashboard json column" duration=5.280813ms mariadb | 2024-04-26 8:21:32 0 [Note] InnoDB: Shutdown completed; log sequence number 347307; transaction id 298 policy-apex-pdp | metrics.sample.window.ms = 30000 policy-pap | group.id = db954cd2-8764-4a44-90af-3bb7f2069f83 zookeeper | [2024-04-26 08:21:32,750] INFO Starting server (org.apache.zookeeper.server.ZooKeeperServerMain) kafka | controller.socket.timeout.ms = 30000 policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-26T08:21:34.550068071Z level=info msg="Executing migration" id="Change dashboard_encrypted column to MEDIUMBLOB" mariadb | 2024-04-26 8:21:32 0 [Note] mariadbd: Shutdown complete policy-apex-pdp | partitioner.adaptive.partitioning.enable = true policy-pap | group.instance.id = null zookeeper | [2024-04-26 08:21:32,761] INFO ServerMetrics initialized with provider org.apache.zookeeper.metrics.impl.DefaultMetricsProvider@3246fb96 (org.apache.zookeeper.server.ServerMetrics) kafka | create.topic.policy.class.name = null policy-db-migrator | grafana | logger=migrator t=2024-04-26T08:21:34.550189297Z level=info msg="Migration successfully executed" id="Change dashboard_encrypted column to MEDIUMBLOB" duration=128.947µs mariadb | policy-apex-pdp | partitioner.availability.timeout.ms = 0 policy-pap | heartbeat.interval.ms = 3000 zookeeper | [2024-04-26 08:21:32,763] INFO ACL digest algorithm is: SHA1 (org.apache.zookeeper.server.auth.DigestAuthenticationProvider) kafka | default.replication.factor = 1 policy-db-migrator | grafana | logger=migrator t=2024-04-26T08:21:34.555733033Z level=info msg="Executing migration" id="create quota table v1" mariadb | 2024-04-26 08:21:32+00:00 [Note] [Entrypoint]: Temporary server stopped policy-apex-pdp | partitioner.class = null policy-pap | interceptor.classes = [] zookeeper | [2024-04-26 08:21:32,763] INFO zookeeper.DigestAuthenticationProvider.enabled = true (org.apache.zookeeper.server.auth.DigestAuthenticationProvider) 
kafka | delegation.token.expiry.check.interval.ms = 3600000 policy-db-migrator | > upgrade 0380-jpatoscarelationshiptype_properties.sql grafana | logger=migrator t=2024-04-26T08:21:34.556963347Z level=info msg="Migration successfully executed" id="create quota table v1" duration=1.231874ms mariadb | policy-apex-pdp | partitioner.ignore.keys = false policy-pap | internal.leave.group.on.close = true zookeeper | [2024-04-26 08:21:32,765] INFO zookeeper.snapshot.trust.empty : false (org.apache.zookeeper.server.persistence.FileTxnSnapLog) kafka | delegation.token.expiry.time.ms = 86400000 policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-26T08:21:34.565288737Z level=info msg="Executing migration" id="create index UQE_quota_org_id_user_id_target - v1" mariadb | 2024-04-26 08:21:32+00:00 [Note] [Entrypoint]: MariaDB init process done. Ready for start up. policy-apex-pdp | receive.buffer.bytes = 32768 policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false zookeeper | [2024-04-26 08:21:32,774] INFO (org.apache.zookeeper.server.ZooKeeperServer) kafka | delegation.token.master.key = null policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarelationshiptype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL) grafana | logger=migrator t=2024-04-26T08:21:34.56592234Z level=info msg="Migration successfully executed" id="create index UQE_quota_org_id_user_id_target - v1" duration=633.772µs mariadb | policy-apex-pdp | reconnect.backoff.max.ms = 1000 policy-pap | isolation.level = read_uncommitted zookeeper | [2024-04-26 08:21:32,774] INFO ______ _ (org.apache.zookeeper.server.ZooKeeperServer) kafka | delegation.token.max.lifetime.ms = 604800000 policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-26T08:21:34.570814472Z level=info msg="Executing migration" id="Update quota table charset" mariadb | 2024-04-26 8:21:32 0 [Note] mariadbd (server 
10.10.2-MariaDB-1:10.10.2+maria~ubu2204) starting as process 1 ... policy-apex-pdp | reconnect.backoff.ms = 50 policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer zookeeper | [2024-04-26 08:21:32,774] INFO |___ / | | (org.apache.zookeeper.server.ZooKeeperServer) kafka | delegation.token.secret.key = null policy-db-migrator | grafana | logger=migrator t=2024-04-26T08:21:34.570836643Z level=info msg="Migration successfully executed" id="Update quota table charset" duration=23.241µs mariadb | 2024-04-26 8:21:32 0 [Note] InnoDB: Compressed tables use zlib 1.2.11 policy-apex-pdp | request.timeout.ms = 30000 policy-pap | max.partition.fetch.bytes = 1048576 zookeeper | [2024-04-26 08:21:32,774] INFO / / ___ ___ | | __ ___ ___ _ __ ___ _ __ (org.apache.zookeeper.server.ZooKeeperServer) kafka | delete.records.purgatory.purge.interval.requests = 1 policy-db-migrator | grafana | logger=migrator t=2024-04-26T08:21:34.577454614Z level=info msg="Executing migration" id="create plugin_setting table" mariadb | 2024-04-26 8:21:32 0 [Note] InnoDB: Number of transaction pools: 1 policy-apex-pdp | retries = 2147483647 policy-pap | max.poll.interval.ms = 300000 zookeeper | [2024-04-26 08:21:32,774] INFO / / / _ \ / _ \ | |/ / / _ \ / _ \ | '_ \ / _ \ | '__| (org.apache.zookeeper.server.ZooKeeperServer) kafka | delete.topic.enable = true policy-db-migrator | > upgrade 0390-jpatoscarequirement_metadata.sql grafana | logger=migrator t=2024-04-26T08:21:34.578312748Z level=info msg="Migration successfully executed" id="create plugin_setting table" duration=861.384µs mariadb | 2024-04-26 8:21:32 0 [Note] InnoDB: Using crc32 + pclmulqdq instructions policy-apex-pdp | retry.backoff.ms = 100 policy-pap | max.poll.records = 500 zookeeper | [2024-04-26 08:21:32,774] INFO / /__ | (_) | | (_) | | < | __/ | __/ | |_) | | __/ | | (org.apache.zookeeper.server.ZooKeeperServer) kafka | early.start.listeners = null policy-db-migrator | -------------- grafana | 
logger=migrator t=2024-04-26T08:21:34.585434587Z level=info msg="Executing migration" id="create index UQE_plugin_setting_org_id_plugin_id - v1" mariadb | 2024-04-26 8:21:32 0 [Note] mariadbd: O_TMPFILE is not supported on /tmp (disabling future attempts) policy-apex-pdp | sasl.client.callback.handler.class = null policy-pap | metadata.max.age.ms = 300000 zookeeper | [2024-04-26 08:21:32,774] INFO /_____| \___/ \___/ |_|\_\ \___| \___| | .__/ \___| |_| (org.apache.zookeeper.server.ZooKeeperServer) kafka | fetch.max.bytes = 57671680 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarequirement_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) grafana | logger=migrator t=2024-04-26T08:21:34.586547334Z level=info msg="Migration successfully executed" id="create index UQE_plugin_setting_org_id_plugin_id - v1" duration=1.116137ms mariadb | 2024-04-26 8:21:32 0 [Warning] mariadbd: io_uring_queue_init() failed with ENOSYS: check seccomp filters, and the kernel version (newer than 5.1 required) policy-apex-pdp | sasl.jaas.config = null policy-pap | metric.reporters = [] zookeeper | [2024-04-26 08:21:32,774] INFO | | (org.apache.zookeeper.server.ZooKeeperServer) kafka | fetch.purgatory.purge.interval.requests = 1000 policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-26T08:21:34.592182554Z level=info msg="Executing migration" id="Add column plugin_version to plugin_settings" mariadb | 2024-04-26 8:21:32 0 [Warning] InnoDB: liburing disabled: falling back to innodb_use_native_aio=OFF policy-apex-pdp | sasl.kerberos.kinit.cmd = /usr/bin/kinit policy-pap | metrics.num.samples = 2 zookeeper | [2024-04-26 08:21:32,774] INFO |_| (org.apache.zookeeper.server.ZooKeeperServer) kafka | group.consumer.assignors = [org.apache.kafka.coordinator.group.assignor.RangeAssignor] policy-db-migrator | grafana | logger=migrator t=2024-04-26T08:21:34.59501279Z level=info msg="Migration successfully 
executed" id="Add column plugin_version to plugin_settings" duration=2.836556ms mariadb | 2024-04-26 8:21:32 0 [Note] InnoDB: Initializing buffer pool, total size = 128.000MiB, chunk size = 2.000MiB policy-apex-pdp | sasl.kerberos.min.time.before.relogin = 60000 policy-pap | metrics.recording.level = INFO zookeeper | [2024-04-26 08:21:32,774] INFO (org.apache.zookeeper.server.ZooKeeperServer) kafka | group.consumer.heartbeat.interval.ms = 5000 policy-db-migrator | grafana | logger=migrator t=2024-04-26T08:21:34.597932132Z level=info msg="Executing migration" id="Update plugin_setting table charset" mariadb | 2024-04-26 8:21:32 0 [Note] InnoDB: Completed initialization of buffer pool policy-apex-pdp | sasl.kerberos.service.name = null policy-pap | metrics.sample.window.ms = 30000 zookeeper | [2024-04-26 08:21:32,775] INFO Server environment:zookeeper.version=3.8.4-9316c2a7a97e1666d8f4593f34dd6fc36ecc436c, built on 2024-02-12 22:16 UTC (org.apache.zookeeper.server.ZooKeeperServer) kafka | group.consumer.max.heartbeat.interval.ms = 15000 policy-db-migrator | > upgrade 0400-jpatoscarequirement_occurrences.sql grafana | logger=migrator t=2024-04-26T08:21:34.597988165Z level=info msg="Migration successfully executed" id="Update plugin_setting table charset" duration=49.342µs mariadb | 2024-04-26 8:21:32 0 [Note] InnoDB: File system buffers for log disabled (block size=512 bytes) policy-apex-pdp | sasl.kerberos.ticket.renew.jitter = 0.05 policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] zookeeper | [2024-04-26 08:21:32,775] INFO Server environment:host.name=09fae81f821c (org.apache.zookeeper.server.ZooKeeperServer) kafka | group.consumer.max.session.timeout.ms = 60000 policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-26T08:21:34.600910455Z level=info msg="Executing migration" id="create session table" mariadb | 2024-04-26 8:21:33 0 
[Note] InnoDB: 128 rollback segments are active. policy-apex-pdp | sasl.kerberos.ticket.renew.window.factor = 0.8 policy-pap | receive.buffer.bytes = 65536 zookeeper | [2024-04-26 08:21:32,775] INFO Server environment:java.version=11.0.22 (org.apache.zookeeper.server.ZooKeeperServer) kafka | group.consumer.max.size = 2147483647 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarequirement_occurrences (name VARCHAR(120) NULL, version VARCHAR(20) NULL, OCCURRENCES INT DEFAULT NULL) grafana | logger=migrator t=2024-04-26T08:21:34.601843253Z level=info msg="Migration successfully executed" id="create session table" duration=932.848µs mariadb | 2024-04-26 8:21:33 0 [Note] InnoDB: Setting file './ibtmp1' size to 12.000MiB. Physically writing the file full; Please wait ... policy-apex-pdp | sasl.login.callback.handler.class = null policy-pap | reconnect.backoff.max.ms = 1000 zookeeper | [2024-04-26 08:21:32,775] INFO Server environment:java.vendor=Azul Systems, Inc. (org.apache.zookeeper.server.ZooKeeperServer) kafka | group.consumer.min.heartbeat.interval.ms = 5000 policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-26T08:21:34.612188767Z level=info msg="Executing migration" id="Drop old table playlist table" mariadb | 2024-04-26 8:21:33 0 [Note] InnoDB: File './ibtmp1' size is now 12.000MiB. 
policy-apex-pdp | sasl.login.class = null policy-pap | reconnect.backoff.ms = 50 zookeeper | [2024-04-26 08:21:32,775] INFO Server environment:java.home=/usr/lib/jvm/java-11-zulu-openjdk-ca (org.apache.zookeeper.server.ZooKeeperServer) kafka | group.consumer.min.session.timeout.ms = 45000 policy-db-migrator | grafana | logger=migrator t=2024-04-26T08:21:34.612326784Z level=info msg="Migration successfully executed" id="Drop old table playlist table" duration=126.376µs mariadb | 2024-04-26 8:21:33 0 [Note] InnoDB: log sequence number 347307; transaction id 299 policy-apex-pdp | sasl.login.connect.timeout.ms = null policy-pap | request.timeout.ms = 30000 kafka | group.consumer.session.timeout.ms = 45000 zookeeper | [2024-04-26 08:21:32,775] INFO Server environment:java.class.path=/usr/bin/../share/java/kafka/scala-logging_2.13-3.9.4.jar:/usr/bin/../share/java/kafka/jersey-common-2.39.1.jar:/usr/bin/../share/java/kafka/swagger-annotations-2.2.8.jar:/usr/bin/../share/java/kafka/connect-json-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/kafka-storage-api-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/commons-validator-1.7.jar:/usr/bin/../share/java/kafka/javax.servlet-api-3.1.0.jar:/usr/bin/../share/java/kafka/aopalliance-repackaged-2.6.1.jar:/usr/bin/../share/java/kafka/netty-transport-4.1.100.Final.jar:/usr/bin/../share/java/kafka/rocksdbjni-7.9.2.jar:/usr/bin/../share/java/kafka/jetty-http-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/kafka-log4j-appender-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jackson-annotations-2.13.5.jar:/usr/bin/../share/java/kafka/commons-io-2.11.0.jar:/usr/bin/../share/java/kafka/kafka-clients-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/javax.activation-api-1.2.0.jar:/usr/bin/../share/java/kafka/jetty-security-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/commons-cli-1.4.jar:/usr/bin/../share/java/kafka/jetty-server-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/slf4j-reload4j-1.7.36.jar:/usr/bin/../share/java/kafka/scala-reflect-2.13.11.
jar:/usr/bin/../share/java/kafka/kafka-server-common-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/kafka-group-coordinator-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/netty-buffer-4.1.100.Final.jar:/usr/bin/../share/java/kafka/connect-runtime-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/trogdor-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/connect-api-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jakarta.ws.rs-api-2.1.6.jar:/usr/bin/../share/java/kafka/jakarta.annotation-api-1.3.5.jar:/usr/bin/../share/java/kafka/kafka-streams-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/scala-java8-compat_2.13-1.0.2.jar:/usr/bin/../share/java/kafka/javax.ws.rs-api-2.1.1.jar:/usr/bin/../share/java/kafka/zookeeper-jute-3.8.4.jar:/usr/bin/../share/java/kafka/netty-resolver-4.1.100.Final.jar:/usr/bin/../share/java/kafka/hk2-api-2.6.1.jar:/usr/bin/../share/java/kafka/kafka-storage-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/zstd-jni-1.5.5-1.jar:/usr/bin/../share/java/kafka/jackson-dataformat-csv-2.13.5.jar:/usr/bin/../share/java/kafka/kafka.jar:/usr/bin/../share/java/kafka/scala-library-2.13.11.jar:/usr/bin/../share/java/kafka/jakarta.inject-2.6.1.jar:/usr/bin/../share/java/kafka/jakarta.xml.bind-api-2.3.3.jar:/usr/bin/../share/java/kafka/jetty-continuation-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/connect-transforms-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jose4j-0.9.4.jar:/usr/bin/../share/java/kafka/hk2-locator-2.6.1.jar:/usr/bin/../share/java/kafka/reflections-0.10.2.jar:/usr/bin/../share/java/kafka/slf4j-api-1.7.36.jar:/usr/bin/../share/java/kafka/paranamer-2.8.jar:/usr/bin/../share/java/kafka/commons-beanutils-1.9.4.jar:/usr/bin/../share/java/kafka/jaxb-api-2.3.1.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-2.39.1.jar:/usr/bin/../share/java/kafka/hk2-utils-2.6.1.jar:/usr/bin/../share/java/kafka/jackson-module-scala_2.13-2.13.5.jar:/usr/bin/../share/java/kafka/reload4j-1.2.25.jar:/usr/bin/../share/java/kafka/connect-mirror-client-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/j
etty-servlet-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/jackson-core-2.13.5.jar:/usr/bin/../share/java/kafka/jersey-hk2-2.39.1.jar:/usr/bin/../share/java/kafka/jackson-databind-2.13.5.jar:/usr/bin/../share/java/kafka/jetty-servlets-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/jersey-client-2.39.1.jar:/usr/bin/../share/java/kafka/osgi-resource-locator-1.0.3.jar:/usr/bin/../share/java/kafka/commons-digester-2.1.jar:/usr/bin/../share/java/kafka/netty-transport-native-epoll-4.1.100.Final.jar:/usr/bin/../share/java/kafka/argparse4j-0.7.0.jar:/usr/bin/../share/java/kafka/connect-mirror-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jackson-datatype-jdk8-2.13.5.jar:/usr/bin/../share/java/kafka/audience-annotations-0.12.0.jar:/usr/bin/../share/java/kafka/jackson-module-jaxb-annotations-2.13.5.jar:/usr/bin/../share/java/kafka/kafka-raft-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/javax.annotation-api-1.3.2.jar:/usr/bin/../share/java/kafka/maven-artifact-3.8.8.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-json-provider-2.13.5.jar:/usr/bin/../share/java/kafka/jakarta.validation-api-2.0.2.jar:/usr/bin/../share/java/kafka/zookeeper-3.8.4.jar:/usr/bin/../share/java/kafka/jersey-server-2.39.1.jar:/usr/bin/../share/java/kafka/commons-lang3-3.8.1.jar:/usr/bin/../share/java/kafka/jopt-simple-5.0.4.jar:/usr/bin/../share/java/kafka/error_prone_annotations-2.10.0.jar:/usr/bin/../share/java/kafka/lz4-java-1.8.0.jar:/usr/bin/../share/java/kafka/jetty-util-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/kafka-tools-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jakarta.activation-api-1.2.2.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-core-2.39.1.jar:/usr/bin/../share/java/kafka/checker-qual-3.19.0.jar:/usr/bin/../share/java/kafka/kafka-metadata-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/pcollections-4.0.1.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-base-2.13.5.jar:/usr/bin/../share/java/kafka/commons-logging-1.2.jar:/usr/bin/../share/java/kafka/jsr305-3.0.2.jar:/usr/bin/../share/java/kafka/jetty-io-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/netty-codec-4.1.100.Final.jar:/usr/bin/../share/java/kafka/netty-transport-native-unix-common-4.1.100.Final.jar:/usr/bin/../share/java/kafka/scala-collection-compat_2.13-2.10.0.jar:/usr/bin/../share/java/kafka/metrics-core-2.2.0.jar:/usr/bin/../share/java/kafka/kafka-streams-test-utils-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/kafka-streams-examples-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/netty-handler-4.1.100.Final.jar:/usr/bin/../share/java/kafka/commons-collections-3.2.2.jar:/usr/bin/../share/java/kafka/javassist-3.29.2-GA.jar:/usr/bin/../share/java/kafka/caffeine-2.9.3.jar:/usr/bin/../share/java/kafka/plexus-utils-3.3.1.jar:/usr/bin/../share/java/kafka/kafka-tools-api-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/activation-1.1.1.jar:/usr/bin/../share/java/kafka/netty-common-4.1.100.Final.jar:/usr/bin/../share/java/kafka/kafka-streams-scala_2.13-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/metrics-core-4.1.12.1.jar:/usr/bin/../share/java/kafka/jline-3.25.1.jar:/usr/bin/../share/java/kafka/jetty-client-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/kafka_2.13-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/connect-basic-auth-extension-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jetty-util-ajax-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/snappy-java-1.1.10.5.jar:/usr/bin/../share/java/kafka/kafka-shell-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/netty-transport-classes-epoll-4.1.100.Final.jar:/usr/bin/../share/java/confluent-telemetry/* (org.apache.zookeeper.server.ZooKeeperServer)
policy-db-migrator | 
grafana | logger=migrator t=2024-04-26T08:21:34.615054005Z level=info msg="Executing migration" id="Drop old table playlist_item table"
mariadb | 2024-04-26 8:21:33 0 [Note] InnoDB: Loading buffer pool(s) from /var/lib/mysql/ib_buffer_pool
policy-apex-pdp | sasl.login.read.timeout.ms = null
policy-pap | retry.backoff.ms = 100
kafka | group.coordinator.new.enable = false
zookeeper | [2024-04-26 08:21:32,775] INFO Server environment:java.library.path=/usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.server.ZooKeeperServer)
policy-db-migrator | > upgrade 0410-jpatoscarequirement_properties.sql
grafana | logger=migrator t=2024-04-26T08:21:34.615119449Z level=info msg="Migration successfully executed" id="Drop old table playlist_item table" duration=63.123µs
mariadb | 2024-04-26 8:21:33 0 [Note] Plugin 'FEEDBACK' is disabled.
policy-apex-pdp | sasl.login.refresh.buffer.seconds = 300
policy-pap | sasl.client.callback.handler.class = null
kafka | group.coordinator.threads = 1
zookeeper | [2024-04-26 08:21:32,775] INFO Server environment:java.io.tmpdir=/tmp (org.apache.zookeeper.server.ZooKeeperServer)
policy-db-migrator | --------------
grafana | logger=migrator t=2024-04-26T08:21:34.617189415Z level=info msg="Executing migration" id="create playlist table v2"
mariadb | 2024-04-26 8:21:33 0 [Warning] 'default-authentication-plugin' is MySQL 5.6 / 5.7 compatible option. To be implemented in later versions.
policy-apex-pdp | sasl.login.refresh.min.period.seconds = 60
policy-pap | sasl.jaas.config = null
kafka | group.initial.rebalance.delay.ms = 3000
zookeeper | [2024-04-26 08:21:32,775] INFO Server environment:java.compiler= (org.apache.zookeeper.server.ZooKeeperServer)
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarequirement_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGTEXT NULL, PROPERTIES_KEY VARCHAR(255) NULL)
grafana | logger=migrator t=2024-04-26T08:21:34.617877821Z level=info msg="Migration successfully executed" id="create playlist table v2" duration=684.976µs
mariadb | 2024-04-26 8:21:33 0 [Warning] You need to use --log-bin to make --expire-logs-days or --binlog-expire-logs-seconds work.
policy-apex-pdp | sasl.login.refresh.window.factor = 0.8
policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit
kafka | group.max.session.timeout.ms = 1800000
zookeeper | [2024-04-26 08:21:32,775] INFO Server environment:os.name=Linux (org.apache.zookeeper.server.ZooKeeperServer)
policy-db-migrator | --------------
grafana | logger=migrator t=2024-04-26T08:21:34.621838186Z level=info msg="Executing migration" id="create playlist item table v2"
mariadb | 2024-04-26 8:21:33 0 [Note] Server socket created on IP: '0.0.0.0'.
policy-apex-pdp | sasl.login.refresh.window.jitter = 0.05
policy-pap | sasl.kerberos.min.time.before.relogin = 60000
kafka | group.max.size = 2147483647
zookeeper | [2024-04-26 08:21:32,776] INFO Server environment:os.arch=amd64 (org.apache.zookeeper.server.ZooKeeperServer)
policy-db-migrator | 
grafana | logger=migrator t=2024-04-26T08:21:34.622360322Z level=info msg="Migration successfully executed" id="create playlist item table v2" duration=507.155µs
mariadb | 2024-04-26 8:21:33 0 [Note] Server socket created on IP: '::'.
policy-apex-pdp | sasl.login.retry.backoff.max.ms = 10000
policy-pap | sasl.kerberos.service.name = null
kafka | group.min.session.timeout.ms = 6000
zookeeper | [2024-04-26 08:21:32,776] INFO Server environment:os.version=4.15.0-192-generic (org.apache.zookeeper.server.ZooKeeperServer)
policy-db-migrator | 
grafana | logger=migrator t=2024-04-26T08:21:34.62754307Z level=info msg="Executing migration" id="Update playlist table charset"
mariadb | 2024-04-26 8:21:33 0 [Note] mariadbd: ready for connections.
policy-apex-pdp | sasl.login.retry.backoff.ms = 100
policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05
kafka | initial.broker.registration.timeout.ms = 60000
zookeeper | [2024-04-26 08:21:32,776] INFO Server environment:user.name=appuser (org.apache.zookeeper.server.ZooKeeperServer)
policy-db-migrator | > upgrade 0420-jpatoscaservicetemplate_metadata.sql
grafana | logger=migrator t=2024-04-26T08:21:34.627563151Z level=info msg="Migration successfully executed" id="Update playlist table charset" duration=20.481µs
mariadb | Version: '10.10.2-MariaDB-1:10.10.2+maria~ubu2204' socket: '/run/mysqld/mysqld.sock' port: 3306 mariadb.org binary distribution
policy-apex-pdp | sasl.mechanism = GSSAPI
policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8
kafka | inter.broker.listener.name = PLAINTEXT
zookeeper | [2024-04-26 08:21:32,776] INFO Server environment:user.home=/home/appuser (org.apache.zookeeper.server.ZooKeeperServer)
policy-db-migrator | --------------
grafana | logger=migrator t=2024-04-26T08:21:34.630275831Z level=info msg="Executing migration" id="Update playlist_item table charset"
mariadb | 2024-04-26 8:21:33 0 [Note] InnoDB: Buffer pool(s) load completed at 240426 8:21:33
policy-apex-pdp | sasl.oauthbearer.clock.skew.seconds = 30
policy-pap | sasl.login.callback.handler.class = null
kafka | inter.broker.protocol.version = 3.6-IV2
zookeeper | [2024-04-26 08:21:32,776] INFO Server environment:user.dir=/home/appuser (org.apache.zookeeper.server.ZooKeeperServer)
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscaservicetemplate_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL)
grafana | logger=migrator t=2024-04-26T08:21:34.630313613Z level=info msg="Migration successfully executed" id="Update playlist_item table charset" duration=39.462µs
mariadb | 2024-04-26 8:21:33 3 [Warning] Aborted connection 3 to db: 'unconnected' user: 'unauthenticated' host: '172.17.0.6' (This connection closed normally without authentication)
policy-apex-pdp | sasl.oauthbearer.expected.audience = null
policy-pap | sasl.login.class = null
kafka | kafka.metrics.polling.interval.secs = 10
zookeeper | [2024-04-26 08:21:32,776] INFO Server environment:os.memory.free=491MB (org.apache.zookeeper.server.ZooKeeperServer)
policy-db-migrator | --------------
grafana | logger=migrator t=2024-04-26T08:21:34.63451756Z level=info msg="Executing migration" id="Add playlist column created_at"
mariadb | 2024-04-26 8:21:33 49 [Warning] Aborted connection 49 to db: 'unconnected' user: 'unauthenticated' host: '172.17.0.7' (This connection closed normally without authentication)
policy-apex-pdp | sasl.oauthbearer.expected.issuer = null
policy-pap | sasl.login.connect.timeout.ms = null
kafka | kafka.metrics.reporters = []
zookeeper | [2024-04-26 08:21:32,776] INFO Server environment:os.memory.max=512MB (org.apache.zookeeper.server.ZooKeeperServer)
policy-db-migrator | 
grafana | logger=migrator t=2024-04-26T08:21:34.639302347Z level=info msg="Migration successfully executed" id="Add playlist column created_at" duration=4.785387ms
mariadb | 2024-04-26 8:21:34 50 [Warning] Aborted connection 50 to db: 'unconnected' user: 'unauthenticated' host: '172.17.0.9' (This connection closed normally without authentication)
policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
policy-pap | sasl.login.read.timeout.ms = null
kafka | leader.imbalance.check.interval.seconds = 300
zookeeper | [2024-04-26 08:21:32,776] INFO Server environment:os.memory.total=512MB (org.apache.zookeeper.server.ZooKeeperServer)
policy-db-migrator | 
grafana | logger=migrator t=2024-04-26T08:21:34.642698922Z level=info msg="Executing migration" id="Add playlist column updated_at"
mariadb | 2024-04-26 8:21:35 109 [Warning] Aborted connection 109 to db: 'unconnected' user: 'unauthenticated' host: '172.17.0.11' (This connection closed normally without authentication)
policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
policy-pap | sasl.login.refresh.buffer.seconds = 300
kafka | leader.imbalance.per.broker.percentage = 10
zookeeper | [2024-04-26 08:21:32,776] INFO zookeeper.enableEagerACLCheck = false (org.apache.zookeeper.server.ZooKeeperServer)
policy-db-migrator | > upgrade 0430-jpatoscatopologytemplate_inputs.sql
grafana | logger=migrator t=2024-04-26T08:21:34.645846175Z level=info msg="Migration successfully executed" id="Add playlist column updated_at" duration=3.146603ms
policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
policy-pap | sasl.login.refresh.min.period.seconds = 60
kafka | listener.security.protocol.map = PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
zookeeper | [2024-04-26 08:21:32,776] INFO zookeeper.digest.enabled = true (org.apache.zookeeper.server.ZooKeeperServer)
policy-db-migrator | --------------
grafana | logger=migrator t=2024-04-26T08:21:34.652271866Z level=info msg="Executing migration" id="drop preferences table v2"
policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.url = null
policy-pap | sasl.login.refresh.window.factor = 0.8
kafka | listeners = PLAINTEXT://0.0.0.0:9092,PLAINTEXT_HOST://0.0.0.0:29092
zookeeper | [2024-04-26 08:21:32,776] INFO zookeeper.closeSessionTxn.enabled = true (org.apache.zookeeper.server.ZooKeeperServer)
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscatopologytemplate_inputs (parentLocalName VARCHAR(120) NULL, localName VARCHAR(120) NULL, parentKeyVersion VARCHAR(15) NULL, parentKeyName VARCHAR(120) NULL, INPUTS LONGBLOB DEFAULT NULL, INPUTS_KEY VARCHAR(255) NULL)
grafana | logger=migrator t=2024-04-26T08:21:34.653798035Z level=info msg="Migration successfully executed" id="drop preferences table v2" duration=1.520798ms
policy-apex-pdp | sasl.oauthbearer.scope.claim.name = scope
policy-pap | sasl.login.refresh.window.jitter = 0.05
kafka | log.cleaner.backoff.ms = 15000
zookeeper | [2024-04-26 08:21:32,776] INFO zookeeper.flushDelay = 0 ms (org.apache.zookeeper.server.ZooKeeperServer)
policy-db-migrator | --------------
grafana | logger=migrator t=2024-04-26T08:21:34.657482425Z level=info msg="Executing migration" id="drop preferences table v3"
policy-apex-pdp | sasl.oauthbearer.sub.claim.name = sub
policy-pap | sasl.login.retry.backoff.max.ms = 10000
kafka | log.cleaner.dedupe.buffer.size = 134217728
zookeeper | [2024-04-26 08:21:32,776] INFO zookeeper.maxWriteQueuePollTime = 0 ms (org.apache.zookeeper.server.ZooKeeperServer)
policy-db-migrator | 
grafana | logger=migrator t=2024-04-26T08:21:34.657559199Z level=info msg="Migration successfully executed" id="drop preferences table v3" duration=77.194µs
policy-apex-pdp | sasl.oauthbearer.token.endpoint.url = null
policy-pap | sasl.login.retry.backoff.ms = 100
kafka | log.cleaner.delete.retention.ms = 86400000
zookeeper | [2024-04-26 08:21:32,776] INFO zookeeper.maxBatchSize=1000 (org.apache.zookeeper.server.ZooKeeperServer)
policy-db-migrator | 
grafana | logger=migrator t=2024-04-26T08:21:34.6620305Z level=info msg="Executing migration" id="create preferences table v3"
policy-pap | sasl.mechanism = GSSAPI
kafka | log.cleaner.enable = true
zookeeper | [2024-04-26 08:21:32,776] INFO zookeeper.intBufferStartingSizeBytes = 1024 (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2024-04-26 08:21:32,777] INFO Weighed connection throttling is disabled (org.apache.zookeeper.server.BlueThrottle)
policy-db-migrator | > upgrade 0440-pdpgroup_pdpsubgroup.sql
grafana | logger=migrator t=2024-04-26T08:21:34.662787269Z level=info msg="Migration successfully executed" id="create preferences table v3" duration=756.719µs
policy-pap | sasl.oauthbearer.clock.skew.seconds = 30
kafka | log.cleaner.io.buffer.load.factor = 0.9
policy-apex-pdp | security.protocol = PLAINTEXT
zookeeper | [2024-04-26 08:21:32,778] INFO minSessionTimeout set to 4000 ms (org.apache.zookeeper.server.ZooKeeperServer)
policy-db-migrator | --------------
grafana | logger=migrator t=2024-04-26T08:21:34.665155362Z level=info msg="Executing migration" id="Update preferences table charset"
policy-pap | sasl.oauthbearer.expected.audience = null
kafka | log.cleaner.io.buffer.size = 524288
policy-apex-pdp | security.providers = null
zookeeper | [2024-04-26 08:21:32,778] INFO maxSessionTimeout set to 40000 ms (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2024-04-26 08:21:32,778] INFO getData response cache size is initialized with value 400. (org.apache.zookeeper.server.ResponseCache)
policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpgroup_pdpsubgroup (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_PDPGROUP_PDPSUBGROUP (name, version, parentLocalName, localName, parentKeyVersion, parentKeyName))
grafana | logger=migrator t=2024-04-26T08:21:34.665175283Z level=info msg="Migration successfully executed" id="Update preferences table charset" duration=20.321µs
kafka | log.cleaner.io.max.bytes.per.second = 1.7976931348623157E308
policy-apex-pdp | send.buffer.bytes = 131072
policy-pap | sasl.oauthbearer.expected.issuer = null
zookeeper | [2024-04-26 08:21:32,778] INFO getChildren response cache size is initialized with value 400. (org.apache.zookeeper.server.ResponseCache)
policy-db-migrator | --------------
grafana | logger=migrator t=2024-04-26T08:21:34.668074322Z level=info msg="Executing migration" id="Add column team_id in preferences"
kafka | log.cleaner.max.compaction.lag.ms = 9223372036854775807
policy-apex-pdp | socket.connection.setup.timeout.max.ms = 30000
policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
zookeeper | [2024-04-26 08:21:32,779] INFO zookeeper.pathStats.slotCapacity = 60 (org.apache.zookeeper.server.util.RequestPathMetricsCollector)
policy-db-migrator | 
grafana | logger=migrator t=2024-04-26T08:21:34.670325968Z level=info msg="Migration successfully executed" id="Add column team_id in preferences" duration=2.251966ms
kafka | log.cleaner.min.cleanable.ratio = 0.5
policy-apex-pdp | socket.connection.setup.timeout.ms = 10000
policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
zookeeper | [2024-04-26 08:21:32,779] INFO zookeeper.pathStats.slotDuration = 15 (org.apache.zookeeper.server.util.RequestPathMetricsCollector)
policy-db-migrator | 
grafana | logger=migrator t=2024-04-26T08:21:34.672855089Z level=info msg="Executing migration" id="Update team_id column values in preferences"
kafka | log.cleaner.min.compaction.lag.ms = 0
policy-apex-pdp | ssl.cipher.suites = null
policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
zookeeper | [2024-04-26 08:21:32,779] INFO zookeeper.pathStats.maxDepth = 6 (org.apache.zookeeper.server.util.RequestPathMetricsCollector)
policy-db-migrator | > upgrade 0450-pdpgroup.sql
grafana | logger=migrator t=2024-04-26T08:21:34.672964484Z level=info msg="Migration successfully executed" id="Update team_id column values in preferences" duration=109.645µs
kafka | log.cleaner.threads = 1
policy-apex-pdp | ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
policy-pap | sasl.oauthbearer.jwks.endpoint.url = null
zookeeper | [2024-04-26 08:21:32,779] INFO zookeeper.pathStats.initialDelay = 5 (org.apache.zookeeper.server.util.RequestPathMetricsCollector)
policy-db-migrator | --------------
grafana | logger=migrator t=2024-04-26T08:21:34.682327287Z level=info msg="Executing migration" id="Add column week_start in preferences"
kafka | log.cleanup.policy = [delete]
policy-apex-pdp | ssl.endpoint.identification.algorithm = https
policy-pap | sasl.oauthbearer.scope.claim.name = scope
policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpgroup (`DESCRIPTION` VARCHAR(255) NULL, PDPGROUPSTATE INT DEFAULT NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_PDPGROUP (name, version))
grafana | logger=migrator t=2024-04-26T08:21:34.686216849Z level=info msg="Migration successfully executed" id="Add column week_start in preferences" duration=3.893301ms
policy-apex-pdp | ssl.engine.factory.class = null
policy-pap | sasl.oauthbearer.sub.claim.name = sub
policy-db-migrator | --------------
zookeeper | [2024-04-26 08:21:32,779] INFO zookeeper.pathStats.delay = 5 (org.apache.zookeeper.server.util.RequestPathMetricsCollector)
kafka | log.dir = /tmp/kafka-logs
grafana | logger=migrator t=2024-04-26T08:21:34.69362652Z level=info msg="Executing migration" id="Add column preferences.json_data"
policy-apex-pdp | ssl.key.password = null
policy-pap | sasl.oauthbearer.token.endpoint.url = null
policy-db-migrator | 
zookeeper | [2024-04-26 08:21:32,779] INFO zookeeper.pathStats.enabled = false (org.apache.zookeeper.server.util.RequestPathMetricsCollector)
kafka | log.dirs = /var/lib/kafka/data
policy-pap | security.protocol = PLAINTEXT
grafana | logger=migrator t=2024-04-26T08:21:34.696916Z level=info msg="Migration successfully executed" id="Add column preferences.json_data" duration=3.28936ms
policy-db-migrator | 
zookeeper | [2024-04-26 08:21:32,782] INFO The max bytes for all large requests are set to 104857600 (org.apache.zookeeper.server.ZooKeeperServer)
kafka | log.flush.interval.messages = 9223372036854775807
policy-apex-pdp | ssl.keymanager.algorithm = SunX509
policy-pap | security.providers = null
grafana | logger=migrator t=2024-04-26T08:21:34.701672646Z level=info msg="Executing migration" id="alter preferences.json_data to mediumtext v1"
policy-db-migrator | > upgrade 0460-pdppolicystatus.sql
zookeeper | [2024-04-26 08:21:32,782] INFO The large request threshold is set to -1 (org.apache.zookeeper.server.ZooKeeperServer)
kafka | log.flush.interval.ms = null
policy-apex-pdp | ssl.keystore.certificate.chain = null
policy-pap | send.buffer.bytes = 131072
grafana | logger=migrator t=2024-04-26T08:21:34.701871996Z level=info msg="Migration successfully executed" id="alter preferences.json_data to mediumtext v1" duration=199.56µs
policy-db-migrator | --------------
zookeeper | [2024-04-26 08:21:32,782] INFO zookeeper.enforce.auth.enabled = false (org.apache.zookeeper.server.AuthenticationHelper)
kafka | log.flush.offset.checkpoint.interval.ms = 60000
policy-apex-pdp | ssl.keystore.key = null
policy-pap | session.timeout.ms = 45000
grafana | logger=migrator t=2024-04-26T08:21:34.705941866Z level=info msg="Executing migration" id="Add preferences index org_id"
policy-db-migrator | CREATE TABLE IF NOT EXISTS pdppolicystatus (DEPLOY BOOLEAN DEFAULT 0, PDPGROUP VARCHAR(255) DEFAULT NULL, PDPTYPE VARCHAR(255) DEFAULT NULL, STATE INT DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, PRIMARY KEY PK_PDPPOLICYSTATUS (parentLocalName, localName, parentKeyVersion, parentKeyName))
zookeeper | [2024-04-26 08:21:32,782] INFO zookeeper.enforce.auth.schemes = [] (org.apache.zookeeper.server.AuthenticationHelper)
kafka | log.flush.scheduler.interval.ms = 9223372036854775807
policy-apex-pdp | ssl.keystore.location = null
policy-pap | socket.connection.setup.timeout.max.ms = 30000
grafana | logger=migrator t=2024-04-26T08:21:34.706942218Z level=info msg="Migration successfully executed" id="Add preferences index org_id" duration=997.581µs
policy-db-migrator | --------------
zookeeper | [2024-04-26 08:21:32,782] INFO Created server with tickTime 2000 ms minSessionTimeout 4000 ms maxSessionTimeout 40000 ms clientPortListenBacklog -1 datadir /var/lib/zookeeper/log/version-2 snapdir /var/lib/zookeeper/data/version-2 (org.apache.zookeeper.server.ZooKeeperServer)
kafka | log.flush.start.offset.checkpoint.interval.ms = 60000
policy-apex-pdp | ssl.keystore.password = null
policy-pap | socket.connection.setup.timeout.ms = 10000
grafana | logger=migrator t=2024-04-26T08:21:34.714166251Z level=info msg="Executing migration" id="Add preferences index user_id"
policy-db-migrator | 
zookeeper | [2024-04-26 08:21:32,804] INFO Logging initialized @544ms to org.eclipse.jetty.util.log.Slf4jLog (org.eclipse.jetty.util.log)
kafka | log.index.interval.bytes = 4096
policy-apex-pdp | ssl.keystore.type = JKS
policy-pap | ssl.cipher.suites = null
grafana | logger=migrator t=2024-04-26T08:21:34.714993074Z level=info msg="Migration successfully executed" id="Add preferences index user_id" duration=830.153µs
policy-db-migrator | 
zookeeper | [2024-04-26 08:21:32,924] WARN o.e.j.s.ServletContextHandler@311bf055{/,null,STOPPED} contextPath ends with /* (org.eclipse.jetty.server.handler.ContextHandler)
kafka | log.index.size.max.bytes = 10485760
policy-apex-pdp | ssl.protocol = TLSv1.3
policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
grafana | logger=migrator t=2024-04-26T08:21:34.718829091Z level=info msg="Executing migration" id="create alert table v1"
policy-db-migrator | > upgrade 0470-pdp.sql
zookeeper | [2024-04-26 08:21:32,924] WARN Empty contextPath (org.eclipse.jetty.server.handler.ContextHandler)
kafka | log.local.retention.bytes = -2
policy-apex-pdp | ssl.provider = null
policy-pap | ssl.endpoint.identification.algorithm = https
grafana | logger=migrator t=2024-04-26T08:21:34.719645373Z level=info msg="Migration successfully executed" id="create alert table v1" duration=815.722µs
policy-db-migrator | --------------
zookeeper | [2024-04-26 08:21:32,941] INFO jetty-9.4.54.v20240208; built: 2024-02-08T19:42:39.027Z; git: cef3fbd6d736a21e7d541a5db490381d95a2047d; jvm 11.0.22+7-LTS (org.eclipse.jetty.server.Server)
kafka | log.local.retention.ms = -2
policy-apex-pdp | ssl.secure.random.implementation = null
policy-pap | ssl.engine.factory.class = null
grafana | logger=migrator t=2024-04-26T08:21:34.724292663Z level=info msg="Executing migration" id="add index alert org_id & id "
policy-db-migrator | CREATE TABLE IF NOT EXISTS pdp (HEALTHY INT DEFAULT NULL, MESSAGE VARCHAR(255) DEFAULT NULL, PDPSTATE INT DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_PDP (parentLocalName, localName, parentKeyVersion, parentKeyName))
zookeeper | [2024-04-26 08:21:32,967] INFO DefaultSessionIdManager workerName=node0 (org.eclipse.jetty.server.session)
kafka | log.message.downconversion.enable = true
policy-apex-pdp | ssl.trustmanager.algorithm = PKIX
policy-pap | ssl.key.password = null
grafana | logger=migrator t=2024-04-26T08:21:34.725156338Z level=info msg="Migration successfully executed" id="add index alert org_id & id " duration=863.675µs
policy-db-migrator | --------------
zookeeper | [2024-04-26 08:21:32,968] INFO No SessionScavenger set, using defaults (org.eclipse.jetty.server.session)
kafka | log.message.format.version = 3.0-IV1
policy-apex-pdp | ssl.truststore.certificates = null
policy-pap | ssl.keymanager.algorithm = SunX509
grafana | logger=migrator t=2024-04-26T08:21:34.728702851Z level=info msg="Executing migration" id="add index alert state"
policy-db-migrator | 
zookeeper | [2024-04-26 08:21:32,969] INFO node0 Scavenging every 660000ms (org.eclipse.jetty.server.session)
kafka | log.message.timestamp.after.max.ms = 9223372036854775807
policy-apex-pdp | ssl.truststore.location = null
policy-pap | ssl.keystore.certificate.chain = null
grafana | logger=migrator t=2024-04-26T08:21:34.729801868Z level=info msg="Migration successfully executed" id="add index alert state" duration=1.100298ms
policy-db-migrator | 
zookeeper | [2024-04-26 08:21:32,971] WARN ServletContext@o.e.j.s.ServletContextHandler@311bf055{/,null,STARTING} has uncovered http methods for path: /* (org.eclipse.jetty.security.SecurityHandler)
kafka | log.message.timestamp.before.max.ms = 9223372036854775807
policy-apex-pdp | ssl.truststore.password = null
policy-pap | ssl.keystore.key = null
grafana | logger=migrator t=2024-04-26T08:21:34.734476309Z level=info msg="Executing migration" id="add index alert dashboard_id"
policy-db-migrator | > upgrade 0480-pdpstatistics.sql
zookeeper | [2024-04-26 08:21:32,978] INFO Started o.e.j.s.ServletContextHandler@311bf055{/,null,AVAILABLE} (org.eclipse.jetty.server.handler.ContextHandler)
kafka | log.message.timestamp.difference.max.ms = 9223372036854775807
policy-pap | ssl.keystore.location = null
grafana | logger=migrator t=2024-04-26T08:21:34.735812077Z level=info msg="Migration successfully executed" id="add index alert dashboard_id" duration=1.335728ms
policy-db-migrator | --------------
zookeeper | [2024-04-26 08:21:32,990] INFO Started ServerConnector@6f53b8a{HTTP/1.1, (http/1.1)}{0.0.0.0:8080} (org.eclipse.jetty.server.AbstractConnector)
kafka | log.message.timestamp.type = CreateTime
policy-apex-pdp | ssl.truststore.type = JKS
policy-pap | ssl.keystore.password = null
grafana | logger=migrator t=2024-04-26T08:21:34.742648871Z level=info msg="Executing migration" id="Create alert_rule_tag table v1"
policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpstatistics (PDPGROUPNAME VARCHAR(120) NULL, PDPSUBGROUPNAME VARCHAR(120) NULL, POLICYDEPLOYCOUNT BIGINT DEFAULT NULL, POLICYDEPLOYFAILCOUNT BIGINT DEFAULT NULL, POLICYDEPLOYSUCCESSCOUNT BIGINT DEFAULT NULL, POLICYEXECUTEDCOUNT BIGINT DEFAULT NULL, POLICYEXECUTEDFAILCOUNT BIGINT DEFAULT NULL, POLICYEXECUTEDSUCCESSCOUNT BIGINT DEFAULT NULL, timeStamp datetime NOT NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_PDPSTATISTICS (timeStamp, name, version))
zookeeper | [2024-04-26 08:21:32,991] INFO Started @731ms (org.eclipse.jetty.server.Server)
kafka | log.preallocate = false
policy-apex-pdp | transaction.timeout.ms = 60000
policy-pap | ssl.keystore.type = JKS
grafana | logger=migrator t=2024-04-26T08:21:34.743314435Z level=info msg="Migration successfully executed" id="Create alert_rule_tag table v1" duration=664.505µs
policy-db-migrator | --------------
zookeeper | [2024-04-26 08:21:32,991] INFO Started AdminServer on address 0.0.0.0, port 8080 and command URL /commands (org.apache.zookeeper.server.admin.JettyAdminServer)
kafka | log.retention.bytes = -1
policy-apex-pdp | transactional.id = null
policy-pap | ssl.protocol = TLSv1.3
grafana | logger=migrator t=2024-04-26T08:21:34.746257447Z level=info msg="Executing migration" id="Add unique index alert_rule_tag.alert_id_tag_id"
policy-db-migrator | 
zookeeper | [2024-04-26 08:21:32,994] INFO Using org.apache.zookeeper.server.NIOServerCnxnFactory as server connection factory (org.apache.zookeeper.server.ServerCnxnFactory)
kafka | log.retention.check.interval.ms = 300000
policy-apex-pdp | value.serializer = class org.apache.kafka.common.serialization.StringSerializer
grafana | logger=migrator t=2024-04-26T08:21:34.747133593Z level=info msg="Migration successfully executed" id="Add unique index alert_rule_tag.alert_id_tag_id" duration=875.705µs
policy-db-migrator | 
zookeeper | [2024-04-26 08:21:32,995] WARN maxCnxns is not configured, using default value 0. (org.apache.zookeeper.server.ServerCnxnFactory)
kafka | log.retention.hours = 168
policy-apex-pdp | 
policy-apex-pdp | [2024-04-26T08:22:04.663+00:00|INFO|KafkaProducer|main] [Producer clientId=producer-1] Instantiated an idempotent producer.
grafana | logger=migrator t=2024-04-26T08:21:34.750470064Z level=info msg="Executing migration" id="drop index UQE_alert_rule_tag_alert_id_tag_id - v1"
policy-db-migrator | > upgrade 0490-pdpsubgroup_pdp.sql
zookeeper | [2024-04-26 08:21:32,996] INFO Configuring NIO connection handler with 10s sessionless connection timeout, 2 selector thread(s), 16 worker threads, and 64 kB direct buffers. (org.apache.zookeeper.server.NIOServerCnxnFactory)
kafka | log.retention.minutes = null
policy-pap | ssl.provider = null
policy-apex-pdp | [2024-04-26T08:22:04.678+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1
grafana | logger=migrator t=2024-04-26T08:21:34.751686897Z level=info msg="Migration successfully executed" id="drop index UQE_alert_rule_tag_alert_id_tag_id - v1" duration=1.216713ms
policy-db-migrator | --------------
zookeeper | [2024-04-26 08:21:32,997] INFO binding to port 0.0.0.0/0.0.0.0:2181 (org.apache.zookeeper.server.NIOServerCnxnFactory)
kafka | log.retention.ms = null
policy-pap | ssl.secure.random.implementation = null
policy-apex-pdp | [2024-04-26T08:22:04.678+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5
grafana | logger=migrator t=2024-04-26T08:21:34.757422873Z level=info msg="Executing migration" id="Rename table alert_rule_tag to alert_rule_tag_v1 - v1"
zookeeper | [2024-04-26 08:21:33,012] INFO Using org.apache.zookeeper.server.watch.WatchManager as watch manager (org.apache.zookeeper.server.watch.WatchManagerFactory)
kafka | log.roll.hours = 168
policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpsubgroup_pdp (pdpParentKeyName VARCHAR(120) NOT NULL, pdpParentKeyVersion VARCHAR(15) NOT NULL, pdpParentLocalName VARCHAR(120) NOT NULL, pdpLocalName VARCHAR(120) NOT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_PDPSUBGROUP_PDP (pdpParentKeyName, pdpParentKeyVersion, pdpParentLocalName, pdpLocalName, parentLocalName, localName, parentKeyVersion, parentKeyName))
policy-pap | ssl.trustmanager.algorithm = PKIX
policy-apex-pdp | [2024-04-26T08:22:04.678+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1714119724678
grafana | logger=migrator t=2024-04-26T08:21:34.767170016Z level=info msg="Migration successfully executed" id="Rename table alert_rule_tag to alert_rule_tag_v1 - v1" duration=9.747003ms
zookeeper | [2024-04-26 08:21:33,012] INFO Using org.apache.zookeeper.server.watch.WatchManager as watch manager (org.apache.zookeeper.server.watch.WatchManagerFactory)
kafka | log.roll.jitter.hours = 0
policy-db-migrator | --------------
policy-pap | ssl.truststore.certificates = null
policy-apex-pdp | [2024-04-26T08:22:04.679+00:00|INFO|InlineKafkaTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=41465092-4801-404b-834e-cb5739a089eb, alive=false, publisher=KafkaPublisherWrapper []]]: KAFKA SINK created
grafana | logger=migrator t=2024-04-26T08:21:34.770172362Z level=info msg="Executing migration" id="Create alert_rule_tag table v2"
zookeeper | [2024-04-26 08:21:33,013] INFO zookeeper.snapshotSizeFactor = 0.33 (org.apache.zookeeper.server.ZKDatabase)
kafka | log.roll.jitter.ms = null
policy-db-migrator | 
policy-pap | ssl.truststore.location = null
policy-apex-pdp | [2024-04-26T08:22:04.679+00:00|INFO|ServiceManager|main] service manager starting set alive
grafana | logger=migrator t=2024-04-26T08:21:34.770935621Z level=info msg="Migration successfully executed" id="Create alert_rule_tag table v2" duration=762.239µs
zookeeper | [2024-04-26 08:21:33,013] INFO zookeeper.commitLogCount=500 (org.apache.zookeeper.server.ZKDatabase)
kafka | log.roll.ms = null
policy-db-migrator | 
policy-pap | ssl.truststore.password = null
policy-apex-pdp | [2024-04-26T08:22:04.679+00:00|INFO|ServiceManager|main] service manager starting register pdp status context object
grafana | logger=migrator t=2024-04-26T08:21:34.776225034Z level=info msg="Executing migration" id="create index UQE_alert_rule_tag_alert_id_tag_id - Add unique index alert_rule_tag.alert_id_tag_id V2"
zookeeper | [2024-04-26 08:21:33,018] INFO zookeeper.snapshot.compression.method = CHECKED (org.apache.zookeeper.server.persistence.SnapStream)
kafka | log.segment.bytes = 1073741824
policy-db-migrator | > upgrade 0500-pdpsubgroup.sql
policy-pap | ssl.truststore.type = JKS
policy-apex-pdp | [2024-04-26T08:22:04.680+00:00|INFO|ServiceManager|main] service manager starting topic sinks
grafana | logger=migrator t=2024-04-26T08:21:34.777116569Z level=info msg="Migration successfully executed" id="create index UQE_alert_rule_tag_alert_id_tag_id - Add unique index alert_rule_tag.alert_id_tag_id V2" duration=891.446µs
zookeeper | [2024-04-26 08:21:33,018] INFO Snapshotting: 0x0 to /var/lib/zookeeper/data/version-2/snapshot.0 (org.apache.zookeeper.server.persistence.FileTxnSnapLog)
kafka | log.segment.delete.delay.ms = 60000
policy-db-migrator | --------------
policy-apex-pdp | [2024-04-26T08:22:04.680+00:00|INFO|ServiceManager|main] service manager starting Pdp Status publisher
grafana | logger=migrator t=2024-04-26T08:21:34.78641605Z level=info msg="Executing migration" id="copy alert_rule_tag v1 to v2"
zookeeper | [2024-04-26 08:21:33,021] INFO Snapshot loaded in 7 ms, highest zxid is 0x0, digest is 1371985504 (org.apache.zookeeper.server.ZKDatabase)
kafka | max.connection.creation.rate = 2147483647
policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpsubgroup (CURRENTINSTANCECOUNT INT DEFAULT NULL, DESIREDINSTANCECOUNT INT DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_PDPSUBGROUP (parentLocalName, localName, parentKeyVersion, parentKeyName))
policy-apex-pdp | [2024-04-26T08:22:04.681+00:00|INFO|ServiceManager|main] service manager starting Register pdp update listener
zookeeper | [2024-04-26 08:21:33,021] INFO Snapshotting: 0x0 to /var/lib/zookeeper/data/version-2/snapshot.0 (org.apache.zookeeper.server.persistence.FileTxnSnapLog) kafka | max.connections = 2147483647 policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-26T08:21:34.787051323Z level=info msg="Migration successfully executed" id="copy alert_rule_tag v1 to v2" duration=633.773µs zookeeper | [2024-04-26 08:21:33,022] INFO Snapshot taken in 1 ms (org.apache.zookeeper.server.ZooKeeperServer) policy-db-migrator | grafana | logger=migrator t=2024-04-26T08:21:34.791477051Z level=info msg="Executing migration" id="drop table alert_rule_tag_v1" policy-apex-pdp | [2024-04-26T08:22:04.681+00:00|INFO|ServiceManager|main] service manager starting Register pdp state change request dispatcher kafka | max.connections.per.ip = 2147483647 zookeeper | [2024-04-26 08:21:33,032] INFO zookeeper.request_throttler.shutdownTimeout = 10000 ms (org.apache.zookeeper.server.RequestThrottler) grafana | logger=migrator t=2024-04-26T08:21:34.792207259Z level=info msg="Migration successfully executed" id="drop table alert_rule_tag_v1" duration=732.017µs policy-apex-pdp | [2024-04-26T08:22:04.681+00:00|INFO|ServiceManager|main] service manager starting Message Dispatcher kafka | max.connections.per.ip.overrides = policy-db-migrator | zookeeper | [2024-04-26 08:21:33,033] INFO PrepRequestProcessor (sid:0) started, reconfigEnabled=false (org.apache.zookeeper.server.PrepRequestProcessor) policy-apex-pdp | [2024-04-26T08:22:04.682+00:00|INFO|TopicBase|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=385d2de3-e329-4c2e-8254-58c110e4f277, consumerInstance=policy-apex-pdp, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-pdp-pap,5,main], topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, 
useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: registering org.onap.policy.common.endpoints.listeners.MessageTypeDispatcher@60a2630a kafka | max.incremental.fetch.session.cache.slots = 1000 policy-db-migrator | > upgrade 0510-toscacapabilityassignment.sql grafana | logger=migrator t=2024-04-26T08:21:34.795235645Z level=info msg="Executing migration" id="create alert_notification table v1" zookeeper | [2024-04-26 08:21:33,047] INFO Using checkIntervalMs=60000 maxPerMinute=10000 maxNeverUsedIntervalMs=0 (org.apache.zookeeper.server.ContainerManager) kafka | message.max.bytes = 1048588 policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-26T08:21:34.796269328Z level=info msg="Migration successfully executed" id="create alert_notification table v1" duration=1.030154ms policy-apex-pdp | [2024-04-26T08:22:04.682+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=385d2de3-e329-4c2e-8254-58c110e4f277, consumerInstance=policy-apex-pdp, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-pdp-pap,5,main], topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: register: start not attempted zookeeper | [2024-04-26 08:21:33,048] INFO ZooKeeper audit is disabled. 
(org.apache.zookeeper.audit.ZKAuditProvider) policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilityassignment (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, type_name VARCHAR(255) NULL, type_version VARCHAR(255) NULL, PRIMARY KEY PK_TOSCACAPABILITYASSIGNMENT(name, version)) grafana | logger=migrator t=2024-04-26T08:21:34.801420104Z level=info msg="Executing migration" id="Add column is_default" policy-apex-pdp | [2024-04-26T08:22:04.682+00:00|INFO|ServiceManager|main] service manager starting Create REST server kafka | metadata.log.dir = null zookeeper | [2024-04-26 08:21:34,358] INFO Creating new log file: log.1 (org.apache.zookeeper.server.persistence.FileTxnLog) grafana | logger=migrator t=2024-04-26T08:21:34.805063742Z level=info msg="Migration successfully executed" id="Add column is_default" duration=3.643218ms policy-apex-pdp | [2024-04-26T08:22:04.694+00:00|INFO|OrderedServiceImpl|Timer-0] ***** OrderedServiceImpl implementers: kafka | metadata.log.max.record.bytes.between.snapshots = 20971520 policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-26T08:21:34.863234534Z level=info msg="Executing migration" id="Add column frequency" policy-apex-pdp | [] kafka | metadata.log.max.snapshot.interval.ms = 3600000 policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-db-migrator | grafana | logger=migrator t=2024-04-26T08:21:34.867487534Z level=info msg="Migration successfully executed" id="Add column frequency" duration=4.262541ms policy-apex-pdp | [2024-04-26T08:22:04.696+00:00|INFO|network|Timer-0] [OUT|KAFKA|policy-pdp-pap] kafka | metadata.log.segment.bytes = 1073741824 policy-db-migrator | grafana | logger=migrator t=2024-04-26T08:21:34.87575115Z level=info msg="Executing migration" id="Add column send_reminder" policy-apex-pdp | 
{"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"b46d631f-bb6e-4436-9510-4ccf91eae87a","timestampMs":1714119724681,"name":"apex-b183f0da-bf00-44a3-b3ae-398d8035a48d","pdpGroup":"defaultGroup"} kafka | metadata.log.segment.min.bytes = 8388608 policy-pap | policy-db-migrator | > upgrade 0520-toscacapabilityassignments.sql grafana | logger=migrator t=2024-04-26T08:21:34.878624658Z level=info msg="Migration successfully executed" id="Add column send_reminder" duration=2.873298ms policy-apex-pdp | [2024-04-26T08:22:04.837+00:00|INFO|ServiceManager|main] service manager starting Rest Server kafka | metadata.log.segment.ms = 604800000 policy-pap | [2024-04-26T08:22:01.122+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-26T08:21:34.881070995Z level=info msg="Executing migration" id="Add column disable_resolve_message" policy-apex-pdp | [2024-04-26T08:22:04.838+00:00|INFO|ServiceManager|main] service manager starting kafka | metadata.max.idle.interval.ms = 500 policy-pap | [2024-04-26T08:22:01.122+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilityassignments (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCACAPABILITYASSIGNMENTS (name, version)) grafana | logger=migrator t=2024-04-26T08:21:34.886126916Z level=info msg="Migration successfully executed" id="Add column disable_resolve_message" duration=5.054391ms policy-apex-pdp | [2024-04-26T08:22:04.838+00:00|INFO|ServiceManager|main] service manager starting REST RestServerParameters kafka | metadata.max.retention.bytes = 104857600 policy-pap | [2024-04-26T08:22:01.122+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1714119721120 policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-26T08:21:34.889435297Z level=info msg="Executing migration" id="add 
index alert_notification org_id & name" policy-apex-pdp | [2024-04-26T08:22:04.838+00:00|INFO|JettyServletServer|main] JettyJerseyServer [Jerseyservlets={/metrics=io.prometheus.client.servlet.jakarta.exporter.MetricsServlet-72b16078==io.prometheus.client.servlet.jakarta.exporter.MetricsServlet@aa16c20f{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:null,STOPPED}, /*=org.glassfish.jersey.servlet.ServletContainer-d78795==org.glassfish.jersey.servlet.ServletContainer@b1764d3c{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=swagger-6969, toString()=JettyServletServer(name=RestServerParameters, host=0.0.0.0, port=6969, sniHostCheck=false, user=policyadmin, password=zb!XztG34, contextPath=/, jettyServer=Server@72c927f1{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@1ac85b0c{/,null,STOPPED}, connector=RestServerParameters@63c5efee{HTTP/1.1, (http/1.1)}{0.0.0.0:6969}, jettyThread=null, servlets={/metrics=io.prometheus.client.servlet.jakarta.exporter.MetricsServlet-72b16078==io.prometheus.client.servlet.jakarta.exporter.MetricsServlet@aa16c20f{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:null,STOPPED}, /*=org.glassfish.jersey.servlet.ServletContainer-d78795==org.glassfish.jersey.servlet.ServletContainer@b1764d3c{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING kafka | metadata.max.retention.ms = 604800000 policy-db-migrator | grafana | logger=migrator t=2024-04-26T08:21:34.890186186Z level=info msg="Migration successfully executed" id="add index alert_notification org_id & name" duration=751.499µs policy-apex-pdp | [2024-04-26T08:22:04.849+00:00|INFO|ServiceManager|main] service manager started kafka | metric.reporters = [] policy-db-migrator | grafana | logger=migrator t=2024-04-26T08:21:34.894905959Z level=info msg="Executing migration" id="Update alert table charset" policy-apex-pdp | [2024-04-26T08:22:04.849+00:00|INFO|ServiceManager|main] service manager started kafka | 
metrics.num.samples = 2 policy-pap | [2024-04-26T08:22:01.131+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-db954cd2-8764-4a44-90af-3bb7f2069f83-1, groupId=db954cd2-8764-4a44-90af-3bb7f2069f83] Subscribed to topic(s): policy-pdp-pap policy-db-migrator | > upgrade 0530-toscacapabilityassignments_toscacapabilityassignment.sql grafana | logger=migrator t=2024-04-26T08:21:34.894938241Z level=info msg="Migration successfully executed" id="Update alert table charset" duration=32.432µs policy-apex-pdp | [2024-04-26T08:22:04.849+00:00|INFO|ApexStarterMain|main] Started policy-apex-pdp service successfully. kafka | metrics.recording.level = INFO policy-pap | [2024-04-26T08:22:01.132+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-26T08:21:34.899193431Z level=info msg="Executing migration" id="Update alert_notification table charset" kafka | metrics.sample.window.ms = 30000 policy-pap | allow.auto.create.topics = true policy-apex-pdp | [2024-04-26T08:22:04.849+00:00|INFO|JettyServletServer|RestServerParameters-6969] JettyJerseyServer [Jerseyservlets={/metrics=io.prometheus.client.servlet.jakarta.exporter.MetricsServlet-72b16078==io.prometheus.client.servlet.jakarta.exporter.MetricsServlet@aa16c20f{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:null,STOPPED}, /*=org.glassfish.jersey.servlet.ServletContainer-d78795==org.glassfish.jersey.servlet.ServletContainer@b1764d3c{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=swagger-6969, toString()=JettyServletServer(name=RestServerParameters, host=0.0.0.0, port=6969, sniHostCheck=false, user=policyadmin, password=zb!XztG34, contextPath=/, jettyServer=Server@72c927f1{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@1ac85b0c{/,null,STOPPED}, connector=RestServerParameters@63c5efee{HTTP/1.1, (http/1.1)}{0.0.0.0:6969}, jettyThread=Thread[RestServerParameters-6969,5,main], 
servlets={/metrics=io.prometheus.client.servlet.jakarta.exporter.MetricsServlet-72b16078==io.prometheus.client.servlet.jakarta.exporter.MetricsServlet@aa16c20f{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:null,STOPPED}, /*=org.glassfish.jersey.servlet.ServletContainer-d78795==org.glassfish.jersey.servlet.ServletContainer@b1764d3c{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilityassignments_toscacapabilityassignment (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCACAPABILITYASSIGNMENTS_TOSCACAPABILITYASSIGNMENT (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) grafana | logger=migrator t=2024-04-26T08:21:34.899221812Z level=info msg="Migration successfully executed" id="Update alert_notification table charset" duration=29.311µs grafana | logger=migrator t=2024-04-26T08:21:34.901428096Z level=info msg="Executing migration" id="create notification_journal table v1" policy-apex-pdp | [2024-04-26T08:22:05.013+00:00|INFO|Metadata|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-385d2de3-e329-4c2e-8254-58c110e4f277-2, groupId=385d2de3-e329-4c2e-8254-58c110e4f277] Cluster ID: qUquThiHQAKlsircSK68zw policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-26T08:21:34.902158603Z level=info msg="Migration successfully executed" id="create notification_journal table v1" duration=730.027µs grafana | logger=migrator t=2024-04-26T08:21:34.905657294Z level=info msg="Executing migration" id="add index notification_journal org_id & alert_id & notifier_id" policy-apex-pdp | [2024-04-26T08:22:05.013+00:00|INFO|Metadata|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Cluster ID: 
qUquThiHQAKlsircSK68zw policy-db-migrator | grafana | logger=migrator t=2024-04-26T08:21:34.906659665Z level=info msg="Migration successfully executed" id="add index notification_journal org_id & alert_id & notifier_id" duration=1.004561ms grafana | logger=migrator t=2024-04-26T08:21:34.914011755Z level=info msg="Executing migration" id="drop alert_notification_journal" policy-pap | auto.commit.interval.ms = 5000 policy-apex-pdp | [2024-04-26T08:22:05.014+00:00|INFO|TransactionManager|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] ProducerId set to 2 with epoch 0 policy-db-migrator | kafka | min.insync.replicas = 1 policy-pap | auto.include.jmx.reporter = true grafana | logger=migrator t=2024-04-26T08:21:34.914608116Z level=info msg="Migration successfully executed" id="drop alert_notification_journal" duration=596.341µs policy-apex-pdp | [2024-04-26T08:22:05.014+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-385d2de3-e329-4c2e-8254-58c110e4f277-2, groupId=385d2de3-e329-4c2e-8254-58c110e4f277] Discovered group coordinator kafka:9092 (id: 2147483646 rack: null) policy-db-migrator | > upgrade 0540-toscacapabilitytype.sql kafka | node.id = 1 policy-pap | auto.offset.reset = latest grafana | logger=migrator t=2024-04-26T08:21:34.921497752Z level=info msg="Executing migration" id="create alert_notification_state table v1" policy-apex-pdp | [2024-04-26T08:22:05.021+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-385d2de3-e329-4c2e-8254-58c110e4f277-2, groupId=385d2de3-e329-4c2e-8254-58c110e4f277] (Re-)joining group policy-db-migrator | -------------- kafka | num.io.threads = 8 policy-pap | bootstrap.servers = [kafka:9092] grafana | logger=migrator t=2024-04-26T08:21:34.922925376Z level=info msg="Migration successfully executed" id="create alert_notification_state table v1" duration=1.433643ms policy-apex-pdp | 
[2024-04-26T08:22:05.035+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-385d2de3-e329-4c2e-8254-58c110e4f277-2, groupId=385d2de3-e329-4c2e-8254-58c110e4f277] Request joining group due to: need to re-join with the given member-id: consumer-385d2de3-e329-4c2e-8254-58c110e4f277-2-623aa870-0e4f-4435-b6a7-fae0c0299f99 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilitytype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCACAPABILITYTYPE (name, version)) kafka | num.network.threads = 3 grafana | logger=migrator t=2024-04-26T08:21:34.925851826Z level=info msg="Executing migration" id="add index alert_notification_state org_id & alert_id & notifier_id" policy-apex-pdp | [2024-04-26T08:22:05.036+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-385d2de3-e329-4c2e-8254-58c110e4f277-2, groupId=385d2de3-e329-4c2e-8254-58c110e4f277] Request joining group due to: rebalance failed due to 'The group member needs to have a valid member id before actually entering a consumer group.' 
(MemberIdRequiredException) policy-db-migrator | -------------- kafka | num.partitions = 1 grafana | logger=migrator t=2024-04-26T08:21:34.926835907Z level=info msg="Migration successfully executed" id="add index alert_notification_state org_id & alert_id & notifier_id" duration=984.431µs policy-apex-pdp | [2024-04-26T08:22:05.036+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-385d2de3-e329-4c2e-8254-58c110e4f277-2, groupId=385d2de3-e329-4c2e-8254-58c110e4f277] (Re-)joining group policy-db-migrator | kafka | num.recovery.threads.per.data.dir = 1 grafana | logger=migrator t=2024-04-26T08:21:34.929901706Z level=info msg="Executing migration" id="Add for to alert table" policy-apex-pdp | [2024-04-26T08:22:05.412+00:00|INFO|GsonMessageBodyHandler|RestServerParameters-6969] Using GSON for REST calls policy-db-migrator | kafka | num.replica.alter.log.dirs.threads = null grafana | logger=migrator t=2024-04-26T08:21:34.933988247Z level=info msg="Migration successfully executed" id="Add for to alert table" duration=4.086521ms policy-apex-pdp | [2024-04-26T08:22:05.414+00:00|INFO|YamlMessageBodyHandler|RestServerParameters-6969] Accepting YAML for REST calls policy-db-migrator | > upgrade 0550-toscacapabilitytypes.sql kafka | num.replica.fetchers = 1 grafana | logger=migrator t=2024-04-26T08:21:34.939450538Z level=info msg="Executing migration" id="Add column uid in alert_notification" policy-apex-pdp | [2024-04-26T08:22:08.040+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-385d2de3-e329-4c2e-8254-58c110e4f277-2, groupId=385d2de3-e329-4c2e-8254-58c110e4f277] Successfully joined group with generation Generation{generationId=1, memberId='consumer-385d2de3-e329-4c2e-8254-58c110e4f277-2-623aa870-0e4f-4435-b6a7-fae0c0299f99', protocol='range'} policy-db-migrator | -------------- kafka | offset.metadata.max.bytes = 4096 grafana | logger=migrator t=2024-04-26T08:21:34.943242244Z level=info 
msg="Migration successfully executed" id="Add column uid in alert_notification" duration=3.791886ms policy-apex-pdp | [2024-04-26T08:22:08.049+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-385d2de3-e329-4c2e-8254-58c110e4f277-2, groupId=385d2de3-e329-4c2e-8254-58c110e4f277] Finished assignment for group at generation 1: {consumer-385d2de3-e329-4c2e-8254-58c110e4f277-2-623aa870-0e4f-4435-b6a7-fae0c0299f99=Assignment(partitions=[policy-pdp-pap-0])} policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilitytypes (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCACAPABILITYTYPES (name, version)) kafka | offsets.commit.required.acks = -1 grafana | logger=migrator t=2024-04-26T08:21:34.946077531Z level=info msg="Executing migration" id="Update uid column values in alert_notification" policy-apex-pdp | [2024-04-26T08:22:08.057+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-385d2de3-e329-4c2e-8254-58c110e4f277-2, groupId=385d2de3-e329-4c2e-8254-58c110e4f277] Successfully synced group in generation Generation{generationId=1, memberId='consumer-385d2de3-e329-4c2e-8254-58c110e4f277-2-623aa870-0e4f-4435-b6a7-fae0c0299f99', protocol='range'} policy-db-migrator | -------------- kafka | offsets.commit.timeout.ms = 5000 grafana | logger=migrator t=2024-04-26T08:21:34.946303802Z level=info msg="Migration successfully executed" id="Update uid column values in alert_notification" duration=226.062µs policy-apex-pdp | [2024-04-26T08:22:08.057+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-385d2de3-e329-4c2e-8254-58c110e4f277-2, groupId=385d2de3-e329-4c2e-8254-58c110e4f277] Notifying assignor about the new Assignment(partitions=[policy-pdp-pap-0]) policy-db-migrator | kafka | offsets.load.buffer.size = 5242880 grafana | logger=migrator t=2024-04-26T08:21:34.949107497Z level=info msg="Executing migration" id="Add unique index 
alert_notification_org_id_uid" policy-apex-pdp | [2024-04-26T08:22:08.058+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-385d2de3-e329-4c2e-8254-58c110e4f277-2, groupId=385d2de3-e329-4c2e-8254-58c110e4f277] Adding newly assigned partitions: policy-pdp-pap-0 policy-db-migrator | kafka | offsets.retention.check.interval.ms = 600000 grafana | logger=migrator t=2024-04-26T08:21:34.950758521Z level=info msg="Migration successfully executed" id="Add unique index alert_notification_org_id_uid" duration=1.651285ms policy-apex-pdp | [2024-04-26T08:22:08.066+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-385d2de3-e329-4c2e-8254-58c110e4f277-2, groupId=385d2de3-e329-4c2e-8254-58c110e4f277] Found no committed offset for partition policy-pdp-pap-0 policy-db-migrator | > upgrade 0560-toscacapabilitytypes_toscacapabilitytype.sql kafka | offsets.retention.minutes = 10080 grafana | logger=migrator t=2024-04-26T08:21:34.957017315Z level=info msg="Executing migration" id="Remove unique index org_id_name" policy-apex-pdp | [2024-04-26T08:22:08.077+00:00|INFO|SubscriptionState|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-385d2de3-e329-4c2e-8254-58c110e4f277-2, groupId=385d2de3-e329-4c2e-8254-58c110e4f277] Resetting offset for partition policy-pdp-pap-0 to position FetchPosition{offset=1, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[kafka:9092 (id: 1 rack: null)], epoch=0}}. 
policy-db-migrator | --------------
kafka | offsets.topic.compression.codec = 0
policy-pap | check.crcs = true
grafana | logger=migrator t=2024-04-26T08:21:34.958448488Z level=info msg="Migration successfully executed" id="Remove unique index org_id_name" duration=1.435924ms
policy-apex-pdp | [2024-04-26T08:22:24.682+00:00|INFO|network|Timer-0] [OUT|KAFKA|policy-pdp-pap]
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilitytypes_toscacapabilitytype (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCACAPABILITYTYPES_TOSCACAPABILITYTYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion))
kafka | offsets.topic.num.partitions = 50
policy-pap | client.dns.lookup = use_all_dns_ips
grafana | logger=migrator t=2024-04-26T08:21:34.965503573Z level=info msg="Executing migration" id="Add column secure_settings in alert_notification"
policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"7c7dae4d-eb28-477f-a313-371e5e410caf","timestampMs":1714119744682,"name":"apex-b183f0da-bf00-44a3-b3ae-398d8035a48d","pdpGroup":"defaultGroup"}
policy-db-migrator | --------------
kafka | offsets.topic.replication.factor = 1
policy-pap | client.id = consumer-policy-pap-2
grafana | logger=migrator t=2024-04-26T08:21:34.969338491Z level=info msg="Migration successfully executed" id="Add column secure_settings in alert_notification" duration=3.835208ms
policy-apex-pdp | [2024-04-26T08:22:24.706+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
policy-db-migrator |
kafka | offsets.topic.segment.bytes = 104857600
policy-pap | client.rack =
grafana | logger=migrator t=2024-04-26T08:21:34.972322365Z level=info msg="Executing migration" id="alter alert.settings to mediumtext"
policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"7c7dae4d-eb28-477f-a313-371e5e410caf","timestampMs":1714119744682,"name":"apex-b183f0da-bf00-44a3-b3ae-398d8035a48d","pdpGroup":"defaultGroup"}
policy-db-migrator |
kafka | password.encoder.cipher.algorithm = AES/CBC/PKCS5Padding
policy-pap | connections.max.idle.ms = 540000
grafana | logger=migrator t=2024-04-26T08:21:34.972418179Z level=info msg="Migration successfully executed" id="alter alert.settings to mediumtext" duration=91.955µs
policy-apex-pdp | [2024-04-26T08:22:24.708+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS
policy-db-migrator | > upgrade 0570-toscadatatype.sql
kafka | password.encoder.iterations = 4096
policy-pap | default.api.timeout.ms = 60000
grafana | logger=migrator t=2024-04-26T08:21:34.979554519Z level=info msg="Executing migration" id="Add non-unique index alert_notification_state_alert_id"
policy-apex-pdp | [2024-04-26T08:22:24.838+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
policy-db-migrator | --------------
kafka | password.encoder.key.length = 128
policy-pap | enable.auto.commit = true
grafana | logger=migrator t=2024-04-26T08:21:34.980900058Z level=info msg="Migration successfully executed" id="Add non-unique index alert_notification_state_alert_id" duration=1.35186ms
policy-apex-pdp | {"source":"pap-b4f6f8e5-f898-4e69-90e7-669877e7a07f","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"messageName":"PDP_UPDATE","requestId":"c089a3a3-4fc1-43c0-a7be-21299199c004","timestampMs":1714119744793,"name":"apex-b183f0da-bf00-44a3-b3ae-398d8035a48d","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscadatatype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCADATATYPE (name, version))
kafka | password.encoder.keyfactory.algorithm = null
policy-pap | exclude.internal.topics = true
grafana | logger=migrator t=2024-04-26T08:21:34.985050611Z level=info msg="Executing migration" id="Add non-unique index alert_rule_tag_alert_id"
policy-apex-pdp | [2024-04-26T08:22:24.850+00:00|WARN|Registry|KAFKA-source-policy-pdp-pap] replacing previously registered: object:pdp/status/publisher
policy-db-migrator | --------------
kafka | password.encoder.old.secret = null
grafana | logger=migrator t=2024-04-26T08:21:34.986056254Z level=info msg="Migration successfully executed" id="Add non-unique index alert_rule_tag_alert_id" duration=1.005083ms
policy-apex-pdp | [2024-04-26T08:22:24.850+00:00|INFO|network|Timer-1] [OUT|KAFKA|policy-pdp-pap]
policy-db-migrator |
kafka | password.encoder.secret = null
policy-pap | fetch.max.bytes = 52428800
grafana | logger=migrator t=2024-04-26T08:21:34.99063998Z level=info msg="Executing migration" id="Drop old annotation table v4"
policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"46151d45-9ff7-40dd-999c-96d4f36448f0","timestampMs":1714119744849,"name":"apex-b183f0da-bf00-44a3-b3ae-398d8035a48d","pdpGroup":"defaultGroup"}
policy-db-migrator |
kafka | principal.builder.class = class org.apache.kafka.common.security.authenticator.DefaultKafkaPrincipalBuilder
policy-pap | fetch.max.wait.ms = 500
grafana | logger=migrator t=2024-04-26T08:21:34.990735956Z level=info msg="Migration successfully executed" id="Drop old annotation table v4" duration=97.016µs
policy-apex-pdp | [2024-04-26T08:22:24.850+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [OUT|KAFKA|policy-pdp-pap]
policy-db-migrator | > upgrade 0580-toscadatatypes.sql
kafka | process.roles = []
policy-pap | fetch.min.bytes = 1
grafana | logger=migrator t=2024-04-26T08:21:34.995162844Z level=info msg="Executing migration" id="create annotation table v5"
policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"c089a3a3-4fc1-43c0-a7be-21299199c004","responseStatus":"SUCCESS","responseMessage":"Pdp update successful."},"messageName":"PDP_STATUS","requestId":"9e21325e-0675-4fb1-917c-73db7541fd22","timestampMs":1714119744850,"name":"apex-b183f0da-bf00-44a3-b3ae-398d8035a48d","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
policy-db-migrator | --------------
kafka | producer.id.expiration.check.interval.ms = 600000
policy-pap | group.id = policy-pap
grafana | logger=migrator t=2024-04-26T08:21:34.996668371Z level=info msg="Migration successfully executed" id="create annotation table v5" duration=1.504007ms
policy-apex-pdp | [2024-04-26T08:22:24.867+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscadatatypes (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCADATATYPES (name, version))
kafka | producer.id.expiration.ms = 86400000
policy-pap | group.instance.id = null
grafana | logger=migrator t=2024-04-26T08:21:35.00536081Z level=info msg="Executing migration" id="add index annotation 0 v3"
policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"c089a3a3-4fc1-43c0-a7be-21299199c004","responseStatus":"SUCCESS","responseMessage":"Pdp update successful."},"messageName":"PDP_STATUS","requestId":"9e21325e-0675-4fb1-917c-73db7541fd22","timestampMs":1714119744850,"name":"apex-b183f0da-bf00-44a3-b3ae-398d8035a48d","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
policy-db-migrator | --------------
kafka | producer.purgatory.purge.interval.requests = 1000
policy-pap | heartbeat.interval.ms = 3000
grafana | logger=migrator t=2024-04-26T08:21:35.007069908Z level=info msg="Migration successfully executed" id="add index annotation 0 v3" duration=1.711749ms
policy-apex-pdp | [2024-04-26T08:22:24.867+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS
policy-db-migrator | 
kafka | queued.max.request.bytes = -1
policy-pap | interceptor.classes = []
grafana | logger=migrator t=2024-04-26T08:21:35.012465837Z level=info msg="Executing migration" id="add index annotation 1 v3"
policy-db-migrator | 
policy-apex-pdp | [2024-04-26T08:22:24.873+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
kafka | queued.max.requests = 500
grafana | logger=migrator t=2024-04-26T08:21:35.013910642Z level=info msg="Migration successfully executed" id="add index annotation 1 v3" duration=1.445235ms
policy-db-migrator | > upgrade 0590-toscadatatypes_toscadatatype.sql
policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"46151d45-9ff7-40dd-999c-96d4f36448f0","timestampMs":1714119744849,"name":"apex-b183f0da-bf00-44a3-b3ae-398d8035a48d","pdpGroup":"defaultGroup"}
kafka | quota.window.num = 11
policy-pap | internal.leave.group.on.close = true
grafana | logger=migrator t=2024-04-26T08:21:35.017619143Z level=info msg="Executing migration" id="add index annotation 2 v3"
policy-apex-pdp | [2024-04-26T08:22:24.874+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS
policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false
grafana | logger=migrator t=2024-04-26T08:21:35.018512199Z level=info msg="Migration successfully executed" id="add index annotation 2 v3" duration=889.345µs
policy-db-migrator | --------------
kafka | quota.window.size.seconds = 1
policy-apex-pdp | [2024-04-26T08:22:24.894+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
policy-pap | isolation.level = read_uncommitted
grafana | logger=migrator t=2024-04-26T08:21:35.023990432Z level=info msg="Executing migration" id="add index annotation 3 v3"
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscadatatypes_toscadatatype (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCADATATYPES_TOSCADATATYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion))
kafka | remote.log.index.file.cache.total.size.bytes = 1073741824
policy-apex-pdp | {"source":"pap-b4f6f8e5-f898-4e69-90e7-669877e7a07f","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"a5f1d0a2-79e5-4903-b04d-2fc825203dbc","timestampMs":1714119744793,"name":"apex-b183f0da-bf00-44a3-b3ae-398d8035a48d","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
grafana | logger=migrator t=2024-04-26T08:21:35.025609115Z level=info msg="Migration successfully executed" id="add index annotation 3 v3" duration=1.618364ms
policy-db-migrator | --------------
kafka | remote.log.manager.task.interval.ms = 30000
policy-apex-pdp | [2024-04-26T08:22:24.897+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [OUT|KAFKA|policy-pdp-pap]
policy-pap | max.partition.fetch.bytes = 1048576
grafana | logger=migrator t=2024-04-26T08:21:35.028959118Z level=info msg="Executing migration" id="add index annotation 4 v3"
policy-db-migrator | 
kafka | remote.log.manager.task.retry.backoff.max.ms = 30000
policy-apex-pdp | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpStateChange","policies":[],"response":{"responseTo":"a5f1d0a2-79e5-4903-b04d-2fc825203dbc","responseStatus":"SUCCESS","responseMessage":"State changed to active. No policies found."},"messageName":"PDP_STATUS","requestId":"a7018436-53c5-4d20-9150-d26e4bf63ebb","timestampMs":1714119744897,"name":"apex-b183f0da-bf00-44a3-b3ae-398d8035a48d","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
kafka | remote.log.manager.task.retry.backoff.ms = 500
policy-apex-pdp | [2024-04-26T08:22:24.905+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
grafana | logger=migrator t=2024-04-26T08:21:35.030541599Z level=info msg="Migration successfully executed" id="add index annotation 4 v3" duration=1.581831ms
policy-db-migrator | 
policy-pap | max.poll.interval.ms = 300000
kafka | remote.log.manager.task.retry.jitter = 0.2
policy-apex-pdp | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpStateChange","policies":[],"response":{"responseTo":"a5f1d0a2-79e5-4903-b04d-2fc825203dbc","responseStatus":"SUCCESS","responseMessage":"State changed to active. No policies found."},"messageName":"PDP_STATUS","requestId":"a7018436-53c5-4d20-9150-d26e4bf63ebb","timestampMs":1714119744897,"name":"apex-b183f0da-bf00-44a3-b3ae-398d8035a48d","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
grafana | logger=migrator t=2024-04-26T08:21:35.038835478Z level=info msg="Executing migration" id="Update annotation table charset"
policy-db-migrator | > upgrade 0600-toscanodetemplate.sql
policy-pap | max.poll.records = 500
kafka | remote.log.manager.thread.pool.size = 10
policy-apex-pdp | [2024-04-26T08:22:24.905+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS
grafana | logger=migrator t=2024-04-26T08:21:35.038869859Z level=info msg="Migration successfully executed" id="Update annotation table charset" duration=38.792µs
policy-db-migrator | --------------
policy-pap | metadata.max.age.ms = 300000
kafka | remote.log.metadata.custom.metadata.max.bytes = 128
policy-apex-pdp | [2024-04-26T08:22:24.943+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
grafana | logger=migrator t=2024-04-26T08:21:35.046983968Z level=info msg="Executing migration" id="Add column region_id to annotation table"
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetemplate (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, type_name VARCHAR(255) NULL, type_version VARCHAR(255) NULL, capabilitiesName VARCHAR(120) NULL, capabilitiesVersion VARCHAR(20) NULL, requirementsVersion VARCHAR(20) NULL, requirementsName VARCHAR(120) NULL, PRIMARY KEY PK_TOSCANODETEMPLATE (name, version))
policy-pap | metric.reporters = []
kafka | remote.log.metadata.manager.class.name = org.apache.kafka.server.log.remote.metadata.storage.TopicBasedRemoteLogMetadataManager
policy-apex-pdp | {"source":"pap-b4f6f8e5-f898-4e69-90e7-669877e7a07f","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"4a11c98e-6791-453e-9808-0827aeaec0c3","timestampMs":1714119744908,"name":"apex-b183f0da-bf00-44a3-b3ae-398d8035a48d","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
grafana | logger=migrator t=2024-04-26T08:21:35.052785648Z level=info msg="Migration successfully executed" id="Add column region_id to annotation table" duration=5.793519ms
policy-db-migrator | --------------
policy-pap | metrics.num.samples = 2
kafka | remote.log.metadata.manager.class.path = null
policy-apex-pdp | [2024-04-26T08:22:24.945+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [OUT|KAFKA|policy-pdp-pap]
grafana | logger=migrator t=2024-04-26T08:21:35.057609497Z level=info msg="Executing migration" id="Drop category_id index"
policy-db-migrator | 
policy-pap | metrics.recording.level = INFO
kafka | remote.log.metadata.manager.impl.prefix = rlmm.config.
policy-apex-pdp | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"4a11c98e-6791-453e-9808-0827aeaec0c3","responseStatus":"SUCCESS","responseMessage":"Pdp already updated"},"messageName":"PDP_STATUS","requestId":"e3874a72-08db-45fe-aceb-34c903ea4e7e","timestampMs":1714119744944,"name":"apex-b183f0da-bf00-44a3-b3ae-398d8035a48d","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
grafana | logger=migrator t=2024-04-26T08:21:35.05864157Z level=info msg="Migration successfully executed" id="Drop category_id index" duration=1.033713ms
policy-db-migrator | 
policy-pap | metrics.sample.window.ms = 30000
kafka | remote.log.metadata.manager.listener.name = null
policy-apex-pdp | [2024-04-26T08:22:24.953+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
grafana | logger=migrator t=2024-04-26T08:21:35.062574833Z level=info msg="Executing migration" id="Add column tags to annotation table"
policy-db-migrator | > upgrade 0610-toscanodetemplates.sql
policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor]
kafka | remote.log.reader.max.pending.tasks = 100
policy-apex-pdp | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"4a11c98e-6791-453e-9808-0827aeaec0c3","responseStatus":"SUCCESS","responseMessage":"Pdp already updated"},"messageName":"PDP_STATUS","requestId":"e3874a72-08db-45fe-aceb-34c903ea4e7e","timestampMs":1714119744944,"name":"apex-b183f0da-bf00-44a3-b3ae-398d8035a48d","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
grafana | logger=migrator t=2024-04-26T08:21:35.067986422Z level=info msg="Migration successfully executed" id="Add column tags to annotation table" duration=5.409119ms
policy-db-migrator | --------------
policy-pap | receive.buffer.bytes = 65536
kafka | remote.log.reader.threads = 10
policy-apex-pdp | [2024-04-26T08:22:24.954+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS
grafana | logger=migrator t=2024-04-26T08:21:35.071542646Z level=info msg="Executing migration" id="Create annotation_tag table v2"
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetemplates (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCANODETEMPLATES (name, version))
kafka | remote.log.storage.manager.class.name = null
policy-apex-pdp | [2024-04-26T08:22:56.149+00:00|INFO|RequestLog|qtp739264372-33] 172.17.0.3 - policyadmin [26/Apr/2024:08:22:56 +0000] "GET /metrics HTTP/1.1" 200 10650 "-" "Prometheus/2.51.2"
grafana | logger=migrator t=2024-04-26T08:21:35.07219985Z level=info msg="Migration successfully executed" id="Create annotation_tag table v2" duration=656.044µs
kafka | remote.log.storage.manager.class.path = null
policy-apex-pdp | [2024-04-26T08:23:56.075+00:00|INFO|RequestLog|qtp739264372-28] 172.17.0.3 - policyadmin [26/Apr/2024:08:23:56 +0000] "GET /metrics HTTP/1.1" 200 10649 "-" "Prometheus/2.51.2"
grafana | logger=migrator t=2024-04-26T08:21:35.076563215Z level=info msg="Executing migration" id="Add unique index annotation_tag.annotation_id_tag_id"
policy-db-migrator | --------------
policy-pap | reconnect.backoff.max.ms = 1000
kafka | remote.log.storage.manager.impl.prefix = rsm.config.
grafana | logger=migrator t=2024-04-26T08:21:35.077558917Z level=info msg="Migration successfully executed" id="Add unique index annotation_tag.annotation_id_tag_id" duration=994.202µs
policy-db-migrator | 
policy-pap | reconnect.backoff.ms = 50
kafka | remote.log.storage.system.enable = false
grafana | logger=migrator t=2024-04-26T08:21:35.081027796Z level=info msg="Executing migration" id="drop index UQE_annotation_tag_annotation_id_tag_id - v2"
policy-db-migrator | 
policy-pap | request.timeout.ms = 30000
kafka | replica.fetch.backoff.ms = 1000
grafana | logger=migrator t=2024-04-26T08:21:35.082194395Z level=info msg="Migration successfully executed" id="drop index UQE_annotation_tag_annotation_id_tag_id - v2" duration=1.170309ms
policy-db-migrator | > upgrade 0620-toscanodetemplates_toscanodetemplate.sql
policy-pap | retry.backoff.ms = 100
kafka | replica.fetch.max.bytes = 1048576
policy-db-migrator | --------------
policy-pap | sasl.client.callback.handler.class = null
grafana | logger=migrator t=2024-04-26T08:21:35.091663205Z level=info msg="Executing migration" id="Rename table annotation_tag to annotation_tag_v2 - v2"
grafana | logger=migrator t=2024-04-26T08:21:35.103063323Z level=info msg="Migration successfully executed" id="Rename table annotation_tag to annotation_tag_v2 - v2" duration=11.381177ms
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetemplates_toscanodetemplate (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCANODETEMPLATES_TOSCANODETEMPLATE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion))
policy-pap | sasl.jaas.config = null
grafana | logger=migrator t=2024-04-26T08:21:35.108012948Z level=info msg="Executing migration" id="Create annotation_tag table v3"
grafana | logger=migrator t=2024-04-26T08:21:35.109082814Z level=info msg="Migration successfully executed" id="Create annotation_tag table v3" duration=1.073746ms
policy-db-migrator | --------------
kafka | replica.fetch.min.bytes = 1
policy-db-migrator | 
grafana | logger=migrator t=2024-04-26T08:21:35.114201838Z level=info msg="Executing migration" id="create index UQE_annotation_tag_annotation_id_tag_id - Add unique index annotation_tag.annotation_id_tag_id V3"
kafka | replica.fetch.response.max.bytes = 10485760
policy-db-migrator | 
grafana | logger=migrator t=2024-04-26T08:21:35.115200979Z level=info msg="Migration successfully executed" id="create index UQE_annotation_tag_annotation_id_tag_id - Add unique index annotation_tag.annotation_id_tag_id V3" duration=998.981µs
policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit
kafka | replica.fetch.wait.max.ms = 500
policy-db-migrator | > upgrade 0630-toscanodetype.sql
grafana | logger=migrator t=2024-04-26T08:21:35.121451632Z level=info msg="Executing migration" id="copy annotation_tag v2 to v3"
policy-pap | sasl.kerberos.min.time.before.relogin = 60000
kafka | replica.high.watermark.checkpoint.interval.ms = 5000
policy-db-migrator | --------------
grafana | logger=migrator t=2024-04-26T08:21:35.121833742Z level=info msg="Migration successfully executed" id="copy annotation_tag v2 to v3" duration=387.421µs
policy-pap | sasl.kerberos.service.name = null
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, requirementsVersion VARCHAR(20) NULL, requirementsName VARCHAR(120) NULL, PRIMARY KEY PK_TOSCANODETYPE (name, version))
kafka | replica.lag.time.max.ms = 30000
kafka | replica.selector.class = null
kafka | replica.socket.receive.buffer.bytes = 65536
kafka | replica.socket.timeout.ms = 30000
policy-db-migrator | --------------
policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05
kafka | replication.quota.window.num = 11
kafka | replication.quota.window.size.seconds = 1
policy-db-migrator | 
policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8
kafka | request.timeout.ms = 30000
kafka | reserved.broker.max.id = 1000
policy-db-migrator | 
kafka | sasl.client.callback.handler.class = null
kafka | sasl.enabled.mechanisms = [GSSAPI]
policy-db-migrator | > upgrade 0640-toscanodetypes.sql
policy-pap | sasl.login.callback.handler.class = null
kafka | sasl.jaas.config = null
grafana | logger=migrator t=2024-04-26T08:21:35.126134854Z level=info msg="Executing migration" id="drop table annotation_tag_v2"
policy-db-migrator | --------------
policy-pap | sasl.login.class = null
kafka | sasl.kerberos.kinit.cmd = /usr/bin/kinit
kafka | sasl.kerberos.min.time.before.relogin = 60000
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetypes (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCANODETYPES (name, version))
policy-pap | sasl.login.connect.timeout.ms = null
kafka | sasl.kerberos.principal.to.local.rules = [DEFAULT]
kafka | sasl.kerberos.service.name = null
policy-db-migrator | --------------
kafka | sasl.kerberos.ticket.renew.jitter = 0.05
kafka | sasl.kerberos.ticket.renew.window.factor = 0.8
policy-db-migrator | 
kafka | sasl.login.callback.handler.class = null
kafka | sasl.login.class = null
policy-db-migrator | 
kafka | sasl.login.connect.timeout.ms = null
kafka | sasl.login.read.timeout.ms = null
policy-db-migrator | > upgrade 0650-toscanodetypes_toscanodetype.sql
kafka | sasl.login.refresh.buffer.seconds = 300
kafka | sasl.login.refresh.min.period.seconds = 60
policy-db-migrator | --------------
kafka | sasl.login.refresh.window.factor = 0.8
kafka | sasl.login.refresh.window.jitter = 0.05
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetypes_toscanodetype (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCANODETYPES_TOSCANODETYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion))
policy-pap | sasl.login.read.timeout.ms = null
kafka | sasl.login.retry.backoff.max.ms = 10000
kafka | sasl.login.retry.backoff.ms = 100
policy-db-migrator | --------------
policy-pap | sasl.login.refresh.buffer.seconds = 300
kafka | sasl.mechanism.controller.protocol = GSSAPI
kafka | sasl.mechanism.inter.broker.protocol = GSSAPI
policy-db-migrator | 
policy-pap | sasl.login.refresh.min.period.seconds = 60
kafka | sasl.oauthbearer.clock.skew.seconds = 30
kafka | sasl.oauthbearer.expected.audience = null
policy-db-migrator | 
kafka | sasl.oauthbearer.expected.issuer = null
kafka | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
policy-db-migrator | > upgrade 0660-toscaparameter.sql
policy-pap | sasl.login.refresh.window.factor = 0.8
kafka | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
kafka | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
policy-db-migrator | --------------
policy-pap | sasl.login.refresh.window.jitter = 0.05
kafka | sasl.oauthbearer.jwks.endpoint.url = null
kafka | sasl.oauthbearer.scope.claim.name = scope
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscaparameter (VALUE VARCHAR(255) NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCAPARAMETER (parentLocalName, localName, parentKeyVersion, parentKeyName))
kafka | sasl.oauthbearer.sub.claim.name = sub
kafka | sasl.oauthbearer.token.endpoint.url = null
policy-db-migrator | --------------
kafka | sasl.server.callback.handler.class = null
kafka | sasl.server.max.receive.size = 524288
kafka | security.inter.broker.protocol = PLAINTEXT
kafka | security.providers = null
policy-db-migrator | 
kafka | server.max.startup.time.ms = 9223372036854775807
kafka | socket.connection.setup.timeout.max.ms = 30000
policy-db-migrator | 
policy-pap | sasl.login.retry.backoff.max.ms = 10000
kafka | socket.connection.setup.timeout.ms = 10000
kafka | socket.listen.backlog.size = 50
policy-db-migrator | > upgrade 0670-toscapolicies.sql
policy-pap | sasl.login.retry.backoff.ms = 100
kafka | socket.receive.buffer.bytes = 102400
grafana | logger=migrator t=2024-04-26T08:21:35.126852471Z level=info msg="Migration successfully executed" id="drop table annotation_tag_v2" duration=717.717µs
policy-db-migrator | --------------
grafana | logger=migrator t=2024-04-26T08:21:35.130638176Z level=info msg="Executing migration" id="Update alert annotations and set TEXT to empty"
grafana | logger=migrator t=2024-04-26T08:21:35.131071308Z level=info msg="Migration successfully executed" id="Update alert annotations and set TEXT to empty" duration=433.903µs
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicies (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCAPOLICIES (name, version))
kafka | socket.request.max.bytes = 104857600
policy-pap | sasl.mechanism = GSSAPI
grafana | logger=migrator t=2024-04-26T08:21:35.136097188Z level=info msg="Executing migration" id="Add created time to annotation table"
policy-db-migrator | --------------
kafka | socket.send.buffer.bytes = 102400
grafana | logger=migrator t=2024-04-26T08:21:35.143627486Z level=info msg="Migration successfully executed" id="Add created time to annotation table" duration=7.525549ms
policy-db-migrator | 
kafka | ssl.cipher.suites = []
grafana | logger=migrator t=2024-04-26T08:21:35.146743377Z level=info msg="Executing migration" id="Add updated time to annotation table"
policy-db-migrator | 
kafka | ssl.client.auth = none
grafana | logger=migrator t=2024-04-26T08:21:35.149883629Z level=info msg="Migration successfully executed" id="Add updated time to annotation table" duration=3.139002ms
policy-db-migrator | > upgrade 0680-toscapolicies_toscapolicy.sql
kafka | ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
grafana | logger=migrator t=2024-04-26T08:21:35.152638452Z level=info msg="Executing migration" id="Add index for created in annotation table"
policy-db-migrator | --------------
kafka | ssl.endpoint.identification.algorithm = https
grafana | logger=migrator t=2024-04-26T08:21:35.153613212Z level=info msg="Migration successfully executed" id="Add index for created in annotation table" duration=974.69µs
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicies_toscapolicy (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCAPOLICIES_TOSCAPOLICY (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion))
kafka | ssl.engine.factory.class = null
grafana | logger=migrator t=2024-04-26T08:21:35.158520645Z level=info msg="Executing migration" id="Add index for updated in annotation table"
policy-db-migrator | --------------
kafka | ssl.key.password = null
grafana | logger=migrator t=2024-04-26T08:21:35.159596001Z level=info msg="Migration successfully executed" id="Add index for updated in annotation table" duration=1.075356ms
policy-db-migrator | 
kafka | ssl.keymanager.algorithm = SunX509
grafana | logger=migrator t=2024-04-26T08:21:35.16288736Z level=info msg="Executing migration" id="Convert existing annotations from seconds to milliseconds"
policy-db-migrator | 
kafka | ssl.keystore.certificate.chain = null
grafana | logger=migrator t=2024-04-26T08:21:35.163192926Z level=info msg="Migration successfully executed" id="Convert existing annotations from seconds to milliseconds" duration=305.866µs
policy-db-migrator | > upgrade 0690-toscapolicy.sql
kafka | ssl.keystore.key = null
grafana | logger=migrator t=2024-04-26T08:21:35.168667839Z level=info msg="Executing migration" id="Add epoch_end column"
policy-db-migrator | --------------
kafka | ssl.keystore.location = null
grafana | logger=migrator t=2024-04-26T08:21:35.172704167Z level=info msg="Migration successfully executed" id="Add epoch_end column" duration=4.036398ms
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicy (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, type_name VARCHAR(255) NULL, type_version VARCHAR(255) NULL, PRIMARY KEY PK_TOSCAPOLICY (name, version))
kafka | ssl.keystore.password = null
grafana | logger=migrator t=2024-04-26T08:21:35.175840919Z level=info msg="Executing migration" id="Add index for epoch_end"
policy-db-migrator | --------------
kafka | ssl.keystore.type = JKS
grafana | logger=migrator t=2024-04-26T08:21:35.176646731Z level=info msg="Migration successfully executed" id="Add index for epoch_end" duration=805.91µs
policy-db-migrator | 
kafka | ssl.principal.mapping.rules = DEFAULT
grafana | logger=migrator t=2024-04-26T08:21:35.182118833Z level=info msg="Executing migration" id="Make epoch_end the same as epoch"
policy-db-migrator | 
kafka | ssl.protocol = TLSv1.3
grafana | logger=migrator t=2024-04-26T08:21:35.182337474Z level=info msg="Migration successfully executed" id="Make epoch_end the same as epoch" duration=218.671µs
policy-db-migrator | > upgrade 0700-toscapolicytype.sql
kafka | ssl.provider = null
grafana | logger=migrator t=2024-04-26T08:21:35.186092469Z level=info msg="Executing migration" id="Move region to single row"
policy-db-migrator | --------------
kafka | ssl.secure.random.implementation = null
grafana | logger=migrator t=2024-04-26T08:21:35.186492339Z level=info msg="Migration successfully executed" id="Move region to single row" duration=399.91µs
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicytype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCAPOLICYTYPE (name, version))
kafka | ssl.trustmanager.algorithm = PKIX
grafana | logger=migrator t=2024-04-26T08:21:35.193008935Z level=info msg="Executing migration" id="Remove index org_id_epoch from annotation table"
policy-db-migrator | --------------
kafka | ssl.truststore.certificates = null
grafana | logger=migrator t=2024-04-26T08:21:35.193764525Z level=info msg="Migration successfully executed" id="Remove index org_id_epoch from annotation table" duration=756.219µs
policy-db-migrator | 
kafka | ssl.truststore.location = null
grafana | logger=migrator t=2024-04-26T08:21:35.198621644Z level=info msg="Executing migration" id="Remove index org_id_dashboard_id_panel_id_epoch from annotation table"
policy-db-migrator | 
kafka | ssl.truststore.password = null
grafana | logger=migrator t=2024-04-26T08:21:35.199331932Z level=info msg="Migration successfully executed" id="Remove index org_id_dashboard_id_panel_id_epoch from annotation table" duration=709.678µs
policy-pap | sasl.oauthbearer.clock.skew.seconds = 30
kafka | ssl.truststore.type = JKS
policy-db-migrator | > upgrade 0710-toscapolicytypes.sql
grafana | logger=migrator t=2024-04-26T08:21:35.20279141Z level=info msg="Executing migration" id="Add index for org_id_dashboard_id_epoch_end_epoch on annotation table"
kafka | transaction.abort.timed.out.transaction.cleanup.interval.ms = 10000
policy-db-migrator | --------------
grafana | logger=migrator t=2024-04-26T08:21:35.203770811Z level=info msg="Migration successfully executed" id="Add index for org_id_dashboard_id_epoch_end_epoch on annotation table" duration=978.942µs
kafka | transaction.max.timeout.ms = 900000
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicytypes (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCAPOLICYTYPES (name, version))
grafana | logger=migrator t=2024-04-26T08:21:35.207902924Z level=info msg="Executing migration" id="Add index for org_id_epoch_end_epoch on annotation table"
kafka | transaction.partition.verification.enable = true
policy-db-migrator | --------------
grafana | logger=migrator t=2024-04-26T08:21:35.20880512Z level=info msg="Migration successfully executed" id="Add index for org_id_epoch_end_epoch on annotation table" duration=901.546µs
kafka | transaction.remove.expired.transaction.cleanup.interval.ms = 3600000
policy-db-migrator | 
grafana | logger=migrator t=2024-04-26T08:21:35.212969605Z level=info msg="Executing migration" id="Remove index org_id_epoch_epoch_end from annotation table"
kafka | transaction.state.log.load.buffer.size = 5242880
policy-db-migrator | 
grafana | logger=migrator t=2024-04-26T08:21:35.214351157Z level=info msg="Migration successfully executed" id="Remove index org_id_epoch_epoch_end from annotation table" duration=1.381222ms
kafka | transaction.state.log.min.isr = 2
policy-db-migrator | > upgrade 0720-toscapolicytypes_toscapolicytype.sql
grafana | logger=migrator t=2024-04-26T08:21:35.21790351Z level=info msg="Executing migration" id="Add index for alert_id on annotation table"
kafka | transaction.state.log.num.partitions = 50
policy-db-migrator | --------------
grafana | logger=migrator t=2024-04-26T08:21:35.219004507Z level=info msg="Migration successfully executed" id="Add index for alert_id on annotation table" duration=1.104997ms
kafka | transaction.state.log.replication.factor = 3
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicytypes_toscapolicytype (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCAPOLICYTYPES_TOSCAPOLICYTYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion))
grafana | logger=migrator t=2024-04-26T08:21:35.222119348Z level=info msg="Executing migration" id="Increase tags column to length 4096"
kafka | transaction.state.log.segment.bytes = 104857600
policy-db-migrator | --------------
grafana | logger=migrator t=2024-04-26T08:21:35.222186321Z level=info msg="Migration successfully executed" id="Increase tags column to length 4096" duration=67.544µs
kafka | transactional.id.expiration.ms = 604800000
policy-db-migrator | 
grafana | logger=migrator t=2024-04-26T08:21:35.228337759Z level=info msg="Executing migration" id="create test_data table"
kafka | unclean.leader.election.enable = false
policy-db-migrator | 
grafana | logger=migrator t=2024-04-26T08:21:35.230213846Z level=info msg="Migration successfully executed" id="create test_data table" duration=1.875668ms
kafka | unstable.api.versions.enable = false
policy-db-migrator | > upgrade 0730-toscaproperty.sql
grafana | logger=migrator t=2024-04-26T08:21:35.236959443Z level=info msg="Executing migration" id="create dashboard_version table v1"
kafka | zookeeper.clientCnxnSocket = null
policy-db-migrator | --------------
grafana | logger=migrator t=2024-04-26T08:21:35.238653541Z level=info msg="Migration successfully executed" id="create dashboard_version table v1" duration=1.702528ms
kafka | zookeeper.connect = zookeeper:2181
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscaproperty (DEFAULTVALUE VARCHAR(255) DEFAULT NULL, `DESCRIPTION` VARCHAR(255) DEFAULT NULL, ENTRYSCHEMA LONGBLOB DEFAULT NULL, REQUIRED BOOLEAN DEFAULT 0, STATUS INT DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, PRIMARY KEY PK_TOSCAPROPERTY (parentLocalName, localName, parentKeyVersion, parentKeyName))
grafana | logger=migrator t=2024-04-26T08:21:35.243481451Z level=info msg="Executing migration" id="add index dashboard_version.dashboard_id"
kafka | zookeeper.connection.timeout.ms = null
policy-db-migrator | --------------
grafana | logger=migrator t=2024-04-26T08:21:35.2450202Z level=info msg="Migration successfully executed" id="add index dashboard_version.dashboard_id" duration=1.53888ms
policy-pap | sasl.oauthbearer.expected.audience = null
kafka | zookeeper.max.in.flight.requests = 10
policy-db-migrator | 
grafana | logger=migrator t=2024-04-26T08:21:35.248668148Z level=info msg="Executing migration" id="add unique index dashboard_version.dashboard_id and dashboard_version.version"
kafka | zookeeper.metadata.migration.enable = false
kafka | zookeeper.metadata.migration.min.batch.size = 200
grafana | logger=migrator t=2024-04-26T08:21:35.249780326Z level=info msg="Migration successfully executed" id="add unique index dashboard_version.dashboard_id and dashboard_version.version" duration=1.112898ms
policy-db-migrator | 
grafana | logger=migrator t=2024-04-26T08:21:35.254806075Z level=info msg="Executing migration" id="Set dashboard version to 1 where 0"
grafana | logger=migrator t=2024-04-26T08:21:35.254997175Z level=info msg="Migration successfully executed" id="Set dashboard version to 1 where 0" duration=192.189µs
policy-pap | sasl.oauthbearer.expected.issuer = null
policy-db-migrator | > upgrade 0740-toscarelationshiptype.sql
grafana | logger=migrator t=2024-04-26T08:21:35.258270794Z level=info msg="Executing migration" id="save existing dashboard data in dashboard_version table v1"
grafana | logger=migrator t=2024-04-26T08:21:35.258650653Z level=info msg="Migration successfully executed" id="save existing dashboard data in dashboard_version table v1" duration=379.649µs
policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
policy-db-migrator | --------------
grafana | logger=migrator t=2024-04-26T08:21:35.262110121Z level=info msg="Executing migration" id="alter dashboard_version.data to mediumtext v1"
grafana | logger=migrator t=2024-04-26T08:21:35.262178405Z level=info msg="Migration successfully executed" id="alter dashboard_version.data to mediumtext v1" duration=66.764µs
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarelationshiptype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCARELATIONSHIPTYPE (name, version))
grafana | logger=migrator t=2024-04-26T08:21:35.266448885Z level=info msg="Executing migration" id="create team table"
grafana | logger=migrator t=2024-04-26T08:21:35.266997864Z level=info msg="Migration successfully executed" id="create team table" duration=548.669µs
policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
policy-db-migrator | --------------
grafana | logger=migrator t=2024-04-26T08:21:35.276111764Z level=info msg="Executing migration" id="add index team.org_id"
grafana | logger=migrator t=2024-04-26T08:21:35.277151288Z level=info msg="Migration successfully executed" id="add index team.org_id" duration=1.039604ms
policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
policy-db-migrator | 
grafana | logger=migrator t=2024-04-26T08:21:35.280389115Z level=info msg="Executing migration" id="add unique index team_org_id_name"
grafana | logger=migrator t=2024-04-26T08:21:35.28184931Z level=info msg="Migration successfully executed" id="add unique index team_org_id_name" duration=1.460035ms
policy-db-migrator | 
grafana | logger=migrator t=2024-04-26T08:21:35.2855186Z level=info msg="Executing migration" id="Add column uid in team"
grafana | logger=migrator t=2024-04-26T08:21:35.290637714Z level=info msg="Migration successfully executed" id="Add column uid in team" duration=5.120164ms
policy-db-migrator | > upgrade 0750-toscarelationshiptypes.sql
grafana | logger=migrator t=2024-04-26T08:21:35.301173188Z level=info msg="Executing migration" id="Update uid column values in team"
grafana | logger=migrator t=2024-04-26T08:21:35.301838732Z level=info msg="Migration successfully executed" id="Update uid column values in team" duration=673.485µs
policy-db-migrator | --------------
grafana | logger=migrator t=2024-04-26T08:21:35.309323499Z level=info msg="Executing migration" id="Add unique index team_org_id_uid"
grafana | logger=migrator t=2024-04-26T08:21:35.310658208Z level=info msg="Migration successfully executed" id="Add unique index team_org_id_uid" duration=1.334179ms
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarelationshiptypes (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCARELATIONSHIPTYPES (name, version))
grafana | logger=migrator t=2024-04-26T08:21:35.32098604Z level=info msg="Executing migration" id="create team member table"
grafana | logger=migrator t=2024-04-26T08:21:35.321860506Z level=info msg="Migration successfully executed" id="create team member table" duration=874.546µs
policy-db-migrator | --------------
grafana | logger=migrator t=2024-04-26T08:21:35.328674837Z level=info msg="Executing migration" id="add index team_member.org_id"
grafana | logger=migrator t=2024-04-26T08:21:35.329901551Z level=info msg="Migration successfully executed" id="add index team_member.org_id" duration=1.226433ms
policy-db-migrator | 
grafana | logger=migrator t=2024-04-26T08:21:35.336581736Z level=info msg="Executing migration" id="add unique index team_member_org_id_team_id_user_id"
grafana | logger=migrator t=2024-04-26T08:21:35.337410258Z level=info msg="Migration successfully executed" id="add unique index team_member_org_id_team_id_user_id" duration=828.002µs
policy-db-migrator | 
grafana | logger=migrator t=2024-04-26T08:21:35.341434075Z level=info msg="Executing migration" id="add index team_member.team_id"
grafana | logger=migrator t=2024-04-26T08:21:35.342172534Z level=info msg="Migration successfully executed" id="add index team_member.team_id" duration=738.019µs
policy-db-migrator | > upgrade 0760-toscarelationshiptypes_toscarelationshiptype.sql
grafana | logger=migrator t=2024-04-26T08:21:35.346228793Z level=info msg="Executing migration" id="Add column email to team table"
grafana | logger=migrator t=2024-04-26T08:21:35.351163008Z level=info msg="Migration successfully executed" id="Add column email to team table" duration=4.933104ms
policy-db-migrator | --------------
grafana | logger=migrator t=2024-04-26T08:21:35.354571394Z level=info msg="Executing migration" id="Add column external to team_member table"
grafana | logger=migrator t=2024-04-26T08:21:35.359249185Z level=info msg="Migration successfully executed" id="Add column external to team_member table" duration=4.676682ms
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarelationshiptypes_toscarelationshiptype (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCARELATIONSHIPTYPES_TOSCARELATIONSHIPTYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion))
grafana | logger=migrator t=2024-04-26T08:21:35.362648841Z level=info msg="Executing migration" id="Add column permission to team_member table"
grafana | logger=migrator t=2024-04-26T08:21:35.367219356Z level=info msg="Migration successfully executed" id="Add column permission to team_member table" duration=4.569285ms
policy-db-migrator | --------------
grafana | logger=migrator t=2024-04-26T08:21:35.371254065Z level=info msg="Executing migration" id="create dashboard acl table"
grafana | logger=migrator t=2024-04-26T08:21:35.372193723Z level=info msg="Migration successfully executed" id="create dashboard acl table" duration=939.378µs
grafana | logger=migrator t=2024-04-26T08:21:35.378241185Z level=info msg="Executing
migration" id="add index dashboard_acl_dashboard_id" grafana | logger=migrator t=2024-04-26T08:21:35.379312801Z level=info msg="Migration successfully executed" id="add index dashboard_acl_dashboard_id" duration=1.071856ms policy-db-migrator | grafana | logger=migrator t=2024-04-26T08:21:35.384590113Z level=info msg="Executing migration" id="add unique index dashboard_acl_dashboard_id_user_id" grafana | logger=migrator t=2024-04-26T08:21:35.385778295Z level=info msg="Migration successfully executed" id="add unique index dashboard_acl_dashboard_id_user_id" duration=1.187292ms policy-db-migrator | grafana | logger=migrator t=2024-04-26T08:21:35.397247756Z level=info msg="Executing migration" id="add unique index dashboard_acl_dashboard_id_team_id" grafana | logger=migrator t=2024-04-26T08:21:35.399407438Z level=info msg="Migration successfully executed" id="add unique index dashboard_acl_dashboard_id_team_id" duration=2.159861ms policy-db-migrator | > upgrade 0770-toscarequirement.sql grafana | logger=migrator t=2024-04-26T08:21:35.404619787Z level=info msg="Executing migration" id="add index dashboard_acl_user_id" grafana | logger=migrator t=2024-04-26T08:21:35.406551996Z level=info msg="Migration successfully executed" id="add index dashboard_acl_user_id" duration=1.934799ms policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-26T08:21:35.409834636Z level=info msg="Executing migration" id="add index dashboard_acl_team_id" grafana | logger=migrator t=2024-04-26T08:21:35.410755693Z level=info msg="Migration successfully executed" id="add index dashboard_acl_team_id" duration=920.407µs policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarequirement (CAPABILITY VARCHAR(255) NULL, `DESCRIPTION` VARCHAR(255) NULL, NODE VARCHAR(255) NULL, RELATIONSHIP VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, type_name VARCHAR(255) NULL, type_version 
VARCHAR(255) NULL, PRIMARY KEY PK_TOSCAREQUIREMENT (name, version)) grafana | logger=migrator t=2024-04-26T08:21:35.41513748Z level=info msg="Executing migration" id="add index dashboard_acl_org_id_role" grafana | logger=migrator t=2024-04-26T08:21:35.41631047Z level=info msg="Migration successfully executed" id="add index dashboard_acl_org_id_role" duration=1.17156ms policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-26T08:21:35.419678154Z level=info msg="Executing migration" id="add index dashboard_permission" grafana | logger=migrator t=2024-04-26T08:21:35.420943739Z level=info msg="Migration successfully executed" id="add index dashboard_permission" duration=1.264905ms policy-db-migrator | grafana | logger=migrator t=2024-04-26T08:21:35.424517554Z level=info msg="Executing migration" id="save default acl rules in dashboard_acl table" grafana | logger=migrator t=2024-04-26T08:21:35.424967027Z level=info msg="Migration successfully executed" id="save default acl rules in dashboard_acl table" duration=448.723µs policy-db-migrator | kafka | zookeeper.session.timeout.ms = 18000 policy-db-migrator | > upgrade 0780-toscarequirements.sql kafka | zookeeper.set.acl = false kafka | zookeeper.ssl.cipher.suites = null policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-26T08:21:35.4294844Z level=info msg="Executing migration" id="delete acl rules for deleted dashboards and folders" policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarequirements (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCAREQUIREMENTS (name, version)) kafka | zookeeper.ssl.client.enable = false grafana | logger=migrator t=2024-04-26T08:21:35.429825078Z level=info msg="Migration successfully executed" id="delete acl rules for deleted dashboards and folders" duration=340.838µs policy-db-migrator | -------------- kafka | zookeeper.ssl.crl.enable = false grafana | logger=migrator t=2024-04-26T08:21:35.435144382Z level=info 
msg="Executing migration" id="create tag table" policy-db-migrator | kafka | zookeeper.ssl.enabled.protocols = null grafana | logger=migrator t=2024-04-26T08:21:35.436866492Z level=info msg="Migration successfully executed" id="create tag table" duration=1.721449ms policy-db-migrator | kafka | zookeeper.ssl.endpoint.identification.algorithm = HTTPS grafana | logger=migrator t=2024-04-26T08:21:35.446247415Z level=info msg="Executing migration" id="add index tag.key_value" policy-db-migrator | > upgrade 0790-toscarequirements_toscarequirement.sql kafka | zookeeper.ssl.keystore.location = null grafana | logger=migrator t=2024-04-26T08:21:35.447464498Z level=info msg="Migration successfully executed" id="add index tag.key_value" duration=1.218973ms policy-db-migrator | -------------- kafka | zookeeper.ssl.keystore.password = null grafana | logger=migrator t=2024-04-26T08:21:35.455161456Z level=info msg="Executing migration" id="create login attempt table" policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarequirements_toscarequirement (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCAREQUIREMENTS_TOSCAREQUIREMENT (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) kafka | zookeeper.ssl.keystore.type = null grafana | logger=migrator t=2024-04-26T08:21:35.456112114Z level=info msg="Migration successfully executed" id="create login attempt table" duration=950.928µs policy-db-migrator | -------------- kafka | zookeeper.ssl.ocsp.enable = false grafana | logger=migrator t=2024-04-26T08:21:35.460028527Z level=info msg="Executing migration" id="add index login_attempt.username" policy-db-migrator | kafka | zookeeper.ssl.protocol = TLSv1.2 grafana | logger=migrator t=2024-04-26T08:21:35.460940994Z level=info msg="Migration 
successfully executed" id="add index login_attempt.username" duration=912.517µs policy-db-migrator | kafka | zookeeper.ssl.truststore.location = null grafana | logger=migrator t=2024-04-26T08:21:35.465191013Z level=info msg="Executing migration" id="drop index IDX_login_attempt_username - v1" policy-db-migrator | > upgrade 0800-toscaservicetemplate.sql kafka | zookeeper.ssl.truststore.password = null grafana | logger=migrator t=2024-04-26T08:21:35.466239848Z level=info msg="Migration successfully executed" id="drop index IDX_login_attempt_username - v1" duration=1.049415ms policy-db-migrator | -------------- kafka | zookeeper.ssl.truststore.type = null policy-db-migrator | CREATE TABLE IF NOT EXISTS toscaservicetemplate (`DESCRIPTION` VARCHAR(255) NULL, TOSCADEFINITIONSVERSION VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, capabilityTypesVersion VARCHAR(20) NULL, capabilityTypesName VARCHAR(120) NULL, dataTypesName VARCHAR(120) NULL, dataTypesVersion VARCHAR(20) NULL, nodeTypesVersion VARCHAR(20) NULL, nodeTypesName VARCHAR(120) NULL, policyTypesName VARCHAR(120) NULL, policyTypesVersion VARCHAR(20) NULL, relationshipTypesVersion VARCHAR(20) NULL, relationshipTypesName VARCHAR(120) NULL, topologyTemplateLocalName VARCHAR(120) NULL, topologyTemplateParentKeyName VARCHAR(120) NULL, topologyTemplateParentKeyVersion VARCHAR(15) NULL, topologyTemplateParentLocalName VARCHAR(120) NULL, PRIMARY KEY PK_TOSCASERVICETEMPLATE (name, version)) kafka | (kafka.server.KafkaConfig) kafka | [2024-04-26 08:21:36,122] INFO [ThrottledChannelReaper-Produce]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper) policy-db-migrator | -------------- kafka | [2024-04-26 08:21:36,123] INFO [ThrottledChannelReaper-Fetch]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper) kafka | [2024-04-26 08:21:36,136] INFO [ThrottledChannelReaper-ControllerMutation]: 
Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper) policy-pap | sasl.oauthbearer.jwks.endpoint.url = null policy-db-migrator | kafka | [2024-04-26 08:21:36,129] INFO [ThrottledChannelReaper-Request]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper) kafka | [2024-04-26 08:21:36,171] INFO Loading logs from log dirs ArraySeq(/var/lib/kafka/data) (kafka.log.LogManager) policy-pap | sasl.oauthbearer.scope.claim.name = scope policy-db-migrator | kafka | [2024-04-26 08:21:36,175] INFO No logs found to be loaded in /var/lib/kafka/data (kafka.log.LogManager) kafka | [2024-04-26 08:21:36,184] INFO Loaded 0 logs in 13ms (kafka.log.LogManager) kafka | [2024-04-26 08:21:36,186] INFO Starting log cleanup with a period of 300000 ms. (kafka.log.LogManager) kafka | [2024-04-26 08:21:36,187] INFO Starting log flusher with a default period of 9223372036854775807 ms. (kafka.log.LogManager) policy-db-migrator | > upgrade 0810-toscatopologytemplate.sql kafka | [2024-04-26 08:21:36,200] INFO Starting the log cleaner (kafka.log.LogCleaner) kafka | [2024-04-26 08:21:36,246] INFO [kafka-log-cleaner-thread-0]: Starting (kafka.log.LogCleaner$CleanerThread) policy-db-migrator | -------------- kafka | [2024-04-26 08:21:36,265] INFO [feature-zk-node-event-process-thread]: Starting (kafka.server.FinalizedFeatureChangeListener$ChangeNotificationProcessorThread) kafka | [2024-04-26 08:21:36,282] INFO Feature ZK node at path: /feature does not exist (kafka.server.FinalizedFeatureChangeListener) policy-db-migrator | CREATE TABLE IF NOT EXISTS toscatopologytemplate (`description` VARCHAR(255) NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, nodeTemplatessVersion VARCHAR(20) NULL, nodeTemplatesName VARCHAR(120) NULL, policyVersion VARCHAR(20) NULL, policyName VARCHAR(120) NULL, PRIMARY KEY PK_TOSCATOPOLOGYTEMPLATE (parentLocalName, localName, parentKeyVersion, 
parentKeyName)) kafka | [2024-04-26 08:21:36,347] INFO [zk-broker-1-to-controller-forwarding-channel-manager]: Starting (kafka.server.BrokerToControllerRequestThread) kafka | [2024-04-26 08:21:36,712] INFO Updated connection-accept-rate max connection creation rate to 2147483647 (kafka.network.ConnectionQuotas) policy-db-migrator | -------------- kafka | [2024-04-26 08:21:36,734] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1] Created data-plane acceptor and processors for endpoint : ListenerName(PLAINTEXT) (kafka.network.SocketServer) kafka | [2024-04-26 08:21:36,734] INFO Updated connection-accept-rate max connection creation rate to 2147483647 (kafka.network.ConnectionQuotas) policy-db-migrator | kafka | [2024-04-26 08:21:36,740] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1] Created data-plane acceptor and processors for endpoint : ListenerName(PLAINTEXT_HOST) (kafka.network.SocketServer) kafka | [2024-04-26 08:21:36,745] INFO [zk-broker-1-to-controller-alter-partition-channel-manager]: Starting (kafka.server.BrokerToControllerRequestThread) policy-db-migrator | kafka | [2024-04-26 08:21:36,770] INFO [ExpirationReaper-1-Produce]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) kafka | [2024-04-26 08:21:36,772] INFO [ExpirationReaper-1-Fetch]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) policy-db-migrator | > upgrade 0820-toscatrigger.sql kafka | [2024-04-26 08:21:36,775] INFO [ExpirationReaper-1-DeleteRecords]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) kafka | [2024-04-26 08:21:36,775] INFO [ExpirationReaper-1-ElectLeader]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) policy-db-migrator | -------------- policy-pap | sasl.oauthbearer.sub.claim.name = sub kafka | [2024-04-26 08:21:36,777] INFO [ExpirationReaper-1-RemoteFetch]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) kafka | [2024-04-26 08:21:36,791] INFO 
[LogDirFailureHandler]: Starting (kafka.server.ReplicaManager$LogDirFailureHandler) kafka | [2024-04-26 08:21:36,793] INFO [AddPartitionsToTxnSenderThread-1]: Starting (kafka.server.AddPartitionsToTxnManager) kafka | [2024-04-26 08:21:36,824] INFO Creating /brokers/ids/1 (is it secure? false) (kafka.zk.KafkaZkClient) policy-db-migrator | CREATE TABLE IF NOT EXISTS toscatrigger (ACTION VARCHAR(255) NULL, toscaCondition LONGBLOB DEFAULT NULL, toscaConstraint LONGBLOB DEFAULT NULL, `DESCRIPTION` VARCHAR(255) NULL, EVALUATIONS INT DEFAULT NULL, EVENTTYPE VARCHAR(255) NULL, METHOD VARCHAR(255) NULL, `PERIOD` LONGBLOB DEFAULT NULL, SCHEDULE LONGBLOB DEFAULT NULL, TARGETFILTER LONGBLOB DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_TOSCATRIGGER (parentLocalName, localName, parentKeyVersion, parentKeyName)) policy-pap | sasl.oauthbearer.token.endpoint.url = null kafka | [2024-04-26 08:21:36,850] INFO Stat of the created znode at /brokers/ids/1 is: 27,27,1714119696840,1714119696840,1,0,0,72057609718923265,258,0,27 kafka | (kafka.zk.KafkaZkClient) policy-db-migrator | -------------- policy-pap | security.protocol = PLAINTEXT grafana | logger=migrator t=2024-04-26T08:21:35.471635855Z level=info msg="Executing migration" id="Rename table login_attempt to login_attempt_tmp_qwerty - v1" kafka | [2024-04-26 08:21:36,852] INFO Registered broker 1 at path /brokers/ids/1 with addresses: PLAINTEXT://kafka:9092,PLAINTEXT_HOST://localhost:29092, czxid (broker epoch): 27 (kafka.zk.KafkaZkClient) policy-db-migrator | policy-pap | security.providers = null grafana | logger=migrator t=2024-04-26T08:21:35.485975106Z level=info msg="Migration successfully executed" id="Rename table login_attempt to login_attempt_tmp_qwerty - v1" duration=14.332811ms kafka | [2024-04-26 08:21:36,922] INFO [ControllerEventThread controllerId=1] Starting 
(kafka.controller.ControllerEventManager$ControllerEventThread) policy-db-migrator | policy-pap | send.buffer.bytes = 131072 grafana | logger=migrator t=2024-04-26T08:21:35.48973645Z level=info msg="Executing migration" id="create login_attempt v2" kafka | [2024-04-26 08:21:36,929] INFO [ExpirationReaper-1-topic]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) policy-db-migrator | > upgrade 0830-FK_ToscaNodeTemplate_capabilitiesName.sql policy-pap | session.timeout.ms = 45000 kafka | [2024-04-26 08:21:36,937] INFO [ExpirationReaper-1-Heartbeat]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) policy-db-migrator | -------------- policy-pap | socket.connection.setup.timeout.max.ms = 30000 grafana | logger=migrator t=2024-04-26T08:21:35.490355412Z level=info msg="Migration successfully executed" id="create login_attempt v2" duration=614.702µs kafka | [2024-04-26 08:21:36,937] INFO [ExpirationReaper-1-Rebalance]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) policy-db-migrator | CREATE INDEX FK_ToscaNodeTemplate_capabilitiesName ON toscanodetemplate(capabilitiesName, capabilitiesVersion) policy-pap | socket.connection.setup.timeout.ms = 10000 grafana | logger=migrator t=2024-04-26T08:21:35.494845594Z level=info msg="Executing migration" id="create index IDX_login_attempt_username - v2" kafka | [2024-04-26 08:21:36,945] INFO Successfully created /controller_epoch with initial epoch 0 (kafka.zk.KafkaZkClient) policy-db-migrator | -------------- policy-pap | ssl.cipher.suites = null grafana | logger=migrator t=2024-04-26T08:21:35.495528849Z level=info msg="Migration successfully executed" id="create index IDX_login_attempt_username - v2" duration=683.265µs kafka | [2024-04-26 08:21:36,957] INFO [Controller id=1] 1 successfully elected as the controller. 
Epoch incremented to 1 and epoch zk version is now 1 (kafka.controller.KafkaController) policy-db-migrator | policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] grafana | logger=migrator t=2024-04-26T08:21:35.500305275Z level=info msg="Executing migration" id="copy login_attempt v1 to v2" kafka | [2024-04-26 08:21:36,961] INFO [Controller id=1] Creating FeatureZNode at path: /feature with contents: FeatureZNode(2,Enabled,Map()) (kafka.controller.KafkaController) policy-db-migrator | policy-pap | ssl.endpoint.identification.algorithm = https kafka | [2024-04-26 08:21:36,963] INFO [GroupCoordinator 1]: Starting up. (kafka.coordinator.group.GroupCoordinator) policy-db-migrator | > upgrade 0840-FK_ToscaNodeTemplate_requirementsName.sql policy-pap | ssl.engine.factory.class = null policy-db-migrator | -------------- policy-pap | ssl.key.password = null kafka | [2024-04-26 08:21:36,966] INFO Feature ZK node created at path: /feature (kafka.server.FinalizedFeatureChangeListener) policy-db-migrator | CREATE INDEX FK_ToscaNodeTemplate_requirementsName ON toscanodetemplate(requirementsName, requirementsVersion) policy-pap | ssl.keymanager.algorithm = SunX509 kafka | [2024-04-26 08:21:36,969] INFO [GroupCoordinator 1]: Startup complete. (kafka.coordinator.group.GroupCoordinator) policy-db-migrator | -------------- policy-pap | ssl.keystore.certificate.chain = null kafka | [2024-04-26 08:21:36,988] INFO [TransactionCoordinator id=1] Starting up. (kafka.coordinator.transaction.TransactionCoordinator) policy-db-migrator | grafana | logger=migrator t=2024-04-26T08:21:35.500912687Z level=info msg="Migration successfully executed" id="copy login_attempt v1 to v2" duration=606.961µs kafka | [2024-04-26 08:21:36,991] INFO [TxnMarkerSenderThread-1]: Starting (kafka.coordinator.transaction.TransactionMarkerChannelManager) policy-pap | ssl.keystore.key = null policy-db-migrator | kafka | [2024-04-26 08:21:36,991] INFO [TransactionCoordinator id=1] Startup complete. 
(kafka.coordinator.transaction.TransactionCoordinator) policy-db-migrator | > upgrade 0850-FK_ToscaNodeType_requirementsName.sql policy-pap | ssl.keystore.location = null kafka | [2024-04-26 08:21:36,998] INFO [MetadataCache brokerId=1] Updated cache from existing None to latest Features(version=3.6-IV2, finalizedFeatures={}, finalizedFeaturesEpoch=0). (kafka.server.metadata.ZkMetadataCache) policy-pap | ssl.keystore.password = null policy-db-migrator | -------------- kafka | [2024-04-26 08:21:36,998] INFO [Controller id=1] Registering handlers (kafka.controller.KafkaController) policy-db-migrator | CREATE INDEX FK_ToscaNodeType_requirementsName ON toscanodetype(requirementsName, requirementsVersion) policy-pap | ssl.keystore.type = JKS kafka | [2024-04-26 08:21:37,006] INFO [Controller id=1] Deleting log dir event notifications (kafka.controller.KafkaController) policy-pap | ssl.protocol = TLSv1.3 policy-db-migrator | -------------- kafka | [2024-04-26 08:21:37,013] INFO [Controller id=1] Deleting isr change notifications (kafka.controller.KafkaController) policy-db-migrator | policy-pap | ssl.provider = null policy-db-migrator | policy-pap | ssl.secure.random.implementation = null kafka | [2024-04-26 08:21:37,018] INFO [Controller id=1] Initializing controller context (kafka.controller.KafkaController) policy-pap | ssl.trustmanager.algorithm = PKIX policy-db-migrator | > upgrade 0860-FK_ToscaServiceTemplate_capabilityTypesName.sql kafka | [2024-04-26 08:21:37,035] INFO [ExpirationReaper-1-AlterAcls]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) policy-pap | ssl.truststore.certificates = null kafka | [2024-04-26 08:21:37,048] INFO [Controller id=1] Initialized broker epochs cache: HashMap(1 -> 27) (kafka.controller.KafkaController) policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-26T08:21:35.504432168Z level=info msg="Executing migration" id="drop login_attempt_tmp_qwerty" policy-pap | ssl.truststore.location = 
null kafka | [2024-04-26 08:21:37,063] DEBUG [Controller id=1] Register BrokerModifications handler for Set(1) (kafka.controller.KafkaController) policy-db-migrator | CREATE INDEX FK_ToscaServiceTemplate_capabilityTypesName ON toscaservicetemplate(capabilityTypesName, capabilityTypesVersion) grafana | logger=migrator t=2024-04-26T08:21:35.505059181Z level=info msg="Migration successfully executed" id="drop login_attempt_tmp_qwerty" duration=626.823µs policy-pap | ssl.truststore.password = null kafka | [2024-04-26 08:21:37,070] DEBUG [Channel manager on controller 1]: Controller 1 trying to connect to broker 1 (kafka.controller.ControllerChannelManager) policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-26T08:21:35.509409275Z level=info msg="Executing migration" id="create user auth table" policy-pap | ssl.truststore.type = JKS kafka | [2024-04-26 08:21:37,074] INFO [/config/changes-event-process-thread]: Starting (kafka.common.ZkNodeChangeNotificationListener$ChangeEventProcessThread) policy-db-migrator | grafana | logger=migrator t=2024-04-26T08:21:35.510213767Z level=info msg="Migration successfully executed" id="create user auth table" duration=803.902µs policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer kafka | [2024-04-26 08:21:37,086] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1] Enabling request processing. 
(kafka.network.SocketServer) policy-db-migrator | policy-pap | policy-db-migrator | > upgrade 0870-FK_ToscaServiceTemplate_dataTypesName.sql kafka | [2024-04-26 08:21:37,087] INFO [Controller id=1] Currently active brokers in the cluster: Set(1) (kafka.controller.KafkaController) grafana | logger=migrator t=2024-04-26T08:21:35.514304618Z level=info msg="Executing migration" id="create index IDX_user_auth_auth_module_auth_id - v1" policy-pap | [2024-04-26T08:22:01.138+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 policy-db-migrator | -------------- kafka | [2024-04-26 08:21:37,087] INFO [Controller id=1] Currently shutting brokers in the cluster: HashSet() (kafka.controller.KafkaController) grafana | logger=migrator t=2024-04-26T08:21:35.515276239Z level=info msg="Migration successfully executed" id="create index IDX_user_auth_auth_module_auth_id - v1" duration=969.71µs policy-pap | [2024-04-26T08:22:01.138+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 policy-db-migrator | CREATE INDEX FK_ToscaServiceTemplate_dataTypesName ON toscaservicetemplate(dataTypesName, dataTypesVersion) kafka | [2024-04-26 08:21:37,088] INFO [Controller id=1] Current list of topics in the cluster: HashSet() (kafka.controller.KafkaController) policy-pap | [2024-04-26T08:22:01.138+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1714119721138 kafka | [2024-04-26 08:21:37,088] INFO [Controller id=1] Fetching topic deletions in progress (kafka.controller.KafkaController) policy-db-migrator | -------------- policy-pap | [2024-04-26T08:22:01.139+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-policy-pap-2, groupId=policy-pap] Subscribed to topic(s): policy-pdp-pap policy-db-migrator | kafka | [2024-04-26 08:21:37,089] INFO [RequestSendThread controllerId=1] Starting (kafka.controller.RequestSendThread) grafana | logger=migrator t=2024-04-26T08:21:35.522297121Z level=info msg="Executing migration" id="alter user_auth.auth_id to length 190" policy-pap | 
[2024-04-26T08:22:01.507+00:00|INFO|PapDatabaseInitializer|main] Created initial pdpGroup in DB - PdpGroups(groups=[PdpGroup(name=defaultGroup, description=The default group that registers all supported policy types and pdps., pdpGroupState=ACTIVE, properties=null, pdpSubgroups=[PdpSubGroup(pdpType=apex, supportedPolicyTypes=[onap.policies.controlloop.operational.common.Apex 1.0.0, onap.policies.native.Apex 1.0.0], policies=[], currentInstanceCount=0, desiredInstanceCount=1, properties=null, pdpInstances=null)])]) from /opt/app/policy/pap/etc/mounted/groups.json policy-db-migrator | kafka | [2024-04-26 08:21:37,091] INFO [Controller id=1] List of topics to be deleted: (kafka.controller.KafkaController) policy-db-migrator | > upgrade 0880-FK_ToscaServiceTemplate_nodeTypesName.sql policy-pap | [2024-04-26T08:22:01.721+00:00|WARN|JpaBaseConfiguration$JpaWebConfiguration|main] spring.jpa.open-in-view is enabled by default. Therefore, database queries may be performed during view rendering. Explicitly configure spring.jpa.open-in-view to disable this warning kafka | [2024-04-26 08:21:37,092] INFO Awaiting socket connections on 0.0.0.0:9092. 
(kafka.network.DataPlaneAcceptor)
policy-db-migrator | --------------
kafka | [2024-04-26 08:21:37,092] INFO [Controller id=1] List of topics ineligible for deletion: (kafka.controller.KafkaController)
policy-pap | [2024-04-26T08:22:01.995+00:00|INFO|DefaultSecurityFilterChain|main] Will secure any request with [org.springframework.security.web.session.DisableEncodeUrlFilter@cd93621, org.springframework.security.web.context.request.async.WebAsyncManagerIntegrationFilter@3b1137b0, org.springframework.security.web.context.SecurityContextHolderFilter@20f99c18, org.springframework.security.web.header.HeaderWriterFilter@28269c65, org.springframework.security.web.authentication.logout.LogoutFilter@5ffdd510, org.springframework.security.web.authentication.www.BasicAuthenticationFilter@1870b9b8, org.springframework.security.web.savedrequest.RequestCacheAwareFilter@76e2a621, org.springframework.security.web.servletapi.SecurityContextHolderAwareRequestFilter@2e7517aa, org.springframework.security.web.authentication.AnonymousAuthenticationFilter@21ba0d33, org.springframework.security.web.access.ExceptionTranslationFilter@20518250, org.springframework.security.web.access.intercept.AuthorizationFilter@912747d]
kafka | [2024-04-26 08:21:37,093] INFO [Controller id=1] Initializing topic deletion manager (kafka.controller.KafkaController)
policy-db-migrator | CREATE INDEX FK_ToscaServiceTemplate_nodeTypesName ON toscaservicetemplate(nodeTypesName, nodeTypesVersion)
kafka | [2024-04-26 08:21:37,094] INFO [Topic Deletion Manager 1] Initializing manager with initial deletions: Set(), initial ineligible deletions: HashSet() (kafka.controller.TopicDeletionManager)
policy-pap | [2024-04-26T08:22:02.775+00:00|INFO|EndpointLinksResolver|main] Exposing 3 endpoint(s) beneath base path ''
policy-db-migrator | --------------
kafka | [2024-04-26 08:21:37,097] INFO Awaiting socket connections on 0.0.0.0:29092. (kafka.network.DataPlaneAcceptor)
policy-db-migrator |
policy-pap | [2024-04-26T08:22:02.866+00:00|INFO|Http11NioProtocol|main] Starting ProtocolHandler ["http-nio-6969"]
kafka | [2024-04-26 08:21:37,097] INFO [Controller id=1] Sending update metadata request (kafka.controller.KafkaController)
policy-pap | [2024-04-26T08:22:02.899+00:00|INFO|TomcatWebServer|main] Tomcat started on port(s): 6969 (http) with context path '/policy/pap/v1'
policy-db-migrator |
policy-pap | [2024-04-26T08:22:02.915+00:00|INFO|ServiceManager|main] Policy PAP starting
kafka | [2024-04-26 08:21:37,100] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet(1) for 0 partitions (state.change.logger)
policy-pap | [2024-04-26T08:22:02.916+00:00|INFO|ServiceManager|main] Policy PAP starting Meter Registry
policy-db-migrator | > upgrade 0890-FK_ToscaServiceTemplate_policyTypesName.sql
policy-pap | [2024-04-26T08:22:02.916+00:00|INFO|ServiceManager|main] Policy PAP starting PAP parameters
kafka | [2024-04-26 08:21:37,110] INFO Kafka version: 7.6.1-ccs (org.apache.kafka.common.utils.AppInfoParser)
policy-db-migrator | --------------
policy-pap | [2024-04-26T08:22:02.917+00:00|INFO|ServiceManager|main] Policy PAP starting Pdp Heartbeat Listener
grafana | logger=migrator t=2024-04-26T08:21:35.522560104Z level=info msg="Migration successfully executed" id="alter user_auth.auth_id to length 190" duration=274.394µs
kafka | [2024-04-26 08:21:37,110] INFO Kafka commitId: 11e81ad2a49db00b1d2b8c731409cd09e563de67 (org.apache.kafka.common.utils.AppInfoParser)
policy-db-migrator | CREATE INDEX FK_ToscaServiceTemplate_policyTypesName ON toscaservicetemplate(policyTypesName, policyTypesVersion)
policy-pap | [2024-04-26T08:22:02.917+00:00|INFO|ServiceManager|main] Policy PAP starting Response Request ID Dispatcher
grafana | logger=migrator t=2024-04-26T08:21:35.526012902Z level=info msg="Executing migration" id="Add OAuth access token to user_auth"
kafka | [2024-04-26 08:21:37,110] INFO Kafka startTimeMs: 1714119697103 (org.apache.kafka.common.utils.AppInfoParser)
policy-db-migrator | --------------
policy-pap | [2024-04-26T08:22:02.918+00:00|INFO|ServiceManager|main] Policy PAP starting Heartbeat Request ID Dispatcher
grafana | logger=migrator t=2024-04-26T08:21:35.531320307Z level=info msg="Migration successfully executed" id="Add OAuth access token to user_auth" duration=5.308285ms
kafka | [2024-04-26 08:21:37,112] INFO [KafkaServer id=1] started (kafka.server.KafkaServer)
policy-db-migrator |
policy-pap | [2024-04-26T08:22:02.918+00:00|INFO|ServiceManager|main] Policy PAP starting Response Message Dispatcher
grafana | logger=migrator t=2024-04-26T08:21:35.536871443Z level=info msg="Executing migration" id="Add OAuth refresh token to user_auth"
kafka | [2024-04-26 08:21:37,116] INFO [ReplicaStateMachine controllerId=1] Initializing replica state (kafka.controller.ZkReplicaStateMachine)
policy-db-migrator |
policy-pap | [2024-04-26T08:22:02.920+00:00|INFO|TopicBase|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=db954cd2-8764-4a44-90af-3bb7f2069f83, consumerInstance=policy-pap, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: registering org.onap.policy.common.endpoints.listeners.MessageTypeDispatcher@4271b748
grafana | logger=migrator t=2024-04-26T08:21:35.540622276Z level=info msg="Migration successfully executed" id="Add OAuth refresh token to user_auth" duration=3.748983ms
kafka | [2024-04-26 08:21:37,117] INFO [ReplicaStateMachine controllerId=1] Triggering online replica state changes (kafka.controller.ZkReplicaStateMachine)
policy-db-migrator | > upgrade 0900-FK_ToscaServiceTemplate_relationshipTypesName.sql
policy-pap | [2024-04-26T08:22:02.933+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=db954cd2-8764-4a44-90af-3bb7f2069f83, consumerInstance=policy-pap, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting
grafana | logger=migrator t=2024-04-26T08:21:35.545674107Z level=info msg="Executing migration" id="Add OAuth token type to user_auth"
kafka | [2024-04-26 08:21:37,130] INFO [RequestSendThread controllerId=1] Controller 1 connected to kafka:9092 (id: 1 rack: null) for sending state change requests (kafka.controller.RequestSendThread)
policy-db-migrator | --------------
policy-pap | [2024-04-26T08:22:02.934+00:00|INFO|ConsumerConfig|main] ConsumerConfig values:
policy-pap | 	allow.auto.create.topics = true
kafka | [2024-04-26 08:21:37,134] INFO [ReplicaStateMachine controllerId=1] Triggering offline replica state changes (kafka.controller.ZkReplicaStateMachine)
policy-db-migrator | CREATE INDEX FK_ToscaServiceTemplate_relationshipTypesName ON toscaservicetemplate(relationshipTypesName, relationshipTypesVersion)
grafana | logger=migrator t=2024-04-26T08:21:35.549254892Z level=info msg="Migration successfully executed" id="Add OAuth token type to user_auth" duration=3.580485ms
policy-pap | 	auto.commit.interval.ms = 5000
kafka | [2024-04-26 08:21:37,135] DEBUG [ReplicaStateMachine controllerId=1] Started replica state machine with initial state -> HashMap() (kafka.controller.ZkReplicaStateMachine)
policy-db-migrator | --------------
grafana | logger=migrator t=2024-04-26T08:21:35.552213305Z level=info msg="Executing migration" id="Add OAuth expiry to user_auth"
policy-pap | 	auto.include.jmx.reporter = true
kafka | [2024-04-26 08:21:37,136] INFO [PartitionStateMachine controllerId=1] Initializing partition state (kafka.controller.ZkPartitionStateMachine)
policy-db-migrator |
grafana | logger=migrator t=2024-04-26T08:21:35.555984549Z level=info msg="Migration successfully executed" id="Add OAuth expiry to user_auth" duration=3.770454ms
policy-pap | 	auto.offset.reset = latest
kafka | [2024-04-26 08:21:37,137] INFO [PartitionStateMachine controllerId=1] Triggering online partition state changes (kafka.controller.ZkPartitionStateMachine)
policy-db-migrator |
grafana | logger=migrator t=2024-04-26T08:21:35.559055008Z level=info msg="Executing migration" id="Add index to user_id column in user_auth"
policy-pap | 	bootstrap.servers = [kafka:9092]
kafka | [2024-04-26 08:21:37,145] DEBUG [PartitionStateMachine controllerId=1] Started partition state machine with initial state -> HashMap() (kafka.controller.ZkPartitionStateMachine)
policy-db-migrator | > upgrade 0910-FK_ToscaTopologyTemplate_nodeTemplatesName.sql
policy-pap | 	check.crcs = true
policy-db-migrator | --------------
kafka | [2024-04-26 08:21:37,145] INFO [Controller id=1] Ready to serve as the new controller with epoch 1 (kafka.controller.KafkaController)
policy-pap | 	client.dns.lookup = use_all_dns_ips
kafka | [2024-04-26 08:21:37,155] INFO [Controller id=1] Partitions undergoing preferred replica election: (kafka.controller.KafkaController)
policy-db-migrator | CREATE INDEX FK_ToscaTopologyTemplate_nodeTemplatesName ON toscatopologytemplate(nodeTemplatesName, nodeTemplatessVersion)
policy-pap | 	client.id = consumer-db954cd2-8764-4a44-90af-3bb7f2069f83-3
policy-db-migrator | --------------
policy-pap | 	client.rack =
policy-db-migrator |
policy-pap | 	connections.max.idle.ms = 540000
policy-db-migrator |
policy-pap | 	default.api.timeout.ms = 60000
policy-db-migrator | > upgrade 0920-FK_ToscaTopologyTemplate_policyName.sql
kafka | [2024-04-26 08:21:37,155] INFO [Controller id=1] Partitions that completed preferred replica election: (kafka.controller.KafkaController)
grafana | logger=migrator t=2024-04-26T08:21:35.559957255Z level=info msg="Migration successfully executed" id="Add index to user_id column in user_auth" duration=897.737µs
policy-pap | 	enable.auto.commit = true
kafka | [2024-04-26 08:21:37,156] INFO [Controller id=1] Skipping preferred replica election for partitions due to topic deletion: (kafka.controller.KafkaController)
policy-pap | 	exclude.internal.topics = true
policy-db-migrator | --------------
kafka | [2024-04-26 08:21:37,157] INFO [Controller id=1] Resuming preferred replica election for partitions: (kafka.controller.KafkaController)
policy-db-migrator | CREATE INDEX FK_ToscaTopologyTemplate_policyName ON toscatopologytemplate(policyName, policyVersion)
policy-pap | 	fetch.max.bytes = 52428800
kafka | [2024-04-26 08:21:37,159] INFO [Controller id=1] Starting replica leader election (PREFERRED) for partitions triggered by ZkTriggered (kafka.controller.KafkaController)
policy-pap | 	fetch.max.wait.ms = 500
policy-db-migrator | --------------
kafka | [2024-04-26 08:21:37,176] INFO [Controller id=1] Starting the controller scheduler (kafka.controller.KafkaController)
policy-db-migrator |
policy-pap | 	fetch.min.bytes = 1
kafka | [2024-04-26 08:21:37,207] TRACE [Controller id=1 epoch=1] Received response UpdateMetadataResponseData(errorCode=0) for request UPDATE_METADATA with correlation id 0 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger)
policy-pap | 	group.id = db954cd2-8764-4a44-90af-3bb7f2069f83
kafka | [2024-04-26 08:21:37,255] INFO [zk-broker-1-to-controller-alter-partition-channel-manager]: Recorded new controller, from now on will use node kafka:9092 (id: 1 rack: null) (kafka.server.BrokerToControllerRequestThread)
policy-db-migrator |
policy-pap | 	group.instance.id = null
kafka | [2024-04-26 08:21:37,262] INFO [zk-broker-1-to-controller-forwarding-channel-manager]: Recorded new controller, from now on will use node kafka:9092 (id: 1 rack: null) (kafka.server.BrokerToControllerRequestThread)
policy-pap | 	heartbeat.interval.ms = 3000
policy-pap | 	interceptor.classes = []
policy-db-migrator | > upgrade 0940-PdpPolicyStatus_PdpGroup.sql
kafka | [2024-04-26 08:21:42,178] INFO [Controller id=1] Processing automatic preferred replica leader election (kafka.controller.KafkaController)
policy-pap | 	internal.leave.group.on.close = true
kafka | [2024-04-26 08:21:42,179] TRACE [Controller id=1] Checking need to trigger auto leader balancing (kafka.controller.KafkaController)
policy-pap | 	internal.throw.on.fetch.stable.offset.unsupported = false
policy-db-migrator | --------------
kafka | [2024-04-26 08:22:03,412] DEBUG [Controller id=1] There is no producerId block yet (Zk path version 0), creating the first block (kafka.controller.KafkaController)
policy-pap | 	isolation.level = read_uncommitted
kafka | [2024-04-26 08:22:03,418] INFO Creating topic __consumer_offsets with configuration {compression.type=producer, cleanup.policy=compact, segment.bytes=104857600} and initial partition assignment HashMap(0 -> ArrayBuffer(1), 1 -> ArrayBuffer(1), 2 -> ArrayBuffer(1), 3 -> ArrayBuffer(1), 4 -> ArrayBuffer(1), 5 -> ArrayBuffer(1), 6 -> ArrayBuffer(1), 7 -> ArrayBuffer(1), 8 -> ArrayBuffer(1), 9 -> ArrayBuffer(1), 10 -> ArrayBuffer(1), 11 -> ArrayBuffer(1), 12 -> ArrayBuffer(1), 13 -> ArrayBuffer(1), 14 -> ArrayBuffer(1), 15 -> ArrayBuffer(1), 16 -> ArrayBuffer(1), 17 -> ArrayBuffer(1), 18 -> ArrayBuffer(1), 19 -> ArrayBuffer(1), 20 -> ArrayBuffer(1), 21 -> ArrayBuffer(1), 22 -> ArrayBuffer(1), 23 -> ArrayBuffer(1), 24 -> ArrayBuffer(1), 25 -> ArrayBuffer(1), 26 -> ArrayBuffer(1), 27 -> ArrayBuffer(1), 28 -> ArrayBuffer(1), 29 -> ArrayBuffer(1), 30 -> ArrayBuffer(1), 31 -> ArrayBuffer(1), 32 -> ArrayBuffer(1), 33 -> ArrayBuffer(1), 34 -> ArrayBuffer(1), 35 -> ArrayBuffer(1), 36 -> ArrayBuffer(1), 37 -> ArrayBuffer(1), 38 -> ArrayBuffer(1), 39 -> ArrayBuffer(1), 40 -> ArrayBuffer(1), 41 -> ArrayBuffer(1), 42 -> ArrayBuffer(1), 43 -> ArrayBuffer(1), 44 -> ArrayBuffer(1), 45 -> ArrayBuffer(1), 46 -> ArrayBuffer(1), 47 -> ArrayBuffer(1), 48 -> ArrayBuffer(1), 49 -> ArrayBuffer(1)) (kafka.zk.AdminZkClient)
policy-pap | 	key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
policy-db-migrator | CREATE INDEX PdpPolicyStatus_PdpGroup ON pdppolicystatus(PDPGROUP)
kafka | [2024-04-26 08:22:03,418] INFO Creating topic policy-pdp-pap with configuration {} and initial partition assignment HashMap(0 -> ArrayBuffer(1)) (kafka.zk.AdminZkClient)
policy-pap | 	max.partition.fetch.bytes = 1048576
kafka | [2024-04-26 08:22:03,441] INFO [Controller id=1] Acquired new producerId block ProducerIdsBlock(assignedBrokerId=1, firstProducerId=0, size=1000) by writing to Zk with path version 1 (kafka.controller.KafkaController)
policy-db-migrator | --------------
grafana | logger=migrator t=2024-04-26T08:21:35.565318561Z level=info msg="Executing migration" id="Add OAuth ID token to user_auth"
policy-pap | 	max.poll.interval.ms = 300000
kafka | [2024-04-26 08:22:03,465] INFO [Controller id=1] New topics: [Set(policy-pdp-pap)], deleted topics: [HashSet()], new partition replica assignment [Set(TopicIdReplicaAssignment(policy-pdp-pap,Some(JNNo8CVWSdWgRv4ouhjw3w),Map(policy-pdp-pap-0 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=))))] (kafka.controller.KafkaController)
policy-db-migrator |
grafana | logger=migrator t=2024-04-26T08:21:35.570368221Z level=info msg="Migration successfully executed" id="Add OAuth ID token to user_auth" duration=5.04583ms
policy-pap | 	max.poll.records = 500
kafka | [2024-04-26 08:22:03,466] INFO [Controller id=1] New partition creation callback for policy-pdp-pap-0 (kafka.controller.KafkaController)
policy-db-migrator |
grafana | logger=migrator t=2024-04-26T08:21:35.574328746Z level=info msg="Executing migration" id="create server_lock table"
policy-pap | 	metadata.max.age.ms = 300000
kafka | [2024-04-26 08:22:03,469] INFO [Controller id=1 epoch=1] Changed partition policy-pdp-pap-0 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
policy-db-migrator | > upgrade 0950-TscaServiceTemplatetopologyTemplateParentLocalName.sql
grafana | logger=migrator t=2024-04-26T08:21:35.575161709Z level=info msg="Migration successfully executed" id="create server_lock table" duration=833.363µs
policy-pap | 	metric.reporters = []
kafka | [2024-04-26 08:22:03,469] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger)
policy-db-migrator | --------------
grafana | logger=migrator t=2024-04-26T08:21:35.578030987Z level=info msg="Executing migration" id="add index server_lock.operation_uid"
policy-pap | 	metrics.num.samples = 2
kafka | [2024-04-26 08:22:03,474] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition policy-pdp-pap-0 from NonExistentReplica to NewReplica (state.change.logger)
policy-db-migrator | CREATE INDEX TscaServiceTemplatetopologyTemplateParentLocalName ON toscaservicetemplate(topologyTemplateParentLocalName, topologyTemplateLocalName, topologyTemplateParentKeyVersion, topologyTemplateParentKeyName)
grafana | logger=migrator t=2024-04-26T08:21:35.578711622Z level=info msg="Migration successfully executed" id="add index server_lock.operation_uid" duration=680.645µs
policy-pap | 	metrics.recording.level = INFO
kafka | [2024-04-26 08:22:03,474] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger)
policy-db-migrator | --------------
grafana | logger=migrator t=2024-04-26T08:21:35.586927656Z level=info msg="Executing migration" id="create user auth token table"
policy-pap | 	metrics.sample.window.ms = 30000
kafka | [2024-04-26 08:22:03,509] INFO [Controller id=1 epoch=1] Changed partition policy-pdp-pap-0 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
policy-db-migrator |
grafana | logger=migrator t=2024-04-26T08:21:35.588099197Z level=info msg="Migration successfully executed" id="create user auth token table" duration=1.176491ms
policy-pap | 	partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor]
kafka | [2024-04-26 08:22:03,512] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='policy-pdp-pap', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition policy-pdp-pap-0 (state.change.logger)
policy-db-migrator |
grafana | logger=migrator t=2024-04-26T08:21:35.593433822Z level=info msg="Executing migration" id="add unique index user_auth_token.auth_token"
policy-pap | 	receive.buffer.bytes = 65536
kafka | [2024-04-26 08:22:03,513] INFO [Controller id=1 epoch=1] Sending LeaderAndIsr request to broker 1 with 1 become-leader and 0 become-follower partitions (state.change.logger)
policy-db-migrator | > upgrade 0960-FK_ToscaNodeTemplate_capabilitiesName.sql
grafana | logger=migrator t=2024-04-26T08:21:35.594532719Z level=info msg="Migration successfully executed" id="add unique index user_auth_token.auth_token" duration=1.099517ms
policy-pap | 	reconnect.backoff.max.ms = 1000
kafka | [2024-04-26 08:22:03,516] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet(1) for 1 partitions (state.change.logger)
policy-db-migrator | --------------
grafana | logger=migrator t=2024-04-26T08:21:35.599211361Z level=info msg="Executing migration" id="add unique index user_auth_token.prev_auth_token"
policy-pap | 	reconnect.backoff.ms = 50
kafka | [2024-04-26 08:22:03,517] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition policy-pdp-pap-0 from NewReplica to OnlineReplica (state.change.logger)
policy-db-migrator | ALTER TABLE toscanodetemplate ADD CONSTRAINT FK_ToscaNodeTemplate_capabilitiesName FOREIGN KEY (capabilitiesName, capabilitiesVersion) REFERENCES toscacapabilityassignments (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT
grafana | logger=migrator t=2024-04-26T08:21:35.600785802Z level=info msg="Migration successfully executed" id="add unique index user_auth_token.prev_auth_token" duration=1.577521ms
policy-pap | 	request.timeout.ms = 30000
kafka | [2024-04-26 08:22:03,517] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger)
policy-db-migrator | --------------
grafana | logger=migrator t=2024-04-26T08:21:35.611710476Z level=info msg="Executing migration" id="add index user_auth_token.user_id"
policy-pap | 	retry.backoff.ms = 100
kafka | [2024-04-26 08:22:03,520] INFO [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 for 1 partitions (state.change.logger)
policy-db-migrator |
policy-pap | 	sasl.client.callback.handler.class = null
policy-db-migrator |
kafka | [2024-04-26 08:22:03,521] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='policy-pdp-pap', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
policy-pap | 	sasl.jaas.config = null
policy-db-migrator | > upgrade 0970-FK_ToscaNodeTemplate_requirementsName.sql
kafka | [2024-04-26 08:22:03,526] INFO [Controller id=1] New topics: [Set(__consumer_offsets)], deleted topics: [HashSet()], new partition replica assignment [Set(TopicIdReplicaAssignment(__consumer_offsets,Some(RfiyP89qRi-5ZTNhftzAtg),HashMap(__consumer_offsets-22 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-30 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-25 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-35 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-37 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-38 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-13 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-8 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-21 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-4 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-27 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-7 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-9 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-46 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-41 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-33 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-23 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-49 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-47 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-16 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-28 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-31 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-36 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-42 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-3 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-18 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-15 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-24 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-17 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-48 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-19 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-11 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-2 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-43 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-6 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-14 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-20 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-0 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-44 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-39 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-12 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-45 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-1 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-5 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-26 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-29 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-34 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-10 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-32 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-40 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=))))] (kafka.controller.KafkaController)
policy-pap | 	sasl.kerberos.kinit.cmd = /usr/bin/kinit
policy-db-migrator | --------------
kafka | [2024-04-26 08:22:03,526] INFO [Controller id=1] New partition creation callback for __consumer_offsets-22,__consumer_offsets-30,__consumer_offsets-25,__consumer_offsets-35,__consumer_offsets-37,__consumer_offsets-38,__consumer_offsets-13,__consumer_offsets-8,__consumer_offsets-21,__consumer_offsets-4,__consumer_offsets-27,__consumer_offsets-7,__consumer_offsets-9,__consumer_offsets-46,__consumer_offsets-41,__consumer_offsets-33,__consumer_offsets-23,__consumer_offsets-49,__consumer_offsets-47,__consumer_offsets-16,__consumer_offsets-28,__consumer_offsets-31,__consumer_offsets-36,__consumer_offsets-42,__consumer_offsets-3,__consumer_offsets-18,__consumer_offsets-15,__consumer_offsets-24,__consumer_offsets-17,__consumer_offsets-48,__consumer_offsets-19,__consumer_offsets-11,__consumer_offsets-2,__consumer_offsets-43,__consumer_offsets-6,__consumer_offsets-14,__consumer_offsets-20,__consumer_offsets-0,__consumer_offsets-44,__consumer_offsets-39,__consumer_offsets-12,__consumer_offsets-45,__consumer_offsets-1,__consumer_offsets-5,__consumer_offsets-26,__consumer_offsets-29,__consumer_offsets-34,__consumer_offsets-10,__consumer_offsets-32,__consumer_offsets-40 (kafka.controller.KafkaController)
policy-db-migrator | ALTER TABLE toscanodetemplate ADD CONSTRAINT FK_ToscaNodeTemplate_requirementsName FOREIGN KEY (requirementsName, requirementsVersion) REFERENCES toscarequirements (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT
kafka | [2024-04-26 08:22:03,527] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-22 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
policy-db-migrator | --------------
kafka | [2024-04-26 08:22:03,527] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-30 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
policy-pap | 	sasl.kerberos.min.time.before.relogin = 60000
policy-db-migrator |
kafka | [2024-04-26 08:22:03,527] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-25 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
policy-pap | 	sasl.kerberos.service.name = null
kafka | [2024-04-26 08:22:03,527] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-35 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
policy-db-migrator |
kafka | [2024-04-26 08:22:03,527] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-37 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
grafana | logger=migrator t=2024-04-26T08:21:35.612746709Z level=info msg="Migration successfully executed" id="add index user_auth_token.user_id" duration=1.036233ms
policy-pap | 	sasl.kerberos.ticket.renew.jitter = 0.05
policy-db-migrator | > upgrade 0980-FK_ToscaNodeType_requirementsName.sql
kafka | [2024-04-26 08:22:03,528] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-38 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
policy-pap | 	sasl.kerberos.ticket.renew.window.factor = 0.8
grafana | logger=migrator t=2024-04-26T08:21:35.617188928Z level=info msg="Executing migration" id="Add revoked_at to the user auth token"
policy-db-migrator | --------------
kafka | [2024-04-26 08:22:03,528] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-13 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
policy-pap | 	sasl.login.callback.handler.class = null
grafana | logger=migrator t=2024-04-26T08:21:35.622728724Z level=info msg="Migration successfully executed" id="Add revoked_at to the user auth token" duration=5.536436ms
policy-db-migrator | ALTER TABLE toscanodetype ADD CONSTRAINT FK_ToscaNodeType_requirementsName FOREIGN KEY (requirementsName, requirementsVersion) REFERENCES toscarequirements (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT
kafka | [2024-04-26 08:22:03,528] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-8 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
policy-pap | 	sasl.login.class = null
policy-db-migrator | --------------
policy-pap | 	sasl.login.connect.timeout.ms = null
kafka | [2024-04-26 08:22:03,528] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-21 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
policy-pap | 	sasl.login.read.timeout.ms = null
kafka | [2024-04-26 08:22:03,528] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-4 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
policy-pap | 	sasl.login.refresh.buffer.seconds = 300
policy-db-migrator |
grafana | logger=migrator t=2024-04-26T08:21:35.626032695Z level=info msg="Executing migration" id="add index user_auth_token.revoked_at"
policy-pap | 	sasl.login.refresh.min.period.seconds = 60
kafka | [2024-04-26 08:22:03,528] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-27 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
policy-db-migrator |
grafana | logger=migrator t=2024-04-26T08:21:35.626975133Z level=info msg="Migration successfully executed" id="add index user_auth_token.revoked_at" duration=945.698µs
policy-pap | 	sasl.login.refresh.window.factor = 0.8
kafka | [2024-04-26 08:22:03,528] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-7 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
policy-db-migrator | > upgrade 0990-FK_ToscaServiceTemplate_capabilityTypesName.sql
grafana | logger=migrator t=2024-04-26T08:21:35.631514748Z level=info msg="Executing migration" id="create cache_data table"
policy-pap | 	sasl.login.refresh.window.jitter = 0.05
kafka | [2024-04-26 08:22:03,528] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-9 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
policy-db-migrator | --------------
grafana | logger=migrator t=2024-04-26T08:21:35.632405104Z level=info msg="Migration successfully executed" id="create cache_data table" duration=890.565µs
policy-pap | 	sasl.login.retry.backoff.max.ms = 10000
kafka | [2024-04-26 08:22:03,529] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-46 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT FK_ToscaServiceTemplate_capabilityTypesName FOREIGN KEY (capabilityTypesName, capabilityTypesVersion) REFERENCES toscacapabilitytypes (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT
policy-pap | 	sasl.login.retry.backoff.ms = 100
kafka | [2024-04-26 08:22:03,529] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-41 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
policy-db-migrator | --------------
policy-pap | 	sasl.mechanism = GSSAPI
policy-db-migrator |
kafka | [2024-04-26 08:22:03,529] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-33 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2024-04-26 08:22:03,529] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-23 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
policy-pap | 	sasl.oauthbearer.clock.skew.seconds = 30
policy-db-migrator |
kafka | [2024-04-26 08:22:03,529] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-49 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
policy-pap | 	sasl.oauthbearer.expected.audience = null
policy-db-migrator | > upgrade 1000-FK_ToscaServiceTemplate_dataTypesName.sql
grafana | logger=migrator t=2024-04-26T08:21:35.636063082Z level=info msg="Executing migration" id="add unique index cache_data.cache_key"
kafka | [2024-04-26 08:22:03,529] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-47 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
policy-pap | 	sasl.oauthbearer.expected.issuer = null
policy-db-migrator | --------------
grafana | logger=migrator t=2024-04-26T08:21:35.637120497Z level=info msg="Migration successfully executed" id="add unique index cache_data.cache_key" duration=1.056415ms
kafka | [2024-04-26 08:22:03,529] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-16 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
policy-pap | 	sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT FK_ToscaServiceTemplate_dataTypesName FOREIGN KEY (dataTypesName, dataTypesVersion) REFERENCES toscadatatypes (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT
grafana | logger=migrator t=2024-04-26T08:21:35.646127612Z level=info msg="Executing migration" id="create short_url table v1"
kafka | [2024-04-26 08:22:03,529] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-28 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
policy-pap | 	sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
policy-db-migrator | --------------
grafana | logger=migrator t=2024-04-26T08:21:35.647732965Z level=info msg="Migration successfully executed" id="create short_url table v1" duration=1.605373ms
kafka | [2024-04-26 08:22:03,530] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-31 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
policy-pap | 	sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
policy-db-migrator |
kafka | [2024-04-26 08:22:03,530] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-36 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
policy-pap | 	sasl.oauthbearer.jwks.endpoint.url = null
policy-db-migrator |
grafana | logger=migrator t=2024-04-26T08:21:35.653166375Z level=info msg="Executing migration" id="add index short_url.org_id-uid"
kafka | [2024-04-26 08:22:03,530] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-42 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
policy-pap | 	sasl.oauthbearer.scope.claim.name = scope
policy-db-migrator | > upgrade 1010-FK_ToscaServiceTemplate_nodeTypesName.sql
kafka | [2024-04-26 08:22:03,530] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-3 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
policy-db-migrator | --------------
grafana | logger=migrator t=2024-04-26T08:21:35.654179507Z level=info msg="Migration successfully executed" id="add index short_url.org_id-uid" duration=1.013592ms
policy-pap | 	sasl.oauthbearer.sub.claim.name = sub
kafka | [2024-04-26 08:22:03,530] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-18 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT FK_ToscaServiceTemplate_nodeTypesName FOREIGN KEY (nodeTypesName, nodeTypesVersion) REFERENCES toscanodetypes (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT
grafana | logger=migrator t=2024-04-26T08:21:35.659408638Z level=info msg="Executing migration" id="alter table short_url alter column created_by type to bigint"
kafka | [2024-04-26 08:22:03,530] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-15 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
grafana | logger=migrator t=2024-04-26T08:21:35.659476031Z level=info msg="Migration successfully executed" id="alter table short_url alter column created_by type to bigint" duration=67.993µs
policy-pap | 	sasl.oauthbearer.token.endpoint.url = null
policy-db-migrator | --------------
grafana | logger=migrator t=2024-04-26T08:21:35.696604157Z level=info msg="Executing migration" id="delete alert_definition table"
kafka | [2024-04-26 08:22:03,530] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-24 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
policy-pap | 	security.protocol = PLAINTEXT
policy-db-migrator |
grafana | logger=migrator t=2024-04-26T08:21:35.696938534Z level=info msg="Migration successfully executed" id="delete alert_definition table" duration=325.916µs
policy-pap | 	security.providers = null
policy-db-migrator |
kafka | [2024-04-26 08:22:03,531] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-17 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
grafana | logger=migrator t=2024-04-26T08:21:35.704048721Z level=info msg="Executing migration" id="recreate alert_definition table"
policy-pap | 	send.buffer.bytes = 131072
policy-db-migrator | > upgrade 1020-FK_ToscaServiceTemplate_policyTypesName.sql
kafka | [2024-04-26 08:22:03,531] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-48 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
grafana | logger=migrator t=2024-04-26T08:21:35.705165519Z level=info msg="Migration successfully executed" id="recreate alert_definition table" duration=1.115828ms
policy-db-migrator | --------------
kafka | [2024-04-26 08:22:03,531] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-19 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
grafana | logger=migrator t=2024-04-26T08:21:35.712648686Z level=info msg="Executing migration" id="add index in alert_definition
on org_id and title columns" policy-pap | session.timeout.ms = 45000 policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT FK_ToscaServiceTemplate_policyTypesName FOREIGN KEY (policyTypesName, policyTypesVersion) REFERENCES toscapolicytypes (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT kafka | [2024-04-26 08:22:03,531] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-11 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) grafana | logger=migrator t=2024-04-26T08:21:35.71370408Z level=info msg="Migration successfully executed" id="add index in alert_definition on org_id and title columns" duration=1.055554ms policy-pap | socket.connection.setup.timeout.max.ms = 30000 policy-db-migrator | -------------- kafka | [2024-04-26 08:22:03,531] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-2 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) grafana | logger=migrator t=2024-04-26T08:21:35.717906976Z level=info msg="Executing migration" id="add index in alert_definition on org_id and uid columns" policy-pap | socket.connection.setup.timeout.ms = 10000 policy-db-migrator | kafka | [2024-04-26 08:22:03,531] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-43 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) grafana | logger=migrator t=2024-04-26T08:21:35.71892517Z level=info msg="Migration successfully executed" id="add index in alert_definition on org_id and uid columns" duration=1.017743ms policy-pap | ssl.cipher.suites = null policy-db-migrator | kafka | [2024-04-26 08:22:03,531] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-6 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) grafana | logger=migrator t=2024-04-26T08:21:35.722493774Z level=info msg="Executing migration" id="alter alert_definition 
table data column to mediumtext in mysql" policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] kafka | [2024-04-26 08:22:03,531] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-14 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-db-migrator | > upgrade 1030-FK_ToscaServiceTemplate_relationshipTypesName.sql grafana | logger=migrator t=2024-04-26T08:21:35.722559927Z level=info msg="Migration successfully executed" id="alter alert_definition table data column to mediumtext in mysql" duration=66.533µs policy-pap | ssl.endpoint.identification.algorithm = https policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-26T08:21:35.72590635Z level=info msg="Executing migration" id="drop index in alert_definition on org_id and title columns" kafka | [2024-04-26 08:22:03,532] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-20 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-pap | ssl.engine.factory.class = null grafana | logger=migrator t=2024-04-26T08:21:35.726910541Z level=info msg="Migration successfully executed" id="drop index in alert_definition on org_id and title columns" duration=1.004371ms kafka | [2024-04-26 08:22:03,532] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-0 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT FK_ToscaServiceTemplate_relationshipTypesName FOREIGN KEY (relationshipTypesName, relationshipTypesVersion) REFERENCES toscarelationshiptypes (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT policy-pap | ssl.key.password = null kafka | [2024-04-26 08:22:03,532] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-44 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-db-migrator | 
-------------- grafana | logger=migrator t=2024-04-26T08:21:35.732150192Z level=info msg="Executing migration" id="drop index in alert_definition on org_id and uid columns" policy-pap | ssl.keymanager.algorithm = SunX509 grafana | logger=migrator t=2024-04-26T08:21:35.733560075Z level=info msg="Migration successfully executed" id="drop index in alert_definition on org_id and uid columns" duration=1.408593ms kafka | [2024-04-26 08:22:03,532] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-39 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-db-migrator | grafana | logger=migrator t=2024-04-26T08:21:35.738134331Z level=info msg="Executing migration" id="add unique index in alert_definition on org_id and title columns" kafka | [2024-04-26 08:22:03,532] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-12 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-pap | ssl.keystore.certificate.chain = null policy-db-migrator | grafana | logger=migrator t=2024-04-26T08:21:35.739812198Z level=info msg="Migration successfully executed" id="add unique index in alert_definition on org_id and title columns" duration=1.677707ms kafka | [2024-04-26 08:22:03,532] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-45 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-pap | ssl.keystore.key = null policy-db-migrator | > upgrade 1040-FK_ToscaTopologyTemplate_nodeTemplatesName.sql kafka | [2024-04-26 08:22:03,532] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-1 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-pap | ssl.keystore.location = null grafana | logger=migrator t=2024-04-26T08:21:35.743187691Z level=info msg="Executing migration" id="add unique index in alert_definition on org_id and uid columns" 
policy-db-migrator | -------------- kafka | [2024-04-26 08:22:03,533] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-5 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-pap | ssl.keystore.password = null grafana | logger=migrator t=2024-04-26T08:21:35.744206024Z level=info msg="Migration successfully executed" id="add unique index in alert_definition on org_id and uid columns" duration=1.015223ms kafka | [2024-04-26 08:22:03,533] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-26 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) grafana | logger=migrator t=2024-04-26T08:21:35.749206802Z level=info msg="Executing migration" id="Add column paused in alert_definition" policy-db-migrator | ALTER TABLE toscatopologytemplate ADD CONSTRAINT FK_ToscaTopologyTemplate_nodeTemplatesName FOREIGN KEY (nodeTemplatesName, nodeTemplatessVersion) REFERENCES toscanodetemplates (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT policy-pap | ssl.keystore.type = JKS kafka | [2024-04-26 08:22:03,533] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-29 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) grafana | logger=migrator t=2024-04-26T08:21:35.759007008Z level=info msg="Migration successfully executed" id="Add column paused in alert_definition" duration=9.796216ms policy-db-migrator | -------------- policy-pap | ssl.protocol = TLSv1.3 kafka | [2024-04-26 08:22:03,533] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-34 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) grafana | logger=migrator t=2024-04-26T08:21:35.762288928Z level=info msg="Executing migration" id="drop alert_definition table" policy-db-migrator | kafka | [2024-04-26 08:22:03,533] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-10 
state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-pap | ssl.provider = null kafka | [2024-04-26 08:22:03,533] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-32 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-db-migrator | policy-pap | ssl.secure.random.implementation = null kafka | [2024-04-26 08:22:03,533] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-40 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-db-migrator | > upgrade 1050-FK_ToscaTopologyTemplate_policyName.sql grafana | logger=migrator t=2024-04-26T08:21:35.76485856Z level=info msg="Migration successfully executed" id="drop alert_definition table" duration=2.568462ms policy-pap | ssl.trustmanager.algorithm = PKIX policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-26T08:21:35.774510238Z level=info msg="Executing migration" id="delete alert_definition_version table" kafka | [2024-04-26 08:22:03,534] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger) policy-pap | ssl.truststore.certificates = null policy-db-migrator | ALTER TABLE toscatopologytemplate ADD CONSTRAINT FK_ToscaTopologyTemplate_policyName FOREIGN KEY (policyName, policyVersion) REFERENCES toscapolicies (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT grafana | logger=migrator t=2024-04-26T08:21:35.774597702Z level=info msg="Migration successfully executed" id="delete alert_definition_version table" duration=88.034µs kafka | [2024-04-26 08:22:03,537] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-32 from NonExistentReplica to NewReplica (state.change.logger) policy-pap | ssl.truststore.location = null policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-26T08:21:35.781773763Z level=info 
msg="Executing migration" id="recreate alert_definition_version table" kafka | [2024-04-26 08:22:03,538] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-5 from NonExistentReplica to NewReplica (state.change.logger) policy-pap | ssl.truststore.password = null policy-db-migrator | grafana | logger=migrator t=2024-04-26T08:21:35.783234409Z level=info msg="Migration successfully executed" id="recreate alert_definition_version table" duration=1.460567ms kafka | [2024-04-26 08:22:03,538] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-44 from NonExistentReplica to NewReplica (state.change.logger) policy-pap | ssl.truststore.type = JKS policy-db-migrator | grafana | logger=migrator t=2024-04-26T08:21:35.78673755Z level=info msg="Executing migration" id="add index in alert_definition_version table on alert_definition_id and version columns" kafka | [2024-04-26 08:22:03,538] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-48 from NonExistentReplica to NewReplica (state.change.logger) policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-db-migrator | > upgrade 1060-TscaServiceTemplatetopologyTemplateParentLocalName.sql grafana | logger=migrator t=2024-04-26T08:21:35.787806244Z level=info msg="Migration successfully executed" id="add index in alert_definition_version table on alert_definition_id and version columns" duration=1.067974ms kafka | [2024-04-26 08:22:03,538] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-46 from NonExistentReplica to NewReplica (state.change.logger) policy-pap | policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-26T08:21:35.793845086Z level=info msg="Executing migration" id="add index in alert_definition_version table on alert_definition_uid and version columns" kafka | [2024-04-26 08:22:03,538] TRACE 
[Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-20 from NonExistentReplica to NewReplica (state.change.logger) policy-pap | [2024-04-26T08:22:02.940+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 grafana | logger=migrator t=2024-04-26T08:21:35.794848448Z level=info msg="Migration successfully executed" id="add index in alert_definition_version table on alert_definition_uid and version columns" duration=1.002892ms kafka | [2024-04-26 08:22:03,538] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-43 from NonExistentReplica to NewReplica (state.change.logger) policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT TscaServiceTemplatetopologyTemplateParentLocalName FOREIGN KEY (topologyTemplateParentLocalName, topologyTemplateLocalName, topologyTemplateParentKeyVersion, topologyTemplateParentKeyName) REFERENCES toscatopologytemplate (parentLocalName, localName, parentKeyVersion, parentKeyName) ON UPDATE RESTRICT ON DELETE RESTRICT policy-pap | [2024-04-26T08:22:02.940+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 grafana | logger=migrator t=2024-04-26T08:21:35.799724389Z level=info msg="Executing migration" id="alter alert_definition_version table data column to mediumtext in mysql" kafka | [2024-04-26 08:22:03,538] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-24 from NonExistentReplica to NewReplica (state.change.logger) policy-db-migrator | -------------- policy-pap | [2024-04-26T08:22:02.940+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1714119722940 grafana | logger=migrator t=2024-04-26T08:21:35.799792133Z level=info msg="Migration successfully executed" id="alter alert_definition_version table data column to mediumtext in mysql" duration=67.854µs kafka | [2024-04-26 08:22:03,538] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-6 from NonExistentReplica to 
NewReplica (state.change.logger) policy-db-migrator | policy-pap | [2024-04-26T08:22:02.940+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-db954cd2-8764-4a44-90af-3bb7f2069f83-3, groupId=db954cd2-8764-4a44-90af-3bb7f2069f83] Subscribed to topic(s): policy-pdp-pap grafana | logger=migrator t=2024-04-26T08:21:35.804073044Z level=info msg="Executing migration" id="drop alert_definition_version table" kafka | [2024-04-26 08:22:03,538] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-18 from NonExistentReplica to NewReplica (state.change.logger) policy-db-migrator | policy-pap | [2024-04-26T08:22:02.941+00:00|INFO|ServiceManager|main] Policy PAP starting Heartbeat Message Dispatcher grafana | logger=migrator t=2024-04-26T08:21:35.805588022Z level=info msg="Migration successfully executed" id="drop alert_definition_version table" duration=1.514188ms kafka | [2024-04-26 08:22:03,539] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-21 from NonExistentReplica to NewReplica (state.change.logger) policy-db-migrator | > upgrade 0100-pdp.sql policy-pap | [2024-04-26T08:22:02.941+00:00|INFO|TopicBase|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=policy-pap, consumerInstance=eb01c65d-170a-46d6-9ba7-54033f13f8dc, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-heartbeat, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: registering org.onap.policy.common.endpoints.listeners.MessageTypeDispatcher@4bc9451b grafana | logger=migrator t=2024-04-26T08:21:35.811337719Z level=info msg="Executing migration" id="create 
alert_instance table" kafka | [2024-04-26 08:22:03,539] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-1 from NonExistentReplica to NewReplica (state.change.logger) policy-db-migrator | -------------- policy-pap | [2024-04-26T08:22:02.941+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=policy-pap, consumerInstance=eb01c65d-170a-46d6-9ba7-54033f13f8dc, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-heartbeat, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting kafka | [2024-04-26 08:22:03,539] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-14 from NonExistentReplica to NewReplica (state.change.logger) grafana | logger=migrator t=2024-04-26T08:21:35.81233505Z level=info msg="Migration successfully executed" id="create alert_instance table" duration=999.131µs policy-db-migrator | ALTER TABLE pdp ADD COLUMN LASTUPDATE datetime NOT NULL DEFAULT CURRENT_TIMESTAMP AFTER HEALTHY policy-pap | [2024-04-26T08:22:02.941+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: kafka | [2024-04-26 08:22:03,539] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-34 from NonExistentReplica to NewReplica (state.change.logger) grafana | logger=migrator t=2024-04-26T08:21:35.819920862Z level=info msg="Executing migration" id="add index in alert_instance table on def_org_id, def_uid and current_state columns" policy-db-migrator | -------------- policy-pap | allow.auto.create.topics = true kafka | [2024-04-26 08:22:03,539] TRACE [Controller id=1 
epoch=1] Changed state of replica 1 for partition __consumer_offsets-16 from NonExistentReplica to NewReplica (state.change.logger) grafana | logger=migrator t=2024-04-26T08:21:35.821589938Z level=info msg="Migration successfully executed" id="add index in alert_instance table on def_org_id, def_uid and current_state columns" duration=1.668666ms policy-db-migrator | policy-pap | auto.commit.interval.ms = 5000 kafka | [2024-04-26 08:22:03,539] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-29 from NonExistentReplica to NewReplica (state.change.logger) grafana | logger=migrator t=2024-04-26T08:21:35.825819386Z level=info msg="Executing migration" id="add index in alert_instance table on def_org_id, current_state columns" policy-db-migrator | policy-pap | auto.include.jmx.reporter = true kafka | [2024-04-26 08:22:03,539] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-11 from NonExistentReplica to NewReplica (state.change.logger) grafana | logger=migrator t=2024-04-26T08:21:35.827413019Z level=info msg="Migration successfully executed" id="add index in alert_instance table on def_org_id, current_state columns" duration=1.591213ms policy-db-migrator | > upgrade 0110-idx_tsidx1.sql policy-pap | auto.offset.reset = latest kafka | [2024-04-26 08:22:03,539] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-0 from NonExistentReplica to NewReplica (state.change.logger) grafana | logger=migrator t=2024-04-26T08:21:35.831677899Z level=info msg="Executing migration" id="add column current_state_end to alert_instance" policy-db-migrator | -------------- policy-pap | bootstrap.servers = [kafka:9092] kafka | [2024-04-26 08:22:03,540] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-22 from NonExistentReplica to NewReplica (state.change.logger) grafana | logger=migrator t=2024-04-26T08:21:35.837489919Z 
level=info msg="Migration successfully executed" id="add column current_state_end to alert_instance" duration=5.81116ms policy-db-migrator | CREATE INDEX IDX_TSIDX1 ON pdpstatistics(timeStamp, name, version) policy-pap | check.crcs = true kafka | [2024-04-26 08:22:03,540] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-47 from NonExistentReplica to NewReplica (state.change.logger) grafana | logger=migrator t=2024-04-26T08:21:35.842296447Z level=info msg="Executing migration" id="remove index def_org_id, def_uid, current_state on alert_instance" policy-db-migrator | -------------- policy-pap | client.dns.lookup = use_all_dns_ips kafka | [2024-04-26 08:22:03,540] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-36 from NonExistentReplica to NewReplica (state.change.logger) grafana | logger=migrator t=2024-04-26T08:21:35.843350931Z level=info msg="Migration successfully executed" id="remove index def_org_id, def_uid, current_state on alert_instance" duration=1.056234ms policy-pap | client.id = consumer-policy-pap-4 kafka | [2024-04-26 08:22:03,540] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-28 from NonExistentReplica to NewReplica (state.change.logger) grafana | logger=migrator t=2024-04-26T08:21:35.848717518Z level=info msg="Executing migration" id="remove index def_org_id, current_state on alert_instance" policy-db-migrator | policy-pap | client.rack = kafka | [2024-04-26 08:22:03,540] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-42 from NonExistentReplica to NewReplica (state.change.logger) grafana | logger=migrator t=2024-04-26T08:21:35.849467437Z level=info msg="Migration successfully executed" id="remove index def_org_id, current_state on alert_instance" duration=750.709µs policy-db-migrator | policy-pap | connections.max.idle.ms = 540000 kafka | [2024-04-26 08:22:03,540] TRACE 
[Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-9 from NonExistentReplica to NewReplica (state.change.logger) grafana | logger=migrator t=2024-04-26T08:21:35.855600644Z level=info msg="Executing migration" id="rename def_org_id to rule_org_id in alert_instance" policy-db-migrator | > upgrade 0120-pk_pdpstatistics.sql policy-pap | default.api.timeout.ms = 60000 kafka | [2024-04-26 08:22:03,540] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-37 from NonExistentReplica to NewReplica (state.change.logger) grafana | logger=migrator t=2024-04-26T08:21:35.881153472Z level=info msg="Migration successfully executed" id="rename def_org_id to rule_org_id in alert_instance" duration=25.547489ms policy-db-migrator | -------------- policy-pap | enable.auto.commit = true kafka | [2024-04-26 08:22:03,540] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-13 from NonExistentReplica to NewReplica (state.change.logger) grafana | logger=migrator t=2024-04-26T08:21:35.887866559Z level=info msg="Executing migration" id="rename def_uid to rule_uid in alert_instance" kafka | [2024-04-26 08:22:03,540] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-30 from NonExistentReplica to NewReplica (state.change.logger) grafana | logger=migrator t=2024-04-26T08:21:35.909353758Z level=info msg="Migration successfully executed" id="rename def_uid to rule_uid in alert_instance" duration=21.4897ms policy-db-migrator | ALTER TABLE pdpstatistics DROP PRIMARY KEY policy-pap | exclude.internal.topics = true kafka | [2024-04-26 08:22:03,541] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-35 from NonExistentReplica to NewReplica (state.change.logger) grafana | logger=migrator t=2024-04-26T08:21:35.913842989Z level=info msg="Executing migration" id="add index rule_org_id, rule_uid, current_state on 
alert_instance" policy-db-migrator | -------------- policy-pap | fetch.max.bytes = 52428800 kafka | [2024-04-26 08:22:03,541] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-39 from NonExistentReplica to NewReplica (state.change.logger) grafana | logger=migrator t=2024-04-26T08:21:35.914598188Z level=info msg="Migration successfully executed" id="add index rule_org_id, rule_uid, current_state on alert_instance" duration=755.629µs policy-db-migrator | policy-pap | fetch.max.wait.ms = 500 kafka | [2024-04-26 08:22:03,541] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-12 from NonExistentReplica to NewReplica (state.change.logger) grafana | logger=migrator t=2024-04-26T08:21:35.922649424Z level=info msg="Executing migration" id="add index rule_org_id, current_state on alert_instance" policy-db-migrator | policy-pap | fetch.min.bytes = 1 kafka | [2024-04-26 08:22:03,541] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-27 from NonExistentReplica to NewReplica (state.change.logger) grafana | logger=migrator t=2024-04-26T08:21:35.923755111Z level=info msg="Migration successfully executed" id="add index rule_org_id, current_state on alert_instance" duration=1.106217ms policy-db-migrator | > upgrade 0130-pdpstatistics.sql policy-pap | group.id = policy-pap kafka | [2024-04-26 08:22:03,541] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-45 from NonExistentReplica to NewReplica (state.change.logger) grafana | logger=migrator t=2024-04-26T08:21:35.930697889Z level=info msg="Executing migration" id="add current_reason column related to current_state" policy-db-migrator | -------------- policy-pap | group.instance.id = null kafka | [2024-04-26 08:22:03,541] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-19 from NonExistentReplica to NewReplica 
(state.change.logger) grafana | logger=migrator t=2024-04-26T08:21:35.935281186Z level=info msg="Migration successfully executed" id="add current_reason column related to current_state" duration=4.585707ms policy-db-migrator | ALTER TABLE pdpstatistics ADD COLUMN POLICYUNDEPLOYCOUNT BIGINT DEFAULT NULL AFTER POLICYEXECUTEDSUCCESSCOUNT, ADD COLUMN POLICYUNDEPLOYFAILCOUNT BIGINT DEFAULT NULL, ADD COLUMN POLICYUNDEPLOYSUCCESSCOUNT BIGINT DEFAULT NULL, ADD COLUMN ID BIGINT NOT NULL policy-pap | heartbeat.interval.ms = 3000 kafka | [2024-04-26 08:22:03,541] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-49 from NonExistentReplica to NewReplica (state.change.logger) grafana | logger=migrator t=2024-04-26T08:21:35.940044352Z level=info msg="Executing migration" id="add result_fingerprint column to alert_instance" policy-db-migrator | -------------- policy-pap | interceptor.classes = [] kafka | [2024-04-26 08:22:03,541] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-40 from NonExistentReplica to NewReplica (state.change.logger) policy-db-migrator | policy-pap | internal.leave.group.on.close = true grafana | logger=migrator t=2024-04-26T08:21:35.945402238Z level=info msg="Migration successfully executed" id="add result_fingerprint column to alert_instance" duration=5.358097ms kafka | [2024-04-26 08:22:03,541] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-41 from NonExistentReplica to NewReplica (state.change.logger) policy-db-migrator | policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false grafana | logger=migrator t=2024-04-26T08:21:35.951512374Z level=info msg="Executing migration" id="create alert_rule table" kafka | [2024-04-26 08:22:03,542] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-38 from NonExistentReplica to NewReplica (state.change.logger) policy-db-migrator | > 
upgrade 0140-pk_pdpstatistics.sql policy-pap | isolation.level = read_uncommitted grafana | logger=migrator t=2024-04-26T08:21:35.952489544Z level=info msg="Migration successfully executed" id="create alert_rule table" duration=976.96µs kafka | [2024-04-26 08:22:03,542] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-8 from NonExistentReplica to NewReplica (state.change.logger) policy-db-migrator | -------------- policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer grafana | logger=migrator t=2024-04-26T08:21:35.962414696Z level=info msg="Executing migration" id="add index in alert_rule on org_id and title columns" kafka | [2024-04-26 08:22:03,542] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-7 from NonExistentReplica to NewReplica (state.change.logger) policy-db-migrator | UPDATE pdpstatistics as p JOIN (SELECT name, version, timeStamp, ROW_NUMBER() OVER (ORDER BY timeStamp ASC) AS row_num FROM pdpstatistics GROUP BY name, version, timeStamp) AS t ON (p.name=t.name AND p.version=t.version AND p.timeStamp = t.timeStamp) SET p.id=t.row_num policy-pap | max.partition.fetch.bytes = 1048576 grafana | logger=migrator t=2024-04-26T08:21:35.963619978Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id and title columns" duration=1.205762ms kafka | [2024-04-26 08:22:03,542] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-33 from NonExistentReplica to NewReplica (state.change.logger) policy-db-migrator | -------------- policy-pap | max.poll.interval.ms = 300000 grafana | logger=migrator t=2024-04-26T08:21:35.967731501Z level=info msg="Executing migration" id="add index in alert_rule on org_id and uid columns" kafka | [2024-04-26 08:22:03,542] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-25 from NonExistentReplica to 
NewReplica (state.change.logger) policy-db-migrator | policy-pap | max.poll.records = 500 grafana | logger=migrator t=2024-04-26T08:21:35.968549883Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id and uid columns" duration=821.842µs policy-db-migrator | -------------- kafka | [2024-04-26 08:22:03,542] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-31 from NonExistentReplica to NewReplica (state.change.logger) policy-pap | metadata.max.age.ms = 300000 grafana | logger=migrator t=2024-04-26T08:21:35.972616333Z level=info msg="Executing migration" id="add index in alert_rule on org_id, namespace_uid, group_uid columns" policy-db-migrator | ALTER TABLE pdpstatistics ADD CONSTRAINT PK_PDPSTATISTICS PRIMARY KEY (ID, name, version) kafka | [2024-04-26 08:22:03,543] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-23 from NonExistentReplica to NewReplica (state.change.logger) policy-pap | metric.reporters = [] grafana | logger=migrator t=2024-04-26T08:21:35.973453136Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, namespace_uid, group_uid columns" duration=834.653µs policy-db-migrator | -------------- kafka | [2024-04-26 08:22:03,543] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-10 from NonExistentReplica to NewReplica (state.change.logger) policy-pap | metrics.num.samples = 2 grafana | logger=migrator t=2024-04-26T08:21:35.977535887Z level=info msg="Executing migration" id="alter alert_rule table data column to mediumtext in mysql" kafka | [2024-04-26 08:22:03,544] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-2 from NonExistentReplica to NewReplica (state.change.logger) policy-pap | metrics.recording.level = INFO grafana | logger=migrator t=2024-04-26T08:21:35.977621291Z level=info msg="Migration successfully 
executed" id="alter alert_rule table data column to mediumtext in mysql" duration=85.734µs policy-db-migrator | kafka | [2024-04-26 08:22:03,544] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-17 from NonExistentReplica to NewReplica (state.change.logger) policy-pap | metrics.sample.window.ms = 30000 grafana | logger=migrator t=2024-04-26T08:21:35.981438369Z level=info msg="Executing migration" id="add column for to alert_rule" policy-db-migrator | kafka | [2024-04-26 08:22:03,544] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-4 from NonExistentReplica to NewReplica (state.change.logger) policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] grafana | logger=migrator t=2024-04-26T08:21:35.986074747Z level=info msg="Migration successfully executed" id="add column for to alert_rule" duration=4.635898ms policy-db-migrator | > upgrade 0150-pdpstatistics.sql kafka | [2024-04-26 08:22:03,544] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-15 from NonExistentReplica to NewReplica (state.change.logger) policy-pap | receive.buffer.bytes = 65536 grafana | logger=migrator t=2024-04-26T08:21:35.989730336Z level=info msg="Executing migration" id="add column annotations to alert_rule" policy-db-migrator | -------------- kafka | [2024-04-26 08:22:03,544] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-26 from NonExistentReplica to NewReplica (state.change.logger) policy-pap | reconnect.backoff.max.ms = 1000 grafana | logger=migrator t=2024-04-26T08:21:35.994124833Z level=info msg="Migration successfully executed" id="add column annotations to alert_rule" duration=4.396577ms policy-db-migrator | ALTER TABLE pdpstatistics MODIFY COLUMN timeStamp datetime(6) NULL kafka | [2024-04-26 
08:22:03,544] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-3 from NonExistentReplica to NewReplica (state.change.logger) policy-pap | reconnect.backoff.ms = 50 grafana | logger=migrator t=2024-04-26T08:21:35.997568181Z level=info msg="Executing migration" id="add column labels to alert_rule" policy-db-migrator | -------------- kafka | [2024-04-26 08:22:03,544] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger) policy-pap | request.timeout.ms = 30000 grafana | logger=migrator t=2024-04-26T08:21:36.003459575Z level=info msg="Migration successfully executed" id="add column labels to alert_rule" duration=5.890874ms policy-db-migrator | kafka | [2024-04-26 08:22:03,563] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition policy-pdp-pap-0 (state.change.logger) policy-pap | retry.backoff.ms = 100 grafana | logger=migrator t=2024-04-26T08:21:36.008464292Z level=info msg="Executing migration" id="remove unique index from alert_rule on org_id, title columns" policy-db-migrator | kafka | [2024-04-26 08:22:03,565] INFO [ReplicaFetcherManager on broker 1] Removed fetcher for partitions Set(policy-pdp-pap-0) (kafka.server.ReplicaFetcherManager) policy-pap | sasl.client.callback.handler.class = null grafana | logger=migrator t=2024-04-26T08:21:36.009629188Z level=info msg="Migration successfully executed" id="remove unique index from alert_rule on org_id, title columns" duration=1.165076ms policy-db-migrator | > upgrade 0160-jpapdpstatistics_enginestats.sql kafka | [2024-04-26 08:22:03,565] INFO [Broker id=1] Stopped fetchers as part of LeaderAndIsr request correlationId 1 from controller 1 epoch 1 as part of the become-leader transition for 1 partitions (state.change.logger) policy-pap | sasl.jaas.config = null grafana | logger=migrator t=2024-04-26T08:21:36.016720568Z 
level=info msg="Executing migration" id="add index in alert_rule on org_id, namespase_uid and title columns" policy-db-migrator | -------------- kafka | [2024-04-26 08:22:03,653] INFO [LogLoader partition=policy-pdp-pap-0, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit grafana | logger=migrator t=2024-04-26T08:21:36.01778652Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, namespase_uid and title columns" duration=1.069083ms policy-db-migrator | ALTER TABLE jpapdpstatistics_enginestats ADD COLUMN ID BIGINT DEFAULT NULL AFTER UPTIME kafka | [2024-04-26 08:22:03,676] INFO Created log for partition policy-pdp-pap-0 in /var/lib/kafka/data/policy-pdp-pap-0 with properties {} (kafka.log.LogManager) policy-pap | sasl.kerberos.min.time.before.relogin = 60000 grafana | logger=migrator t=2024-04-26T08:21:36.023072588Z level=info msg="Executing migration" id="add dashboard_uid column to alert_rule" policy-db-migrator | -------------- kafka | [2024-04-26 08:22:03,682] INFO [Partition policy-pdp-pap-0 broker=1] No checkpointed highwatermark is found for partition policy-pdp-pap-0 (kafka.cluster.Partition) policy-pap | sasl.kerberos.service.name = null grafana | logger=migrator t=2024-04-26T08:21:36.027471943Z level=info msg="Migration successfully executed" id="add dashboard_uid column to alert_rule" duration=4.399535ms policy-db-migrator | kafka | [2024-04-26 08:22:03,684] INFO [Partition policy-pdp-pap-0 broker=1] Log loaded for partition policy-pdp-pap-0 with initial high watermark 0 (kafka.cluster.Partition) policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 grafana | logger=migrator t=2024-04-26T08:21:36.041558711Z level=info msg="Executing migration" id="add panel_id column to alert_rule" policy-db-migrator | kafka | [2024-04-26 08:22:03,687] INFO [Broker id=1] Leader policy-pdp-pap-0 with topic id 
Some(JNNo8CVWSdWgRv4ouhjw3w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 grafana | logger=migrator t=2024-04-26T08:21:36.047480341Z level=info msg="Migration successfully executed" id="add panel_id column to alert_rule" duration=5.92153ms policy-db-migrator | > upgrade 0170-jpapdpstatistics_enginestats.sql kafka | [2024-04-26 08:22:03,696] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition policy-pdp-pap-0 (state.change.logger) policy-pap | sasl.login.callback.handler.class = null policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-26T08:21:36.050331841Z level=info msg="Executing migration" id="add index in alert_rule on org_id, dashboard_uid and panel_id columns" kafka | [2024-04-26 08:22:03,700] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-22 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-pap | sasl.login.class = null policy-db-migrator | UPDATE jpapdpstatistics_enginestats a grafana | logger=migrator t=2024-04-26T08:21:36.051180242Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, dashboard_uid and panel_id columns" duration=848.041µs kafka | [2024-04-26 08:22:03,701] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-30 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-pap | 
sasl.login.connect.timeout.ms = null policy-db-migrator | JOIN pdpstatistics b grafana | logger=migrator t=2024-04-26T08:21:36.054250033Z level=info msg="Executing migration" id="add rule_group_idx column to alert_rule" kafka | [2024-04-26 08:22:03,701] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-25 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-pap | sasl.login.read.timeout.ms = null policy-db-migrator | ON a.name = b.name AND a.version = b.version AND a.timeStamp = b.timeStamp grafana | logger=migrator t=2024-04-26T08:21:36.059434626Z level=info msg="Migration successfully executed" id="add rule_group_idx column to alert_rule" duration=5.184733ms kafka | [2024-04-26 08:22:03,701] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-35 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-pap | sasl.login.refresh.buffer.seconds = 300 policy-db-migrator | SET a.id = b.id grafana | logger=migrator t=2024-04-26T08:21:36.064968106Z level=info msg="Executing migration" id="add is_paused column to alert_rule table" kafka | [2024-04-26 08:22:03,701] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-37 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-pap | sasl.login.refresh.min.period.seconds = 60 policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-26T08:21:36.069583092Z level=info msg="Migration successfully executed" id="add is_paused column to alert_rule table" 
duration=4.614616ms kafka | [2024-04-26 08:22:03,701] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-38 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-pap | sasl.login.refresh.window.factor = 0.8 policy-db-migrator | grafana | logger=migrator t=2024-04-26T08:21:36.072235121Z level=info msg="Executing migration" id="fix is_paused column for alert_rule table" kafka | [2024-04-26 08:22:03,701] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-13 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-pap | sasl.login.refresh.window.jitter = 0.05 policy-db-migrator | grafana | logger=migrator t=2024-04-26T08:21:36.072384259Z level=info msg="Migration successfully executed" id="fix is_paused column for alert_rule table" duration=148.728µs kafka | [2024-04-26 08:22:03,701] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-8 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-pap | sasl.login.retry.backoff.max.ms = 10000 policy-db-migrator | > upgrade 0180-jpapdpstatistics_enginestats.sql grafana | logger=migrator t=2024-04-26T08:21:36.079469355Z level=info msg="Executing migration" id="create alert_rule_version table" kafka | [2024-04-26 08:22:03,701] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-21 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), 
leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-pap | sasl.login.retry.backoff.ms = 100 policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-26T08:21:36.080832052Z level=info msg="Migration successfully executed" id="create alert_rule_version table" duration=1.366097ms kafka | [2024-04-26 08:22:03,701] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-4 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-pap | sasl.mechanism = GSSAPI policy-db-migrator | ALTER TABLE jpapdpstatistics_enginestats DROP COLUMN timeStamp grafana | logger=migrator t=2024-04-26T08:21:36.084680019Z level=info msg="Executing migration" id="add index in alert_rule_version table on rule_org_id, rule_uid and version columns" kafka | [2024-04-26 08:22:03,701] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-27 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-26T08:21:36.085727101Z level=info msg="Migration successfully executed" id="add index in alert_rule_version table on rule_org_id, rule_uid and version columns" duration=1.047732ms kafka | [2024-04-26 08:22:03,702] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-7 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-pap | sasl.oauthbearer.expected.audience = null policy-db-migrator | grafana | 
logger=migrator t=2024-04-26T08:21:36.08878946Z level=info msg="Executing migration" id="add index in alert_rule_version table on rule_org_id, rule_namespace_uid and rule_group columns" kafka | [2024-04-26 08:22:03,702] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-9 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-pap | sasl.oauthbearer.expected.issuer = null policy-db-migrator | grafana | logger=migrator t=2024-04-26T08:21:36.089879954Z level=info msg="Migration successfully executed" id="add index in alert_rule_version table on rule_org_id, rule_namespace_uid and rule_group columns" duration=1.090404ms kafka | [2024-04-26 08:22:03,702] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-46 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 policy-db-migrator | > upgrade 0190-jpapolicyaudit.sql grafana | logger=migrator t=2024-04-26T08:21:36.094626005Z level=info msg="Executing migration" id="alter alert_rule_version table data column to mediumtext in mysql" kafka | [2024-04-26 08:22:03,702] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-41 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-26T08:21:36.094696648Z level=info msg="Migration successfully executed" id="alter 
alert_rule_version table data column to mediumtext in mysql" duration=71.003µs kafka | [2024-04-26 08:22:03,702] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-33 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapolicyaudit (ACTION INT DEFAULT NULL, PDPGROUP VARCHAR(255) NULL, PDPTYPE VARCHAR(255) NULL, TIMESTAMP datetime DEFAULT NULL, USER VARCHAR(255) NULL, ID BIGINT NOT NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_JPAPOLICYAUDIT (ID, name, version)) grafana | logger=migrator t=2024-04-26T08:21:36.09799291Z level=info msg="Executing migration" id="add column for to alert_rule_version" kafka | [2024-04-26 08:22:03,702] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-23 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-pap | sasl.oauthbearer.jwks.endpoint.url = null policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-26T08:21:36.104159711Z level=info msg="Migration successfully executed" id="add column for to alert_rule_version" duration=6.166681ms kafka | [2024-04-26 08:22:03,702] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-49 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-pap | sasl.oauthbearer.scope.claim.name = scope policy-db-migrator | grafana | logger=migrator t=2024-04-26T08:21:36.113938349Z 
level=info msg="Executing migration" id="add column annotations to alert_rule_version" kafka | [2024-04-26 08:22:03,702] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-47 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-pap | sasl.oauthbearer.sub.claim.name = sub policy-db-migrator | grafana | logger=migrator t=2024-04-26T08:21:36.118663271Z level=info msg="Migration successfully executed" id="add column annotations to alert_rule_version" duration=4.727931ms kafka | [2024-04-26 08:22:03,703] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-16 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-pap | sasl.oauthbearer.token.endpoint.url = null policy-db-migrator | > upgrade 0200-JpaPolicyAuditIndex_timestamp.sql grafana | logger=migrator t=2024-04-26T08:21:36.121787293Z level=info msg="Executing migration" id="add column labels to alert_rule_version" kafka | [2024-04-26 08:22:03,703] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-28 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-pap | security.protocol = PLAINTEXT policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-26T08:21:36.126460131Z level=info msg="Migration successfully executed" id="add column labels to alert_rule_version" duration=4.672108ms kafka | [2024-04-26 08:22:03,703] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-31 from NewPartition to OnlinePartition with state 
LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-db-migrator | CREATE INDEX JpaPolicyAuditIndex_timestamp ON jpapolicyaudit(TIMESTAMP) grafana | logger=migrator t=2024-04-26T08:21:36.130491659Z level=info msg="Executing migration" id="add rule_group_idx column to alert_rule_version" kafka | [2024-04-26 08:22:03,703] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-36 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-pap | security.providers = null policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-26T08:21:36.134880503Z level=info msg="Migration successfully executed" id="add rule_group_idx column to alert_rule_version" duration=4.391324ms kafka | [2024-04-26 08:22:03,704] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-42 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-pap | send.buffer.bytes = 131072 policy-db-migrator | grafana | logger=migrator t=2024-04-26T08:21:36.145948494Z level=info msg="Executing migration" id="add is_paused column to alert_rule_versions table" kafka | [2024-04-26 08:22:03,704] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-3 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-pap | session.timeout.ms = 45000 policy-db-migrator | grafana | logger=migrator t=2024-04-26T08:21:36.150800251Z level=info 
msg="Migration successfully executed" id="add is_paused column to alert_rule_versions table" duration=4.854297ms kafka | [2024-04-26 08:22:03,704] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-18 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-pap | socket.connection.setup.timeout.max.ms = 30000 policy-db-migrator | > upgrade 0210-sequence.sql grafana | logger=migrator t=2024-04-26T08:21:36.154014028Z level=info msg="Executing migration" id="fix is_paused column for alert_rule_version table" kafka | [2024-04-26 08:22:03,704] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-15 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-pap | socket.connection.setup.timeout.ms = 10000 policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-26T08:21:36.154108972Z level=info msg="Migration successfully executed" id="fix is_paused column for alert_rule_version table" duration=94.484µs kafka | [2024-04-26 08:22:03,705] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-24 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-pap | ssl.cipher.suites = null policy-db-migrator | CREATE TABLE IF NOT EXISTS sequence (SEQ_NAME VARCHAR(50) NOT NULL, SEQ_COUNT DECIMAL(38) DEFAULT NULL, PRIMARY KEY PK_SEQUENCE (SEQ_NAME)) grafana | logger=migrator t=2024-04-26T08:21:36.160336777Z level=info msg="Executing migration" id=create_alert_configuration_table kafka | [2024-04-26 08:22:03,705] INFO 
[Controller id=1 epoch=1] Changed partition __consumer_offsets-17 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-26T08:21:36.161478323Z level=info msg="Migration successfully executed" id=create_alert_configuration_table duration=1.142807ms kafka | [2024-04-26 08:22:03,705] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-48 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-pap | ssl.endpoint.identification.algorithm = https policy-db-migrator | grafana | logger=migrator t=2024-04-26T08:21:36.16654579Z level=info msg="Executing migration" id="Add column default in alert_configuration" kafka | [2024-04-26 08:22:03,705] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-19 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-pap | ssl.engine.factory.class = null policy-db-migrator | grafana | logger=migrator t=2024-04-26T08:21:36.174249887Z level=info msg="Migration successfully executed" id="Add column default in alert_configuration" duration=7.703537ms kafka | [2024-04-26 08:22:03,705] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-11 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 
policy-pap | ssl.key.password = null policy-db-migrator | > upgrade 0220-sequence.sql grafana | logger=migrator t=2024-04-26T08:21:36.182473839Z level=info msg="Executing migration" id="alert alert_configuration alertmanager_configuration column from TEXT to MEDIUMTEXT if mysql" kafka | [2024-04-26 08:22:03,705] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-2 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-pap | ssl.keymanager.algorithm = SunX509 policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-26T08:21:36.182525681Z level=info msg="Migration successfully executed" id="alert alert_configuration alertmanager_configuration column from TEXT to MEDIUMTEXT if mysql" duration=52.392µs kafka | [2024-04-26 08:22:03,705] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-43 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-pap | ssl.keystore.certificate.chain = null policy-db-migrator | INSERT INTO sequence(SEQ_NAME, SEQ_COUNT) VALUES('SEQ_GEN', (SELECT IFNULL(max(id),0) FROM pdpstatistics)) grafana | logger=migrator t=2024-04-26T08:21:36.185770019Z level=info msg="Executing migration" id="add column org_id in alert_configuration" kafka | [2024-04-26 08:22:03,705] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-6 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-pap | ssl.keystore.key = null policy-db-migrator | -------------- grafana | logger=migrator 
t=2024-04-26T08:21:36.195981789Z level=info msg="Migration successfully executed" id="add column org_id in alert_configuration" duration=10.21242ms
kafka | [2024-04-26 08:22:03,705] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-14 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
policy-pap | ssl.keystore.location = null
policy-db-migrator |
grafana | logger=migrator t=2024-04-26T08:21:36.20275209Z level=info msg="Executing migration" id="add index in alert_configuration table on org_id column"
kafka | [2024-04-26 08:22:03,705] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-20 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
policy-pap | ssl.keystore.password = null
policy-db-migrator |
grafana | logger=migrator t=2024-04-26T08:21:36.203711566Z level=info msg="Migration successfully executed" id="add index in alert_configuration table on org_id column" duration=961.806µs
policy-pap | ssl.keystore.type = JKS
kafka | [2024-04-26 08:22:03,706] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-0 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
policy-db-migrator | > upgrade 0100-jpatoscapolicy_targets.sql
grafana | logger=migrator t=2024-04-26T08:21:36.208543693Z level=info msg="Executing migration" id="add configuration_hash column to alert_configuration"
policy-pap | ssl.protocol = TLSv1.3
kafka | [2024-04-26 08:22:03,706] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-44 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
policy-db-migrator | --------------
grafana | logger=migrator t=2024-04-26T08:21:36.214906194Z level=info msg="Migration successfully executed" id="add configuration_hash column to alert_configuration" duration=6.3621ms
policy-pap | ssl.provider = null
kafka | [2024-04-26 08:22:03,706] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-39 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
policy-db-migrator | ALTER TABLE jpatoscapolicy_targets ADD COLUMN toscaPolicyName VARCHAR(120) NOT NULL, ADD COLUMN toscaPolicyVersion VARCHAR(20) NOT NULL, ADD CONSTRAINT PK_JPATOSCAPOLICY_TARGETS PRIMARY KEY (toscaPolicyName, toscaPolicyVersion)
grafana | logger=migrator t=2024-04-26T08:21:36.219193633Z level=info msg="Executing migration" id=create_ngalert_configuration_table
policy-pap | ssl.secure.random.implementation = null
kafka | [2024-04-26 08:22:03,706] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-12 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
policy-db-migrator | --------------
grafana | logger=migrator t=2024-04-26T08:21:36.219972001Z level=info msg="Migration successfully executed" id=create_ngalert_configuration_table duration=778.108µs
policy-pap | ssl.trustmanager.algorithm = PKIX
kafka | [2024-04-26 08:22:03,706] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-45 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
policy-db-migrator |
grafana | logger=migrator t=2024-04-26T08:21:36.22382986Z level=info msg="Executing migration" id="add index in ngalert_configuration on org_id column"
policy-pap | ssl.truststore.certificates = null
kafka | [2024-04-26 08:22:03,706] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-1 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
policy-db-migrator |
grafana | logger=migrator t=2024-04-26T08:21:36.224771266Z level=info msg="Migration successfully executed" id="add index in ngalert_configuration on org_id column" duration=940.636µs
policy-pap | ssl.truststore.location = null
kafka | [2024-04-26 08:22:03,706] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-5 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
policy-db-migrator | > upgrade 0110-jpatoscapolicytype_targets.sql
grafana | logger=migrator t=2024-04-26T08:21:36.227897499Z level=info msg="Executing migration" id="add column send_alerts_to in ngalert_configuration"
policy-pap | ssl.truststore.password = null
kafka | [2024-04-26 08:22:03,706] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-26 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
policy-db-migrator | --------------
grafana | logger=migrator t=2024-04-26T08:21:36.234720272Z level=info msg="Migration successfully executed" id="add column send_alerts_to in ngalert_configuration" duration=6.822352ms
policy-pap | ssl.truststore.type = JKS
kafka | [2024-04-26 08:22:03,706] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-29 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
policy-db-migrator | ALTER TABLE jpatoscapolicytype_targets ADD COLUMN toscaPolicyTypeName VARCHAR(120) NOT NULL, ADD COLUMN toscaPolicyTypeVersion VARCHAR(20) NOT NULL, ADD CONSTRAINT PK_JPATOSCAPOLICYTYPE_TARGETS PRIMARY KEY (toscaPolicyTypeName, toscaPolicyTypeVersion)
grafana | logger=migrator t=2024-04-26T08:21:36.238224473Z level=info msg="Executing migration" id="create provenance_type table"
policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
kafka | [2024-04-26 08:22:03,722] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-34 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
policy-db-migrator | --------------
grafana | logger=migrator t=2024-04-26T08:21:36.23916699Z level=info msg="Migration successfully executed" id="create provenance_type table" duration=941.966µs
policy-pap |
kafka | [2024-04-26 08:22:03,722] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-10 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
policy-db-migrator |
policy-db-migrator |
grafana | logger=migrator t=2024-04-26T08:21:36.248007921Z level=info msg="Executing migration" id="add index to uniquify (record_key, record_type, org_id) columns"
policy-pap | [2024-04-26T08:22:02.946+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1
kafka | [2024-04-26 08:22:03,722] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-32 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
policy-db-migrator | > upgrade 0120-toscatrigger.sql
grafana | logger=migrator t=2024-04-26T08:21:36.248984899Z level=info msg="Migration successfully executed" id="add index to uniquify (record_key, record_type, org_id) columns" duration=976.708µs
policy-pap | [2024-04-26T08:22:02.946+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5
kafka | [2024-04-26 08:22:03,722] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-40 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
policy-db-migrator | --------------
grafana | logger=migrator t=2024-04-26T08:21:36.255374562Z level=info msg="Executing migration" id="create alert_image table"
policy-pap | [2024-04-26T08:22:02.946+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1714119722946
kafka | [2024-04-26 08:22:03,722] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=13, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-13 (state.change.logger)
policy-db-migrator | DROP TABLE IF EXISTS toscatrigger
grafana | logger=migrator t=2024-04-26T08:21:36.255970911Z level=info msg="Migration successfully executed" id="create alert_image table" duration=596.23µs
policy-pap | [2024-04-26T08:22:02.946+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Subscribed to topic(s): policy-pdp-pap
kafka | [2024-04-26 08:22:03,722] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=46, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-46 (state.change.logger)
policy-db-migrator | --------------
grafana | logger=migrator t=2024-04-26T08:21:36.25890224Z level=info msg="Executing migration" id="add unique index on token to alert_image table"
policy-pap | [2024-04-26T08:22:02.947+00:00|INFO|ServiceManager|main] Policy PAP starting topics
kafka | [2024-04-26 08:22:03,722] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=9, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-9 (state.change.logger)
policy-db-migrator |
grafana | logger=migrator t=2024-04-26T08:21:36.259584117Z level=info msg="Migration successfully executed" id="add unique index on token to alert_image table" duration=679.453µs
kafka | [2024-04-26 08:22:03,723] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=42, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-42 (state.change.logger)
policy-db-migrator |
policy-pap | [2024-04-26T08:22:02.947+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=policy-pap, consumerInstance=eb01c65d-170a-46d6-9ba7-54033f13f8dc, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-heartbeat,5,main], topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-heartbeat, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting
grafana | logger=migrator t=2024-04-26T08:21:36.262545342Z level=info msg="Executing migration" id="support longer URLs in alert_image table"
kafka | [2024-04-26 08:22:03,723] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=21, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-21 (state.change.logger)
policy-db-migrator | > upgrade 0130-jpatoscapolicytype_triggers.sql
policy-pap | [2024-04-26T08:22:02.947+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=db954cd2-8764-4a44-90af-3bb7f2069f83, consumerInstance=policy-pap, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-pdp-pap,5,main], topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting
grafana | logger=migrator t=2024-04-26T08:21:36.262593794Z level=info msg="Migration successfully executed" id="support longer URLs in alert_image table" duration=49.042µs
kafka | [2024-04-26 08:22:03,723] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=17, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-17 (state.change.logger)
policy-db-migrator | --------------
policy-pap | [2024-04-26T08:22:02.947+00:00|INFO|InlineBusTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=b203f36e-6d55-43c2-9716-adbeab74f0e0, alive=false, publisher=null]]: starting
grafana | logger=migrator t=2024-04-26T08:21:36.268155246Z level=info msg="Executing migration" id=create_alert_configuration_history_table
kafka | [2024-04-26 08:22:03,723] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=30, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-30 (state.change.logger)
policy-db-migrator | ALTER TABLE jpatoscapolicytype_triggers MODIFY COLUMN triggers LONGBLOB
policy-pap | [2024-04-26T08:22:02.961+00:00|INFO|ProducerConfig|main] ProducerConfig values:
grafana | logger=migrator t=2024-04-26T08:21:36.269206318Z level=info msg="Migration successfully executed" id=create_alert_configuration_history_table duration=1.050762ms
kafka | [2024-04-26 08:22:03,711] INFO [Broker id=1] Finished LeaderAndIsr request in 192ms correlationId 1 from controller 1 for 1 partitions (state.change.logger)
policy-db-migrator | --------------
policy-pap | acks = -1
grafana | logger=migrator t=2024-04-26T08:21:36.273625603Z level=info msg="Executing migration" id="drop non-unique orgID index on alert_configuration"
kafka | [2024-04-26 08:22:03,723] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=26, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-26 (state.change.logger)
policy-db-migrator |
policy-pap | auto.include.jmx.reporter = true
grafana | logger=migrator t=2024-04-26T08:21:36.275403541Z level=info msg="Migration successfully executed" id="drop non-unique orgID index on alert_configuration" duration=1.776978ms
kafka | [2024-04-26 08:22:03,723] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=5, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-5 (state.change.logger)
policy-db-migrator |
policy-pap | batch.size = 16384
grafana | logger=migrator t=2024-04-26T08:21:36.279965793Z level=info msg="Executing migration" id="drop unique orgID index on alert_configuration if exists"
policy-db-migrator | > upgrade 0140-toscaparameter.sql
kafka | [2024-04-26 08:22:03,723] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=38, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-38 (state.change.logger)
policy-pap | bootstrap.servers = [kafka:9092]
grafana | logger=migrator t=2024-04-26T08:21:36.280745501Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop unique orgID index on alert_configuration if exists"
policy-db-migrator | --------------
kafka | [2024-04-26 08:22:03,723] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=1, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-1 (state.change.logger)
policy-pap | buffer.memory = 33554432
grafana | logger=migrator t=2024-04-26T08:21:36.286731384Z level=info msg="Executing migration" id="extract alertmanager configuration history to separate table"
policy-db-migrator | DROP TABLE IF EXISTS toscaparameter
kafka | [2024-04-26 08:22:03,724] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=34, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-34 (state.change.logger)
policy-pap | client.dns.lookup = use_all_dns_ips
grafana | logger=migrator t=2024-04-26T08:21:36.287198276Z level=info msg="Migration successfully executed" id="extract alertmanager configuration history to separate table" duration=465.222µs
policy-db-migrator | --------------
kafka | [2024-04-26 08:22:03,724] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=16, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-16 (state.change.logger)
policy-pap | client.id = producer-1
grafana | logger=migrator t=2024-04-26T08:21:36.292687635Z level=info msg="Executing migration" id="add unique index on orgID to alert_configuration"
policy-db-migrator |
kafka | [2024-04-26 08:22:03,724] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=45, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-45 (state.change.logger)
policy-pap | compression.type = none
grafana | logger=migrator t=2024-04-26T08:21:36.29381644Z level=info msg="Migration successfully executed" id="add unique index on orgID to alert_configuration" duration=1.129105ms
policy-db-migrator |
kafka | [2024-04-26 08:22:03,724] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=12, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-12 (state.change.logger)
policy-pap | connections.max.idle.ms = 540000
grafana | logger=migrator t=2024-04-26T08:21:36.298145512Z level=info msg="Executing migration" id="add last_applied column to alert_configuration_history"
policy-db-migrator | > upgrade 0150-toscaproperty.sql
kafka | [2024-04-26 08:22:03,727] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=41, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-41 (state.change.logger)
policy-pap | delivery.timeout.ms = 120000
grafana | logger=migrator t=2024-04-26T08:21:36.304936754Z level=info msg="Migration successfully executed" id="add last_applied column to alert_configuration_history" duration=6.790852ms
policy-db-migrator | --------------
kafka | [2024-04-26 08:22:03,727] TRACE [Controller id=1 epoch=1] Received response LeaderAndIsrResponseData(errorCode=0, partitionErrors=[], topics=[LeaderAndIsrTopicError(topicId=JNNo8CVWSdWgRv4ouhjw3w, partitionErrors=[LeaderAndIsrPartitionError(topicName='', partitionIndex=0, errorCode=0)])]) for request LEADER_AND_ISR with correlation id 1 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger)
policy-pap | enable.idempotence = true
grafana | logger=migrator t=2024-04-26T08:21:36.31058331Z level=info msg="Executing migration" id="create library_element table v1"
policy-db-migrator | DROP TABLE IF EXISTS jpatoscaproperty_constraints
kafka | [2024-04-26 08:22:03,727] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=24, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-24 (state.change.logger)
policy-pap | interceptor.classes = []
grafana | logger=migrator t=2024-04-26T08:21:36.311391069Z level=info msg="Migration successfully executed" id="create library_element table v1" duration=808.869µs
policy-db-migrator | --------------
kafka | [2024-04-26 08:22:03,727] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=20, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-20 (state.change.logger)
policy-pap | key.serializer = class org.apache.kafka.common.serialization.StringSerializer
grafana | logger=migrator t=2024-04-26T08:21:36.315861387Z level=info msg="Executing migration" id="add index library_element org_id-folder_id-name-kind"
policy-db-migrator |
kafka | [2024-04-26 08:22:03,728] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=49, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-49 (state.change.logger)
policy-pap | linger.ms = 0
grafana | logger=migrator t=2024-04-26T08:21:36.316655306Z level=info msg="Migration successfully executed" id="add index library_element org_id-folder_id-name-kind" duration=794.009µs
policy-db-migrator | --------------
kafka | [2024-04-26 08:22:03,728] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-0 (state.change.logger)
policy-pap | max.block.ms = 60000
grafana | logger=migrator t=2024-04-26T08:21:36.320528946Z level=info msg="Executing migration" id="create library_element_connection table v1"
policy-db-migrator | DROP TABLE IF EXISTS jpatoscaproperty_metadata
kafka | [2024-04-26 08:22:03,728] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=29, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-29 (state.change.logger)
policy-pap | max.in.flight.requests.per.connection = 5
grafana | logger=migrator t=2024-04-26T08:21:36.321462501Z level=info msg="Migration successfully executed" id="create library_element_connection table v1" duration=932.745µs
grafana | logger=migrator t=2024-04-26T08:21:36.33492199Z level=info msg="Executing migration" id="add index library_element_connection element_id-kind-connection_id"
kafka | [2024-04-26 08:22:03,728] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=25, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-25 (state.change.logger)
policy-pap | max.request.size = 1048576
policy-db-migrator | --------------
grafana | logger=migrator t=2024-04-26T08:21:36.337057224Z level=info msg="Migration successfully executed" id="add index library_element_connection element_id-kind-connection_id" duration=2.137245ms
kafka | [2024-04-26 08:22:03,729] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=8, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-8 (state.change.logger)
policy-pap | metadata.max.age.ms = 300000
policy-db-migrator |
grafana | logger=migrator t=2024-04-26T08:21:36.342861977Z level=info msg="Executing migration" id="add unique index library_element org_id_uid"
kafka | [2024-04-26 08:22:03,729] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=37, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-37 (state.change.logger)
policy-pap | metadata.max.idle.ms = 300000
grafana | logger=migrator t=2024-04-26T08:21:36.344115689Z level=info msg="Migration successfully executed" id="add unique index library_element org_id_uid" duration=1.257022ms
kafka | [2024-04-26 08:22:03,729] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=4, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-4 (state.change.logger)
policy-db-migrator | --------------
policy-pap | metric.reporters = []
grafana | logger=migrator t=2024-04-26T08:21:36.347070453Z level=info msg="Executing migration" id="increase max description length to 2048"
kafka | [2024-04-26 08:22:03,729] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=33, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-33 (state.change.logger)
policy-db-migrator | DROP TABLE IF EXISTS toscaproperty
policy-pap | metrics.num.samples = 2
grafana | logger=migrator t=2024-04-26T08:21:36.347124735Z level=info msg="Migration successfully executed" id="increase max description length to 2048" duration=54.982µs
kafka | [2024-04-26 08:22:03,729] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=15, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-15 (state.change.logger)
policy-db-migrator | --------------
policy-pap | metrics.recording.level = INFO
grafana | logger=migrator t=2024-04-26T08:21:36.34947468Z level=info msg="Executing migration" id="alter library_element model to mediumtext"
policy-db-migrator |
kafka | [2024-04-26 08:22:03,729] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=48, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-48 (state.change.logger)
policy-pap | metrics.sample.window.ms = 30000
grafana | logger=migrator t=2024-04-26T08:21:36.34966448Z level=info msg="Migration successfully executed" id="alter library_element model to mediumtext" duration=189.41µs
policy-db-migrator |
kafka | [2024-04-26 08:22:03,729] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=11, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-11 (state.change.logger)
policy-pap | partitioner.adaptive.partitioning.enable = true
grafana | logger=migrator t=2024-04-26T08:21:36.352870416Z level=info msg="Executing migration" id="clone move dashboard alerts to unified alerting"
policy-db-migrator | > upgrade 0160-jpapolicyaudit_pk.sql
kafka | [2024-04-26 08:22:03,729] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=44, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-44 (state.change.logger)
policy-pap | partitioner.availability.timeout.ms = 0
grafana | logger=migrator t=2024-04-26T08:21:36.353294387Z level=info msg="Migration successfully executed" id="clone move dashboard alerts to unified alerting" duration=423.781µs
policy-db-migrator | --------------
kafka | [2024-04-26 08:22:03,729] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=23, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-23 (state.change.logger)
policy-pap | partitioner.class = null
grafana | logger=migrator t=2024-04-26T08:21:36.356305294Z level=info msg="Executing migration" id="create data_keys table"
policy-db-migrator | ALTER TABLE jpapolicyaudit DROP PRIMARY KEY
kafka | [2024-04-26 08:22:03,729] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=19, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-19 (state.change.logger)
policy-pap | partitioner.ignore.keys = false
grafana | logger=migrator t=2024-04-26T08:21:36.357127214Z level=info msg="Migration successfully executed" id="create data_keys table" duration=821.86µs
policy-db-migrator | --------------
kafka | [2024-04-26 08:22:03,729] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=32, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-32 (state.change.logger)
policy-pap | receive.buffer.bytes = 32768
grafana | logger=migrator t=2024-04-26T08:21:36.360052627Z level=info msg="Executing migration" id="create secrets table"
policy-db-migrator |
kafka | [2024-04-26 08:22:03,729] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=28, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-28 (state.change.logger)
policy-pap | reconnect.backoff.max.ms = 1000
grafana | logger=migrator t=2024-04-26T08:21:36.36072968Z level=info msg="Migration successfully executed" id="create secrets table" duration=679.243µs
policy-db-migrator | --------------
kafka | [2024-04-26 08:22:03,730] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=7, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-7 (state.change.logger)
policy-pap | reconnect.backoff.ms = 50
grafana | logger=migrator t=2024-04-26T08:21:36.3674722Z level=info msg="Executing migration" id="rename data_keys name column to id"
policy-db-migrator | ALTER TABLE jpapolicyaudit ADD CONSTRAINT PK_JPAPOLICYAUDIT PRIMARY KEY (ID)
kafka | [2024-04-26 08:22:03,730] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=40, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-40 (state.change.logger)
policy-pap | request.timeout.ms = 30000
grafana | logger=migrator t=2024-04-26T08:21:36.400941616Z level=info msg="Migration successfully executed" id="rename data_keys name column to id" duration=33.469056ms
policy-db-migrator | --------------
kafka | [2024-04-26 08:22:03,730] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=3, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-3 (state.change.logger)
policy-pap | retries = 2147483647
grafana | logger=migrator t=2024-04-26T08:21:36.406121178Z level=info msg="Executing migration" id="add name column into data_keys"
policy-db-migrator |
policy-pap | retry.backoff.ms = 100
policy-db-migrator |
kafka | [2024-04-26 08:22:03,730] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=36, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-36 (state.change.logger)
grafana | logger=migrator t=2024-04-26T08:21:36.412794295Z level=info msg="Migration successfully executed" id="add name column into data_keys" duration=6.671617ms
policy-pap | sasl.client.callback.handler.class = null
policy-db-migrator | > upgrade 0170-pdpstatistics_pk.sql
kafka | [2024-04-26 08:22:03,730] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=47, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-47 (state.change.logger)
grafana | logger=migrator t=2024-04-26T08:21:36.418700263Z level=info msg="Executing migration" id="copy data_keys id column values into name"
policy-pap | sasl.jaas.config = null
policy-db-migrator | --------------
kafka | [2024-04-26 08:22:03,730] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=14, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-14 (state.change.logger)
grafana | logger=migrator t=2024-04-26T08:21:36.418917724Z level=info msg="Migration successfully executed" id="copy data_keys id column values into name" duration=217.34µs
policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit
policy-db-migrator | ALTER TABLE pdpstatistics DROP PRIMARY KEY
kafka | [2024-04-26 08:22:03,730] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=43, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-43 (state.change.logger)
grafana | logger=migrator t=2024-04-26T08:21:36.422054117Z level=info msg="Executing migration" id="rename data_keys name column to label"
policy-pap | sasl.kerberos.min.time.before.relogin = 60000
policy-db-migrator | --------------
kafka | [2024-04-26 08:22:03,730] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=10, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-10 (state.change.logger)
policy-pap | sasl.kerberos.service.name = null
policy-db-migrator |
grafana | logger=migrator t=2024-04-26T08:21:36.453440351Z level=info msg="Migration successfully executed" id="rename data_keys name column to label" duration=31.385844ms
kafka | [2024-04-26 08:22:03,731] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=22, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-22 (state.change.logger)
policy-db-migrator | --------------
grafana | logger=migrator t=2024-04-26T08:21:36.459211063Z level=info msg="Executing migration" id="rename data_keys id column back to name"
policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05
kafka | [2024-04-26 08:22:03,731] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=18, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-18 (state.change.logger)
policy-db-migrator | ALTER TABLE pdpstatistics ADD CONSTRAINT PK_PDPSTATISTICS PRIMARY KEY (ID)
grafana | logger=migrator t=2024-04-26T08:21:36.495663495Z level=info msg="Migration successfully executed" id="rename data_keys id column back to name" duration=36.454011ms
policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8
kafka | [2024-04-26 08:22:03,731] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=31, controllerEpoch=1, leader=1, leaderEpoch=0,
isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-31 (state.change.logger) policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-26T08:21:36.528048787Z level=info msg="Executing migration" id="create kv_store table v1" policy-pap | sasl.login.callback.handler.class = null kafka | [2024-04-26 08:22:03,731] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=27, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-27 (state.change.logger) policy-db-migrator | grafana | logger=migrator t=2024-04-26T08:21:36.528867838Z level=info msg="Migration successfully executed" id="create kv_store table v1" duration=819.691µs policy-pap | sasl.login.class = null kafka | [2024-04-26 08:22:03,731] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=39, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-39 (state.change.logger) policy-db-migrator | grafana | logger=migrator t=2024-04-26T08:21:36.53362923Z level=info msg="Executing migration" id="add index kv_store.org_id-namespace-key" policy-pap | sasl.login.connect.timeout.ms = null policy-db-migrator | > upgrade 0180-jpatoscanodetemplate_metadata.sql kafka | [2024-04-26 08:22:03,731] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=6, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, 
replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-6 (state.change.logger) grafana | logger=migrator t=2024-04-26T08:21:36.534898052Z level=info msg="Migration successfully executed" id="add index kv_store.org_id-namespace-key" duration=1.274913ms policy-pap | sasl.login.read.timeout.ms = null policy-db-migrator | -------------- kafka | [2024-04-26 08:22:03,731] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=35, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-35 (state.change.logger) grafana | logger=migrator t=2024-04-26T08:21:36.542782937Z level=info msg="Executing migration" id="update dashboard_uid and panel_id from existing annotations" policy-pap | sasl.login.refresh.buffer.seconds = 300 policy-db-migrator | ALTER TABLE jpatoscanodetemplate_metadata MODIFY COLUMN METADATA LONGTEXT kafka | [2024-04-26 08:22:03,731] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=2, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-2 (state.change.logger) grafana | logger=migrator t=2024-04-26T08:21:36.543009669Z level=info msg="Migration successfully executed" id="update dashboard_uid and panel_id from existing annotations" duration=227.222µs policy-pap | sasl.login.refresh.min.period.seconds = 60 policy-db-migrator | -------------- kafka | [2024-04-26 08:22:03,731] INFO [Controller id=1 epoch=1] Sending LeaderAndIsr request to broker 1 with 50 become-leader and 0 become-follower partitions 
(state.change.logger) grafana | logger=migrator t=2024-04-26T08:21:36.550619861Z level=info msg="Executing migration" id="create permission table" policy-pap | sasl.login.refresh.window.factor = 0.8 policy-db-migrator | kafka | [2024-04-26 08:22:03,732] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet(1) for 50 partitions (state.change.logger) grafana | logger=migrator t=2024-04-26T08:21:36.551531245Z level=info msg="Migration successfully executed" id="create permission table" duration=911.774µs policy-pap | sasl.login.refresh.window.jitter = 0.05 policy-db-migrator | kafka | [2024-04-26 08:22:03,737] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-32 from NewReplica to OnlineReplica (state.change.logger) grafana | logger=migrator t=2024-04-26T08:21:36.555637276Z level=info msg="Executing migration" id="add unique index permission.role_id" policy-pap | sasl.login.retry.backoff.max.ms = 10000 policy-db-migrator | > upgrade 0100-upgrade.sql kafka | [2024-04-26 08:22:03,738] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-5 from NewReplica to OnlineReplica (state.change.logger) grafana | logger=migrator t=2024-04-26T08:21:36.556611753Z level=info msg="Migration successfully executed" id="add unique index permission.role_id" duration=974.807µs policy-pap | sasl.login.retry.backoff.ms = 100 policy-db-migrator | -------------- kafka | [2024-04-26 08:22:03,738] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-44 from NewReplica to OnlineReplica (state.change.logger) grafana | logger=migrator t=2024-04-26T08:21:36.561261721Z level=info msg="Executing migration" id="add unique index role_id_action_scope" policy-pap | sasl.mechanism = GSSAPI policy-db-migrator | select 'upgrade to 1100 completed' as msg kafka | [2024-04-26 08:22:03,738] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition 
__consumer_offsets-48 from NewReplica to OnlineReplica (state.change.logger) grafana | logger=migrator t=2024-04-26T08:21:36.562028678Z level=info msg="Migration successfully executed" id="add unique index role_id_action_scope" duration=766.777µs policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 policy-db-migrator | -------------- kafka | [2024-04-26 08:22:03,738] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-46 from NewReplica to OnlineReplica (state.change.logger) grafana | logger=migrator t=2024-04-26T08:21:36.564972672Z level=info msg="Executing migration" id="create role table" policy-pap | sasl.oauthbearer.expected.audience = null policy-db-migrator | kafka | [2024-04-26 08:22:03,738] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-20 from NewReplica to OnlineReplica (state.change.logger) grafana | logger=migrator t=2024-04-26T08:21:36.565597762Z level=info msg="Migration successfully executed" id="create role table" duration=625.12µs policy-pap | sasl.oauthbearer.expected.issuer = null policy-db-migrator | msg kafka | [2024-04-26 08:22:03,738] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-43 from NewReplica to OnlineReplica (state.change.logger) grafana | logger=migrator t=2024-04-26T08:21:36.570546504Z level=info msg="Executing migration" id="add column display_name" policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 policy-db-migrator | upgrade to 1100 completed kafka | [2024-04-26 08:22:03,738] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-24 from NewReplica to OnlineReplica (state.change.logger) grafana | logger=migrator t=2024-04-26T08:21:36.576014942Z level=info msg="Migration successfully executed" id="add column display_name" duration=5.469038ms policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 policy-db-migrator | kafka | 
[2024-04-26 08:22:03,738] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-6 from NewReplica to OnlineReplica (state.change.logger) grafana | logger=migrator t=2024-04-26T08:21:36.579893361Z level=info msg="Executing migration" id="add column group_name" policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 policy-db-migrator | > upgrade 0100-jpapolicyaudit_renameuser.sql kafka | [2024-04-26 08:22:03,738] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-18 from NewReplica to OnlineReplica (state.change.logger) grafana | logger=migrator t=2024-04-26T08:21:36.58498262Z level=info msg="Migration successfully executed" id="add column group_name" duration=5.087179ms policy-pap | sasl.oauthbearer.jwks.endpoint.url = null policy-db-migrator | -------------- kafka | [2024-04-26 08:22:03,738] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-21 from NewReplica to OnlineReplica (state.change.logger) grafana | logger=migrator t=2024-04-26T08:21:36.588293941Z level=info msg="Executing migration" id="add index role.org_id" policy-pap | sasl.oauthbearer.scope.claim.name = scope policy-db-migrator | ALTER TABLE jpapolicyaudit RENAME COLUMN USER TO USERNAME kafka | [2024-04-26 08:22:03,738] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-1 from NewReplica to OnlineReplica (state.change.logger) grafana | logger=migrator t=2024-04-26T08:21:36.58927916Z level=info msg="Migration successfully executed" id="add index role.org_id" duration=985.069µs policy-pap | sasl.oauthbearer.sub.claim.name = sub policy-db-migrator | -------------- kafka | [2024-04-26 08:22:03,739] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-14 from NewReplica to OnlineReplica (state.change.logger) grafana | logger=migrator t=2024-04-26T08:21:36.593672544Z level=info msg="Executing 
migration" id="add unique index role_org_id_name" policy-pap | sasl.oauthbearer.token.endpoint.url = null policy-db-migrator | kafka | [2024-04-26 08:22:03,739] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-34 from NewReplica to OnlineReplica (state.change.logger) grafana | logger=migrator t=2024-04-26T08:21:36.594758788Z level=info msg="Migration successfully executed" id="add unique index role_org_id_name" duration=1.089524ms policy-pap | security.protocol = PLAINTEXT policy-db-migrator | kafka | [2024-04-26 08:22:03,739] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-16 from NewReplica to OnlineReplica (state.change.logger) grafana | logger=migrator t=2024-04-26T08:21:36.598808405Z level=info msg="Executing migration" id="add index role_org_id_uid" policy-pap | security.providers = null policy-db-migrator | > upgrade 0110-idx_tsidx1.sql kafka | [2024-04-26 08:22:03,739] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-29 from NewReplica to OnlineReplica (state.change.logger) grafana | logger=migrator t=2024-04-26T08:21:36.599836346Z level=info msg="Migration successfully executed" id="add index role_org_id_uid" duration=1.028101ms policy-pap | send.buffer.bytes = 131072 policy-db-migrator | -------------- kafka | [2024-04-26 08:22:03,739] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-11 from NewReplica to OnlineReplica (state.change.logger) grafana | logger=migrator t=2024-04-26T08:21:36.608634085Z level=info msg="Executing migration" id="create team role table" policy-db-migrator | DROP INDEX IDX_TSIDX1 ON pdpstatistics grafana | logger=migrator t=2024-04-26T08:21:36.609971201Z level=info msg="Migration successfully executed" id="create team role table" duration=1.336956ms policy-pap | socket.connection.setup.timeout.max.ms = 30000 kafka | [2024-04-26 08:22:03,739] TRACE [Controller 
id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-0 from NewReplica to OnlineReplica (state.change.logger) policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-26T08:21:36.614769435Z level=info msg="Executing migration" id="add index team_role.org_id" policy-pap | socket.connection.setup.timeout.ms = 10000 kafka | [2024-04-26 08:22:03,739] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-22 from NewReplica to OnlineReplica (state.change.logger) policy-db-migrator | grafana | logger=migrator t=2024-04-26T08:21:36.616435737Z level=info msg="Migration successfully executed" id="add index team_role.org_id" duration=1.666062ms policy-pap | ssl.cipher.suites = null kafka | [2024-04-26 08:22:03,739] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-47 from NewReplica to OnlineReplica (state.change.logger) policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-26T08:21:36.621233041Z level=info msg="Executing migration" id="add unique index team_role_org_id_team_id_role_id" policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] kafka | [2024-04-26 08:22:03,739] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-36 from NewReplica to OnlineReplica (state.change.logger) policy-db-migrator | CREATE INDEX IDXTSIDX1 ON pdpstatistics(timeStamp, name, version) grafana | logger=migrator t=2024-04-26T08:21:36.623323553Z level=info msg="Migration successfully executed" id="add unique index team_role_org_id_team_id_role_id" duration=2.089842ms policy-pap | ssl.endpoint.identification.algorithm = https kafka | [2024-04-26 08:22:03,739] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-28 from NewReplica to OnlineReplica (state.change.logger) policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-26T08:21:36.626973882Z level=info msg="Executing 
migration" id="add index team_role.team_id" policy-pap | ssl.engine.factory.class = null kafka | [2024-04-26 08:22:03,739] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-42 from NewReplica to OnlineReplica (state.change.logger) policy-db-migrator | grafana | logger=migrator t=2024-04-26T08:21:36.628886105Z level=info msg="Migration successfully executed" id="add index team_role.team_id" duration=1.912213ms policy-pap | ssl.key.password = null kafka | [2024-04-26 08:22:03,739] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-9 from NewReplica to OnlineReplica (state.change.logger) policy-db-migrator | grafana | logger=migrator t=2024-04-26T08:21:36.632195887Z level=info msg="Executing migration" id="create user role table" policy-pap | ssl.keymanager.algorithm = SunX509 kafka | [2024-04-26 08:22:03,740] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-37 from NewReplica to OnlineReplica (state.change.logger) policy-db-migrator | > upgrade 0120-audit_sequence.sql grafana | logger=migrator t=2024-04-26T08:21:36.633020737Z level=info msg="Migration successfully executed" id="create user role table" duration=824.8µs policy-pap | ssl.keystore.certificate.chain = null kafka | [2024-04-26 08:22:03,740] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-13 from NewReplica to OnlineReplica (state.change.logger) policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-26T08:21:36.637071105Z level=info msg="Executing migration" id="add index user_role.org_id" policy-pap | ssl.keystore.key = null kafka | [2024-04-26 08:22:03,740] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-30 from NewReplica to OnlineReplica (state.change.logger) policy-db-migrator | CREATE TABLE IF NOT EXISTS audit_sequence (SEQ_NAME VARCHAR(50) NOT NULL, SEQ_COUNT DECIMAL(38) 
DEFAULT NULL, PRIMARY KEY PK_SEQUENCE (SEQ_NAME)) grafana | logger=migrator t=2024-04-26T08:21:36.638069695Z level=info msg="Migration successfully executed" id="add index user_role.org_id" duration=998.26µs policy-pap | ssl.keystore.location = null kafka | [2024-04-26 08:22:03,740] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-35 from NewReplica to OnlineReplica (state.change.logger) policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-26T08:21:36.641359065Z level=info msg="Executing migration" id="add unique index user_role_org_id_user_id_role_id" policy-pap | ssl.keystore.password = null kafka | [2024-04-26 08:22:03,740] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-39 from NewReplica to OnlineReplica (state.change.logger) policy-db-migrator | grafana | logger=migrator t=2024-04-26T08:21:36.642453019Z level=info msg="Migration successfully executed" id="add unique index user_role_org_id_user_id_role_id" duration=1.093184ms policy-pap | ssl.keystore.type = JKS kafka | [2024-04-26 08:22:03,740] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-12 from NewReplica to OnlineReplica (state.change.logger) policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-26T08:21:36.649300404Z level=info msg="Executing migration" id="add index user_role.user_id" policy-pap | ssl.protocol = TLSv1.3 kafka | [2024-04-26 08:22:03,740] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-27 from NewReplica to OnlineReplica (state.change.logger) policy-db-migrator | INSERT INTO audit_sequence(SEQ_NAME, SEQ_COUNT) VALUES('SEQ_GEN', (SELECT IFNULL(max(id),0) FROM jpapolicyaudit)) grafana | logger=migrator t=2024-04-26T08:21:36.651011417Z level=info msg="Migration successfully executed" id="add index user_role.user_id" duration=1.706973ms policy-pap | ssl.provider = null kafka | 
[2024-04-26 08:22:03,740] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-45 from NewReplica to OnlineReplica (state.change.logger) policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-26T08:21:36.656105425Z level=info msg="Executing migration" id="create builtin role table" policy-pap | ssl.secure.random.implementation = null kafka | [2024-04-26 08:22:03,740] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-19 from NewReplica to OnlineReplica (state.change.logger) policy-db-migrator | grafana | logger=migrator t=2024-04-26T08:21:36.657390498Z level=info msg="Migration successfully executed" id="create builtin role table" duration=1.284953ms policy-pap | ssl.trustmanager.algorithm = PKIX kafka | [2024-04-26 08:22:03,740] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-49 from NewReplica to OnlineReplica (state.change.logger) policy-db-migrator | grafana | logger=migrator t=2024-04-26T08:21:36.66233018Z level=info msg="Executing migration" id="add index builtin_role.role_id" policy-pap | ssl.truststore.certificates = null kafka | [2024-04-26 08:22:03,740] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='policy-pdp-pap', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition policy-pdp-pap-0 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) grafana | logger=migrator t=2024-04-26T08:21:36.664011303Z level=info msg="Migration successfully executed" id="add index builtin_role.role_id" duration=1.681432ms policy-db-migrator | > upgrade 0130-statistics_sequence.sql kafka | [2024-04-26 08:22:03,740] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-40 from NewReplica to OnlineReplica (state.change.logger) grafana | 
logger=migrator t=2024-04-26T08:21:36.66805901Z level=info msg="Executing migration" id="add index builtin_role.name" policy-db-migrator | -------------- policy-pap | ssl.truststore.location = null kafka | [2024-04-26 08:22:03,740] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-41 from NewReplica to OnlineReplica (state.change.logger) grafana | logger=migrator t=2024-04-26T08:21:36.66907179Z level=info msg="Migration successfully executed" id="add index builtin_role.name" duration=1.01406ms policy-db-migrator | CREATE TABLE IF NOT EXISTS statistics_sequence (SEQ_NAME VARCHAR(50) NOT NULL, SEQ_COUNT DECIMAL(38) DEFAULT NULL, PRIMARY KEY PK_SEQUENCE (SEQ_NAME)) policy-pap | ssl.truststore.password = null kafka | [2024-04-26 08:22:03,740] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-38 from NewReplica to OnlineReplica (state.change.logger) grafana | logger=migrator t=2024-04-26T08:21:36.673564199Z level=info msg="Executing migration" id="Add column org_id to builtin_role table" policy-db-migrator | -------------- policy-pap | ssl.truststore.type = JKS kafka | [2024-04-26 08:22:03,741] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-8 from NewReplica to OnlineReplica (state.change.logger) grafana | logger=migrator t=2024-04-26T08:21:36.684919203Z level=info msg="Migration successfully executed" id="Add column org_id to builtin_role table" duration=11.356665ms policy-db-migrator | policy-pap | transaction.timeout.ms = 60000 kafka | [2024-04-26 08:22:03,741] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-7 from NewReplica to OnlineReplica (state.change.logger) grafana | logger=migrator t=2024-04-26T08:21:36.688587413Z level=info msg="Executing migration" id="add index builtin_role.org_id" policy-db-migrator | -------------- policy-pap | transactional.id = null kafka | [2024-04-26 08:22:03,741] 
TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-33 from NewReplica to OnlineReplica (state.change.logger) grafana | logger=migrator t=2024-04-26T08:21:36.689312439Z level=info msg="Migration successfully executed" id="add index builtin_role.org_id" duration=724.646µs policy-db-migrator | INSERT INTO statistics_sequence(SEQ_NAME, SEQ_COUNT) VALUES('SEQ_GEN', (SELECT IFNULL(max(id),0) FROM pdpstatistics)) policy-pap | value.serializer = class org.apache.kafka.common.serialization.StringSerializer kafka | [2024-04-26 08:22:03,741] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-25 from NewReplica to OnlineReplica (state.change.logger) grafana | logger=migrator t=2024-04-26T08:21:36.692497924Z level=info msg="Executing migration" id="add unique index builtin_role_org_id_role_id_role" policy-db-migrator | -------------- policy-pap | kafka | [2024-04-26 08:22:03,741] INFO [Broker id=1] Add 1 partitions and deleted 0 partitions from metadata cache in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) grafana | logger=migrator t=2024-04-26T08:21:36.693534375Z level=info msg="Migration successfully executed" id="add unique index builtin_role_org_id_role_id_role" duration=1.035971ms policy-db-migrator | kafka | [2024-04-26 08:22:03,741] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-31 from NewReplica to OnlineReplica (state.change.logger) policy-db-migrator | -------------- policy-pap | [2024-04-26T08:22:02.971+00:00|INFO|KafkaProducer|main] [Producer clientId=producer-1] Instantiated an idempotent producer. 
grafana | logger=migrator t=2024-04-26T08:21:36.697479527Z level=info msg="Executing migration" id="Remove unique index role_org_id_uid" kafka | [2024-04-26 08:22:03,741] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-23 from NewReplica to OnlineReplica (state.change.logger) policy-db-migrator | TRUNCATE TABLE sequence policy-pap | [2024-04-26T08:22:02.985+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 kafka | [2024-04-26 08:22:03,741] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-10 from NewReplica to OnlineReplica (state.change.logger) grafana | logger=migrator t=2024-04-26T08:21:36.698498228Z level=info msg="Migration successfully executed" id="Remove unique index role_org_id_uid" duration=1.018861ms policy-db-migrator | -------------- policy-pap | [2024-04-26T08:22:02.985+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 kafka | [2024-04-26 08:22:03,741] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-2 from NewReplica to OnlineReplica (state.change.logger) grafana | logger=migrator t=2024-04-26T08:21:36.702767636Z level=info msg="Executing migration" id="add unique index role.uid" policy-db-migrator | policy-pap | [2024-04-26T08:22:02.985+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1714119722984 kafka | [2024-04-26 08:22:03,741] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-17 from NewReplica to OnlineReplica (state.change.logger) grafana | logger=migrator t=2024-04-26T08:21:36.704009847Z level=info msg="Migration successfully executed" id="add unique index role.uid" duration=1.242081ms policy-db-migrator | policy-pap | [2024-04-26T08:22:02.985+00:00|INFO|InlineKafkaTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=b203f36e-6d55-43c2-9716-adbeab74f0e0, alive=false, 
publisher=KafkaPublisherWrapper []]]: KAFKA SINK created
kafka | [2024-04-26 08:22:03,742] TRACE [Controller id=1 epoch=1] Received response UpdateMetadataResponseData(errorCode=0) for request UPDATE_METADATA with correlation id 2 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger)
grafana | logger=migrator t=2024-04-26T08:21:36.707318398Z level=info msg="Executing migration" id="create seed assignment table"
policy-db-migrator | > upgrade 0100-pdpstatistics.sql
policy-pap | [2024-04-26T08:22:02.985+00:00|INFO|InlineBusTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=9596763a-349e-4441-886a-d80f8a74994a, alive=false, publisher=null]]: starting
kafka | [2024-04-26 08:22:03,742] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-4 from NewReplica to OnlineReplica (state.change.logger)
grafana | logger=migrator t=2024-04-26T08:21:36.708089137Z level=info msg="Migration successfully executed" id="create seed assignment table" duration=770.369µs
policy-db-migrator | --------------
policy-pap | [2024-04-26T08:22:02.986+00:00|INFO|ProducerConfig|main] ProducerConfig values:
kafka | [2024-04-26 08:22:03,742] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-15 from NewReplica to OnlineReplica (state.change.logger)
grafana | logger=migrator t=2024-04-26T08:21:36.713372604Z level=info msg="Executing migration" id="add unique index builtin_role_role_name"
policy-db-migrator | DROP INDEX IDXTSIDX1 ON pdpstatistics
policy-pap | acks = -1
kafka | [2024-04-26 08:22:03,742] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-26 from NewReplica to OnlineReplica (state.change.logger)
grafana | logger=migrator t=2024-04-26T08:21:36.714456618Z level=info msg="Migration successfully executed" id="add unique index builtin_role_role_name" duration=1.083584ms
policy-db-migrator | --------------
policy-pap | auto.include.jmx.reporter = true
kafka | [2024-04-26 08:22:03,742] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-3 from NewReplica to OnlineReplica (state.change.logger)
grafana | logger=migrator t=2024-04-26T08:21:36.717553459Z level=info msg="Executing migration" id="add column hidden to role table"
policy-db-migrator |
policy-pap | batch.size = 16384
kafka | [2024-04-26 08:22:03,742] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger)
grafana | logger=migrator t=2024-04-26T08:21:36.728854951Z level=info msg="Migration successfully executed" id="add column hidden to role table" duration=11.241259ms
policy-db-migrator | --------------
policy-pap | bootstrap.servers = [kafka:9092]
kafka | [2024-04-26 08:22:03,748] INFO [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 for 50 partitions (state.change.logger)
grafana | logger=migrator t=2024-04-26T08:21:36.731533962Z level=info msg="Executing migration" id="permission kind migration"
policy-db-migrator | DROP TABLE pdpstatistics
policy-pap | buffer.memory = 33554432
kafka | [2024-04-26 08:22:03,750] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=13, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
grafana | logger=migrator t=2024-04-26T08:21:36.737814019Z level=info msg="Migration successfully executed" id="permission kind migration" duration=6.279057ms
policy-db-migrator | --------------
policy-pap | client.dns.lookup = use_all_dns_ips
kafka | [2024-04-26 08:22:03,750] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=46, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
grafana | logger=migrator t=2024-04-26T08:21:36.742863245Z level=info msg="Executing migration" id="permission attribute migration"
policy-pap | client.id = producer-2
grafana | logger=migrator t=2024-04-26T08:21:36.750909349Z level=info msg="Migration successfully executed" id="permission attribute migration" duration=8.045534ms
policy-db-migrator |
kafka | [2024-04-26 08:22:03,757] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=9, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
policy-pap | compression.type = none
policy-db-migrator |
kafka | [2024-04-26 08:22:03,757] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=42, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
grafana | logger=migrator t=2024-04-26T08:21:36.757149054Z level=info msg="Executing migration" id="permission identifier migration"
policy-pap | connections.max.idle.ms = 540000
policy-db-migrator | > upgrade 0110-jpapdpstatistics_enginestats.sql
kafka | [2024-04-26 08:22:03,757] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=21, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3
from controller 1 epoch 1 (state.change.logger)
grafana | logger=migrator t=2024-04-26T08:21:36.765243519Z level=info msg="Migration successfully executed" id="permission identifier migration" duration=8.093695ms
policy-pap | delivery.timeout.ms = 120000
policy-db-migrator | --------------
kafka | [2024-04-26 08:22:03,757] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=17, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
grafana | logger=migrator t=2024-04-26T08:21:36.768436796Z level=info msg="Executing migration" id="add permission identifier index"
policy-pap | enable.idempotence = true
kafka | [2024-04-26 08:22:03,758] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=30, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
grafana | logger=migrator t=2024-04-26T08:21:36.769469776Z level=info msg="Migration successfully executed" id="add permission identifier index" duration=1.03255ms
policy-pap | interceptor.classes = []
policy-db-migrator | DROP TABLE jpapdpstatistics_enginestats
kafka | [2024-04-26 08:22:03,758] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=26, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
grafana | logger=migrator t=2024-04-26T08:21:36.774521102Z level=info msg="Executing migration" id="add permission action scope role_id index"
policy-pap | key.serializer = class org.apache.kafka.common.serialization.StringSerializer
policy-db-migrator | --------------
kafka | [2024-04-26 08:22:03,758] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=5, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
grafana | logger=migrator t=2024-04-26T08:21:36.77568731Z level=info msg="Migration successfully executed" id="add permission action scope role_id index" duration=1.165728ms
policy-pap | linger.ms = 0
policy-db-migrator |
kafka | [2024-04-26 08:22:03,758] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=38, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
grafana | logger=migrator t=2024-04-26T08:21:36.779879455Z level=info msg="Executing migration" id="remove permission role_id action scope index"
policy-pap | max.block.ms = 60000
policy-db-migrator |
grafana | logger=migrator t=2024-04-26T08:21:36.7814274Z level=info msg="Migration successfully executed" id="remove permission role_id action scope index" duration=1.553996ms
kafka | [2024-04-26 08:22:03,758] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=1, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
policy-pap | max.in.flight.requests.per.connection = 5
policy-db-migrator | > upgrade 0120-statistics_sequence.sql
grafana | logger=migrator t=2024-04-26T08:21:36.786386452Z level=info msg="Executing migration" id="create query_history table v1"
kafka | [2024-04-26 08:22:03,758] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=34, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
policy-pap | max.request.size = 1048576
policy-db-migrator | --------------
grafana | logger=migrator t=2024-04-26T08:21:36.787840753Z level=info msg="Migration successfully executed" id="create query_history table v1" duration=1.454091ms
kafka | [2024-04-26 08:22:03,758] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=16, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
policy-pap | metadata.max.age.ms = 300000
policy-db-migrator | DROP TABLE statistics_sequence
grafana | logger=migrator t=2024-04-26T08:21:36.792492651Z level=info msg="Executing migration" id="add index query_history.org_id-created_by-datasource_uid"
kafka | [2024-04-26 08:22:03,758] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=45, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
policy-pap | metadata.max.idle.ms = 300000
policy-db-migrator | --------------
grafana | logger=migrator t=2024-04-26T08:21:36.794755601Z level=info msg="Migration successfully
executed" id="add index query_history.org_id-created_by-datasource_uid" duration=2.26525ms
kafka | [2024-04-26 08:22:03,758] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=12, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
policy-db-migrator |
kafka | [2024-04-26 08:22:03,759] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=41, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
policy-pap | metric.reporters = []
grafana | logger=migrator t=2024-04-26T08:21:36.800937194Z level=info msg="Executing migration" id="alter table query_history alter column created_by type to bigint"
kafka | [2024-04-26 08:22:03,759] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=24, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
policy-pap | metrics.num.samples = 2
grafana | logger=migrator t=2024-04-26T08:21:36.801093781Z level=info msg="Migration successfully executed" id="alter table query_history alter column created_by type to bigint" duration=156.727µs
policy-db-migrator | policyadmin: OK: upgrade (1300)
kafka | [2024-04-26 08:22:03,759] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=20, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
policy-pap | metrics.recording.level = INFO
grafana | logger=migrator t=2024-04-26T08:21:36.804432025Z level=info msg="Executing migration" id="rbac disabled migrator"
policy-db-migrator | name version
kafka | [2024-04-26 08:22:03,759] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=49, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
policy-pap | metrics.sample.window.ms = 30000
grafana | logger=migrator t=2024-04-26T08:21:36.804474046Z level=info msg="Migration successfully executed" id="rbac disabled migrator" duration=42.802µs
policy-db-migrator | policyadmin 1300
kafka | [2024-04-26 08:22:03,759] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
policy-pap | partitioner.adaptive.partitioning.enable = true
grafana | logger=migrator t=2024-04-26T08:21:36.80802121Z level=info msg="Executing migration" id="teams permissions migration"
policy-db-migrator | ID script operation from_version to_version tag success atTime
kafka | [2024-04-26 08:22:03,759] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=29, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
policy-pap | partitioner.availability.timeout.ms = 0
grafana | logger=migrator t=2024-04-26T08:21:36.808658531Z level=info msg="Migration successfully executed" id="teams permissions migration" duration=644.942µs
policy-db-migrator | 1 0100-jpapdpgroup_properties.sql upgrade 0 0800 2604240821330800u 1 2024-04-26 08:21:33
kafka | [2024-04-26 08:22:03,759] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=25, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
policy-pap | partitioner.class = null
grafana | logger=migrator t=2024-04-26T08:21:36.811890709Z level=info msg="Executing migration" id="dashboard permissions"
policy-db-migrator | 2 0110-jpapdpstatistics_enginestats.sql upgrade 0 0800 2604240821330800u 1 2024-04-26 08:21:33
kafka | [2024-04-26 08:22:03,759] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=8, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
policy-pap | partitioner.ignore.keys = false
grafana | logger=migrator t=2024-04-26T08:21:36.812885648Z level=info msg="Migration successfully executed" id="dashboard permissions" duration=996.348µs
policy-db-migrator | 3 0120-jpapdpsubgroup_policies.sql upgrade 0 0800 2604240821330800u 1 2024-04-26 08:21:33
kafka | [2024-04-26 08:22:03,760] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=37, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[],
isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
policy-pap | receive.buffer.bytes = 32768
grafana | logger=migrator t=2024-04-26T08:21:36.816138717Z level=info msg="Executing migration" id="dashboard permissions uid scopes"
policy-db-migrator | 4 0130-jpapdpsubgroup_properties.sql upgrade 0 0800 2604240821330800u 1 2024-04-26 08:21:33
kafka | [2024-04-26 08:22:03,760] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=4, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
policy-pap | reconnect.backoff.max.ms = 1000
grafana | logger=migrator t=2024-04-26T08:21:36.817280342Z level=info msg="Migration successfully executed" id="dashboard permissions uid scopes" duration=1.141885ms
policy-db-migrator | 5 0140-jpapdpsubgroup_supportedpolicytypes.sql upgrade 0 0800 2604240821330800u 1 2024-04-26 08:21:33
kafka | [2024-04-26 08:22:03,760] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=33, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
policy-pap | reconnect.backoff.ms = 50
grafana | logger=migrator t=2024-04-26T08:21:36.821707349Z level=info msg="Executing migration" id="drop managed folder create actions"
policy-db-migrator | 6 0150-jpatoscacapabilityassignment_attributes.sql upgrade 0 0800 2604240821330800u 1 2024-04-26 08:21:33
kafka | [2024-04-26 08:22:03,760] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=15, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
grafana | logger=migrator t=2024-04-26T08:21:36.822034675Z level=info msg="Migration successfully executed" id="drop managed folder create actions" duration=327.626µs
policy-pap | request.timeout.ms = 30000
policy-db-migrator | 7 0160-jpatoscacapabilityassignment_metadata.sql upgrade 0 0800 2604240821330800u 1 2024-04-26 08:21:33
kafka | [2024-04-26 08:22:03,760] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=48, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
grafana | logger=migrator t=2024-04-26T08:21:36.82480671Z level=info msg="Executing migration" id="alerting notification permissions"
policy-pap | retries = 2147483647
policy-db-migrator | 8 0170-jpatoscacapabilityassignment_occurrences.sql upgrade 0 0800 2604240821330800u 1 2024-04-26 08:21:33
kafka | [2024-04-26 08:22:03,760] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=11, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
grafana | logger=migrator t=2024-04-26T08:21:36.82542299Z level=info msg="Migration successfully executed" id="alerting notification permissions" duration=616.01µs
policy-pap | retry.backoff.ms = 100
policy-db-migrator | 9 0180-jpatoscacapabilityassignment_properties.sql upgrade 0 0800 2604240821330800u 1 2024-04-26 08:21:33
kafka | [2024-04-26 08:22:03,760] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=44, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
grafana | logger=migrator t=2024-04-26T08:21:36.828137563Z level=info msg="Executing migration" id="create query_history_star table v1"
policy-pap | sasl.client.callback.handler.class = null
policy-db-migrator | 10 0190-jpatoscacapabilitytype_metadata.sql upgrade 0 0800 2604240821330800u 1 2024-04-26 08:21:33
kafka | [2024-04-26 08:22:03,760] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=23, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
grafana | logger=migrator t=2024-04-26T08:21:36.829005125Z level=info msg="Migration successfully executed" id="create query_history_star table v1" duration=866.912µs
policy-pap | sasl.jaas.config = null
policy-db-migrator | 11 0200-jpatoscacapabilitytype_properties.sql upgrade 0 0800 2604240821330800u 1 2024-04-26 08:21:33
kafka | [2024-04-26 08:22:03,760] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=19, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit
policy-db-migrator | 12 0210-jpatoscadatatype_constraints.sql upgrade 0 0800 2604240821330800u 1 2024-04-26 08:21:33
kafka | [2024-04-26 08:22:03,761] TRACE [Broker id=1] Received LeaderAndIsr request
LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=32, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
policy-db-migrator | 13 0220-jpatoscadatatype_metadata.sql upgrade 0 0800 2604240821330800u 1 2024-04-26 08:21:33
grafana | logger=migrator t=2024-04-26T08:21:36.834239522Z level=info msg="Executing migration" id="add index query_history.user_id-query_uid"
policy-pap | sasl.kerberos.min.time.before.relogin = 60000
kafka | [2024-04-26 08:22:03,761] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=28, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
policy-db-migrator | 14 0230-jpatoscadatatype_properties.sql upgrade 0 0800 2604240821330800u 1 2024-04-26 08:21:34
grafana | logger=migrator t=2024-04-26T08:21:36.83562913Z level=info msg="Migration successfully executed" id="add index query_history.user_id-query_uid" duration=1.388607ms
kafka | [2024-04-26 08:22:03,761] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=7, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
policy-pap | sasl.kerberos.service.name = null
kafka | [2024-04-26 08:22:03,761] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=40, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
policy-db-migrator | 15 0240-jpatoscanodetemplate_metadata.sql upgrade 0 0800 2604240821330800u 1 2024-04-26 08:21:34
policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05
grafana | logger=migrator t=2024-04-26T08:21:36.84281174Z level=info msg="Executing migration" id="add column org_id in query_history_star"
kafka | [2024-04-26 08:22:03,761] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=3, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
policy-db-migrator | 16 0250-jpatoscanodetemplate_properties.sql upgrade 0 0800 2604240821330800u 1 2024-04-26 08:21:34
policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8
grafana | logger=migrator t=2024-04-26T08:21:36.853548775Z level=info msg="Migration successfully executed" id="add column org_id in query_history_star" duration=10.737975ms
policy-db-migrator | 17 0260-jpatoscanodetype_metadata.sql upgrade 0 0800 2604240821330800u 1 2024-04-26 08:21:34
kafka | [2024-04-26 08:22:03,761] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=36, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
policy-pap | sasl.login.callback.handler.class = null
grafana | logger=migrator t=2024-04-26T08:21:36.858102558Z level=info msg="Executing migration" id="alter table query_history_star_mig column user_id type to bigint"
policy-db-migrator | 18 0270-jpatoscanodetype_properties.sql upgrade 0 0800 2604240821330800u 1 2024-04-26 08:21:34
kafka | [2024-04-26 08:22:03,761] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=47, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
grafana | logger=migrator t=2024-04-26T08:21:36.858177591Z level=info msg="Migration successfully executed" id="alter table query_history_star_mig column user_id type to bigint" duration=74.603µs
policy-db-migrator | 19 0280-jpatoscapolicy_metadata.sql upgrade 0 0800 2604240821330800u 1 2024-04-26 08:21:34
policy-pap | sasl.login.class = null
kafka | [2024-04-26 08:22:03,761] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=14, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
grafana | logger=migrator t=2024-04-26T08:21:36.861754436Z level=info msg="Executing migration" id="create correlation table v1"
policy-db-migrator | 20 0290-jpatoscapolicy_properties.sql upgrade 0 0800 2604240821330800u 1 2024-04-26 08:21:34
policy-pap | sasl.login.connect.timeout.ms = null
kafka | [2024-04-26 08:22:03,762] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=43, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
grafana | logger=migrator t=2024-04-26T08:21:36.862895162Z level=info msg="Migration successfully executed" id="create correlation table v1" duration=1.137196ms
policy-db-migrator
| 21 0300-jpatoscapolicy_targets.sql upgrade 0 0800 2604240821330800u 1 2024-04-26 08:21:34
policy-pap | sasl.login.read.timeout.ms = null
kafka | [2024-04-26 08:22:03,762] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=10, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
grafana | logger=migrator t=2024-04-26T08:21:36.867524378Z level=info msg="Executing migration" id="add index correlations.uid"
policy-db-migrator | 22 0310-jpatoscapolicytype_metadata.sql upgrade 0 0800 2604240821330800u 1 2024-04-26 08:21:34
policy-pap | sasl.login.refresh.buffer.seconds = 300
kafka | [2024-04-26 08:22:03,762] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=22, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
grafana | logger=migrator t=2024-04-26T08:21:36.86879918Z level=info msg="Migration successfully executed" id="add index correlations.uid" duration=1.273962ms
policy-db-migrator | 23 0320-jpatoscapolicytype_properties.sql upgrade 0 0800 2604240821330800u 1 2024-04-26 08:21:34
policy-pap | sasl.login.refresh.min.period.seconds = 60
kafka | [2024-04-26 08:22:03,762] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=18, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
grafana | logger=migrator t=2024-04-26T08:21:36.874205665Z level=info msg="Executing migration" id="add index correlations.source_uid"
policy-db-migrator | 24 0330-jpatoscapolicytype_targets.sql upgrade 0 0800 2604240821330800u 1 2024-04-26 08:21:34
policy-pap | sasl.login.refresh.window.factor = 0.8
kafka | [2024-04-26 08:22:03,762] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=31, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
policy-db-migrator | 25 0340-jpatoscapolicytype_triggers.sql upgrade 0 0800 2604240821330800u 1 2024-04-26 08:21:34
grafana | logger=migrator t=2024-04-26T08:21:36.876253435Z level=info msg="Migration successfully executed" id="add index correlations.source_uid" duration=2.04757ms
policy-pap | sasl.login.refresh.window.jitter = 0.05
kafka | [2024-04-26 08:22:03,762] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=27, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
policy-db-migrator | 26 0350-jpatoscaproperty_constraints.sql upgrade 0 0800 2604240821330800u 1 2024-04-26 08:21:34
grafana | logger=migrator t=2024-04-26T08:21:36.879787088Z level=info msg="Executing migration" id="add correlation config column"
policy-pap | sasl.login.retry.backoff.max.ms = 10000
kafka | [2024-04-26 08:22:03,762] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=39, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
policy-db-migrator | 27 0360-jpatoscaproperty_metadata.sql upgrade 0 0800 2604240821330800u 1 2024-04-26 08:21:34
grafana | logger=migrator t=2024-04-26T08:21:36.888300633Z level=info msg="Migration successfully executed" id="add correlation config column" duration=8.513146ms
policy-pap | sasl.login.retry.backoff.ms = 100
kafka | [2024-04-26 08:22:03,762] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=6, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
policy-db-migrator | 28 0370-jpatoscarelationshiptype_metadata.sql upgrade 0 0800 2604240821330800u 1 2024-04-26 08:21:34
grafana | logger=migrator t=2024-04-26T08:21:36.893643965Z level=info msg="Executing migration" id="drop index IDX_correlation_uid - v1"
policy-pap | sasl.mechanism = GSSAPI
kafka | [2024-04-26 08:22:03,762] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=35, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
policy-db-migrator | 29 0380-jpatoscarelationshiptype_properties.sql upgrade 0 0800 2604240821330800u 1 2024-04-26 08:21:34
grafana | logger=migrator t=2024-04-26T08:21:36.894762009Z level=info msg="Migration successfully executed" id="drop index IDX_correlation_uid - v1" duration=1.118204ms
policy-pap | sasl.oauthbearer.clock.skew.seconds = 30
kafka | [2024-04-26 08:22:03,763] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=2, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1],
partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
policy-db-migrator | 30 0390-jpatoscarequirement_metadata.sql upgrade 0 0800 2604240821330800u 1 2024-04-26 08:21:34
grafana | logger=migrator t=2024-04-26T08:21:36.898949464Z level=info msg="Executing migration" id="drop index IDX_correlation_source_uid - v1"
policy-pap | sasl.oauthbearer.expected.audience = null
kafka | [2024-04-26 08:22:03,785] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-3 (state.change.logger)
policy-db-migrator | 31 0400-jpatoscarequirement_occurrences.sql upgrade 0 0800 2604240821330800u 1 2024-04-26 08:21:34
grafana | logger=migrator t=2024-04-26T08:21:36.899964344Z level=info msg="Migration successfully executed" id="drop index IDX_correlation_source_uid - v1" duration=1.01537ms
policy-pap | sasl.oauthbearer.expected.issuer = null
kafka | [2024-04-26 08:22:03,785] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-18 (state.change.logger)
policy-db-migrator | 32 0410-jpatoscarequirement_properties.sql upgrade 0 0800 2604240821330800u 1 2024-04-26 08:21:34
grafana | logger=migrator t=2024-04-26T08:21:36.903027793Z level=info msg="Executing migration" id="Rename table correlation to correlation_tmp_qwerty - v1"
policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
kafka | [2024-04-26 08:22:03,785] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-41 (state.change.logger)
policy-db-migrator | 33 0420-jpatoscaservicetemplate_metadata.sql upgrade 0 0800 2604240821330800u 1 2024-04-26 08:21:34
grafana | logger=migrator t=2024-04-26T08:21:36.927782593Z level=info msg="Migration successfully executed" id="Rename table correlation to correlation_tmp_qwerty - v1" duration=24.75462ms
policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
kafka | [2024-04-26 08:22:03,785] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-10 (state.change.logger)
policy-db-migrator | 34 0430-jpatoscatopologytemplate_inputs.sql upgrade 0 0800 2604240821330800u 1 2024-04-26 08:21:34
grafana | logger=migrator t=2024-04-26T08:21:36.93221278Z level=info msg="Executing migration" id="create correlation v2"
policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
kafka | [2024-04-26 08:22:03,786] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-33 (state.change.logger)
policy-db-migrator | 35 0440-pdpgroup_pdpsubgroup.sql upgrade 0 0800 2604240821330800u 1 2024-04-26 08:21:35
grafana | logger=migrator t=2024-04-26T08:21:36.933099353Z level=info msg="Migration successfully executed" id="create correlation v2" duration=886.033µs
policy-pap | sasl.oauthbearer.jwks.endpoint.url = null
kafka | [2024-04-26 08:22:03,786] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-48 (state.change.logger)
policy-db-migrator | 36 0450-pdpgroup.sql upgrade 0 0800 2604240821330800u 1 2024-04-26 08:21:35
grafana | logger=migrator t=2024-04-26T08:21:36.938809002Z level=info msg="Executing migration" id="create index IDX_correlation_uid - v2"
policy-pap | sasl.oauthbearer.scope.claim.name = scope
kafka | [2024-04-26 08:22:03,786] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-19 (state.change.logger)
policy-db-migrator | 37 0460-pdppolicystatus.sql upgrade 0 0800 2604240821330800u 1 2024-04-26 08:21:35
grafana | logger=migrator t=2024-04-26T08:21:36.939896975Z level=info msg="Migration successfully executed" id="create index IDX_correlation_uid - v2" duration=1.086283ms
policy-pap | sasl.oauthbearer.sub.claim.name = sub
kafka | [2024-04-26 08:22:03,786] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-34 (state.change.logger)
policy-db-migrator | 38 0470-pdp.sql upgrade 0 0800 2604240821330800u 1 2024-04-26 08:21:35
grafana | logger=migrator t=2024-04-26T08:21:36.946840455Z level=info msg="Executing migration" id="create index IDX_correlation_source_uid - v2"
policy-pap | sasl.oauthbearer.token.endpoint.url = null
kafka | [2024-04-26 08:22:03,786] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-4 (state.change.logger)
policy-db-migrator | 39 0480-pdpstatistics.sql upgrade 0 0800 2604240821330800u 1 2024-04-26 08:21:35
grafana | logger=migrator t=2024-04-26T08:21:36.948772558Z level=info msg="Migration successfully executed" id="create index IDX_correlation_source_uid - v2" duration=1.932143ms
policy-pap | security.protocol = PLAINTEXT
kafka | [2024-04-26 08:22:03,786] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-11 (state.change.logger)
policy-db-migrator | 40 0490-pdpsubgroup_pdp.sql upgrade 0 0800 2604240821330800u 1 2024-04-26 08:21:35
grafana | logger=migrator t=2024-04-26T08:21:36.954774512Z level=info msg="Executing migration" id="create index IDX_correlation_org_id - v2"
policy-pap | security.providers = null
kafka | [2024-04-26 08:22:03,786] TRACE [Broker id=1] Handling
LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-26 (state.change.logger) grafana | logger=migrator t=2024-04-26T08:21:36.956202972Z level=info msg="Migration successfully executed" id="create index IDX_correlation_org_id - v2" duration=1.43213ms policy-db-migrator | 41 0500-pdpsubgroup.sql upgrade 0 0800 2604240821330800u 1 2024-04-26 08:21:35 policy-pap | send.buffer.bytes = 131072 kafka | [2024-04-26 08:22:03,786] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-49 (state.change.logger) grafana | logger=migrator t=2024-04-26T08:21:36.960287462Z level=info msg="Executing migration" id="copy correlation v1 to v2" policy-db-migrator | 42 0510-toscacapabilityassignment.sql upgrade 0 0800 2604240821330800u 1 2024-04-26 08:21:35 policy-pap | socket.connection.setup.timeout.max.ms = 30000 kafka | [2024-04-26 08:22:03,786] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-39 (state.change.logger) grafana | logger=migrator t=2024-04-26T08:21:36.960636508Z level=info msg="Migration successfully executed" id="copy correlation v1 to v2" duration=348.716µs policy-db-migrator | 43 0520-toscacapabilityassignments.sql upgrade 0 0800 2604240821330800u 1 2024-04-26 08:21:35 policy-pap | socket.connection.setup.timeout.ms = 10000 kafka | [2024-04-26 08:22:03,786] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-9 (state.change.logger) grafana | logger=migrator t=2024-04-26T08:21:36.965409622Z level=info msg="Executing migration" id="drop correlation_tmp_qwerty" policy-db-migrator | 44 0530-toscacapabilityassignments_toscacapabilityassignment.sql upgrade 0 0800 
2604240821330800u 1 2024-04-26 08:21:35 policy-pap | ssl.cipher.suites = null kafka | [2024-04-26 08:22:03,786] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-24 (state.change.logger) grafana | logger=migrator t=2024-04-26T08:21:36.966330706Z level=info msg="Migration successfully executed" id="drop correlation_tmp_qwerty" duration=920.544µs policy-db-migrator | 45 0540-toscacapabilitytype.sql upgrade 0 0800 2604240821330800u 1 2024-04-26 08:21:35 policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] kafka | [2024-04-26 08:22:03,786] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-31 (state.change.logger) grafana | logger=migrator t=2024-04-26T08:21:36.970341323Z level=info msg="Executing migration" id="add provisioning column" policy-db-migrator | 46 0550-toscacapabilitytypes.sql upgrade 0 0800 2604240821330800u 1 2024-04-26 08:21:35 policy-pap | ssl.endpoint.identification.algorithm = https kafka | [2024-04-26 08:22:03,786] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-46 (state.change.logger) grafana | logger=migrator t=2024-04-26T08:21:36.979364914Z level=info msg="Migration successfully executed" id="add provisioning column" duration=9.022231ms policy-db-migrator | 47 0560-toscacapabilitytypes_toscacapabilitytype.sql upgrade 0 0800 2604240821330800u 1 2024-04-26 08:21:35 policy-pap | ssl.engine.factory.class = null kafka | [2024-04-26 08:22:03,786] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-1 (state.change.logger) grafana | logger=migrator t=2024-04-26T08:21:36.982439164Z level=info msg="Executing 
migration" id="create entity_events table" policy-db-migrator | 48 0570-toscadatatype.sql upgrade 0 0800 2604240821330800u 1 2024-04-26 08:21:35 policy-pap | ssl.key.password = null kafka | [2024-04-26 08:22:03,786] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-16 (state.change.logger) grafana | logger=migrator t=2024-04-26T08:21:36.983611231Z level=info msg="Migration successfully executed" id="create entity_events table" duration=1.171797ms policy-db-migrator | 49 0580-toscadatatypes.sql upgrade 0 0800 2604240821330800u 1 2024-04-26 08:21:35 policy-pap | ssl.keymanager.algorithm = SunX509 kafka | [2024-04-26 08:22:03,786] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-2 (state.change.logger) grafana | logger=migrator t=2024-04-26T08:21:36.992085105Z level=info msg="Executing migration" id="create dashboard public config v1" policy-db-migrator | 50 0590-toscadatatypes_toscadatatype.sql upgrade 0 0800 2604240821330800u 1 2024-04-26 08:21:35 policy-pap | ssl.keystore.certificate.chain = null kafka | [2024-04-26 08:22:03,786] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-25 (state.change.logger) grafana | logger=migrator t=2024-04-26T08:21:36.994164477Z level=info msg="Migration successfully executed" id="create dashboard public config v1" duration=2.078852ms policy-db-migrator | 51 0600-toscanodetemplate.sql upgrade 0 0800 2604240821330800u 1 2024-04-26 08:21:35 policy-pap | ssl.keystore.key = null kafka | [2024-04-26 08:22:03,786] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-40 (state.change.logger) 
grafana | logger=migrator t=2024-04-26T08:21:36.998501699Z level=info msg="Executing migration" id="drop index UQE_dashboard_public_config_uid - v1" policy-db-migrator | 52 0610-toscanodetemplates.sql upgrade 0 0800 2604240821330800u 1 2024-04-26 08:21:35 policy-pap | ssl.keystore.location = null kafka | [2024-04-26 08:22:03,787] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-47 (state.change.logger) grafana | logger=migrator t=2024-04-26T08:21:36.999065076Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop index UQE_dashboard_public_config_uid - v1" policy-db-migrator | 53 0620-toscanodetemplates_toscanodetemplate.sql upgrade 0 0800 2604240821330800u 1 2024-04-26 08:21:35 policy-pap | ssl.keystore.password = null kafka | [2024-04-26 08:22:03,787] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-17 (state.change.logger) grafana | logger=migrator t=2024-04-26T08:21:37.002346457Z level=info msg="Executing migration" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v1" policy-db-migrator | 54 0630-toscanodetype.sql upgrade 0 0800 2604240821330800u 1 2024-04-26 08:21:35 policy-pap | ssl.keystore.type = JKS kafka | [2024-04-26 08:22:03,787] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-32 (state.change.logger) grafana | logger=migrator t=2024-04-26T08:21:37.002874123Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v1" policy-db-migrator | 55 0640-toscanodetypes.sql upgrade 0 0800 2604240821330800u 1 2024-04-26 08:21:35 policy-pap | ssl.protocol = TLSv1.3 kafka 
| [2024-04-26 08:22:03,787] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-37 (state.change.logger) grafana | logger=migrator t=2024-04-26T08:21:37.006134222Z level=info msg="Executing migration" id="Drop old dashboard public config table" policy-db-migrator | 56 0650-toscanodetypes_toscanodetype.sql upgrade 0 0800 2604240821330800u 1 2024-04-26 08:21:35 policy-pap | ssl.provider = null kafka | [2024-04-26 08:22:03,787] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-7 (state.change.logger) grafana | logger=migrator t=2024-04-26T08:21:37.007061308Z level=info msg="Migration successfully executed" id="Drop old dashboard public config table" duration=926.236µs policy-db-migrator | 57 0660-toscaparameter.sql upgrade 0 0800 2604240821330800u 1 2024-04-26 08:21:35 policy-pap | ssl.secure.random.implementation = null kafka | [2024-04-26 08:22:03,787] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-22 (state.change.logger) grafana | logger=migrator t=2024-04-26T08:21:37.01118492Z level=info msg="Executing migration" id="recreate dashboard public config v1" policy-db-migrator | 58 0670-toscapolicies.sql upgrade 0 0800 2604240821330800u 1 2024-04-26 08:21:36 policy-pap | ssl.trustmanager.algorithm = PKIX kafka | [2024-04-26 08:22:03,787] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-29 (state.change.logger) grafana | logger=migrator t=2024-04-26T08:21:37.012426661Z level=info msg="Migration successfully executed" id="recreate dashboard public config v1" duration=1.23838ms policy-db-migrator | 59 
0680-toscapolicies_toscapolicy.sql upgrade 0 0800 2604240821330800u 1 2024-04-26 08:21:36 policy-pap | ssl.truststore.certificates = null kafka | [2024-04-26 08:22:03,787] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-44 (state.change.logger) grafana | logger=migrator t=2024-04-26T08:21:37.016297221Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_uid - v1" policy-db-migrator | 60 0690-toscapolicy.sql upgrade 0 0800 2604240821330800u 1 2024-04-26 08:21:36 policy-pap | ssl.truststore.location = null kafka | [2024-04-26 08:22:03,788] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-14 (state.change.logger) grafana | logger=migrator t=2024-04-26T08:21:37.017496049Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_uid - v1" duration=1.199688ms policy-db-migrator | 61 0700-toscapolicytype.sql upgrade 0 0800 2604240821330800u 1 2024-04-26 08:21:36 policy-pap | ssl.truststore.password = null kafka | [2024-04-26 08:22:03,788] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-23 (state.change.logger) grafana | logger=migrator t=2024-04-26T08:21:37.020878755Z level=info msg="Executing migration" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v1" policy-db-migrator | 62 0710-toscapolicytypes.sql upgrade 0 0800 2604240821330800u 1 2024-04-26 08:21:36 policy-pap | ssl.truststore.type = JKS kafka | [2024-04-26 08:22:03,788] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-38 (state.change.logger) grafana | logger=migrator 
t=2024-04-26T08:21:37.022087314Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v1" duration=1.208449ms policy-db-migrator | 63 0720-toscapolicytypes_toscapolicytype.sql upgrade 0 0800 2604240821330800u 1 2024-04-26 08:21:36 policy-pap | transaction.timeout.ms = 60000 kafka | [2024-04-26 08:22:03,788] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-8 (state.change.logger) grafana | logger=migrator t=2024-04-26T08:21:37.026795525Z level=info msg="Executing migration" id="drop index UQE_dashboard_public_config_uid - v2" policy-db-migrator | 64 0730-toscaproperty.sql upgrade 0 0800 2604240821330800u 1 2024-04-26 08:21:36 policy-pap | transactional.id = null kafka | [2024-04-26 08:22:03,788] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-45 (state.change.logger) grafana | logger=migrator t=2024-04-26T08:21:37.027955781Z level=info msg="Migration successfully executed" id="drop index UQE_dashboard_public_config_uid - v2" duration=1.160096ms policy-db-migrator | 65 0740-toscarelationshiptype.sql upgrade 0 0800 2604240821330800u 1 2024-04-26 08:21:36 policy-pap | value.serializer = class org.apache.kafka.common.serialization.StringSerializer kafka | [2024-04-26 08:22:03,788] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-15 (state.change.logger) grafana | logger=migrator t=2024-04-26T08:21:37.031859233Z level=info msg="Executing migration" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v2" policy-db-migrator | 66 0750-toscarelationshiptypes.sql upgrade 0 0800 2604240821330800u 1 2024-04-26 08:21:36 policy-pap | kafka | [2024-04-26 08:22:03,788] 
TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-30 (state.change.logger) grafana | logger=migrator t=2024-04-26T08:21:37.033356736Z level=info msg="Migration successfully executed" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v2" duration=1.496423ms policy-db-migrator | 67 0760-toscarelationshiptypes_toscarelationshiptype.sql upgrade 0 0800 2604240821330800u 1 2024-04-26 08:21:36 policy-pap | [2024-04-26T08:22:02.986+00:00|INFO|KafkaProducer|main] [Producer clientId=producer-2] Instantiated an idempotent producer. kafka | [2024-04-26 08:22:03,788] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-0 (state.change.logger) grafana | logger=migrator t=2024-04-26T08:21:37.036620846Z level=info msg="Executing migration" id="Drop public config table" policy-db-migrator | 68 0770-toscarequirement.sql upgrade 0 0800 2604240821330800u 1 2024-04-26 08:21:36 policy-pap | [2024-04-26T08:22:02.989+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 kafka | [2024-04-26 08:22:03,789] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-35 (state.change.logger) grafana | logger=migrator t=2024-04-26T08:21:37.038249546Z level=info msg="Migration successfully executed" id="Drop public config table" duration=1.62767ms policy-db-migrator | 69 0780-toscarequirements.sql upgrade 0 0800 2604240821330800u 1 2024-04-26 08:21:36 policy-pap | [2024-04-26T08:22:02.989+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 grafana | logger=migrator t=2024-04-26T08:21:37.042727326Z level=info msg="Executing migration" id="Recreate dashboard public config v2" kafka | [2024-04-26 08:22:03,789] TRACE [Broker id=1] Handling LeaderAndIsr 
request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-5 (state.change.logger) policy-db-migrator | 70 0790-toscarequirements_toscarequirement.sql upgrade 0 0800 2604240821330800u 1 2024-04-26 08:21:36 policy-pap | [2024-04-26T08:22:02.989+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1714119722989 grafana | logger=migrator t=2024-04-26T08:21:37.043974056Z level=info msg="Migration successfully executed" id="Recreate dashboard public config v2" duration=1.24629ms kafka | [2024-04-26 08:22:03,789] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-20 (state.change.logger) policy-db-migrator | 71 0800-toscaservicetemplate.sql upgrade 0 0800 2604240821330800u 1 2024-04-26 08:21:36 policy-pap | [2024-04-26T08:22:02.989+00:00|INFO|InlineKafkaTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=9596763a-349e-4441-886a-d80f8a74994a, alive=false, publisher=KafkaPublisherWrapper []]]: KAFKA SINK created grafana | logger=migrator t=2024-04-26T08:21:37.04974291Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_uid - v2" kafka | [2024-04-26 08:22:03,789] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-27 (state.change.logger) policy-db-migrator | 72 0810-toscatopologytemplate.sql upgrade 0 0800 2604240821330800u 1 2024-04-26 08:21:36 policy-pap | [2024-04-26T08:22:02.989+00:00|INFO|ServiceManager|main] Policy PAP starting PAP Activator grafana | logger=migrator t=2024-04-26T08:21:37.051717635Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_uid - v2" duration=1.974036ms kafka | [2024-04-26 08:22:03,789] TRACE [Broker id=1] Handling LeaderAndIsr 
request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-42 (state.change.logger) policy-db-migrator | 73 0820-toscatrigger.sql upgrade 0 0800 2604240821330800u 1 2024-04-26 08:21:36 policy-pap | [2024-04-26T08:22:02.989+00:00|INFO|ServiceManager|main] Policy PAP starting PDP publisher grafana | logger=migrator t=2024-04-26T08:21:37.055632798Z level=info msg="Executing migration" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v2" kafka | [2024-04-26 08:22:03,789] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-12 (state.change.logger) policy-db-migrator | 74 0830-FK_ToscaNodeTemplate_capabilitiesName.sql upgrade 0 0800 2604240821330800u 1 2024-04-26 08:21:36 policy-pap | [2024-04-26T08:22:02.991+00:00|INFO|ServiceManager|main] Policy PAP starting Policy Notification publisher kafka | [2024-04-26 08:22:03,789] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-21 (state.change.logger) grafana | logger=migrator t=2024-04-26T08:21:37.056844887Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v2" duration=1.211649ms policy-db-migrator | 75 0840-FK_ToscaNodeTemplate_requirementsName.sql upgrade 0 0800 2604240821330800u 1 2024-04-26 08:21:36 policy-pap | [2024-04-26T08:22:02.991+00:00|INFO|ServiceManager|main] Policy PAP starting PDP update timers kafka | [2024-04-26 08:22:03,789] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-36 (state.change.logger) grafana | logger=migrator t=2024-04-26T08:21:37.060031053Z level=info msg="Executing migration" id="create index 
UQE_dashboard_public_config_access_token - v2" policy-db-migrator | 76 0850-FK_ToscaNodeType_requirementsName.sql upgrade 0 0800 2604240821330800u 1 2024-04-26 08:21:36 policy-pap | [2024-04-26T08:22:02.993+00:00|INFO|ServiceManager|main] Policy PAP starting PDP state-change timers kafka | [2024-04-26 08:22:03,791] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-6 (state.change.logger) grafana | logger=migrator t=2024-04-26T08:21:37.06118425Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_access_token - v2" duration=1.153357ms policy-db-migrator | 77 0860-FK_ToscaServiceTemplate_capabilityTypesName.sql upgrade 0 0800 2604240821330800u 1 2024-04-26 08:21:36 policy-pap | [2024-04-26T08:22:02.994+00:00|INFO|ServiceManager|main] Policy PAP starting PDP modification lock kafka | [2024-04-26 08:22:03,796] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-43 (state.change.logger) grafana | logger=migrator t=2024-04-26T08:21:37.066816696Z level=info msg="Executing migration" id="Rename table dashboard_public_config to dashboard_public - v2" policy-db-migrator | 78 0870-FK_ToscaServiceTemplate_dataTypesName.sql upgrade 0 0800 2604240821330800u 1 2024-04-26 08:21:36 policy-pap | [2024-04-26T08:22:02.994+00:00|INFO|ServiceManager|main] Policy PAP starting PDP modification requests kafka | [2024-04-26 08:22:03,796] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-13 (state.change.logger) grafana | logger=migrator t=2024-04-26T08:21:37.090759748Z level=info msg="Migration successfully executed" id="Rename table dashboard_public_config to dashboard_public - v2" duration=23.948103ms 
policy-db-migrator | 79 0880-FK_ToscaServiceTemplate_nodeTypesName.sql upgrade 0 0800 2604240821330800u 1 2024-04-26 08:21:36 policy-pap | [2024-04-26T08:22:02.994+00:00|INFO|TimerManager|Thread-9] timer manager update started kafka | [2024-04-26 08:22:03,797] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-28 (state.change.logger) grafana | logger=migrator t=2024-04-26T08:21:37.095135673Z level=info msg="Executing migration" id="add annotations_enabled column" policy-db-migrator | 80 0890-FK_ToscaServiceTemplate_policyTypesName.sql upgrade 0 0800 2604240821330800u 1 2024-04-26 08:21:36 policy-pap | [2024-04-26T08:22:02.995+00:00|INFO|ServiceManager|main] Policy PAP starting PDP expiration timer kafka | [2024-04-26 08:22:03,801] INFO [ReplicaFetcherManager on broker 1] Removed fetcher for partitions HashSet(__consumer_offsets-22, __consumer_offsets-30, __consumer_offsets-25, __consumer_offsets-35, __consumer_offsets-37, __consumer_offsets-38, __consumer_offsets-13, __consumer_offsets-8, __consumer_offsets-21, __consumer_offsets-4, __consumer_offsets-27, __consumer_offsets-7, __consumer_offsets-9, __consumer_offsets-46, __consumer_offsets-41, __consumer_offsets-33, __consumer_offsets-23, __consumer_offsets-49, __consumer_offsets-47, __consumer_offsets-16, __consumer_offsets-28, __consumer_offsets-31, __consumer_offsets-36, __consumer_offsets-42, __consumer_offsets-3, __consumer_offsets-18, __consumer_offsets-15, __consumer_offsets-24, __consumer_offsets-17, __consumer_offsets-48, __consumer_offsets-19, __consumer_offsets-11, __consumer_offsets-2, __consumer_offsets-43, __consumer_offsets-6, __consumer_offsets-14, __consumer_offsets-20, __consumer_offsets-0, __consumer_offsets-44, __consumer_offsets-39, __consumer_offsets-12, __consumer_offsets-45, __consumer_offsets-1, __consumer_offsets-5, __consumer_offsets-26, __consumer_offsets-29, 
__consumer_offsets-34, __consumer_offsets-10, __consumer_offsets-32, __consumer_offsets-40) (kafka.server.ReplicaFetcherManager) grafana | logger=migrator t=2024-04-26T08:21:37.102605409Z level=info msg="Migration successfully executed" id="add annotations_enabled column" duration=7.468547ms policy-db-migrator | 81 0900-FK_ToscaServiceTemplate_relationshipTypesName.sql upgrade 0 0800 2604240821330800u 1 2024-04-26 08:21:37 policy-pap | [2024-04-26T08:22:02.995+00:00|INFO|TimerManager|Thread-10] timer manager state-change started kafka | [2024-04-26 08:22:03,802] INFO [Broker id=1] Stopped fetchers as part of LeaderAndIsr request correlationId 3 from controller 1 epoch 1 as part of the become-leader transition for 50 partitions (state.change.logger) grafana | logger=migrator t=2024-04-26T08:21:37.106602855Z level=info msg="Executing migration" id="add time_selection_enabled column" policy-db-migrator | 82 0910-FK_ToscaTopologyTemplate_nodeTemplatesName.sql upgrade 0 0800 2604240821330800u 1 2024-04-26 08:21:37 policy-pap | [2024-04-26T08:22:02.997+00:00|INFO|ServiceManager|main] Policy PAP started kafka | [2024-04-26 08:22:03,808] INFO [LogLoader partition=__consumer_offsets-3, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) grafana | logger=migrator t=2024-04-26T08:21:37.115408166Z level=info msg="Migration successfully executed" id="add time_selection_enabled column" duration=8.79667ms policy-db-migrator | 83 0920-FK_ToscaTopologyTemplate_policyName.sql upgrade 0 0800 2604240821330800u 1 2024-04-26 08:21:37 policy-pap | [2024-04-26T08:22:02.998+00:00|INFO|PolicyPapApplication|main] Started PolicyPapApplication in 9.787 seconds (process running for 10.378) kafka | [2024-04-26 08:22:03,809] INFO Created log for partition __consumer_offsets-3 in /var/lib/kafka/data/__consumer_offsets-3 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 
(kafka.log.LogManager) grafana | logger=migrator t=2024-04-26T08:21:37.121312436Z level=info msg="Executing migration" id="delete orphaned public dashboards" policy-db-migrator | 84 0940-PdpPolicyStatus_PdpGroup.sql upgrade 0 0800 2604240821330800u 1 2024-04-26 08:21:37 kafka | [2024-04-26 08:22:03,809] INFO [Partition __consumer_offsets-3 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-3 (kafka.cluster.Partition) grafana | logger=migrator t=2024-04-26T08:21:37.121549397Z level=info msg="Migration successfully executed" id="delete orphaned public dashboards" duration=234.952µs policy-db-migrator | 85 0950-TscaServiceTemplatetopologyTemplateParentLocalName.sql upgrade 0 0800 2604240821330800u 1 2024-04-26 08:21:37 kafka | [2024-04-26 08:22:03,810] INFO [Partition __consumer_offsets-3 broker=1] Log loaded for partition __consumer_offsets-3 with initial high watermark 0 (kafka.cluster.Partition) grafana | logger=migrator t=2024-04-26T08:21:37.124578425Z level=info msg="Executing migration" id="add share column" policy-db-migrator | 86 0960-FK_ToscaNodeTemplate_capabilitiesName.sql upgrade 0 0800 2604240821330800u 1 2024-04-26 08:21:37 kafka | [2024-04-26 08:22:03,810] INFO [Broker id=1] Leader __consumer_offsets-3 with topic id Some(RfiyP89qRi-5ZTNhftzAtg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) grafana | logger=migrator t=2024-04-26T08:21:37.136777693Z level=info msg="Migration successfully executed" id="add share column" duration=12.197578ms policy-db-migrator | 87 0970-FK_ToscaNodeTemplate_requirementsName.sql upgrade 0 0800 2604240821330800u 1 2024-04-26 08:21:37 policy-pap | [2024-04-26T08:22:03.402+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 2 : {policy-pdp-pap=UNKNOWN_TOPIC_OR_PARTITION} grafana | logger=migrator t=2024-04-26T08:21:37.140929987Z level=info msg="Executing migration" id="backfill empty share column fields with default of public" policy-db-migrator | 88 0980-FK_ToscaNodeType_requirementsName.sql upgrade 0 0800 2604240821330800u 1 2024-04-26 08:21:37 policy-pap | [2024-04-26T08:22:03.403+00:00|INFO|Metadata|kafka-producer-network-thread | producer-2] [Producer clientId=producer-2] Cluster ID: qUquThiHQAKlsircSK68zw policy-pap | [2024-04-26T08:22:03.404+00:00|INFO|Metadata|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Cluster ID: qUquThiHQAKlsircSK68zw grafana | logger=migrator t=2024-04-26T08:21:37.141151927Z level=info msg="Migration successfully executed" id="backfill empty share column fields with default of public" duration=221.68µs policy-db-migrator | 89 0990-FK_ToscaServiceTemplate_capabilityTypesName.sql upgrade 0 0800 2604240821330800u 1 2024-04-26 08:21:37 kafka | [2024-04-26 08:22:03,825] INFO [LogLoader partition=__consumer_offsets-18, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) policy-pap | [2024-04-26T08:22:03.405+00:00|INFO|Metadata|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Cluster ID: qUquThiHQAKlsircSK68zw grafana | logger=migrator t=2024-04-26T08:21:37.144920451Z level=info msg="Executing migration" id="create file table" 
policy-db-migrator | 90 1000-FK_ToscaServiceTemplate_dataTypesName.sql upgrade 0 0800 2604240821330800u 1 2024-04-26 08:21:37
kafka | [2024-04-26 08:22:03,826] INFO Created log for partition __consumer_offsets-18 in /var/lib/kafka/data/__consumer_offsets-18 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
policy-pap | [2024-04-26T08:22:03.478+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-db954cd2-8764-4a44-90af-3bb7f2069f83-3, groupId=db954cd2-8764-4a44-90af-3bb7f2069f83] Error while fetching metadata with correlation id 2 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
grafana | logger=migrator t=2024-04-26T08:21:37.146056887Z level=info msg="Migration successfully executed" id="create file table" duration=1.135766ms
policy-db-migrator | 91 1010-FK_ToscaServiceTemplate_nodeTypesName.sql upgrade 0 0800 2604240821330800u 1 2024-04-26 08:21:37
kafka | [2024-04-26 08:22:03,826] INFO [Partition __consumer_offsets-18 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-18 (kafka.cluster.Partition)
policy-pap | [2024-04-26T08:22:03.478+00:00|INFO|Metadata|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-db954cd2-8764-4a44-90af-3bb7f2069f83-3, groupId=db954cd2-8764-4a44-90af-3bb7f2069f83] Cluster ID: qUquThiHQAKlsircSK68zw
grafana | logger=migrator t=2024-04-26T08:21:37.15183225Z level=info msg="Executing migration" id="file table idx: path natural pk"
policy-db-migrator | 92 1020-FK_ToscaServiceTemplate_policyTypesName.sql upgrade 0 0800 2604240821330800u 1 2024-04-26 08:21:37
kafka | [2024-04-26 08:22:03,826] INFO [Partition __consumer_offsets-18 broker=1] Log loaded for partition __consumer_offsets-18 with initial high watermark 0 (kafka.cluster.Partition)
policy-pap | [2024-04-26T08:22:03.499+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 4 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
grafana | logger=migrator t=2024-04-26T08:21:37.153188127Z level=info msg="Migration successfully executed" id="file table idx: path natural pk" duration=1.355877ms
policy-db-migrator | 93 1030-FK_ToscaServiceTemplate_relationshipTypesName.sql upgrade 0 0800 2604240821330800u 1 2024-04-26 08:21:37
kafka | [2024-04-26 08:22:03,826] INFO [Broker id=1] Leader __consumer_offsets-18 with topic id Some(RfiyP89qRi-5ZTNhftzAtg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
policy-pap | [2024-04-26T08:22:03.512+00:00|INFO|TransactionManager|kafka-producer-network-thread | producer-2] [Producer clientId=producer-2] ProducerId set to 0 with epoch 0
grafana | logger=migrator t=2024-04-26T08:21:37.157163742Z level=info msg="Executing migration" id="file table idx: parent_folder_path_hash fast folder retrieval"
policy-db-migrator | 94 1040-FK_ToscaTopologyTemplate_nodeTemplatesName.sql upgrade 0 0800 2604240821330800u 1 2024-04-26 08:21:37
kafka | [2024-04-26 08:22:03,838] INFO [LogLoader partition=__consumer_offsets-41, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
policy-pap | [2024-04-26T08:22:03.534+00:00|INFO|TransactionManager|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] ProducerId set to 1 with epoch 0
grafana | logger=migrator t=2024-04-26T08:21:37.158264896Z level=info msg="Migration successfully executed" id="file table idx: parent_folder_path_hash fast folder retrieval" duration=1.104725ms
policy-db-migrator | 95 1050-FK_ToscaTopologyTemplate_policyName.sql upgrade 0 0800 2604240821330800u 1 2024-04-26 08:21:37
policy-db-migrator | 96 1060-TscaServiceTemplatetopologyTemplateParentLocalName.sql upgrade 0 0800 2604240821330800u 1 2024-04-26 08:21:37
policy-pap | [2024-04-26T08:22:03.605+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 6 : {policy-pdp-pap=UNKNOWN_TOPIC_OR_PARTITION}
grafana | logger=migrator t=2024-04-26T08:21:37.162402808Z level=info msg="Executing migration" id="create file_meta table"
policy-db-migrator | 97 0100-pdp.sql upgrade 0800 0900 2604240821330900u 1 2024-04-26 08:21:37
policy-db-migrator | 98 0110-idx_tsidx1.sql upgrade 0800 0900 2604240821330900u 1 2024-04-26 08:21:37
policy-pap | [2024-04-26T08:22:03.606+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-db954cd2-8764-4a44-90af-3bb7f2069f83-3, groupId=db954cd2-8764-4a44-90af-3bb7f2069f83] Error while fetching metadata with correlation id 4 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
kafka | [2024-04-26 08:22:03,842] INFO Created log for partition __consumer_offsets-41 in /var/lib/kafka/data/__consumer_offsets-41 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
grafana | logger=migrator t=2024-04-26T08:21:37.163183257Z level=info msg="Migration successfully executed" id="create file_meta table" duration=778.769µs
policy-db-migrator | 99 0120-pk_pdpstatistics.sql upgrade 0800 0900 2604240821330900u 1 2024-04-26 08:21:38
policy-pap | [2024-04-26T08:22:03.714+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-db954cd2-8764-4a44-90af-3bb7f2069f83-3, groupId=db954cd2-8764-4a44-90af-3bb7f2069f83] Error while fetching metadata with correlation id 6 : {policy-pdp-pap=UNKNOWN_TOPIC_OR_PARTITION}
kafka | [2024-04-26 08:22:03,842] INFO [Partition __consumer_offsets-41 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-41 (kafka.cluster.Partition)
grafana | logger=migrator t=2024-04-26T08:21:37.166698439Z level=info msg="Executing migration" id="file table idx: path key"
policy-db-migrator | 100 0130-pdpstatistics.sql upgrade 0800 0900 2604240821330900u 1 2024-04-26 08:21:38
policy-pap | [2024-04-26T08:22:03.728+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 8 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
kafka | [2024-04-26 08:22:03,842] INFO [Partition __consumer_offsets-41 broker=1] Log loaded for partition __consumer_offsets-41 with initial high watermark 0 (kafka.cluster.Partition)
grafana | logger=migrator t=2024-04-26T08:21:37.167513539Z level=info msg="Migration successfully executed" id="file table idx: path key" duration=814.97µs
policy-db-migrator | 101 0140-pk_pdpstatistics.sql upgrade 0800 0900 2604240821330900u 1 2024-04-26 08:21:38
kafka | [2024-04-26 08:22:03,842] INFO [Broker id=1] Leader __consumer_offsets-41 with topic id Some(RfiyP89qRi-5ZTNhftzAtg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
policy-pap | [2024-04-26T08:22:04.455+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Discovered group coordinator kafka:9092 (id: 2147483646 rack: null)
grafana | logger=migrator t=2024-04-26T08:21:37.17347431Z level=info msg="Executing migration" id="set path collation in file table"
policy-db-migrator | 102 0150-pdpstatistics.sql upgrade 0800 0900 2604240821330900u 1 2024-04-26 08:21:38
kafka | [2024-04-26 08:22:03,853] INFO [LogLoader partition=__consumer_offsets-10, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
policy-pap | [2024-04-26T08:22:04.461+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] (Re-)joining group
grafana | logger=migrator t=2024-04-26T08:21:37.173526563Z level=info msg="Migration successfully executed" id="set path collation in file table" duration=55.973µs
policy-db-migrator | 103 0160-jpapdpstatistics_enginestats.sql upgrade 0800 0900 2604240821330900u 1 2024-04-26 08:21:38
kafka | [2024-04-26 08:22:03,854] INFO Created log for partition __consumer_offsets-10 in /var/lib/kafka/data/__consumer_offsets-10 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
policy-pap | [2024-04-26T08:22:04.485+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Request joining group due to: need to re-join with the given member-id: consumer-policy-pap-4-7cd46dae-964c-49b4-95ec-bd835d00b3b4
grafana | logger=migrator t=2024-04-26T08:21:37.176310749Z level=info msg="Executing migration" id="migrate contents column to mediumblob for MySQL"
policy-db-migrator | 104 0170-jpapdpstatistics_enginestats.sql upgrade 0800 0900 2604240821330900u 1 2024-04-26 08:21:38
kafka | [2024-04-26 08:22:03,854] INFO [Partition __consumer_offsets-10 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-10 (kafka.cluster.Partition)
policy-pap | [2024-04-26T08:22:04.485+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Request joining group due to: rebalance failed due to 'The group member needs to have a valid member id before actually entering a consumer group.' (MemberIdRequiredException)
policy-pap | [2024-04-26T08:22:04.485+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] (Re-)joining group
policy-db-migrator | 105 0180-jpapdpstatistics_enginestats.sql upgrade 0800 0900 2604240821330900u 1 2024-04-26 08:21:38
grafana | logger=migrator t=2024-04-26T08:21:37.176361962Z level=info msg="Migration successfully executed" id="migrate contents column to mediumblob for MySQL" duration=51.343µs
kafka | [2024-04-26 08:22:03,854] INFO [Partition __consumer_offsets-10 broker=1] Log loaded for partition __consumer_offsets-10 with initial high watermark 0 (kafka.cluster.Partition)
policy-db-migrator | 106 0190-jpapolicyaudit.sql upgrade 0800 0900 2604240821330900u 1 2024-04-26 08:21:38
policy-pap | [2024-04-26T08:22:04.550+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-db954cd2-8764-4a44-90af-3bb7f2069f83-3, groupId=db954cd2-8764-4a44-90af-3bb7f2069f83] Discovered group coordinator kafka:9092 (id: 2147483646 rack: null)
grafana | logger=migrator t=2024-04-26T08:21:37.179207861Z level=info msg="Executing migration" id="managed permissions migration"
kafka | [2024-04-26 08:22:03,854] INFO [Broker id=1] Leader __consumer_offsets-10 with topic id Some(RfiyP89qRi-5ZTNhftzAtg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
policy-pap | [2024-04-26T08:22:04.553+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-db954cd2-8764-4a44-90af-3bb7f2069f83-3, groupId=db954cd2-8764-4a44-90af-3bb7f2069f83] (Re-)joining group
policy-db-migrator | 107 0200-JpaPolicyAuditIndex_timestamp.sql upgrade 0800 0900 2604240821330900u 1 2024-04-26 08:21:38
grafana | logger=migrator t=2024-04-26T08:21:37.179753448Z level=info msg="Migration successfully executed" id="managed permissions migration" duration=544.907µs
kafka | [2024-04-26 08:22:03,926] INFO [LogLoader partition=__consumer_offsets-33, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
policy-pap | [2024-04-26T08:22:04.557+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-db954cd2-8764-4a44-90af-3bb7f2069f83-3, groupId=db954cd2-8764-4a44-90af-3bb7f2069f83] Request joining group due to: need to re-join with the given member-id: consumer-db954cd2-8764-4a44-90af-3bb7f2069f83-3-7fda7423-43b4-4275-9f43-73657387fac9
policy-db-migrator | 108 0210-sequence.sql upgrade 0800 0900 2604240821330900u 1 2024-04-26 08:21:38
grafana | logger=migrator t=2024-04-26T08:21:37.184967204Z level=info msg="Executing migration" id="managed folder permissions alert actions migration"
kafka | [2024-04-26 08:22:03,926] INFO Created log for partition __consumer_offsets-33 in /var/lib/kafka/data/__consumer_offsets-33 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
policy-pap | [2024-04-26T08:22:04.557+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-db954cd2-8764-4a44-90af-3bb7f2069f83-3, groupId=db954cd2-8764-4a44-90af-3bb7f2069f83] Request joining group due to: rebalance failed due to 'The group member needs to have a valid member id before actually entering a consumer group.' (MemberIdRequiredException)
policy-db-migrator | 109 0220-sequence.sql upgrade 0800 0900 2604240821330900u 1 2024-04-26 08:21:38
grafana | logger=migrator t=2024-04-26T08:21:37.185204655Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions migration" duration=235.561µs
kafka | [2024-04-26 08:22:03,927] INFO [Partition __consumer_offsets-33 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-33 (kafka.cluster.Partition)
policy-pap | [2024-04-26T08:22:04.557+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-db954cd2-8764-4a44-90af-3bb7f2069f83-3, groupId=db954cd2-8764-4a44-90af-3bb7f2069f83] (Re-)joining group
policy-db-migrator | 110 0100-jpatoscapolicy_targets.sql upgrade 0900 1000 2604240821331000u 1 2024-04-26 08:21:38
grafana | logger=migrator t=2024-04-26T08:21:37.189543818Z level=info msg="Executing migration" id="RBAC action name migrator"
kafka | [2024-04-26 08:22:03,927] INFO [Partition __consumer_offsets-33 broker=1] Log loaded for partition __consumer_offsets-33 with initial high watermark 0 (kafka.cluster.Partition)
policy-pap | [2024-04-26T08:22:07.512+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Successfully joined group with generation Generation{generationId=1, memberId='consumer-policy-pap-4-7cd46dae-964c-49b4-95ec-bd835d00b3b4', protocol='range'}
policy-db-migrator | 111 0110-jpatoscapolicytype_targets.sql upgrade 0900 1000 2604240821331000u 1 2024-04-26 08:21:38
grafana | logger=migrator t=2024-04-26T08:21:37.191040421Z level=info msg="Migration successfully executed" id="RBAC action name migrator" duration=1.495453ms
kafka | [2024-04-26 08:22:03,927] INFO [Broker id=1] Leader __consumer_offsets-33 with topic id Some(RfiyP89qRi-5ZTNhftzAtg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
policy-pap | [2024-04-26T08:22:07.519+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Finished assignment for group at generation 1: {consumer-policy-pap-4-7cd46dae-964c-49b4-95ec-bd835d00b3b4=Assignment(partitions=[policy-pdp-pap-0])}
policy-db-migrator | 112 0120-toscatrigger.sql upgrade 0900 1000 2604240821331000u 1 2024-04-26 08:21:38
grafana | logger=migrator t=2024-04-26T08:21:37.197276247Z level=info msg="Executing migration" id="Add UID column to playlist"
kafka | [2024-04-26 08:22:04,044] INFO [LogLoader partition=__consumer_offsets-48, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
policy-pap | [2024-04-26T08:22:07.541+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Successfully synced group in generation Generation{generationId=1, memberId='consumer-policy-pap-4-7cd46dae-964c-49b4-95ec-bd835d00b3b4', protocol='range'}
policy-db-migrator | 113 0130-jpatoscapolicytype_triggers.sql upgrade 0900 1000 2604240821331000u 1 2024-04-26 08:21:38
grafana | logger=migrator t=2024-04-26T08:21:37.20877479Z level=info msg="Migration successfully executed" id="Add UID column to playlist" duration=11.492814ms
kafka | [2024-04-26 08:22:04,045] INFO Created log for partition __consumer_offsets-48 in /var/lib/kafka/data/__consumer_offsets-48 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
policy-pap | [2024-04-26T08:22:07.542+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Notifying assignor about the new Assignment(partitions=[policy-pdp-pap-0])
policy-db-migrator | 114 0140-toscaparameter.sql upgrade 0900 1000 2604240821331000u 1 2024-04-26 08:21:38
grafana | logger=migrator t=2024-04-26T08:21:37.215112471Z level=info msg="Executing migration" id="Update uid column values in playlist"
kafka | [2024-04-26 08:22:04,046] INFO [Partition __consumer_offsets-48 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-48 (kafka.cluster.Partition)
policy-pap | [2024-04-26T08:22:07.548+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Adding newly assigned partitions: policy-pdp-pap-0
policy-db-migrator | 115 0150-toscaproperty.sql upgrade 0900 1000 2604240821331000u 1 2024-04-26 08:21:38
grafana | logger=migrator t=2024-04-26T08:21:37.215459277Z level=info msg="Migration successfully executed" id="Update uid column values in playlist" duration=348.516µs
kafka | [2024-04-26 08:22:04,046] INFO [Partition __consumer_offsets-48 broker=1] Log loaded for partition __consumer_offsets-48 with initial high watermark 0 (kafka.cluster.Partition)
policy-pap | [2024-04-26T08:22:07.562+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-db954cd2-8764-4a44-90af-3bb7f2069f83-3, groupId=db954cd2-8764-4a44-90af-3bb7f2069f83] Successfully joined group with generation Generation{generationId=1, memberId='consumer-db954cd2-8764-4a44-90af-3bb7f2069f83-3-7fda7423-43b4-4275-9f43-73657387fac9', protocol='range'}
grafana | logger=migrator t=2024-04-26T08:21:37.21981225Z level=info msg="Executing migration" id="Add index for uid in playlist"
policy-db-migrator | 116 0160-jpapolicyaudit_pk.sql upgrade 0900 1000 2604240821331000u 1 2024-04-26 08:21:38
kafka | [2024-04-26 08:22:04,046] INFO [Broker id=1] Leader __consumer_offsets-48 with topic id Some(RfiyP89qRi-5ZTNhftzAtg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
policy-pap | [2024-04-26T08:22:07.562+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-db954cd2-8764-4a44-90af-3bb7f2069f83-3, groupId=db954cd2-8764-4a44-90af-3bb7f2069f83] Finished assignment for group at generation 1: {consumer-db954cd2-8764-4a44-90af-3bb7f2069f83-3-7fda7423-43b4-4275-9f43-73657387fac9=Assignment(partitions=[policy-pdp-pap-0])}
grafana | logger=migrator t=2024-04-26T08:21:37.221060062Z level=info msg="Migration successfully executed" id="Add index for uid in playlist" duration=1.247552ms
kafka | [2024-04-26 08:22:04,055] INFO [LogLoader partition=__consumer_offsets-19, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
policy-pap | [2024-04-26T08:22:07.569+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-db954cd2-8764-4a44-90af-3bb7f2069f83-3, groupId=db954cd2-8764-4a44-90af-3bb7f2069f83] Successfully synced group in generation Generation{generationId=1, memberId='consumer-db954cd2-8764-4a44-90af-3bb7f2069f83-3-7fda7423-43b4-4275-9f43-73657387fac9', protocol='range'}
grafana | logger=migrator t=2024-04-26T08:21:37.224234868Z level=info msg="Executing migration" id="update group index for alert rules"
policy-db-migrator | 117 0170-pdpstatistics_pk.sql upgrade 0900 1000 2604240821331000u 1 2024-04-26 08:21:38
kafka | [2024-04-26 08:22:04,056] INFO Created log for partition __consumer_offsets-19 in /var/lib/kafka/data/__consumer_offsets-19 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
policy-pap | [2024-04-26T08:22:07.569+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-db954cd2-8764-4a44-90af-3bb7f2069f83-3, groupId=db954cd2-8764-4a44-90af-3bb7f2069f83] Notifying assignor about the new Assignment(partitions=[policy-pdp-pap-0])
grafana | logger=migrator t=2024-04-26T08:21:37.224680669Z level=info msg="Migration successfully executed" id="update group index for alert rules" duration=446.101µs
policy-db-migrator | 118 0180-jpatoscanodetemplate_metadata.sql upgrade 0900 1000 2604240821331000u 1 2024-04-26 08:21:38
kafka | [2024-04-26 08:22:04,056] INFO [Partition __consumer_offsets-19 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-19 (kafka.cluster.Partition)
grafana | logger=migrator t=2024-04-26T08:21:37.229136308Z level=info msg="Executing migration" id="managed folder permissions alert actions repeated migration"
kafka | [2024-04-26 08:22:04,056] INFO [Partition __consumer_offsets-19 broker=1] Log loaded for partition __consumer_offsets-19 with initial high watermark 0 (kafka.cluster.Partition)
policy-pap | [2024-04-26T08:22:07.570+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-db954cd2-8764-4a44-90af-3bb7f2069f83-3, groupId=db954cd2-8764-4a44-90af-3bb7f2069f83] Adding newly assigned partitions: policy-pdp-pap-0
policy-db-migrator | 119 0100-upgrade.sql upgrade 1000 1100 2604240821331100u 1 2024-04-26 08:21:39
grafana | logger=migrator t=2024-04-26T08:21:37.229475004Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions repeated migration" duration=338.386µs
kafka | [2024-04-26 08:22:04,056] INFO [Broker id=1] Leader __consumer_offsets-19 with topic id Some(RfiyP89qRi-5ZTNhftzAtg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
policy-pap | [2024-04-26T08:22:07.574+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-db954cd2-8764-4a44-90af-3bb7f2069f83-3, groupId=db954cd2-8764-4a44-90af-3bb7f2069f83] Found no committed offset for partition policy-pdp-pap-0
policy-db-migrator | 120 0100-jpapolicyaudit_renameuser.sql upgrade 1100 1200 2604240821331200u 1 2024-04-26 08:21:39
grafana | logger=migrator t=2024-04-26T08:21:37.232961705Z level=info msg="Executing migration" id="admin only folder/dashboard permission"
kafka | [2024-04-26 08:22:04,064] INFO [LogLoader partition=__consumer_offsets-34, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
policy-pap | [2024-04-26T08:22:07.574+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Found no committed offset for partition policy-pdp-pap-0
policy-db-migrator | 121 0110-idx_tsidx1.sql upgrade 1100 1200 2604240821331200u 1 2024-04-26 08:21:39
grafana | logger=migrator t=2024-04-26T08:21:37.233529163Z level=info msg="Migration successfully executed" id="admin only folder/dashboard permission" duration=567.618µs
kafka | [2024-04-26 08:22:04,065] INFO Created log for partition __consumer_offsets-34 in /var/lib/kafka/data/__consumer_offsets-34 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
policy-pap | [2024-04-26T08:22:07.593+00:00|INFO|SubscriptionState|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Resetting offset for partition policy-pdp-pap-0 to position FetchPosition{offset=1, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[kafka:9092 (id: 1 rack: null)], epoch=0}}.
policy-db-migrator | 122 0120-audit_sequence.sql upgrade 1100 1200 2604240821331200u 1 2024-04-26 08:21:39
grafana | logger=migrator t=2024-04-26T08:21:37.23795961Z level=info msg="Executing migration" id="add action column to seed_assignment"
kafka | [2024-04-26 08:22:04,065] INFO [Partition __consumer_offsets-34 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-34 (kafka.cluster.Partition)
policy-pap | [2024-04-26T08:22:07.593+00:00|INFO|SubscriptionState|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-db954cd2-8764-4a44-90af-3bb7f2069f83-3, groupId=db954cd2-8764-4a44-90af-3bb7f2069f83] Resetting offset for partition policy-pdp-pap-0 to position FetchPosition{offset=1, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[kafka:9092 (id: 1 rack: null)], epoch=0}}.
policy-db-migrator | 123 0130-statistics_sequence.sql upgrade 1100 1200 2604240821331200u 1 2024-04-26 08:21:39
grafana | logger=migrator t=2024-04-26T08:21:37.250168268Z level=info msg="Migration successfully executed" id="add action column to seed_assignment" duration=12.210369ms
kafka | [2024-04-26 08:22:04,066] INFO [Partition __consumer_offsets-34 broker=1] Log loaded for partition __consumer_offsets-34 with initial high watermark 0 (kafka.cluster.Partition)
policy-pap | [2024-04-26T08:22:10.706+00:00|INFO|[/policy/pap/v1]|http-nio-6969-exec-3] Initializing Spring DispatcherServlet 'dispatcherServlet'
policy-db-migrator | 124 0100-pdpstatistics.sql upgrade 1200 1300 2604240821331300u 1 2024-04-26 08:21:39
grafana | logger=migrator t=2024-04-26T08:21:37.255001635Z level=info msg="Executing migration" id="add scope column to seed_assignment"
kafka | [2024-04-26 08:22:04,066] INFO [Broker id=1] Leader __consumer_offsets-34 with topic id Some(RfiyP89qRi-5ZTNhftzAtg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
policy-pap | [2024-04-26T08:22:10.707+00:00|INFO|DispatcherServlet|http-nio-6969-exec-3] Initializing Servlet 'dispatcherServlet'
policy-db-migrator | 125 0110-jpapdpstatistics_enginestats.sql upgrade 1200 1300 2604240821331300u 1 2024-04-26 08:21:39
grafana | logger=migrator t=2024-04-26T08:21:37.262918783Z level=info msg="Migration successfully executed" id="add scope column to seed_assignment" duration=7.916068ms
kafka | [2024-04-26 08:22:04,072] INFO [LogLoader partition=__consumer_offsets-4, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
policy-pap | [2024-04-26T08:22:10.709+00:00|INFO|DispatcherServlet|http-nio-6969-exec-3] Completed initialization in 2 ms
policy-db-migrator | 126 0120-statistics_sequence.sql upgrade 1200 1300 2604240821331300u 1 2024-04-26 08:21:39
grafana | logger=migrator t=2024-04-26T08:21:37.267487897Z level=info msg="Executing migration" id="remove unique index builtin_role_role_name before nullable update"
kafka | [2024-04-26 08:22:04,073] INFO Created log for partition __consumer_offsets-4 in /var/lib/kafka/data/__consumer_offsets-4 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
policy-pap | [2024-04-26T08:22:24.724+00:00|INFO|OrderedServiceImpl|KAFKA-source-policy-pdp-pap] ***** OrderedServiceImpl implementers:
policy-db-migrator | policyadmin: OK @ 1300
grafana | logger=migrator t=2024-04-26T08:21:37.268741008Z level=info msg="Migration successfully executed" id="remove unique index builtin_role_role_name before nullable update" duration=1.25285ms
kafka | [2024-04-26 08:22:04,073] INFO [Partition __consumer_offsets-4 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-4 (kafka.cluster.Partition)
policy-pap | []
grafana | logger=migrator t=2024-04-26T08:21:37.272735164Z level=info msg="Executing migration" id="update seed_assignment role_name column to nullable"
kafka | [2024-04-26 08:22:04,073] INFO [Partition __consumer_offsets-4 broker=1] Log loaded for partition __consumer_offsets-4 with initial high watermark 0 (kafka.cluster.Partition)
policy-pap | [2024-04-26T08:22:24.725+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
grafana | logger=migrator t=2024-04-26T08:21:37.363629497Z level=info msg="Migration successfully executed" id="update seed_assignment role_name column to nullable" duration=90.882803ms
kafka | [2024-04-26 08:22:04,073] INFO [Broker id=1] Leader __consumer_offsets-4 with topic id Some(RfiyP89qRi-5ZTNhftzAtg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"7c7dae4d-eb28-477f-a313-371e5e410caf","timestampMs":1714119744682,"name":"apex-b183f0da-bf00-44a3-b3ae-398d8035a48d","pdpGroup":"defaultGroup"}
grafana | logger=migrator t=2024-04-26T08:21:37.384348711Z level=info msg="Executing migration" id="add unique index builtin_role_name back"
kafka | [2024-04-26 08:22:04,080] INFO [LogLoader partition=__consumer_offsets-11, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
policy-pap | [2024-04-26T08:22:24.725+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat]
grafana | logger=migrator t=2024-04-26T08:21:37.387553969Z level=info msg="Migration successfully executed" id="add unique index builtin_role_name back" duration=3.206217ms
kafka | [2024-04-26 08:22:04,081] INFO Created log for partition __consumer_offsets-11 in /var/lib/kafka/data/__consumer_offsets-11 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
grafana | logger=migrator t=2024-04-26T08:21:37.394659697Z level=info msg="Executing migration" id="add unique index builtin_role_action_scope"
policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"7c7dae4d-eb28-477f-a313-371e5e410caf","timestampMs":1714119744682,"name":"apex-b183f0da-bf00-44a3-b3ae-398d8035a48d","pdpGroup":"defaultGroup"}
kafka | [2024-04-26 08:22:04,081] INFO [Partition __consumer_offsets-11 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-11 (kafka.cluster.Partition)
grafana | logger=migrator t=2024-04-26T08:21:37.396245864Z level=info msg="Migration successfully executed" id="add unique index builtin_role_action_scope" duration=1.600438ms
policy-pap | [2024-04-26T08:22:24.735+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-pdp-pap] no listeners for autonomous message of type PdpStatus
kafka | [2024-04-26 08:22:04,081] INFO [Partition __consumer_offsets-11 broker=1] Log loaded for partition __consumer_offsets-11 with initial high watermark 0 (kafka.cluster.Partition)
grafana | logger=migrator t=2024-04-26T08:21:37.39962204Z level=info msg="Executing migration" id="add primary key to seed_assigment"
policy-pap | [2024-04-26T08:22:24.810+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-b183f0da-bf00-44a3-b3ae-398d8035a48d PdpUpdate starting
kafka | [2024-04-26 08:22:04,082] INFO [Broker id=1] Leader __consumer_offsets-11 with topic id Some(RfiyP89qRi-5ZTNhftzAtg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
policy-pap | [2024-04-26T08:22:24.810+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-b183f0da-bf00-44a3-b3ae-398d8035a48d PdpUpdate starting listener
grafana | logger=migrator t=2024-04-26T08:21:37.424431306Z level=info msg="Migration successfully executed" id="add primary key to seed_assigment" duration=24.809486ms
kafka | [2024-04-26 08:22:04,089] INFO [LogLoader partition=__consumer_offsets-26, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
policy-pap | [2024-04-26T08:22:24.811+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-b183f0da-bf00-44a3-b3ae-398d8035a48d PdpUpdate starting timer
grafana | logger=migrator t=2024-04-26T08:21:37.42859655Z level=info msg="Executing migration" id="add origin column to seed_assignment"
kafka | [2024-04-26 08:22:04,089] INFO Created log for partition __consumer_offsets-26 in /var/lib/kafka/data/__consumer_offsets-26 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
grafana | logger=migrator t=2024-04-26T08:21:37.435071537Z level=info msg="Migration successfully executed" id="add origin column to seed_assignment" duration=6.474227ms
policy-pap | [2024-04-26T08:22:24.812+00:00|INFO|TimerManager|KAFKA-source-policy-heartbeat] update timer registered Timer [name=c089a3a3-4fc1-43c0-a7be-21299199c004, expireMs=1714119774812]
grafana | logger=migrator t=2024-04-26T08:21:37.437777369Z level=info msg="Executing migration" id="add origin to plugin seed_assignment"
policy-pap | [2024-04-26T08:22:24.813+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-b183f0da-bf00-44a3-b3ae-398d8035a48d PdpUpdate starting enqueue
kafka | [2024-04-26 08:22:04,090] INFO [Partition __consumer_offsets-26 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-26 (kafka.cluster.Partition)
grafana | logger=migrator t=2024-04-26T08:21:37.438099665Z level=info msg="Migration successfully executed" id="add origin to plugin seed_assignment" duration=321.636µs
policy-pap | [2024-04-26T08:22:24.813+00:00|INFO|TimerManager|Thread-9] update timer waiting 29999ms Timer [name=c089a3a3-4fc1-43c0-a7be-21299199c004, expireMs=1714119774812]
kafka | [2024-04-26 08:22:04,090] INFO [Partition __consumer_offsets-26 broker=1] Log loaded for partition __consumer_offsets-26 with initial high watermark 0 (kafka.cluster.Partition)
grafana | logger=migrator t=2024-04-26T08:21:37.441033878Z level=info msg="Executing migration" id="prevent seeding OnCall access"
policy-pap | [2024-04-26T08:22:24.814+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-b183f0da-bf00-44a3-b3ae-398d8035a48d PdpUpdate started
grafana | logger=migrator t=2024-04-26T08:21:37.441364745Z level=info msg="Migration successfully executed" id="prevent seeding OnCall access" duration=329.907µs
kafka | [2024-04-26 08:22:04,090] INFO [Broker id=1] Leader __consumer_offsets-26 with topic id Some(RfiyP89qRi-5ZTNhftzAtg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1.
(state.change.logger) policy-pap | [2024-04-26T08:22:24.815+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap] grafana | logger=migrator t=2024-04-26T08:21:37.446002553Z level=info msg="Executing migration" id="managed folder permissions alert actions repeated fixed migration" kafka | [2024-04-26 08:22:04,098] INFO [LogLoader partition=__consumer_offsets-49, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) policy-pap | {"source":"pap-b4f6f8e5-f898-4e69-90e7-669877e7a07f","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"messageName":"PDP_UPDATE","requestId":"c089a3a3-4fc1-43c0-a7be-21299199c004","timestampMs":1714119744793,"name":"apex-b183f0da-bf00-44a3-b3ae-398d8035a48d","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} grafana | logger=migrator t=2024-04-26T08:21:37.446353049Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions repeated fixed migration" duration=350.207µs kafka | [2024-04-26 08:22:04,099] INFO Created log for partition __consumer_offsets-49 in /var/lib/kafka/data/__consumer_offsets-49 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) policy-pap | [2024-04-26T08:22:24.845+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] kafka | [2024-04-26 08:22:04,099] INFO [Partition __consumer_offsets-49 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-49 (kafka.cluster.Partition) grafana | logger=migrator t=2024-04-26T08:21:37.451596747Z level=info msg="Executing migration" id="managed folder permissions library panel actions migration" policy-pap | 
{"source":"pap-b4f6f8e5-f898-4e69-90e7-669877e7a07f","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"messageName":"PDP_UPDATE","requestId":"c089a3a3-4fc1-43c0-a7be-21299199c004","timestampMs":1714119744793,"name":"apex-b183f0da-bf00-44a3-b3ae-398d8035a48d","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} kafka | [2024-04-26 08:22:04,099] INFO [Partition __consumer_offsets-49 broker=1] Log loaded for partition __consumer_offsets-49 with initial high watermark 0 (kafka.cluster.Partition) grafana | logger=migrator t=2024-04-26T08:21:37.452258078Z level=info msg="Migration successfully executed" id="managed folder permissions library panel actions migration" duration=618.97µs policy-pap | [2024-04-26T08:22:24.846+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] kafka | [2024-04-26 08:22:04,099] INFO [Broker id=1] Leader __consumer_offsets-49 with topic id Some(RfiyP89qRi-5ZTNhftzAtg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) grafana | logger=migrator t=2024-04-26T08:21:37.458120336Z level=info msg="Executing migration" id="migrate external alertmanagers to datsourcse" policy-pap | {"source":"pap-b4f6f8e5-f898-4e69-90e7-669877e7a07f","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"messageName":"PDP_UPDATE","requestId":"c089a3a3-4fc1-43c0-a7be-21299199c004","timestampMs":1714119744793,"name":"apex-b183f0da-bf00-44a3-b3ae-398d8035a48d","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} kafka | [2024-04-26 08:22:04,106] INFO [LogLoader partition=__consumer_offsets-39, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) grafana | logger=migrator t=2024-04-26T08:21:37.458771528Z level=info msg="Migration successfully executed" id="migrate external alertmanagers to datsourcse" duration=651.992µs policy-pap | [2024-04-26T08:22:24.846+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_UPDATE kafka | [2024-04-26 08:22:04,107] INFO Created log for partition __consumer_offsets-39 in /var/lib/kafka/data/__consumer_offsets-39 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) policy-pap | [2024-04-26T08:22:24.848+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_UPDATE kafka | [2024-04-26 08:22:04,107] INFO [Partition __consumer_offsets-39 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-39 (kafka.cluster.Partition) grafana | logger=migrator t=2024-04-26T08:21:37.465115919Z level=info msg="Executing migration" id="create folder table" policy-pap | [2024-04-26T08:22:24.872+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] kafka | [2024-04-26 08:22:04,107] INFO [Partition __consumer_offsets-39 broker=1] Log loaded for partition __consumer_offsets-39 with initial high watermark 0 (kafka.cluster.Partition) 
grafana | logger=migrator t=2024-04-26T08:21:37.466263054Z level=info msg="Migration successfully executed" id="create folder table" duration=1.148386ms policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"c089a3a3-4fc1-43c0-a7be-21299199c004","responseStatus":"SUCCESS","responseMessage":"Pdp update successful."},"messageName":"PDP_STATUS","requestId":"9e21325e-0675-4fb1-917c-73db7541fd22","timestampMs":1714119744850,"name":"apex-b183f0da-bf00-44a3-b3ae-398d8035a48d","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} kafka | [2024-04-26 08:22:04,108] INFO [Broker id=1] Leader __consumer_offsets-39 with topic id Some(RfiyP89qRi-5ZTNhftzAtg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) policy-pap | [2024-04-26T08:22:24.872+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] kafka | [2024-04-26 08:22:04,115] INFO [LogLoader partition=__consumer_offsets-9, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) grafana | logger=migrator t=2024-04-26T08:21:37.471299072Z level=info msg="Executing migration" id="Add index for parent_uid" policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"c089a3a3-4fc1-43c0-a7be-21299199c004","responseStatus":"SUCCESS","responseMessage":"Pdp update successful."},"messageName":"PDP_STATUS","requestId":"9e21325e-0675-4fb1-917c-73db7541fd22","timestampMs":1714119744850,"name":"apex-b183f0da-bf00-44a3-b3ae-398d8035a48d","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} kafka | [2024-04-26 08:22:04,116] INFO Created log for partition __consumer_offsets-9 in 
/var/lib/kafka/data/__consumer_offsets-9 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) grafana | logger=migrator t=2024-04-26T08:21:37.472812526Z level=info msg="Migration successfully executed" id="Add index for parent_uid" duration=1.512724ms policy-pap | [2024-04-26T08:22:24.875+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-b183f0da-bf00-44a3-b3ae-398d8035a48d PdpUpdate stopping kafka | [2024-04-26 08:22:04,116] INFO [Partition __consumer_offsets-9 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-9 (kafka.cluster.Partition) grafana | logger=migrator t=2024-04-26T08:21:37.476070625Z level=info msg="Executing migration" id="Add unique index for folder.uid and folder.org_id" policy-pap | [2024-04-26T08:22:24.875+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id c089a3a3-4fc1-43c0-a7be-21299199c004 kafka | [2024-04-26 08:22:04,116] INFO [Partition __consumer_offsets-9 broker=1] Log loaded for partition __consumer_offsets-9 with initial high watermark 0 (kafka.cluster.Partition) policy-pap | [2024-04-26T08:22:24.876+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-b183f0da-bf00-44a3-b3ae-398d8035a48d PdpUpdate stopping enqueue kafka | [2024-04-26 08:22:04,117] INFO [Broker id=1] Leader __consumer_offsets-9 with topic id Some(RfiyP89qRi-5ZTNhftzAtg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) grafana | logger=migrator t=2024-04-26T08:21:37.477415221Z level=info msg="Migration successfully executed" id="Add unique index for folder.uid and folder.org_id" duration=1.344325ms policy-pap | [2024-04-26T08:22:24.876+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-b183f0da-bf00-44a3-b3ae-398d8035a48d PdpUpdate stopping timer kafka | [2024-04-26 08:22:04,123] INFO [LogLoader partition=__consumer_offsets-24, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) grafana | logger=migrator t=2024-04-26T08:21:37.485407953Z level=info msg="Executing migration" id="Update folder title length" policy-pap | [2024-04-26T08:22:24.877+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer cancelled Timer [name=c089a3a3-4fc1-43c0-a7be-21299199c004, expireMs=1714119774812] kafka | [2024-04-26 08:22:04,123] INFO Created log for partition __consumer_offsets-24 in /var/lib/kafka/data/__consumer_offsets-24 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) grafana | logger=migrator t=2024-04-26T08:21:37.485733618Z level=info msg="Migration successfully executed" id="Update folder title length" duration=323.576µs policy-pap | [2024-04-26T08:22:24.877+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-b183f0da-bf00-44a3-b3ae-398d8035a48d PdpUpdate stopping listener kafka | [2024-04-26 08:22:04,124] INFO [Partition __consumer_offsets-24 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-24 (kafka.cluster.Partition) grafana | logger=migrator t=2024-04-26T08:21:37.490876201Z level=info msg="Executing migration" id="Add unique index for folder.title and folder.parent_uid" policy-pap | [2024-04-26T08:22:24.877+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-b183f0da-bf00-44a3-b3ae-398d8035a48d PdpUpdate stopped kafka | [2024-04-26 08:22:04,124] INFO [Partition 
__consumer_offsets-24 broker=1] Log loaded for partition __consumer_offsets-24 with initial high watermark 0 (kafka.cluster.Partition) grafana | logger=migrator t=2024-04-26T08:21:37.492311341Z level=info msg="Migration successfully executed" id="Add unique index for folder.title and folder.parent_uid" duration=1.43623ms policy-pap | [2024-04-26T08:22:24.880+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] kafka | [2024-04-26 08:22:04,124] INFO [Broker id=1] Leader __consumer_offsets-24 with topic id Some(RfiyP89qRi-5ZTNhftzAtg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) grafana | logger=migrator t=2024-04-26T08:21:37.496197241Z level=info msg="Executing migration" id="Remove unique index for folder.title and folder.parent_uid" policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"46151d45-9ff7-40dd-999c-96d4f36448f0","timestampMs":1714119744849,"name":"apex-b183f0da-bf00-44a3-b3ae-398d8035a48d","pdpGroup":"defaultGroup"} kafka | [2024-04-26 08:22:04,130] INFO [LogLoader partition=__consumer_offsets-31, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) grafana | logger=migrator t=2024-04-26T08:21:37.497369069Z level=info msg="Migration successfully executed" id="Remove unique index for folder.title and folder.parent_uid" duration=1.171818ms policy-pap | [2024-04-26T08:22:24.882+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] apex-b183f0da-bf00-44a3-b3ae-398d8035a48d PdpUpdate successful kafka | [2024-04-26 08:22:04,131] INFO Created log for partition __consumer_offsets-31 in /var/lib/kafka/data/__consumer_offsets-31 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 
(kafka.log.LogManager) grafana | logger=migrator t=2024-04-26T08:21:37.501394546Z level=info msg="Executing migration" id="Add unique index for title, parent_uid, and org_id" policy-pap | [2024-04-26T08:22:24.882+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] apex-b183f0da-bf00-44a3-b3ae-398d8035a48d start publishing next request kafka | [2024-04-26 08:22:04,131] INFO [Partition __consumer_offsets-31 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-31 (kafka.cluster.Partition) grafana | logger=migrator t=2024-04-26T08:21:37.502747522Z level=info msg="Migration successfully executed" id="Add unique index for title, parent_uid, and org_id" duration=1.352246ms policy-pap | [2024-04-26T08:22:24.882+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-b183f0da-bf00-44a3-b3ae-398d8035a48d PdpStateChange starting kafka | [2024-04-26 08:22:04,131] INFO [Partition __consumer_offsets-31 broker=1] Log loaded for partition __consumer_offsets-31 with initial high watermark 0 (kafka.cluster.Partition) grafana | logger=migrator t=2024-04-26T08:21:37.50597296Z level=info msg="Executing migration" id="Sync dashboard and folder table" policy-pap | [2024-04-26T08:22:24.882+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-b183f0da-bf00-44a3-b3ae-398d8035a48d PdpStateChange starting listener kafka | [2024-04-26 08:22:04,131] INFO [Broker id=1] Leader __consumer_offsets-31 with topic id Some(RfiyP89qRi-5ZTNhftzAtg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) grafana | logger=migrator t=2024-04-26T08:21:37.506520547Z level=info msg="Migration successfully executed" id="Sync dashboard and folder table" duration=547.127µs policy-pap | [2024-04-26T08:22:24.882+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-b183f0da-bf00-44a3-b3ae-398d8035a48d PdpStateChange starting timer kafka | [2024-04-26 08:22:04,139] INFO [LogLoader partition=__consumer_offsets-46, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) grafana | logger=migrator t=2024-04-26T08:21:37.509813969Z level=info msg="Executing migration" id="Remove ghost folders from the folder table" policy-pap | [2024-04-26T08:22:24.882+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] state-change timer registered Timer [name=a5f1d0a2-79e5-4903-b04d-2fc825203dbc, expireMs=1714119774882] kafka | [2024-04-26 08:22:04,140] INFO Created log for partition __consumer_offsets-46 in /var/lib/kafka/data/__consumer_offsets-46 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) grafana | logger=migrator t=2024-04-26T08:21:37.510201657Z level=info msg="Migration successfully executed" id="Remove ghost folders from the folder table" duration=386.509µs policy-pap | [2024-04-26T08:22:24.883+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-b183f0da-bf00-44a3-b3ae-398d8035a48d PdpStateChange starting enqueue kafka | [2024-04-26 08:22:04,140] INFO [Partition __consumer_offsets-46 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-46 (kafka.cluster.Partition) grafana | logger=migrator t=2024-04-26T08:21:37.513437976Z level=info msg="Executing migration" id="Remove unique index UQE_folder_uid_org_id" policy-pap | [2024-04-26T08:22:24.883+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-b183f0da-bf00-44a3-b3ae-398d8035a48d PdpStateChange started kafka | [2024-04-26 08:22:04,140] 
INFO [Partition __consumer_offsets-46 broker=1] Log loaded for partition __consumer_offsets-46 with initial high watermark 0 (kafka.cluster.Partition) grafana | logger=migrator t=2024-04-26T08:21:37.514900187Z level=info msg="Migration successfully executed" id="Remove unique index UQE_folder_uid_org_id" duration=1.461921ms policy-pap | [2024-04-26T08:22:24.883+00:00|INFO|TimerManager|Thread-10] state-change timer waiting 29999ms Timer [name=a5f1d0a2-79e5-4903-b04d-2fc825203dbc, expireMs=1714119774882] kafka | [2024-04-26 08:22:04,140] INFO [Broker id=1] Leader __consumer_offsets-46 with topic id Some(RfiyP89qRi-5ZTNhftzAtg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) grafana | logger=migrator t=2024-04-26T08:21:37.519161406Z level=info msg="Executing migration" id="Add unique index UQE_folder_org_id_uid" policy-pap | [2024-04-26T08:22:24.884+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap] kafka | [2024-04-26 08:22:04,155] INFO [LogLoader partition=__consumer_offsets-1, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) grafana | logger=migrator t=2024-04-26T08:21:37.521446028Z level=info msg="Migration successfully executed" id="Add unique index UQE_folder_org_id_uid" duration=2.283292ms policy-pap | {"source":"pap-b4f6f8e5-f898-4e69-90e7-669877e7a07f","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"a5f1d0a2-79e5-4903-b04d-2fc825203dbc","timestampMs":1714119744793,"name":"apex-b183f0da-bf00-44a3-b3ae-398d8035a48d","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} kafka | [2024-04-26 08:22:04,156] INFO Created log for partition __consumer_offsets-1 in /var/lib/kafka/data/__consumer_offsets-1 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) grafana 
| logger=migrator t=2024-04-26T08:21:37.525486256Z level=info msg="Executing migration" id="Remove unique index UQE_folder_title_parent_uid_org_id" policy-pap | [2024-04-26T08:22:24.914+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] grafana | logger=migrator t=2024-04-26T08:21:37.527435322Z level=info msg="Migration successfully executed" id="Remove unique index UQE_folder_title_parent_uid_org_id" duration=1.950646ms kafka | [2024-04-26 08:22:04,157] INFO [Partition __consumer_offsets-1 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-1 (kafka.cluster.Partition) policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"46151d45-9ff7-40dd-999c-96d4f36448f0","timestampMs":1714119744849,"name":"apex-b183f0da-bf00-44a3-b3ae-398d8035a48d","pdpGroup":"defaultGroup"} grafana | logger=migrator t=2024-04-26T08:21:37.532489589Z level=info msg="Executing migration" id="Add unique index UQE_folder_org_id_parent_uid_title" kafka | [2024-04-26 08:22:04,157] INFO [Partition __consumer_offsets-1 broker=1] Log loaded for partition __consumer_offsets-1 with initial high watermark 0 (kafka.cluster.Partition) grafana | logger=migrator t=2024-04-26T08:21:37.533743991Z level=info msg="Migration successfully executed" id="Add unique index UQE_folder_org_id_parent_uid_title" duration=1.253712ms policy-pap | [2024-04-26T08:22:24.915+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-pdp-pap] no listeners for autonomous message of type PdpStatus kafka | [2024-04-26 08:22:04,158] INFO [Broker id=1] Leader __consumer_offsets-1 with topic id Some(RfiyP89qRi-5ZTNhftzAtg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) policy-pap | [2024-04-26T08:22:24.916+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] grafana | logger=migrator t=2024-04-26T08:21:37.53740057Z level=info msg="Executing migration" id="Remove index IDX_folder_parent_uid_org_id" kafka | [2024-04-26 08:22:04,163] INFO [LogLoader partition=__consumer_offsets-16, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) grafana | logger=migrator t=2024-04-26T08:21:37.538895493Z level=info msg="Migration successfully executed" id="Remove index IDX_folder_parent_uid_org_id" duration=1.494923ms policy-pap | {"source":"pap-b4f6f8e5-f898-4e69-90e7-669877e7a07f","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"a5f1d0a2-79e5-4903-b04d-2fc825203dbc","timestampMs":1714119744793,"name":"apex-b183f0da-bf00-44a3-b3ae-398d8035a48d","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} kafka | [2024-04-26 08:22:04,164] INFO Created log for partition __consumer_offsets-16 in /var/lib/kafka/data/__consumer_offsets-16 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) grafana | logger=migrator t=2024-04-26T08:21:37.541804145Z level=info msg="Executing migration" id="create anon_device table" kafka | [2024-04-26 08:22:04,164] INFO [Partition __consumer_offsets-16 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-16 (kafka.cluster.Partition) policy-pap | [2024-04-26T08:22:24.917+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATE_CHANGE kafka | [2024-04-26 08:22:04,164] INFO [Partition __consumer_offsets-16 broker=1] Log loaded for partition __consumer_offsets-16 with initial high watermark 0 (kafka.cluster.Partition) policy-pap | [2024-04-26T08:22:24.917+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] grafana | logger=migrator 
t=2024-04-26T08:21:37.543087108Z level=info msg="Migration successfully executed" id="create anon_device table" duration=1.282733ms kafka | [2024-04-26 08:22:04,164] INFO [Broker id=1] Leader __consumer_offsets-16 with topic id Some(RfiyP89qRi-5ZTNhftzAtg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) policy-pap | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpStateChange","policies":[],"response":{"responseTo":"a5f1d0a2-79e5-4903-b04d-2fc825203dbc","responseStatus":"SUCCESS","responseMessage":"State changed to active. No policies found."},"messageName":"PDP_STATUS","requestId":"a7018436-53c5-4d20-9150-d26e4bf63ebb","timestampMs":1714119744897,"name":"apex-b183f0da-bf00-44a3-b3ae-398d8035a48d","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} grafana | logger=migrator t=2024-04-26T08:21:37.548695673Z level=info msg="Executing migration" id="add unique index anon_device.device_id" kafka | [2024-04-26 08:22:04,169] INFO [LogLoader partition=__consumer_offsets-2, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) policy-pap | [2024-04-26T08:22:24.930+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-b183f0da-bf00-44a3-b3ae-398d8035a48d PdpStateChange stopping grafana | logger=migrator t=2024-04-26T08:21:37.549986126Z level=info msg="Migration successfully executed" id="add unique index anon_device.device_id" duration=1.290353ms kafka | [2024-04-26 08:22:04,170] INFO Created log for partition __consumer_offsets-2 in /var/lib/kafka/data/__consumer_offsets-2 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) policy-pap | [2024-04-26T08:22:24.931+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] 
apex-b183f0da-bf00-44a3-b3ae-398d8035a48d PdpStateChange stopping enqueue grafana | logger=migrator t=2024-04-26T08:21:37.554043775Z level=info msg="Executing migration" id="add index anon_device.updated_at" kafka | [2024-04-26 08:22:04,170] INFO [Partition __consumer_offsets-2 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-2 (kafka.cluster.Partition) policy-pap | [2024-04-26T08:22:24.931+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-b183f0da-bf00-44a3-b3ae-398d8035a48d PdpStateChange stopping timer grafana | logger=migrator t=2024-04-26T08:21:37.55596972Z level=info msg="Migration successfully executed" id="add index anon_device.updated_at" duration=1.925905ms kafka | [2024-04-26 08:22:04,170] INFO [Partition __consumer_offsets-2 broker=1] Log loaded for partition __consumer_offsets-2 with initial high watermark 0 (kafka.cluster.Partition) policy-pap | [2024-04-26T08:22:24.931+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] state-change timer cancelled Timer [name=a5f1d0a2-79e5-4903-b04d-2fc825203dbc, expireMs=1714119774882] grafana | logger=migrator t=2024-04-26T08:21:37.561257908Z level=info msg="Executing migration" id="create signing_key table" kafka | [2024-04-26 08:22:04,170] INFO [Broker id=1] Leader __consumer_offsets-2 with topic id Some(RfiyP89qRi-5ZTNhftzAtg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger)
grafana | logger=migrator t=2024-04-26T08:21:37.562600604Z level=info msg="Migration successfully executed" id="create signing_key table" duration=1.343426ms
kafka | [2024-04-26 08:22:04,177] INFO [LogLoader partition=__consumer_offsets-25, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
policy-pap | [2024-04-26T08:22:24.931+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-b183f0da-bf00-44a3-b3ae-398d8035a48d PdpStateChange stopping listener
kafka | [2024-04-26 08:22:04,177] INFO Created log for partition __consumer_offsets-25 in /var/lib/kafka/data/__consumer_offsets-25 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
grafana | logger=migrator t=2024-04-26T08:21:37.567127246Z level=info msg="Executing migration" id="add unique index signing_key.key_id"
policy-pap | [2024-04-26T08:22:24.931+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-b183f0da-bf00-44a3-b3ae-398d8035a48d PdpStateChange stopped
grafana | logger=migrator t=2024-04-26T08:21:37.568548336Z level=info msg="Migration successfully executed" id="add unique index signing_key.key_id" duration=1.41926ms
kafka | [2024-04-26 08:22:04,177] INFO [Partition __consumer_offsets-25 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-25 (kafka.cluster.Partition)
policy-pap | [2024-04-26T08:22:24.931+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] apex-b183f0da-bf00-44a3-b3ae-398d8035a48d PdpStateChange successful
grafana | logger=migrator t=2024-04-26T08:21:37.571985144Z level=info msg="Executing migration" id="set legacy alert migration status in kvstore"
kafka | [2024-04-26 08:22:04,177] INFO [Partition __consumer_offsets-25 broker=1] Log loaded for partition __consumer_offsets-25 with initial high watermark 0 (kafka.cluster.Partition)
grafana | logger=migrator t=2024-04-26T08:21:37.573181542Z level=info msg="Migration successfully executed" id="set legacy alert migration status in kvstore" duration=1.196279ms
policy-pap | [2024-04-26T08:22:24.931+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] apex-b183f0da-bf00-44a3-b3ae-398d8035a48d start publishing next request
kafka | [2024-04-26 08:22:04,178] INFO [Broker id=1] Leader __consumer_offsets-25 with topic id Some(RfiyP89qRi-5ZTNhftzAtg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
grafana | logger=migrator t=2024-04-26T08:21:37.577274973Z level=info msg="Executing migration" id="migrate record of created folders during legacy migration to kvstore"
kafka | [2024-04-26 08:22:04,185] INFO [LogLoader partition=__consumer_offsets-40, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
policy-pap | [2024-04-26T08:22:24.931+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-b183f0da-bf00-44a3-b3ae-398d8035a48d PdpUpdate starting
grafana | logger=migrator t=2024-04-26T08:21:37.577708215Z level=info msg="Migration successfully executed" id="migrate record of created folders during legacy migration to kvstore" duration=433.882µs
kafka | [2024-04-26 08:22:04,185] INFO Created log for partition __consumer_offsets-40 in /var/lib/kafka/data/__consumer_offsets-40 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
policy-pap | [2024-04-26T08:22:24.932+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-b183f0da-bf00-44a3-b3ae-398d8035a48d PdpUpdate starting listener
grafana | logger=migrator t=2024-04-26T08:21:37.581074569Z level=info msg="Executing migration" id="Add folder_uid for dashboard"
kafka | [2024-04-26 08:22:04,185] INFO [Partition __consumer_offsets-40 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-40 (kafka.cluster.Partition)
policy-pap | [2024-04-26T08:22:24.932+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-b183f0da-bf00-44a3-b3ae-398d8035a48d PdpUpdate starting timer
grafana | logger=migrator t=2024-04-26T08:21:37.590668379Z level=info msg="Migration successfully executed" id="Add folder_uid for dashboard" duration=9.59319ms
kafka | [2024-04-26 08:22:04,186] INFO [Partition __consumer_offsets-40 broker=1] Log loaded for partition __consumer_offsets-40 with initial high watermark 0 (kafka.cluster.Partition)
policy-pap | [2024-04-26T08:22:24.932+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer registered Timer [name=4a11c98e-6791-453e-9808-0827aeaec0c3, expireMs=1714119774932]
grafana | logger=migrator t=2024-04-26T08:21:37.59414905Z level=info msg="Executing migration" id="Populate dashboard folder_uid column"
kafka | [2024-04-26 08:22:04,186] INFO [Broker id=1] Leader __consumer_offsets-40 with topic id Some(RfiyP89qRi-5ZTNhftzAtg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
policy-pap | [2024-04-26T08:22:24.932+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-b183f0da-bf00-44a3-b3ae-398d8035a48d PdpUpdate starting enqueue
grafana | logger=migrator t=2024-04-26T08:21:37.595180531Z level=info msg="Migration successfully executed" id="Populate dashboard folder_uid column" duration=1.032401ms
kafka | [2024-04-26 08:22:04,193] INFO [LogLoader partition=__consumer_offsets-47, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
policy-pap | [2024-04-26T08:22:24.932+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-b183f0da-bf00-44a3-b3ae-398d8035a48d PdpUpdate started
grafana | logger=migrator t=2024-04-26T08:21:37.600138594Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_folder_uid_title"
kafka | [2024-04-26 08:22:04,198] INFO Created log for partition __consumer_offsets-47 in /var/lib/kafka/data/__consumer_offsets-47 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
policy-pap | [2024-04-26T08:22:24.933+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap]
grafana | logger=migrator t=2024-04-26T08:21:37.602254697Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_folder_uid_title" duration=2.115494ms
kafka | [2024-04-26 08:22:04,198] INFO [Partition __consumer_offsets-47 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-47 (kafka.cluster.Partition)
policy-pap | {"source":"pap-b4f6f8e5-f898-4e69-90e7-669877e7a07f","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"4a11c98e-6791-453e-9808-0827aeaec0c3","timestampMs":1714119744908,"name":"apex-b183f0da-bf00-44a3-b3ae-398d8035a48d","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
grafana | logger=migrator t=2024-04-26T08:21:37.606685874Z level=info msg="Executing migration" id="Delete unique index for dashboard_org_id_folder_id_title"
kafka | [2024-04-26 08:22:04,198] INFO [Partition __consumer_offsets-47 broker=1] Log loaded for partition __consumer_offsets-47 with initial high watermark 0 (kafka.cluster.Partition)
policy-pap | [2024-04-26T08:22:24.937+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat]
grafana | logger=migrator t=2024-04-26T08:21:37.6078154Z level=info msg="Migration successfully executed" id="Delete unique index for dashboard_org_id_folder_id_title" duration=1.129476ms
kafka | [2024-04-26 08:22:04,198] INFO [Broker id=1] Leader __consumer_offsets-47 with topic id Some(RfiyP89qRi-5ZTNhftzAtg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
policy-pap | {"source":"pap-b4f6f8e5-f898-4e69-90e7-669877e7a07f","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"a5f1d0a2-79e5-4903-b04d-2fc825203dbc","timestampMs":1714119744793,"name":"apex-b183f0da-bf00-44a3-b3ae-398d8035a48d","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
grafana | logger=migrator t=2024-04-26T08:21:37.612084619Z level=info msg="Executing migration" id="Delete unique index for dashboard_org_id_folder_uid_title"
kafka | [2024-04-26 08:22:04,209] INFO [LogLoader partition=__consumer_offsets-17, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
policy-pap | [2024-04-26T08:22:24.937+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_STATE_CHANGE
grafana | logger=migrator t=2024-04-26T08:21:37.613222624Z level=info msg="Migration successfully executed" id="Delete unique index for dashboard_org_id_folder_uid_title" duration=1.137765ms
kafka | [2024-04-26 08:22:04,210] INFO Created log for partition __consumer_offsets-17 in /var/lib/kafka/data/__consumer_offsets-17 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
policy-pap | [2024-04-26T08:22:24.942+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat]
grafana | logger=migrator t=2024-04-26T08:21:37.617757937Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_folder_uid_title_is_folder"
kafka | [2024-04-26 08:22:04,210] INFO [Partition __consumer_offsets-17 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-17 (kafka.cluster.Partition)
policy-pap | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpStateChange","policies":[],"response":{"responseTo":"a5f1d0a2-79e5-4903-b04d-2fc825203dbc","responseStatus":"SUCCESS","responseMessage":"State changed to active. No policies found."},"messageName":"PDP_STATUS","requestId":"a7018436-53c5-4d20-9150-d26e4bf63ebb","timestampMs":1714119744897,"name":"apex-b183f0da-bf00-44a3-b3ae-398d8035a48d","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
grafana | logger=migrator t=2024-04-26T08:21:37.619083262Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_folder_uid_title_is_folder" duration=1.324855ms
kafka | [2024-04-26 08:22:04,210] INFO [Partition __consumer_offsets-17 broker=1] Log loaded for partition __consumer_offsets-17 with initial high watermark 0 (kafka.cluster.Partition)
policy-pap | [2024-04-26T08:22:24.943+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
grafana | logger=migrator t=2024-04-26T08:21:37.622662687Z level=info msg="Executing migration" id="Restore index for dashboard_org_id_folder_id_title"
kafka | [2024-04-26 08:22:04,210] INFO [Broker id=1] Leader __consumer_offsets-17 with topic id Some(RfiyP89qRi-5ZTNhftzAtg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
policy-pap | {"source":"pap-b4f6f8e5-f898-4e69-90e7-669877e7a07f","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"4a11c98e-6791-453e-9808-0827aeaec0c3","timestampMs":1714119744908,"name":"apex-b183f0da-bf00-44a3-b3ae-398d8035a48d","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
grafana | logger=migrator t=2024-04-26T08:21:37.625123387Z level=info msg="Migration successfully executed" id="Restore index for dashboard_org_id_folder_id_title" duration=2.46005ms
kafka | [2024-04-26 08:22:04,217] INFO [LogLoader partition=__consumer_offsets-32, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
policy-pap | [2024-04-26T08:22:24.943+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id a5f1d0a2-79e5-4903-b04d-2fc825203dbc
grafana | logger=migrator t=2024-04-26T08:21:37.630632967Z level=info msg="Executing migration" id="create sso_setting table"
policy-pap | [2024-04-26T08:22:24.943+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_UPDATE
kafka | [2024-04-26 08:22:04,217] INFO Created log for partition __consumer_offsets-32 in /var/lib/kafka/data/__consumer_offsets-32 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
grafana | logger=migrator t=2024-04-26T08:21:37.632003134Z level=info msg="Migration successfully executed" id="create sso_setting table" duration=1.370757ms
policy-pap | [2024-04-26T08:22:24.945+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat]
kafka | [2024-04-26 08:22:04,217] INFO [Partition __consumer_offsets-32 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-32 (kafka.cluster.Partition)
grafana | logger=migrator t=2024-04-26T08:21:37.645533678Z level=info msg="Executing migration" id="copy kvstore migration status to each org"
policy-pap | {"source":"pap-b4f6f8e5-f898-4e69-90e7-669877e7a07f","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"4a11c98e-6791-453e-9808-0827aeaec0c3","timestampMs":1714119744908,"name":"apex-b183f0da-bf00-44a3-b3ae-398d8035a48d","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
kafka | [2024-04-26 08:22:04,218] INFO [Partition __consumer_offsets-32 broker=1] Log loaded for partition __consumer_offsets-32 with initial high watermark 0 (kafka.cluster.Partition)
grafana | logger=migrator t=2024-04-26T08:21:37.646762907Z level=info msg="Migration successfully executed" id="copy kvstore migration status to each org" duration=1.23375ms
policy-pap | [2024-04-26T08:22:24.945+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_UPDATE
kafka | [2024-04-26 08:22:04,218] INFO [Broker id=1] Leader __consumer_offsets-32 with topic id Some(RfiyP89qRi-5ZTNhftzAtg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
grafana | logger=migrator t=2024-04-26T08:21:37.65008841Z level=info msg="Executing migration" id="add back entry for orgid=0 migrated status"
policy-pap | [2024-04-26T08:22:24.951+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
kafka | [2024-04-26 08:22:04,224] INFO [LogLoader partition=__consumer_offsets-37, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
grafana | logger=migrator t=2024-04-26T08:21:37.650444358Z level=info msg="Migration successfully executed" id="add back entry for orgid=0 migrated status" duration=356.668µs
policy-pap | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"4a11c98e-6791-453e-9808-0827aeaec0c3","responseStatus":"SUCCESS","responseMessage":"Pdp already updated"},"messageName":"PDP_STATUS","requestId":"e3874a72-08db-45fe-aceb-34c903ea4e7e","timestampMs":1714119744944,"name":"apex-b183f0da-bf00-44a3-b3ae-398d8035a48d","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
kafka | [2024-04-26 08:22:04,224] INFO Created log for partition __consumer_offsets-37 in /var/lib/kafka/data/__consumer_offsets-37 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
grafana | logger=migrator t=2024-04-26T08:21:37.653134119Z level=info msg="Executing migration" id="alter kv_store.value to longtext"
policy-pap | [2024-04-26T08:22:24.952+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-b183f0da-bf00-44a3-b3ae-398d8035a48d PdpUpdate stopping
kafka | [2024-04-26 08:22:04,224] INFO [Partition __consumer_offsets-37 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-37 (kafka.cluster.Partition)
grafana | logger=migrator t=2024-04-26T08:21:37.653222134Z level=info msg="Migration successfully executed" id="alter kv_store.value to longtext" duration=88.245µs
policy-pap | [2024-04-26T08:22:24.952+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-b183f0da-bf00-44a3-b3ae-398d8035a48d PdpUpdate stopping enqueue
kafka | [2024-04-26 08:22:04,224] INFO [Partition __consumer_offsets-37 broker=1] Log loaded for partition __consumer_offsets-37 with initial high watermark 0 (kafka.cluster.Partition)
grafana | logger=migrator t=2024-04-26T08:21:37.6568091Z level=info msg="Executing migration" id="add notification_settings column to alert_rule table"
policy-pap | [2024-04-26T08:22:24.952+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-b183f0da-bf00-44a3-b3ae-398d8035a48d PdpUpdate stopping timer
kafka | [2024-04-26 08:22:04,224] INFO [Broker id=1] Leader __consumer_offsets-37 with topic id Some(RfiyP89qRi-5ZTNhftzAtg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
grafana | logger=migrator t=2024-04-26T08:21:37.66907527Z level=info msg="Migration successfully executed" id="add notification_settings column to alert_rule table" duration=12.26629ms
policy-pap | [2024-04-26T08:22:24.952+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer cancelled Timer [name=4a11c98e-6791-453e-9808-0827aeaec0c3, expireMs=1714119774932]
kafka | [2024-04-26 08:22:04,232] INFO [LogLoader partition=__consumer_offsets-7, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
grafana | logger=migrator t=2024-04-26T08:21:37.672224425Z level=info msg="Executing migration" id="add notification_settings column to alert_rule_version table"
policy-pap | [2024-04-26T08:22:24.952+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-b183f0da-bf00-44a3-b3ae-398d8035a48d PdpUpdate stopping listener
kafka | [2024-04-26 08:22:04,232] INFO Created log for partition __consumer_offsets-7 in /var/lib/kafka/data/__consumer_offsets-7 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
grafana | logger=migrator t=2024-04-26T08:21:37.681798994Z level=info msg="Migration successfully executed" id="add notification_settings column to alert_rule_version table" duration=9.573568ms
kafka | [2024-04-26 08:22:04,232] INFO [Partition __consumer_offsets-7 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-7 (kafka.cluster.Partition)
grafana | logger=migrator t=2024-04-26T08:21:37.68438247Z level=info msg="Executing migration" id="removing scope from alert.instances:read action migration"
policy-pap | [2024-04-26T08:22:24.952+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-b183f0da-bf00-44a3-b3ae-398d8035a48d PdpUpdate stopped
kafka | [2024-04-26 08:22:04,232] INFO [Partition __consumer_offsets-7 broker=1] Log loaded for partition __consumer_offsets-7 with initial high watermark 0 (kafka.cluster.Partition)
grafana | logger=migrator t=2024-04-26T08:21:37.684803661Z level=info msg="Migration successfully executed" id="removing scope from alert.instances:read action migration" duration=420.751µs
policy-pap | [2024-04-26T08:22:24.956+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat]
kafka | [2024-04-26 08:22:04,232] INFO [Broker id=1] Leader __consumer_offsets-7 with topic id Some(RfiyP89qRi-5ZTNhftzAtg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
grafana | logger=migrator t=2024-04-26T08:21:37.687784487Z level=info msg="migrations completed" performed=548 skipped=0 duration=4.128467893s
policy-pap | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"4a11c98e-6791-453e-9808-0827aeaec0c3","responseStatus":"SUCCESS","responseMessage":"Pdp already updated"},"messageName":"PDP_STATUS","requestId":"e3874a72-08db-45fe-aceb-34c903ea4e7e","timestampMs":1714119744944,"name":"apex-b183f0da-bf00-44a3-b3ae-398d8035a48d","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
kafka | [2024-04-26 08:22:04,240] INFO [LogLoader partition=__consumer_offsets-22, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
grafana | logger=sqlstore t=2024-04-26T08:21:37.698292802Z level=info msg="Created default admin" user=admin
policy-pap | [2024-04-26T08:22:24.956+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] apex-b183f0da-bf00-44a3-b3ae-398d8035a48d PdpUpdate successful
kafka | [2024-04-26 08:22:04,241] INFO Created log for partition __consumer_offsets-22 in /var/lib/kafka/data/__consumer_offsets-22 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
grafana | logger=sqlstore t=2024-04-26T08:21:37.698713662Z level=info msg="Created default organization"
policy-pap | [2024-04-26T08:22:24.956+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id 4a11c98e-6791-453e-9808-0827aeaec0c3
kafka | [2024-04-26 08:22:04,241] INFO [Partition __consumer_offsets-22 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-22 (kafka.cluster.Partition)
grafana | logger=secrets t=2024-04-26T08:21:37.706891283Z level=info msg="Envelope encryption state" enabled=true currentprovider=secretKey.v1
policy-pap | [2024-04-26T08:22:24.956+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] apex-b183f0da-bf00-44a3-b3ae-398d8035a48d has no more requests
kafka | [2024-04-26 08:22:04,241] INFO [Partition __consumer_offsets-22 broker=1] Log loaded for partition __consumer_offsets-22 with initial high watermark 0 (kafka.cluster.Partition)
grafana | logger=plugin.store t=2024-04-26T08:21:37.730472409Z level=info msg="Loading plugins..."
policy-pap | [2024-04-26T08:22:31.170+00:00|WARN|NonInjectionManager|pool-2-thread-1] Falling back to injection-less client.
kafka | [2024-04-26 08:22:04,241] INFO [Broker id=1] Leader __consumer_offsets-22 with topic id Some(RfiyP89qRi-5ZTNhftzAtg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
grafana | logger=local.finder t=2024-04-26T08:21:37.775970428Z level=warn msg="Skipping finding plugins as directory does not exist" path=/usr/share/grafana/plugins-bundled
policy-pap | [2024-04-26T08:22:31.214+00:00|INFO|GsonMessageBodyHandler|pool-2-thread-1] Using GSON for REST calls
kafka | [2024-04-26 08:22:04,247] INFO [LogLoader partition=__consumer_offsets-29, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
grafana | logger=plugin.store t=2024-04-26T08:21:37.776003019Z level=info msg="Plugins loaded" count=55 duration=45.529231ms
policy-pap | [2024-04-26T08:22:31.221+00:00|INFO|GsonMessageBodyHandler|pool-2-thread-1] Using GSON for REST calls
kafka | [2024-04-26 08:22:04,248] INFO Created log for partition __consumer_offsets-29 in /var/lib/kafka/data/__consumer_offsets-29 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
grafana | logger=query_data t=2024-04-26T08:21:37.782303128Z level=info msg="Query Service initialization"
policy-pap | [2024-04-26T08:22:31.225+00:00|INFO|GsonMessageBodyHandler|pool-2-thread-1] Using GSON for REST calls
kafka | [2024-04-26 08:22:04,248] INFO [Partition __consumer_offsets-29 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-29 (kafka.cluster.Partition)
grafana | logger=live.push_http t=2024-04-26T08:21:37.792896307Z level=info msg="Live Push Gateway initialization"
policy-pap | [2024-04-26T08:22:31.634+00:00|INFO|SessionData|http-nio-6969-exec-6] unknown group testGroup
kafka | [2024-04-26 08:22:04,248] INFO [Partition __consumer_offsets-29 broker=1] Log loaded for partition __consumer_offsets-29 with initial high watermark 0 (kafka.cluster.Partition)
grafana | logger=ngalert.migration t=2024-04-26T08:21:37.79949897Z level=info msg=Starting
policy-pap | [2024-04-26T08:22:32.141+00:00|INFO|SessionData|http-nio-6969-exec-6] create cached group testGroup
kafka | [2024-04-26 08:22:04,248] INFO [Broker id=1] Leader __consumer_offsets-29 with topic id Some(RfiyP89qRi-5ZTNhftzAtg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
policy-pap | [2024-04-26T08:22:32.142+00:00|INFO|SessionData|http-nio-6969-exec-6] creating DB group testGroup
grafana | logger=ngalert.migration t=2024-04-26T08:21:37.799969403Z level=info msg="Applying transition" currentType=Legacy desiredType=UnifiedAlerting cleanOnDowngrade=false cleanOnUpgrade=false
kafka | [2024-04-26 08:22:04,255] INFO [LogLoader partition=__consumer_offsets-44, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
policy-pap | [2024-04-26T08:22:32.694+00:00|INFO|SessionData|http-nio-6969-exec-10] cache group testGroup
grafana | logger=ngalert.migration orgID=1 t=2024-04-26T08:21:37.800410115Z level=info msg="Migrating alerts for organisation"
kafka | [2024-04-26 08:22:04,255] INFO Created log for partition __consumer_offsets-44 in /var/lib/kafka/data/__consumer_offsets-44 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
policy-pap | [2024-04-26T08:22:32.913+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-10] Registering a deploy for policy onap.restart.tca 1.0.0
grafana | logger=ngalert.migration orgID=1 t=2024-04-26T08:21:37.801090358Z level=info msg="Alerts found to migrate" alerts=0
kafka | [2024-04-26 08:22:04,255] INFO [Partition __consumer_offsets-44 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-44 (kafka.cluster.Partition)
policy-pap | [2024-04-26T08:22:33.021+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-10] Registering a deploy for policy operational.apex.decisionMaker 1.0.0
grafana | logger=ngalert.migration t=2024-04-26T08:21:37.803026902Z level=info msg="Completed alerting migration"
kafka | [2024-04-26 08:22:04,256] INFO [Partition __consumer_offsets-44 broker=1] Log loaded for partition __consumer_offsets-44 with initial high watermark 0 (kafka.cluster.Partition)
policy-pap | [2024-04-26T08:22:33.021+00:00|INFO|SessionData|http-nio-6969-exec-10] update cached group testGroup
grafana | logger=ngalert.state.manager t=2024-04-26T08:21:37.838182775Z level=info msg="Running in alternative execution of Error/NoData mode"
kafka | [2024-04-26 08:22:04,256] INFO [Broker id=1] Leader __consumer_offsets-44 with topic id Some(RfiyP89qRi-5ZTNhftzAtg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
policy-pap | [2024-04-26T08:22:33.021+00:00|INFO|SessionData|http-nio-6969-exec-10] updating DB group testGroup
grafana | logger=infra.usagestats.collector t=2024-04-26T08:21:37.841068727Z level=info msg="registering usage stat providers" usageStatsProvidersLen=2
kafka | [2024-04-26 08:22:04,262] INFO [LogLoader partition=__consumer_offsets-14, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
policy-pap | [2024-04-26T08:22:33.034+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-10] sending audit records to database: [PolicyAudit(auditId=null, pdpGroup=testGroup, pdpType=pdpTypeA, policy=onap.restart.tca 1.0.0, action=DEPLOYMENT, timestamp=2024-04-26T08:22:32Z, user=policyadmin), PolicyAudit(auditId=null, pdpGroup=testGroup, pdpType=pdpTypeC, policy=operational.apex.decisionMaker 1.0.0, action=DEPLOYMENT, timestamp=2024-04-26T08:22:33Z, user=policyadmin)]
grafana | logger=provisioning.datasources t=2024-04-26T08:21:37.84399201Z level=info msg="inserting datasource from configuration" name=PolicyPrometheus uid=dkSf71fnz
kafka | [2024-04-26 08:22:04,263] INFO Created log for partition __consumer_offsets-14 in /var/lib/kafka/data/__consumer_offsets-14 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
policy-pap | [2024-04-26T08:22:33.703+00:00|INFO|SessionData|http-nio-6969-exec-4] cache group testGroup
grafana | logger=provisioning.alerting t=2024-04-26T08:21:37.861576251Z level=info msg="starting to provision alerting"
kafka | [2024-04-26 08:22:04,263] INFO [Partition __consumer_offsets-14 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-14 (kafka.cluster.Partition)
policy-pap | [2024-04-26T08:22:33.703+00:00|INFO|PdpGroupDeleteProvider|http-nio-6969-exec-4] remove policy onap.restart.tca 1.0.0 from subgroup testGroup pdpTypeA count=0
grafana | logger=provisioning.alerting t=2024-04-26T08:21:37.861601742Z level=info msg="finished to provision alerting"
kafka | [2024-04-26 08:22:04,263] INFO [Partition __consumer_offsets-14 broker=1] Log loaded for partition __consumer_offsets-14 with initial high watermark 0 (kafka.cluster.Partition)
policy-pap | [2024-04-26T08:22:33.703+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-4] Registering an undeploy for policy onap.restart.tca 1.0.0
grafana | logger=grafanaStorageLogger t=2024-04-26T08:21:37.861899518Z level=info msg="Storage starting"
kafka | [2024-04-26 08:22:04,263] INFO [Broker id=1] Leader __consumer_offsets-14 with topic id Some(RfiyP89qRi-5ZTNhftzAtg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
policy-pap | [2024-04-26T08:22:33.704+00:00|INFO|SessionData|http-nio-6969-exec-4] update cached group testGroup
grafana | logger=ngalert.state.manager t=2024-04-26T08:21:37.862279146Z level=info msg="Warming state cache for startup"
kafka | [2024-04-26 08:22:04,268] INFO [LogLoader partition=__consumer_offsets-23, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
policy-pap | [2024-04-26T08:22:33.704+00:00|INFO|SessionData|http-nio-6969-exec-4] updating DB group testGroup
grafana | logger=ngalert.multiorg.alertmanager t=2024-04-26T08:21:37.864087575Z level=info msg="Starting MultiOrg Alertmanager"
kafka | [2024-04-26 08:22:04,268] INFO Created log for partition __consumer_offsets-23 in /var/lib/kafka/data/__consumer_offsets-23 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
policy-pap | [2024-04-26T08:22:33.718+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-4] sending audit records to database: [PolicyAudit(auditId=null, pdpGroup=testGroup, pdpType=pdpTypeA, policy=onap.restart.tca 1.0.0, action=UNDEPLOYMENT, timestamp=2024-04-26T08:22:33Z, user=policyadmin)]
grafana | logger=http.server t=2024-04-26T08:21:37.864965987Z level=info msg="HTTP Server Listen" address=[::]:3000 protocol=http subUrl= socket=
kafka | [2024-04-26 08:22:04,269] INFO [Partition __consumer_offsets-23 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-23 (kafka.cluster.Partition)
policy-pap | [2024-04-26T08:22:34.073+00:00|INFO|SessionData|http-nio-6969-exec-5] cache group defaultGroup
grafana | logger=ngalert.state.manager t=2024-04-26T08:21:37.941829753Z level=info msg="State cache has been initialized" states=0 duration=79.546778ms
kafka | [2024-04-26 08:22:04,269] INFO [Partition __consumer_offsets-23 broker=1] Log loaded for partition __consumer_offsets-23 with initial high watermark 0 (kafka.cluster.Partition)
policy-pap | [2024-04-26T08:22:34.074+00:00|INFO|SessionData|http-nio-6969-exec-5] cache group testGroup
grafana | logger=provisioning.dashboard t=2024-04-26T08:21:37.942989159Z level=info msg="starting to provision dashboards"
kafka | [2024-04-26 08:22:04,269] INFO [Broker id=1] Leader __consumer_offsets-23 with topic id Some(RfiyP89qRi-5ZTNhftzAtg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
policy-pap | [2024-04-26T08:22:34.074+00:00|INFO|PdpGroupDeleteProvider|http-nio-6969-exec-5] remove policy operational.apex.decisionMaker 1.0.0 from subgroup testGroup pdpTypeC count=0
grafana | logger=ngalert.scheduler t=2024-04-26T08:21:37.941886406Z level=info msg="Starting scheduler" tickInterval=10s maxAttempts=1
kafka | [2024-04-26 08:22:04,274] INFO [LogLoader partition=__consumer_offsets-38, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
policy-pap | [2024-04-26T08:22:34.074+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-5] Registering an undeploy for policy operational.apex.decisionMaker 1.0.0
grafana | logger=plugins.update.checker t=2024-04-26T08:21:37.951119608Z level=info msg="Update check succeeded" duration=89.133827ms
kafka | [2024-04-26 08:22:04,274] INFO Created log for partition __consumer_offsets-38 in /var/lib/kafka/data/__consumer_offsets-38 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
policy-pap | [2024-04-26T08:22:34.074+00:00|INFO|SessionData|http-nio-6969-exec-5] update cached group testGroup
grafana | logger=ticker t=2024-04-26T08:21:37.951469196Z level=info msg=starting first_tick=2024-04-26T08:21:40Z
kafka | [2024-04-26 08:22:04,274] INFO [Partition __consumer_offsets-38 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-38 (kafka.cluster.Partition)
policy-pap | [2024-04-26T08:22:34.074+00:00|INFO|SessionData|http-nio-6969-exec-5] updating DB group testGroup
grafana | logger=grafana.update.checker t=2024-04-26T08:21:37.971821703Z level=info msg="Update check succeeded" duration=107.400393ms
kafka | [2024-04-26 08:22:04,274] INFO [Partition __consumer_offsets-38 broker=1] Log loaded for partition __consumer_offsets-38 with initial high watermark 0 (kafka.cluster.Partition)
policy-pap | [2024-04-26T08:22:34.163+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-5] sending audit records to database: [PolicyAudit(auditId=null, pdpGroup=testGroup, pdpType=pdpTypeC, policy=operational.apex.decisionMaker 1.0.0, action=UNDEPLOYMENT, timestamp=2024-04-26T08:22:34Z, user=policyadmin)]
grafana | logger=sqlstore.transactions t=2024-04-26T08:21:37.985711342Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=0 code="database is locked"
kafka | [2024-04-26 08:22:04,274] INFO [Broker id=1] Leader __consumer_offsets-38 with topic id Some(RfiyP89qRi-5ZTNhftzAtg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
policy-pap | [2024-04-26T08:22:54.800+00:00|INFO|SessionData|http-nio-6969-exec-1] cache group testGroup
grafana | logger=sqlstore.transactions t=2024-04-26T08:21:37.997524981Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=1 code="database is locked"
kafka | [2024-04-26 08:22:04,287] INFO [LogLoader partition=__consumer_offsets-8, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
policy-pap | [2024-04-26T08:22:54.804+00:00|INFO|SessionData|http-nio-6969-exec-1] deleting DB group testGroup
grafana | logger=grafana-apiserver t=2024-04-26T08:21:38.074428499Z level=info msg="Adding GroupVersion playlist.grafana.app v0alpha1 to ResourceManager"
kafka | [2024-04-26 08:22:04,287] INFO Created log for partition __consumer_offsets-8 in /var/lib/kafka/data/__consumer_offsets-8 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
policy-pap | [2024-04-26T08:22:54.813+00:00|INFO|TimerManager|Thread-9] update timer discarded (expired) Timer [name=c089a3a3-4fc1-43c0-a7be-21299199c004, expireMs=1714119774812]
grafana | logger=grafana-apiserver t=2024-04-26T08:21:38.075074061Z level=info msg="Adding GroupVersion featuretoggle.grafana.app v0alpha1 to ResourceManager"
kafka | [2024-04-26 08:22:04,287] INFO [Partition __consumer_offsets-8 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-8 (kafka.cluster.Partition)
policy-pap | [2024-04-26T08:22:54.882+00:00|INFO|TimerManager|Thread-10] state-change timer discarded (expired) Timer [name=a5f1d0a2-79e5-4903-b04d-2fc825203dbc, expireMs=1714119774882]
grafana | logger=sqlstore.transactions t=2024-04-26T08:21:38.13690921Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=0 code="database is locked"
kafka | [2024-04-26 08:22:04,288] INFO [Partition __consumer_offsets-8 broker=1] Log loaded for partition __consumer_offsets-8 with initial high watermark 0 (kafka.cluster.Partition)
grafana | logger=sqlstore.transactions t=2024-04-26T08:21:38.150624882Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=1 code="database is locked"
kafka | [2024-04-26 08:22:04,288] INFO [Broker id=1] Leader __consumer_offsets-8 with topic id Some(RfiyP89qRi-5ZTNhftzAtg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
grafana | logger=provisioning.dashboard t=2024-04-26T08:21:38.28040699Z level=info msg="finished to provision dashboards"
kafka | [2024-04-26 08:22:04,300] INFO [LogLoader partition=__consumer_offsets-45, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
grafana | logger=infra.usagestats t=2024-04-26T08:23:03.874147633Z level=info msg="Usage stats are ready to report"
kafka | [2024-04-26 08:22:04,300] INFO Created log for partition __consumer_offsets-45 in /var/lib/kafka/data/__consumer_offsets-45 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2024-04-26 08:22:04,300] INFO [Partition __consumer_offsets-45 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-45 (kafka.cluster.Partition)
kafka | [2024-04-26 08:22:04,300] INFO [Partition __consumer_offsets-45 broker=1] Log loaded for partition __consumer_offsets-45 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2024-04-26 08:22:04,301] INFO [Broker id=1] Leader __consumer_offsets-45 with topic id Some(RfiyP89qRi-5ZTNhftzAtg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
kafka | [2024-04-26 08:22:04,309] INFO [LogLoader partition=__consumer_offsets-15, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2024-04-26 08:22:04,309] INFO Created log for partition __consumer_offsets-15 in /var/lib/kafka/data/__consumer_offsets-15 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2024-04-26 08:22:04,309] INFO [Partition __consumer_offsets-15 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-15 (kafka.cluster.Partition)
kafka | [2024-04-26 08:22:04,309] INFO [Partition __consumer_offsets-15 broker=1] Log loaded for partition __consumer_offsets-15 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2024-04-26 08:22:04,310] INFO [Broker id=1] Leader __consumer_offsets-15 with topic id Some(RfiyP89qRi-5ZTNhftzAtg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1.
(state.change.logger) kafka | [2024-04-26 08:22:04,319] INFO [LogLoader partition=__consumer_offsets-30, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-04-26 08:22:04,320] INFO Created log for partition __consumer_offsets-30 in /var/lib/kafka/data/__consumer_offsets-30 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-04-26 08:22:04,320] INFO [Partition __consumer_offsets-30 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-30 (kafka.cluster.Partition) kafka | [2024-04-26 08:22:04,320] INFO [Partition __consumer_offsets-30 broker=1] Log loaded for partition __consumer_offsets-30 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-04-26 08:22:04,320] INFO [Broker id=1] Leader __consumer_offsets-30 with topic id Some(RfiyP89qRi-5ZTNhftzAtg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) kafka | [2024-04-26 08:22:04,328] INFO [LogLoader partition=__consumer_offsets-0, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-04-26 08:22:04,328] INFO Created log for partition __consumer_offsets-0 in /var/lib/kafka/data/__consumer_offsets-0 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-04-26 08:22:04,328] INFO [Partition __consumer_offsets-0 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-0 (kafka.cluster.Partition) kafka | [2024-04-26 08:22:04,329] INFO [Partition __consumer_offsets-0 broker=1] Log loaded for partition __consumer_offsets-0 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-04-26 08:22:04,329] INFO [Broker id=1] Leader __consumer_offsets-0 with topic id Some(RfiyP89qRi-5ZTNhftzAtg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) kafka | [2024-04-26 08:22:04,337] INFO [LogLoader partition=__consumer_offsets-35, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-04-26 08:22:04,337] INFO Created log for partition __consumer_offsets-35 in /var/lib/kafka/data/__consumer_offsets-35 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-04-26 08:22:04,337] INFO [Partition __consumer_offsets-35 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-35 (kafka.cluster.Partition) kafka | [2024-04-26 08:22:04,337] INFO [Partition __consumer_offsets-35 broker=1] Log loaded for partition __consumer_offsets-35 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-04-26 08:22:04,338] INFO [Broker id=1] Leader __consumer_offsets-35 with topic id Some(RfiyP89qRi-5ZTNhftzAtg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) kafka | [2024-04-26 08:22:04,343] INFO [LogLoader partition=__consumer_offsets-5, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-04-26 08:22:04,344] INFO Created log for partition __consumer_offsets-5 in /var/lib/kafka/data/__consumer_offsets-5 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-04-26 08:22:04,344] INFO [Partition __consumer_offsets-5 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-5 (kafka.cluster.Partition) kafka | [2024-04-26 08:22:04,344] INFO [Partition __consumer_offsets-5 broker=1] Log loaded for partition __consumer_offsets-5 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-04-26 08:22:04,345] INFO [Broker id=1] Leader __consumer_offsets-5 with topic id Some(RfiyP89qRi-5ZTNhftzAtg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) kafka | [2024-04-26 08:22:04,355] INFO [LogLoader partition=__consumer_offsets-20, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-04-26 08:22:04,356] INFO Created log for partition __consumer_offsets-20 in /var/lib/kafka/data/__consumer_offsets-20 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-04-26 08:22:04,356] INFO [Partition __consumer_offsets-20 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-20 (kafka.cluster.Partition) kafka | [2024-04-26 08:22:04,356] INFO [Partition __consumer_offsets-20 broker=1] Log loaded for partition __consumer_offsets-20 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-04-26 08:22:04,357] INFO [Broker id=1] Leader __consumer_offsets-20 with topic id Some(RfiyP89qRi-5ZTNhftzAtg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) kafka | [2024-04-26 08:22:04,364] INFO [LogLoader partition=__consumer_offsets-27, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-04-26 08:22:04,365] INFO Created log for partition __consumer_offsets-27 in /var/lib/kafka/data/__consumer_offsets-27 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-04-26 08:22:04,365] INFO [Partition __consumer_offsets-27 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-27 (kafka.cluster.Partition) kafka | [2024-04-26 08:22:04,365] INFO [Partition __consumer_offsets-27 broker=1] Log loaded for partition __consumer_offsets-27 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-04-26 08:22:04,365] INFO [Broker id=1] Leader __consumer_offsets-27 with topic id Some(RfiyP89qRi-5ZTNhftzAtg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) kafka | [2024-04-26 08:22:04,371] INFO [LogLoader partition=__consumer_offsets-42, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-04-26 08:22:04,372] INFO Created log for partition __consumer_offsets-42 in /var/lib/kafka/data/__consumer_offsets-42 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-04-26 08:22:04,372] INFO [Partition __consumer_offsets-42 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-42 (kafka.cluster.Partition) kafka | [2024-04-26 08:22:04,372] INFO [Partition __consumer_offsets-42 broker=1] Log loaded for partition __consumer_offsets-42 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-04-26 08:22:04,372] INFO [Broker id=1] Leader __consumer_offsets-42 with topic id Some(RfiyP89qRi-5ZTNhftzAtg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) kafka | [2024-04-26 08:22:04,378] INFO [LogLoader partition=__consumer_offsets-12, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-04-26 08:22:04,378] INFO Created log for partition __consumer_offsets-12 in /var/lib/kafka/data/__consumer_offsets-12 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-04-26 08:22:04,378] INFO [Partition __consumer_offsets-12 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-12 (kafka.cluster.Partition) kafka | [2024-04-26 08:22:04,379] INFO [Partition __consumer_offsets-12 broker=1] Log loaded for partition __consumer_offsets-12 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-04-26 08:22:04,379] INFO [Broker id=1] Leader __consumer_offsets-12 with topic id Some(RfiyP89qRi-5ZTNhftzAtg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) kafka | [2024-04-26 08:22:04,385] INFO [LogLoader partition=__consumer_offsets-21, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-04-26 08:22:04,385] INFO Created log for partition __consumer_offsets-21 in /var/lib/kafka/data/__consumer_offsets-21 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-04-26 08:22:04,386] INFO [Partition __consumer_offsets-21 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-21 (kafka.cluster.Partition) kafka | [2024-04-26 08:22:04,386] INFO [Partition __consumer_offsets-21 broker=1] Log loaded for partition __consumer_offsets-21 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-04-26 08:22:04,386] INFO [Broker id=1] Leader __consumer_offsets-21 with topic id Some(RfiyP89qRi-5ZTNhftzAtg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) kafka | [2024-04-26 08:22:04,392] INFO [LogLoader partition=__consumer_offsets-36, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-04-26 08:22:04,392] INFO Created log for partition __consumer_offsets-36 in /var/lib/kafka/data/__consumer_offsets-36 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-04-26 08:22:04,393] INFO [Partition __consumer_offsets-36 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-36 (kafka.cluster.Partition) kafka | [2024-04-26 08:22:04,393] INFO [Partition __consumer_offsets-36 broker=1] Log loaded for partition __consumer_offsets-36 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-04-26 08:22:04,393] INFO [Broker id=1] Leader __consumer_offsets-36 with topic id Some(RfiyP89qRi-5ZTNhftzAtg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) kafka | [2024-04-26 08:22:04,400] INFO [LogLoader partition=__consumer_offsets-6, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-04-26 08:22:04,401] INFO Created log for partition __consumer_offsets-6 in /var/lib/kafka/data/__consumer_offsets-6 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-04-26 08:22:04,401] INFO [Partition __consumer_offsets-6 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-6 (kafka.cluster.Partition) kafka | [2024-04-26 08:22:04,401] INFO [Partition __consumer_offsets-6 broker=1] Log loaded for partition __consumer_offsets-6 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-04-26 08:22:04,401] INFO [Broker id=1] Leader __consumer_offsets-6 with topic id Some(RfiyP89qRi-5ZTNhftzAtg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) kafka | [2024-04-26 08:22:04,407] INFO [LogLoader partition=__consumer_offsets-43, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-04-26 08:22:04,407] INFO Created log for partition __consumer_offsets-43 in /var/lib/kafka/data/__consumer_offsets-43 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-04-26 08:22:04,407] INFO [Partition __consumer_offsets-43 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-43 (kafka.cluster.Partition) kafka | [2024-04-26 08:22:04,408] INFO [Partition __consumer_offsets-43 broker=1] Log loaded for partition __consumer_offsets-43 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-04-26 08:22:04,408] INFO [Broker id=1] Leader __consumer_offsets-43 with topic id Some(RfiyP89qRi-5ZTNhftzAtg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) kafka | [2024-04-26 08:22:04,414] INFO [LogLoader partition=__consumer_offsets-13, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-04-26 08:22:04,415] INFO Created log for partition __consumer_offsets-13 in /var/lib/kafka/data/__consumer_offsets-13 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-04-26 08:22:04,415] INFO [Partition __consumer_offsets-13 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-13 (kafka.cluster.Partition) kafka | [2024-04-26 08:22:04,415] INFO [Partition __consumer_offsets-13 broker=1] Log loaded for partition __consumer_offsets-13 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-04-26 08:22:04,415] INFO [Broker id=1] Leader __consumer_offsets-13 with topic id Some(RfiyP89qRi-5ZTNhftzAtg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) kafka | [2024-04-26 08:22:04,421] INFO [LogLoader partition=__consumer_offsets-28, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-04-26 08:22:04,422] INFO Created log for partition __consumer_offsets-28 in /var/lib/kafka/data/__consumer_offsets-28 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-04-26 08:22:04,422] INFO [Partition __consumer_offsets-28 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-28 (kafka.cluster.Partition) kafka | [2024-04-26 08:22:04,422] INFO [Partition __consumer_offsets-28 broker=1] Log loaded for partition __consumer_offsets-28 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-04-26 08:22:04,422] INFO [Broker id=1] Leader __consumer_offsets-28 with topic id Some(RfiyP89qRi-5ZTNhftzAtg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) kafka | [2024-04-26 08:22:04,426] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-3 (state.change.logger) kafka | [2024-04-26 08:22:04,426] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-18 (state.change.logger) kafka | [2024-04-26 08:22:04,426] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-41 (state.change.logger) kafka | [2024-04-26 08:22:04,427] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-10 (state.change.logger) kafka | [2024-04-26 08:22:04,427] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-33 (state.change.logger) kafka | [2024-04-26 08:22:04,427] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-48 (state.change.logger) kafka | [2024-04-26 08:22:04,427] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-19 (state.change.logger) kafka | [2024-04-26 08:22:04,427] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-34 (state.change.logger) kafka | [2024-04-26 08:22:04,427] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-4 (state.change.logger) kafka | 
[2024-04-26 08:22:04,427] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-11 (state.change.logger) kafka | [2024-04-26 08:22:04,427] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-26 (state.change.logger) kafka | [2024-04-26 08:22:04,427] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-49 (state.change.logger) kafka | [2024-04-26 08:22:04,427] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-39 (state.change.logger) kafka | [2024-04-26 08:22:04,427] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-9 (state.change.logger) kafka | [2024-04-26 08:22:04,428] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-24 (state.change.logger) kafka | [2024-04-26 08:22:04,428] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-31 (state.change.logger) kafka | [2024-04-26 08:22:04,428] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-46 (state.change.logger) kafka | [2024-04-26 08:22:04,428] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-1 (state.change.logger) kafka | [2024-04-26 08:22:04,428] TRACE [Broker 
id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-16 (state.change.logger) kafka | [2024-04-26 08:22:04,428] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-2 (state.change.logger) kafka | [2024-04-26 08:22:04,428] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-25 (state.change.logger) kafka | [2024-04-26 08:22:04,428] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-40 (state.change.logger) kafka | [2024-04-26 08:22:04,428] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-47 (state.change.logger) kafka | [2024-04-26 08:22:04,428] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-17 (state.change.logger) kafka | [2024-04-26 08:22:04,428] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-32 (state.change.logger) kafka | [2024-04-26 08:22:04,428] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-37 (state.change.logger) kafka | [2024-04-26 08:22:04,429] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-7 (state.change.logger) kafka | [2024-04-26 08:22:04,429] TRACE [Broker id=1] Completed LeaderAndIsr request 
correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-22 (state.change.logger) kafka | [2024-04-26 08:22:04,429] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-29 (state.change.logger) kafka | [2024-04-26 08:22:04,429] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-44 (state.change.logger) kafka | [2024-04-26 08:22:04,429] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-14 (state.change.logger) kafka | [2024-04-26 08:22:04,429] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-23 (state.change.logger) kafka | [2024-04-26 08:22:04,429] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-38 (state.change.logger) kafka | [2024-04-26 08:22:04,429] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-8 (state.change.logger) kafka | [2024-04-26 08:22:04,429] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-45 (state.change.logger) kafka | [2024-04-26 08:22:04,429] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-15 (state.change.logger) kafka | [2024-04-26 08:22:04,429] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 
epoch 1 for the become-leader transition for partition __consumer_offsets-30 (state.change.logger) kafka | [2024-04-26 08:22:04,429] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-0 (state.change.logger) kafka | [2024-04-26 08:22:04,430] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-35 (state.change.logger) kafka | [2024-04-26 08:22:04,430] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-5 (state.change.logger) kafka | [2024-04-26 08:22:04,430] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-20 (state.change.logger) kafka | [2024-04-26 08:22:04,430] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-27 (state.change.logger) kafka | [2024-04-26 08:22:04,430] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-42 (state.change.logger) kafka | [2024-04-26 08:22:04,430] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-12 (state.change.logger) kafka | [2024-04-26 08:22:04,430] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-21 (state.change.logger) kafka | [2024-04-26 08:22:04,430] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader 
transition for partition __consumer_offsets-36 (state.change.logger) kafka | [2024-04-26 08:22:04,430] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-6 (state.change.logger) kafka | [2024-04-26 08:22:04,430] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-43 (state.change.logger) kafka | [2024-04-26 08:22:04,430] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-13 (state.change.logger) kafka | [2024-04-26 08:22:04,431] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-28 (state.change.logger) kafka | [2024-04-26 08:22:04,432] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 3 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-04-26 08:22:04,434] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-3 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-04-26 08:22:04,435] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 18 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-04-26 08:22:04,435] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-18 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-04-26 08:22:04,436] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 41 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-04-26 08:22:04,436] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-41 
for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-04-26 08:22:04,436] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 10 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-04-26 08:22:04,436] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-10 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-04-26 08:22:04,436] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 33 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-04-26 08:22:04,436] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-33 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-04-26 08:22:04,436] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 48 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-04-26 08:22:04,436] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-48 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-04-26 08:22:04,436] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 19 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-04-26 08:22:04,436] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-19 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-04-26 08:22:04,436] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 34 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-04-26 08:22:04,436] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-34 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-04-26 08:22:04,436] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 4 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-04-26 08:22:04,436] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-4 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-04-26 08:22:04,436] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 11 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-04-26 08:22:04,436] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-11 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-04-26 08:22:04,437] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 26 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-04-26 08:22:04,437] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-26 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-04-26 08:22:04,437] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 49 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-04-26 08:22:04,437] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-49 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-04-26 08:22:04,437] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 39 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-04-26 08:22:04,437] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-39 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-04-26 08:22:04,437] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 9 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-04-26 08:22:04,437] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-9 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-04-26 08:22:04,437] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 24 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-04-26 08:22:04,437] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-24 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-04-26 08:22:04,437] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 31 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-04-26 08:22:04,437] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-31 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-04-26 08:22:04,437] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 46 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-04-26 08:22:04,437] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-46 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-04-26 08:22:04,437] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 1 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-04-26 08:22:04,438] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-1 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-04-26 08:22:04,438] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 16 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-04-26 08:22:04,438] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-16 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-04-26 08:22:04,438] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 2 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-04-26 08:22:04,438] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-2 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-04-26 08:22:04,438] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 25 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-04-26 08:22:04,438] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-25 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-04-26 08:22:04,438] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 40 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-04-26 08:22:04,438] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-40 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-04-26 08:22:04,438] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 47 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-04-26 08:22:04,438] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-47 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-04-26 08:22:04,438] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 17 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-04-26 08:22:04,438] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-17 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-04-26 08:22:04,438] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 32 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-04-26 08:22:04,439] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-32 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-04-26 08:22:04,439] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 37 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-04-26 08:22:04,439] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-37 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-04-26 08:22:04,439] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 7 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-04-26 08:22:04,439] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-7 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-04-26 08:22:04,439] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 22 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-04-26 08:22:04,439] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-22 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-04-26 08:22:04,439] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 29 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-04-26 08:22:04,439] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-29 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-04-26 08:22:04,439] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 44 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-04-26 08:22:04,439] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-44 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-04-26 08:22:04,439] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 14 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-04-26 08:22:04,439] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-14 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-04-26 08:22:04,439] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 23 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-04-26 08:22:04,439] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-23 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-04-26 08:22:04,440] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 38 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-04-26 08:22:04,440] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-38 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-04-26 08:22:04,440] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 8 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-04-26 08:22:04,440] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-8 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-04-26 08:22:04,440] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 45 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-04-26 08:22:04,440] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-45 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-04-26 08:22:04,440] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 15 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-04-26 08:22:04,440] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-15 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-04-26 08:22:04,440] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 30 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-04-26 08:22:04,440] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-30 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-04-26 08:22:04,440] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 0 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-04-26 08:22:04,440] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-0 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-04-26 08:22:04,440] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 35 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-04-26 08:22:04,440] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-35 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-04-26 08:22:04,440] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 5 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-04-26 08:22:04,441] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-5 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-04-26 08:22:04,441] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 20 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-04-26 08:22:04,441] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-20 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-04-26 08:22:04,441] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-3 in 6 milliseconds for epoch 0, of which 2 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-04-26 08:22:04,441] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 27 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-04-26 08:22:04,441] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-27 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-04-26 08:22:04,441] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-18 in 6 milliseconds for epoch 0, of which 6 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-04-26 08:22:04,441] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 42 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-04-26 08:22:04,442] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-42 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-04-26 08:22:04,442] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-41 in 6 milliseconds for epoch 0, of which 5 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-04-26 08:22:04,442] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 12 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-04-26 08:22:04,442] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-10 in 6 milliseconds for epoch 0, of which 6 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-04-26 08:22:04,442] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-12 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-04-26 08:22:04,442] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-33 in 6 milliseconds for epoch 0, of which 6 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-04-26 08:22:04,442] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 21 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-04-26 08:22:04,442] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-48 in 6 milliseconds for epoch 0, of which 6 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-04-26 08:22:04,442] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-21 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-04-26 08:22:04,442] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-19 in 6 milliseconds for epoch 0, of which 6 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-04-26 08:22:04,442] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 36 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-04-26 08:22:04,442] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-34 in 6 milliseconds for epoch 0, of which 6 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-04-26 08:22:04,442] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-36 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-04-26 08:22:04,442] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-4 in 6 milliseconds for epoch 0, of which 6 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-04-26 08:22:04,443] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 6 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-04-26 08:22:04,443] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-11 in 6 milliseconds for epoch 0, of which 6 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-04-26 08:22:04,443] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-6 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-04-26 08:22:04,443] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-26 in 6 milliseconds for epoch 0, of which 6 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-04-26 08:22:04,443] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 43 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-04-26 08:22:04,443] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-49 in 6 milliseconds for epoch 0, of which 6 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-04-26 08:22:04,443] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-43 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-04-26 08:22:04,443] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-39 in 6 milliseconds for epoch 0, of which 6 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-04-26 08:22:04,443] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 13 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-04-26 08:22:04,443] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-9 in 6 milliseconds for epoch 0, of which 6 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-04-26 08:22:04,443] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-13 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-04-26 08:22:04,443] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-24 in 6 milliseconds for epoch 0, of which 6 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-04-26 08:22:04,443] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 28 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-04-26 08:22:04,443] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-31 in 6 milliseconds for epoch 0, of which 6 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-04-26 08:22:04,444] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-28 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-04-26 08:22:04,444] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-46 in 7 milliseconds for epoch 0, of which 7 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-04-26 08:22:04,444] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-1 in 6 milliseconds for epoch 0, of which 6 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-04-26 08:22:04,444] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-16 in 6 milliseconds for epoch 0, of which 6 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-04-26 08:22:04,444] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-2 in 6 milliseconds for epoch 0, of which 6 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-04-26 08:22:04,444] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-25 in 6 milliseconds for epoch 0, of which 6 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-04-26 08:22:04,444] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-40 in 6 milliseconds for epoch 0, of which 6 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-04-26 08:22:04,444] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-47 in 6 milliseconds for epoch 0, of which 6 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-04-26 08:22:04,445] INFO [Broker id=1] Finished LeaderAndIsr request in 696ms correlationId 3 from controller 1 for 50 partitions (state.change.logger)
kafka | [2024-04-26 08:22:04,445] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-17 in 7 milliseconds for epoch 0, of which 7 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-04-26 08:22:04,445] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-32 in 6 milliseconds for epoch 0, of which 6 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-04-26 08:22:04,446] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-37 in 7 milliseconds for epoch 0, of which 6 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-04-26 08:22:04,446] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-7 in 7 milliseconds for epoch 0, of which 7 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-04-26 08:22:04,446] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-22 in 7 milliseconds for epoch 0, of which 7 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-04-26 08:22:04,446] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-29 in 7 milliseconds for epoch 0, of which 7 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-04-26 08:22:04,446] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-44 in 7 milliseconds for epoch 0, of which 7 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-04-26 08:22:04,446] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-14 in 7 milliseconds for epoch 0, of which 7 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-04-26 08:22:04,446] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-23 in 6 milliseconds for epoch 0, of which 6 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-04-26 08:22:04,446] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-38 in 6 milliseconds for epoch 0, of which 6 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-04-26 08:22:04,446] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-8 in 6 milliseconds for epoch 0, of which 6 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-04-26 08:22:04,447] TRACE [Controller id=1 epoch=1] Received response LeaderAndIsrResponseData(errorCode=0, partitionErrors=[], topics=[LeaderAndIsrTopicError(topicId=RfiyP89qRi-5ZTNhftzAtg, partitionErrors=[LeaderAndIsrPartitionError(topicName='', partitionIndex=13, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=46, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=9, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=42, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=21, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=17, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=30, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=26, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=5, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=38, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=1, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=34, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=16, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=45, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=12, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=41, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=24, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=20, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=49, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=0, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=29, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=25, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=8, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=37, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=4, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=33, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=15, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=48, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=11, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=44, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=23, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=19, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=32, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=28, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=7, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=40, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=3, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=36, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=47, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=14, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=43, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=10, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=22, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=18, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=31, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=27, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=39, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=6, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=35, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=2, errorCode=0)])]) for request LEADER_AND_ISR with correlation id 3 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger)
kafka | [2024-04-26 08:22:04,449] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=13, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-13 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
kafka | [2024-04-26 08:22:04,449] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=46, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-46 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
kafka | [2024-04-26 08:22:04,449] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=9, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-9 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
kafka | [2024-04-26 08:22:04,449] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=42, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-42 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
kafka | [2024-04-26 08:22:04,449] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=21, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-21 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
kafka | [2024-04-26 08:22:04,449] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=17, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-17 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
kafka | [2024-04-26 08:22:04,449] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=30, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-30 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
kafka | [2024-04-26 08:22:04,449] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=26, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-26 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
kafka | [2024-04-26 08:22:04,449] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=5, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-5 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
kafka | [2024-04-26 08:22:04,449] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=38, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-38 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
kafka | [2024-04-26 08:22:04,449] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=1, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-1 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
kafka | [2024-04-26 08:22:04,449] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=34, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-34 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
kafka | [2024-04-26 08:22:04,449] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=16, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-16 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
kafka | [2024-04-26 08:22:04,449] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=45, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-45 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
kafka | [2024-04-26 08:22:04,449] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=12, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-12 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
kafka | [2024-04-26 08:22:04,449] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=41, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-41 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
kafka | [2024-04-26 08:22:04,449] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=24, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-24 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
kafka | [2024-04-26 08:22:04,449] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=20, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-20 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
kafka | [2024-04-26 08:22:04,449] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=49, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-49 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
kafka | [2024-04-26
08:22:04,449] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-0 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2024-04-26 08:22:04,449] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=29, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-29 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2024-04-26 08:22:04,449] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=25, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-25 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2024-04-26 08:22:04,450] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=8, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-8 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2024-04-26 08:22:04,450] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=37, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-37 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2024-04-26 
08:22:04,450] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=4, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-4 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2024-04-26 08:22:04,450] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=33, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-33 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2024-04-26 08:22:04,450] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=15, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-15 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2024-04-26 08:22:04,450] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=48, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-48 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2024-04-26 08:22:04,450] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=11, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-11 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2024-04-26 
08:22:04,450] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=44, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-44 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2024-04-26 08:22:04,450] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=23, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-23 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2024-04-26 08:22:04,450] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=19, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-19 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2024-04-26 08:22:04,450] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=32, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-32 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2024-04-26 08:22:04,450] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=28, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-28 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2024-04-26 
08:22:04,450] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=7, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-7 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2024-04-26 08:22:04,450] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=40, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-40 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2024-04-26 08:22:04,450] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=3, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-3 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2024-04-26 08:22:04,450] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=36, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-36 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2024-04-26 08:22:04,450] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=47, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-47 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2024-04-26 
08:22:04,450] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=14, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-14 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2024-04-26 08:22:04,450] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=43, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-43 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2024-04-26 08:22:04,450] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=10, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-10 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2024-04-26 08:22:04,450] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=22, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-22 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2024-04-26 08:22:04,450] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=18, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-18 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2024-04-26 
08:22:04,450] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=31, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-31 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2024-04-26 08:22:04,450] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=27, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-27 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2024-04-26 08:22:04,450] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=39, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-39 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2024-04-26 08:22:04,450] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=6, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-6 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2024-04-26 08:22:04,450] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=35, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-35 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2024-04-26 
08:22:04,450] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=2, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-2 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2024-04-26 08:22:04,451] INFO [Broker id=1] Add 50 partitions and deleted 0 partitions from metadata cache in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2024-04-26 08:22:04,451] TRACE [Controller id=1 epoch=1] Received response UpdateMetadataResponseData(errorCode=0) for request UPDATE_METADATA with correlation id 4 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger) kafka | [2024-04-26 08:22:04,447] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-45 in 7 milliseconds for epoch 0, of which 6 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-04-26 08:22:04,453] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-15 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-04-26 08:22:04,453] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-30 in 13 milliseconds for epoch 0, of which 13 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-04-26 08:22:04,453] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-0 in 13 milliseconds for epoch 0, of which 13 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) kafka | [2024-04-26 08:22:04,453] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-35 in 13 milliseconds for epoch 0, of which 13 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-04-26 08:22:04,453] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-5 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-04-26 08:22:04,453] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-20 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-04-26 08:22:04,453] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-27 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-04-26 08:22:04,453] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-42 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-04-26 08:22:04,453] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-12 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-04-26 08:22:04,453] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-21 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) kafka | [2024-04-26 08:22:04,453] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-36 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-04-26 08:22:04,453] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-6 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-04-26 08:22:04,454] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-43 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-04-26 08:22:04,454] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-13 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-04-26 08:22:04,454] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-28 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-04-26 08:22:04,481] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group policy-pap in Empty state. Created a new member id consumer-policy-pap-4-7cd46dae-964c-49b4-95ec-bd835d00b3b4 and request the member to rejoin with this id. 
(kafka.coordinator.group.GroupCoordinator) kafka | [2024-04-26 08:22:04,495] INFO [GroupCoordinator 1]: Preparing to rebalance group policy-pap in state PreparingRebalance with old generation 0 (__consumer_offsets-24) (reason: Adding new member consumer-policy-pap-4-7cd46dae-964c-49b4-95ec-bd835d00b3b4 with group instance id None; client reason: rebalance failed due to MemberIdRequiredException) (kafka.coordinator.group.GroupCoordinator) kafka | [2024-04-26 08:22:04,556] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group db954cd2-8764-4a44-90af-3bb7f2069f83 in Empty state. Created a new member id consumer-db954cd2-8764-4a44-90af-3bb7f2069f83-3-7fda7423-43b4-4275-9f43-73657387fac9 and request the member to rejoin with this id. (kafka.coordinator.group.GroupCoordinator) kafka | [2024-04-26 08:22:04,559] INFO [GroupCoordinator 1]: Preparing to rebalance group db954cd2-8764-4a44-90af-3bb7f2069f83 in state PreparingRebalance with old generation 0 (__consumer_offsets-21) (reason: Adding new member consumer-db954cd2-8764-4a44-90af-3bb7f2069f83-3-7fda7423-43b4-4275-9f43-73657387fac9 with group instance id None; client reason: rebalance failed due to MemberIdRequiredException) (kafka.coordinator.group.GroupCoordinator) kafka | [2024-04-26 08:22:05,034] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group 385d2de3-e329-4c2e-8254-58c110e4f277 in Empty state. Created a new member id consumer-385d2de3-e329-4c2e-8254-58c110e4f277-2-623aa870-0e4f-4435-b6a7-fae0c0299f99 and request the member to rejoin with this id. 
(kafka.coordinator.group.GroupCoordinator) kafka | [2024-04-26 08:22:05,037] INFO [GroupCoordinator 1]: Preparing to rebalance group 385d2de3-e329-4c2e-8254-58c110e4f277 in state PreparingRebalance with old generation 0 (__consumer_offsets-27) (reason: Adding new member consumer-385d2de3-e329-4c2e-8254-58c110e4f277-2-623aa870-0e4f-4435-b6a7-fae0c0299f99 with group instance id None; client reason: rebalance failed due to MemberIdRequiredException) (kafka.coordinator.group.GroupCoordinator) kafka | [2024-04-26 08:22:07,509] INFO [GroupCoordinator 1]: Stabilized group policy-pap generation 1 (__consumer_offsets-24) with 1 members (kafka.coordinator.group.GroupCoordinator) kafka | [2024-04-26 08:22:07,529] INFO [GroupCoordinator 1]: Assignment received from leader consumer-policy-pap-4-7cd46dae-964c-49b4-95ec-bd835d00b3b4 for group policy-pap for generation 1. The group has 1 members, 0 of which are static. (kafka.coordinator.group.GroupCoordinator) kafka | [2024-04-26 08:22:07,560] INFO [GroupCoordinator 1]: Stabilized group db954cd2-8764-4a44-90af-3bb7f2069f83 generation 1 (__consumer_offsets-21) with 1 members (kafka.coordinator.group.GroupCoordinator) kafka | [2024-04-26 08:22:07,566] INFO [GroupCoordinator 1]: Assignment received from leader consumer-db954cd2-8764-4a44-90af-3bb7f2069f83-3-7fda7423-43b4-4275-9f43-73657387fac9 for group db954cd2-8764-4a44-90af-3bb7f2069f83 for generation 1. The group has 1 members, 0 of which are static. (kafka.coordinator.group.GroupCoordinator) kafka | [2024-04-26 08:22:08,038] INFO [GroupCoordinator 1]: Stabilized group 385d2de3-e329-4c2e-8254-58c110e4f277 generation 1 (__consumer_offsets-27) with 1 members (kafka.coordinator.group.GroupCoordinator) kafka | [2024-04-26 08:22:08,053] INFO [GroupCoordinator 1]: Assignment received from leader consumer-385d2de3-e329-4c2e-8254-58c110e4f277-2-623aa870-0e4f-4435-b6a7-fae0c0299f99 for group 385d2de3-e329-4c2e-8254-58c110e4f277 for generation 1. 
The group has 1 members, 0 of which are static. (kafka.coordinator.group.GroupCoordinator) ++ echo 'Tearing down containers...' Tearing down containers... ++ docker-compose down -v --remove-orphans Stopping policy-apex-pdp ... Stopping grafana ... Stopping policy-pap ... Stopping kafka ... Stopping policy-api ... Stopping zookeeper ... Stopping simulator ... Stopping prometheus ... Stopping mariadb ... Stopping grafana ... done Stopping prometheus ... done Stopping policy-apex-pdp ... done Stopping simulator ... done Stopping policy-pap ... done Stopping mariadb ... done Stopping kafka ... done Stopping zookeeper ... done Stopping policy-api ... done Removing policy-apex-pdp ... Removing grafana ... Removing policy-pap ... Removing kafka ... Removing policy-api ... Removing policy-db-migrator ... Removing zookeeper ... Removing simulator ... Removing prometheus ... Removing mariadb ... Removing grafana ... done Removing mariadb ... done Removing simulator ... done Removing policy-api ... done Removing policy-apex-pdp ... done Removing policy-db-migrator ... done Removing kafka ... done Removing prometheus ... done Removing zookeeper ... done Removing policy-pap ... 
done Removing network compose_default ++ cd /w/workspace/policy-pap-master-project-csit-pap + load_set + _setopts=hxB ++ echo braceexpand:hashall:interactive-comments:xtrace ++ tr : ' ' + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') + set +o braceexpand + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') + set +o hashall + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') + set +o interactive-comments + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') + set +o xtrace ++ echo hxB ++ sed 's/./& /g' + for i in $(echo "$_setopts" | sed 's/./& /g') + set +h + for i in $(echo "$_setopts" | sed 's/./& /g') + set +x + rsync /w/workspace/policy-pap-master-project-csit-pap/compose/docker_compose.log /w/workspace/policy-pap-master-project-csit-pap/csit/archives/pap + [[ -n /tmp/tmp.2rpNlazw2W ]] + rsync -av /tmp/tmp.2rpNlazw2W/ /w/workspace/policy-pap-master-project-csit-pap/csit/archives/pap sending incremental file list ./ log.html output.xml report.html testplan.txt sent 918,526 bytes received 95 bytes 1,837,242.00 bytes/sec total size is 917,984 speedup is 1.00 + rm -rf /w/workspace/policy-pap-master-project-csit-pap/models + exit 1 Build step 'Execute shell' marked build as failure $ ssh-agent -k unset SSH_AUTH_SOCK; unset SSH_AGENT_PID; echo Agent pid 2142 killed; [ssh-agent] Stopped. Robot results publisher started... INFO: Checking test criticality is deprecated and will be dropped in a future release! -Parsing output xml: Done! WARNING! Could not find file: **/log.html WARNING! Could not find file: **/report.html -Copying log files to build dir: Done! -Assigning results to build: Done! -Checking thresholds: Done! Done publishing Robot results. [PostBuildScript] - [INFO] Executing post build scripts. 
[policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins4067026727382834406.sh ---> sysstat.sh [policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins17191936694961827262.sh ---> package-listing.sh ++ facter osfamily ++ tr '[:upper:]' '[:lower:]' + OS_FAMILY=debian + workspace=/w/workspace/policy-pap-master-project-csit-pap + START_PACKAGES=/tmp/packages_start.txt + END_PACKAGES=/tmp/packages_end.txt + DIFF_PACKAGES=/tmp/packages_diff.txt + PACKAGES=/tmp/packages_start.txt + '[' /w/workspace/policy-pap-master-project-csit-pap ']' + PACKAGES=/tmp/packages_end.txt + case "${OS_FAMILY}" in + dpkg -l + grep '^ii' + '[' -f /tmp/packages_start.txt ']' + '[' -f /tmp/packages_end.txt ']' + diff /tmp/packages_start.txt /tmp/packages_end.txt + '[' /w/workspace/policy-pap-master-project-csit-pap ']' + mkdir -p /w/workspace/policy-pap-master-project-csit-pap/archives/ + cp -f /tmp/packages_diff.txt /tmp/packages_end.txt /tmp/packages_start.txt /w/workspace/policy-pap-master-project-csit-pap/archives/ [policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins8889886100071594191.sh ---> capture-instance-metadata.sh Setup pyenv: system 3.8.13 3.9.13 * 3.10.6 (set by /w/workspace/policy-pap-master-project-csit-pap/.python-version) lf-activate-venv(): INFO: Reuse venv:/tmp/venv-WerH from file:/tmp/.os_lf_venv lf-activate-venv(): INFO: Installing: lftools lf-activate-venv(): INFO: Adding /tmp/venv-WerH/bin to PATH INFO: Running in OpenStack, capturing instance metadata [policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins18403376440070541822.sh provisioning config files... copy managed file [jenkins-log-archives-settings] to file:/w/workspace/policy-pap-master-project-csit-pap@tmp/config3618601703336951193tmp Regular expression run condition: Expression=[^.*logs-s3.*], Label=[] Run condition [Regular expression match] preventing perform for step [Provide Configuration files] [EnvInject] - Injecting environment variables from a build step. 
[EnvInject] - Injecting as environment variables the properties content SERVER_ID=logs [EnvInject] - Variables injected successfully. [policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins4329657085612354107.sh ---> create-netrc.sh [policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins15167466917401396788.sh ---> python-tools-install.sh Setup pyenv: system 3.8.13 3.9.13 * 3.10.6 (set by /w/workspace/policy-pap-master-project-csit-pap/.python-version) lf-activate-venv(): INFO: Reuse venv:/tmp/venv-WerH from file:/tmp/.os_lf_venv lf-activate-venv(): INFO: Installing: lftools lf-activate-venv(): INFO: Adding /tmp/venv-WerH/bin to PATH [policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins14339797825566615824.sh ---> sudo-logs.sh Archiving 'sudo' log.. [policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins8218995366673184191.sh ---> job-cost.sh Setup pyenv: system 3.8.13 3.9.13 * 3.10.6 (set by /w/workspace/policy-pap-master-project-csit-pap/.python-version) lf-activate-venv(): INFO: Reuse venv:/tmp/venv-WerH from file:/tmp/.os_lf_venv lf-activate-venv(): INFO: Installing: zipp==1.1.0 python-openstackclient urllib3~=1.26.15 lf-activate-venv(): INFO: Adding /tmp/venv-WerH/bin to PATH INFO: No Stack... INFO: Retrieving Pricing Info for: v3-standard-8 INFO: Archiving Costs [policy-pap-master-project-csit-pap] $ /bin/bash -l /tmp/jenkins154775296268774014.sh ---> logs-deploy.sh Setup pyenv: system 3.8.13 3.9.13 * 3.10.6 (set by /w/workspace/policy-pap-master-project-csit-pap/.python-version) lf-activate-venv(): INFO: Reuse venv:/tmp/venv-WerH from file:/tmp/.os_lf_venv lf-activate-venv(): INFO: Installing: lftools lf-activate-venv(): INFO: Adding /tmp/venv-WerH/bin to PATH INFO: Nexus URL https://nexus.onap.org path production/vex-yul-ecomp-jenkins-1/policy-pap-master-project-csit-pap/1665 INFO: archiving workspace using pattern(s): -p **/target/surefire-reports/*-output.txt Archives upload complete. 
INFO: archiving logs to Nexus

---> uname -a:
Linux prd-ubuntu1804-docker-8c-8g-35271 4.15.0-192-generic #203-Ubuntu SMP Wed Aug 10 17:40:03 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux

---> lscpu:
Architecture:         x86_64
CPU op-mode(s):       32-bit, 64-bit
Byte Order:           Little Endian
CPU(s):               8
On-line CPU(s) list:  0-7
Thread(s) per core:   1
Core(s) per socket:   1
Socket(s):            8
NUMA node(s):         1
Vendor ID:            AuthenticAMD
CPU family:           23
Model:                49
Model name:           AMD EPYC-Rome Processor
Stepping:             0
CPU MHz:              2799.996
BogoMIPS:             5599.99
Virtualization:       AMD-V
Hypervisor vendor:    KVM
Virtualization type:  full
L1d cache:            32K
L1i cache:            32K
L2 cache:             512K
L3 cache:             16384K
NUMA node0 CPU(s):    0-7
Flags:                fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm rep_good nopl xtopology cpuid extd_apicid tsc_known_freq pni pclmulqdq ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm cmp_legacy svm cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw topoext perfctr_core ssbd ibrs ibpb stibp vmmcall fsgsbase tsc_adjust bmi1 avx2 smep bmi2 rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves clzero xsaveerptr arat npt nrip_save umip rdpid arch_capabilities

---> nproc:
8

---> df -h:
Filesystem      Size  Used Avail Use% Mounted on
udev             16G     0   16G   0% /dev
tmpfs           3.2G  708K  3.2G   1% /run
/dev/vda1       155G   14G  142G   9% /
tmpfs            16G     0   16G   0% /dev/shm
tmpfs           5.0M     0  5.0M   0% /run/lock
tmpfs            16G     0   16G   0% /sys/fs/cgroup
/dev/vda15      105M  4.4M  100M   5% /boot/efi
tmpfs           3.2G     0  3.2G   0% /run/user/1001

---> free -m:
              total        used        free      shared  buff/cache   available
Mem:          32167         851       25369           0        5945       30859
Swap:          1023           0        1023

---> ip addr:
1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: ens3: mtu 1458 qdisc mq state UP group default qlen 1000
    link/ether fa:16:3e:6b:82:b2 brd ff:ff:ff:ff:ff:ff
    inet 10.30.106.106/23 brd 10.30.107.255 scope global dynamic ens3
       valid_lft 85945sec preferred_lft 85945sec
    inet6 fe80::f816:3eff:fe6b:82b2/64 scope link
       valid_lft forever preferred_lft forever
3: docker0: mtu 1500 qdisc noqueue state DOWN group default
    link/ether 02:42:3c:48:d6:94 brd ff:ff:ff:ff:ff:ff
    inet 10.250.0.254/24 brd 10.250.0.255 scope global docker0
       valid_lft forever preferred_lft forever

---> sar -b -r -n DEV:
Linux 4.15.0-192-generic (prd-ubuntu1804-docker-8c-8g-35271)  04/26/24  _x86_64_  (8 CPU)

08:17:44     LINUX RESTART  (8 CPU)

08:18:01          tps      rtps      wtps   bread/s   bwrtn/s
08:19:01       167.22     87.19     80.04   6257.49  50650.22
08:20:01        99.30     13.93     85.37   1141.14  19361.04
08:21:01       169.42      9.83    159.59   1740.91  49454.56
08:22:01       427.80     13.26    414.53    788.94 106560.51
08:23:01        22.85      0.25     22.60     12.53   9963.51
08:24:01        10.68      0.02     10.66      1.60   9394.93
08:25:01        75.55      1.40     74.15    107.45  13016.13
Average:       138.97     17.98    120.99   1435.72  36914.41

08:18:01    kbmemfree   kbavail kbmemused  %memused kbbuffers  kbcached  kbcommit   %commit  kbactive   kbinact   kbdirty
08:19:01     30157872  31666680   2781348      8.44     58784   1766956   1483968      4.37    916924   1587080    128512
08:20:01     29849028  31687596   3090192      9.38     84816   2050908   1424120      4.19    889912   1877312    179256
08:21:01     26014524  31638512   6924696     21.02    135796   5635000   1583232      4.66   1046852   5370456   2652484
08:22:01     24186888  29957804   8752332     26.57    154516   5734704   8217276     24.18   2891412   5270172       636
08:23:01     23929912  29706352   9009308     27.35    156044   5736476   8644824     25.44   3160748   5250660       488
08:24:01     23874624  29679028   9064596     27.52    156164   5763484   8730212     25.69   3198584   5266236     26024
08:25:01     25985136  31604912   6954084     21.11    157864   5596508   1534524      4.51   1311452   5108832      1484
Average:     26285426  30848698   6653794     20.20    129141   4612005   4516879     13.29   1916555   4247250    426983

08:18:01            IFACE   rxpck/s   txpck/s    rxkB/s    txkB/s   rxcmp/s   txcmp/s  rxmcst/s   %ifutil
08:19:01          docker0      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
08:19:01               lo      1.87      1.87      0.19      0.19      0.00      0.00      0.00      0.00
08:19:01             ens3    393.52    257.97   1499.45     60.62      0.00      0.00      0.00      0.00
08:20:01          docker0      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
08:20:01               lo      1.60      1.60      0.17      0.17      0.00      0.00      0.00      0.00
08:20:01             ens3     51.36     36.63    713.26      8.38      0.00      0.00      0.00      0.00
08:21:01          docker0      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
08:21:01               lo     12.60     12.60      1.22      1.22      0.00      0.00      0.00      0.00
08:21:01             ens3   1022.73    535.39  29597.63     39.46      0.00      0.00      0.00      0.00
08:21:01  br-5ea65bf9defe      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
08:22:01      vethd372ab8      0.70      0.87      0.05      0.05      0.00      0.00      0.00      0.00
08:22:01      veth2fa051b      0.00      0.30      0.00      0.02      0.00      0.00      0.00      0.00
08:22:01      veth881873e      0.15      0.45      0.01      0.02      0.00      0.00      0.00      0.00
08:22:01      vethce41c7a      2.33      2.32      0.19      0.19      0.00      0.00      0.00      0.00
08:23:01      vethd372ab8      4.07      5.35      0.81      0.53      0.00      0.00      0.00      0.00
08:23:01      veth2fa051b      0.00      0.05      0.00      0.00      0.00      0.00      0.00      0.00
08:23:01      veth881873e      0.53      0.52      0.05      1.48      0.00      0.00      0.00      0.00
08:23:01      vethce41c7a     48.86     44.13     15.27     39.27      0.00      0.00      0.00      0.00
08:24:01      vethd372ab8      3.18      4.67      0.66      0.36      0.00      0.00      0.00      0.00
08:24:01      veth2fa051b      0.00      0.03      0.00      0.00      0.00      0.00      0.00      0.00
08:24:01      veth881873e      1.03      1.28      0.12      1.61      0.00      0.00      0.00      0.00
08:24:01      vethce41c7a      6.98      9.95      1.64      0.75      0.00      0.00      0.00      0.00
08:25:01          docker0      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
08:25:01               lo     34.73     34.73      6.21      6.21      0.00      0.00      0.00      0.00
08:25:01             ens3   1572.70    902.37  31896.71    147.51      0.00      0.00      0.00      0.00
Average:          docker0      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
Average:               lo      4.53      4.53      0.85      0.85      0.00      0.00      0.00      0.00
Average:             ens3    223.82    128.03   4555.90     20.97      0.00      0.00      0.00      0.00

---> sar -P ALL:
Linux 4.15.0-192-generic (prd-ubuntu1804-docker-8c-8g-35271)  04/26/24  _x86_64_  (8 CPU)

08:17:44     LINUX RESTART  (8 CPU)

08:18:01     CPU    %user    %nice  %system  %iowait   %steal    %idle
08:19:01     all     9.58     0.00     1.32     3.57     0.04    85.49
08:19:01       0     4.96     0.00     1.00     0.80     0.03    93.20
08:19:01       1     2.92     0.00     0.75     0.27     0.07    96.00
08:19:01       2     6.06     0.00     0.88     0.77     0.03    92.25
08:19:01       3     9.74     0.00     1.44     1.32     0.03    87.47
08:19:01       4    11.17     0.00     1.34     2.62     0.02    84.85
08:19:01       5    25.90     0.00     1.79     2.35     0.05    69.91
08:19:01       6    11.82     0.00     2.38     0.37     0.03    85.39
08:19:01       7     4.07     0.00     0.99    20.10     0.05    74.80
08:20:01     all    10.57     0.00     0.67     1.82     0.03    86.91
08:20:01       0     7.91     0.00     0.79     0.35     0.02    90.93
08:20:01       1    13.19     0.00     0.65     1.87     0.03    84.25
08:20:01       2     5.22     0.00     0.48     0.15     0.02    94.13
08:20:01       3     5.25     0.00     0.22     0.43     0.02    94.08
08:20:01       4    10.86     0.00     0.87     2.45     0.03    85.78
08:20:01       5    29.86     0.00     1.40     1.13     0.05    67.56
08:20:01       6     5.95     0.00     0.33     0.00     0.02    93.70
08:20:01       7     6.38     0.00     0.58     8.18     0.07    84.79
08:21:01     all    13.43     0.00     5.62     4.48     0.08    76.39
08:21:01       0    14.49     0.00     4.61     3.55     0.10    77.24
08:21:01       1    15.18     0.00     4.77     4.88     0.09    75.09
08:21:01       2    13.30     0.00     5.73     2.29     0.10    78.58
08:21:01       3    12.64     0.00     5.84     1.54     0.05    79.93
08:21:01       4    16.86     0.00     6.18    11.55     0.07    65.34
08:21:01       5    14.48     0.00     5.36     7.61     0.07    72.48
08:21:01       6    11.37     0.00     5.97     0.17     0.07    82.42
08:21:01       7     9.13     0.00     6.45     4.28     0.07    80.08
08:22:01     all    22.69     0.00     3.85     7.71     0.08    65.67
08:22:01       0    26.77     0.00     3.78     7.64     0.08    61.73
08:22:01       1    11.39     0.00     3.56     3.47     0.07    81.52
08:22:01       2    19.09     0.00     3.27     4.97     0.10    72.57
08:22:01       3    23.96     0.00     4.98    38.13     0.10    32.83
08:22:01       4    26.80     0.00     3.86     2.92     0.08    66.33
08:22:01       5    19.36     0.00     3.74     1.84     0.08    74.97
08:22:01       6    31.47     0.00     4.62     1.30     0.08    62.53
08:22:01       7    22.71     0.00     3.01     1.63     0.08    72.57
08:23:01     all    10.05     0.00     0.98     0.60     0.06    88.31
08:23:01       0    10.43     0.00     1.09     0.00     0.03    88.45
08:23:01       1    10.05     0.00     1.03     0.03     0.05    88.84
08:23:01       2     9.69     0.00     0.72     0.00     0.07    89.52
08:23:01       3     9.79     0.00     0.90     4.64     0.07    84.60
08:23:01       4    10.29     0.00     0.87     0.02     0.08    88.74
08:23:01       5     9.72     0.00     1.04     0.00     0.07    89.18
08:23:01       6    11.21     0.00     1.20     0.02     0.05    87.52
08:23:01       7     9.19     0.00     0.99     0.07     0.08    89.67
08:24:01     all     1.01     0.00     0.26     0.63     0.04    98.06
08:24:01       0     1.04     0.00     0.32     0.00     0.07    98.58
08:24:01       1     1.35     0.00     0.28     0.05     0.03    98.28
08:24:01       2     1.40     0.00     0.32     0.02     0.03    98.23
08:24:01       3     0.75     0.00     0.22     4.77     0.03    94.23
08:24:01       4     1.84     0.00     0.27     0.10     0.05    97.74
08:24:01       5     0.40     0.00     0.22     0.07     0.03    99.28
08:24:01       6     0.78     0.00     0.23     0.00     0.03    98.95
08:24:01       7     0.48     0.00     0.25     0.03     0.03    99.20
08:25:01     all     5.78     0.00     0.69     0.90     0.04    92.60
08:25:01       0    16.91     0.00     1.12     0.45     0.03    81.49
08:25:01       1     5.12     0.00     0.53     0.15     0.05    94.14
08:25:01       2     3.69     0.00     0.57     1.29     0.03    94.43
08:25:01       3    13.72     0.00     0.75     4.45     0.05    81.03
08:25:01       4     1.12     0.00     0.55     0.25     0.03    98.04
08:25:01       5     1.21     0.00     0.67     0.20     0.03    97.89
08:25:01       6     3.62     0.00     0.73     0.27     0.03    95.34
08:25:01       7     0.80     0.00     0.62     0.13     0.03    98.41
Average:     all    10.42     0.00     1.90     2.81     0.05    84.81
Average:       0    11.77     0.00     1.81     1.82     0.05    84.55
Average:       1     8.43     0.00     1.64     1.52     0.06    88.35
Average:       2     8.33     0.00     1.70     1.35     0.06    88.57
Average:       3    10.80     0.00     2.04     7.85     0.05    79.26
Average:       4    11.26     0.00     1.98     2.82     0.05    83.89
Average:       5    14.42     0.00     2.02     1.87     0.06    81.63
Average:       6    10.86     0.00     2.20     0.30     0.05    86.59
Average:       7     7.52     0.00     1.83     4.92     0.06    85.68
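When scanning a `sar -P ALL` dump like the one above, the aggregate `all` rows are usually the quickest health check: here the 08:22:01 interval (CSIT container startup) is the only one where idle drops toward 65%. A small awk filter can pull those rows out automatically; a sketch assuming the whitespace-separated layout shown (timestamp, CPU id, %user, %nice, %system, %iowait, %steal, %idle), with the 70% threshold chosen arbitrarily for illustration:

```shell
# Print the %idle value of each aggregate "all" row of a `sar -P ALL`-style
# dump, marking intervals where idle falls below 70%. The "Average: all" row
# matches too, giving the overall figure at the end.
summarize_idle() {
    awk '$2 == "all" {
        printf "%s idle=%s%s\n", $1, $NF, ($NF < 70 ? " <-- busy" : "")
    }' "$@"
}
```

Usage: `summarize_idle sar-cpu.log`, or pipe the sar output straight in; rows whose last field is below the threshold gain the `<-- busy` marker.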