Started by timer
Running as SYSTEM
[EnvInject] - Loading node environment variables.
Building remotely on prd-ubuntu1804-docker-8c-8g-24270 (ubuntu1804-docker-8c-8g) in workspace /w/workspace/policy-pap-master-project-csit-pap
[ssh-agent] Looking for ssh-agent implementation...
[ssh-agent] Exec ssh-agent (binary ssh-agent on a remote machine)
$ ssh-agent
SSH_AUTH_SOCK=/tmp/ssh-vO4JGgQV3mD9/agent.2084
SSH_AGENT_PID=2086
[ssh-agent] Started.
Running ssh-add (command line suppressed)
Identity added: /w/workspace/policy-pap-master-project-csit-pap@tmp/private_key_9284425168308043501.key (/w/workspace/policy-pap-master-project-csit-pap@tmp/private_key_9284425168308043501.key)
[ssh-agent] Using credentials onap-jobbuiler (Gerrit user)
The recommended git tool is: NONE
using credential onap-jenkins-ssh
Wiping out workspace first.
Cloning the remote Git repository
Cloning repository git://cloud.onap.org/mirror/policy/docker.git
 > git init /w/workspace/policy-pap-master-project-csit-pap # timeout=10
Fetching upstream changes from git://cloud.onap.org/mirror/policy/docker.git
 > git --version # timeout=10
 > git --version # 'git version 2.17.1'
using GIT_SSH to set credentials Gerrit user
Verifying host key using manually-configured host key entries
 > git fetch --tags --progress -- git://cloud.onap.org/mirror/policy/docker.git +refs/heads/*:refs/remotes/origin/* # timeout=30
 > git config remote.origin.url git://cloud.onap.org/mirror/policy/docker.git # timeout=10
 > git config --add remote.origin.fetch +refs/heads/*:refs/remotes/origin/* # timeout=10
Avoid second fetch
 > git rev-parse refs/remotes/origin/master^{commit} # timeout=10
Checking out Revision f35d01581c8da55946d604e5a444972fe4b0d318 (refs/remotes/origin/master)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f f35d01581c8da55946d604e5a444972fe4b0d318 # timeout=30
Commit message: "Improvements to CSIT"
 > git rev-list --no-walk f35d01581c8da55946d604e5a444972fe4b0d318 # timeout=10
provisioning config files...
copy managed file [npmrc] to file:/home/jenkins/.npmrc
copy managed file [pipconf] to file:/home/jenkins/.config/pip/pip.conf
[policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins1929568632304465149.sh
---> python-tools-install.sh
Setup pyenv:
* system (set by /opt/pyenv/version)
* 3.8.13 (set by /opt/pyenv/version)
* 3.9.13 (set by /opt/pyenv/version)
* 3.10.6 (set by /opt/pyenv/version)
lf-activate-venv(): INFO: Creating python3 venv at /tmp/venv-H8Lq
lf-activate-venv(): INFO: Save venv in file: /tmp/.os_lf_venv
lf-activate-venv(): INFO: Installing: lftools
lf-activate-venv(): INFO: Adding /tmp/venv-H8Lq/bin to PATH
Generating Requirements File
Python 3.10.6
pip 24.0 from /tmp/venv-H8Lq/lib/python3.10/site-packages/pip (python 3.10)
appdirs==1.4.4
argcomplete==3.3.0
aspy.yaml==1.3.0
attrs==23.2.0
autopage==0.5.2
beautifulsoup4==4.12.3
boto3==1.34.87
botocore==1.34.87
bs4==0.0.2
cachetools==5.3.3
certifi==2024.2.2
cffi==1.16.0
cfgv==3.4.0
chardet==5.2.0
charset-normalizer==3.3.2
click==8.1.7
cliff==4.6.0
cmd2==2.4.3
cryptography==3.3.2
debtcollector==3.0.0
decorator==5.1.1
defusedxml==0.7.1
Deprecated==1.2.14
distlib==0.3.8
dnspython==2.6.1
docker==4.2.2
dogpile.cache==1.3.2
email_validator==2.1.1
filelock==3.13.4
future==1.0.0
gitdb==4.0.11
GitPython==3.1.43
google-auth==2.29.0
httplib2==0.22.0
identify==2.5.35
idna==3.7
importlib-resources==1.5.0
iso8601==2.1.0
Jinja2==3.1.3
jmespath==1.0.1
jsonpatch==1.33
jsonpointer==2.4
jsonschema==4.21.1
jsonschema-specifications==2023.12.1
keystoneauth1==5.6.0
kubernetes==29.0.0
lftools==0.37.10
lxml==5.2.1
MarkupSafe==2.1.5
msgpack==1.0.8
multi_key_dict==2.0.3
munch==4.0.0
netaddr==1.2.1
netifaces==0.11.0
niet==1.4.2
nodeenv==1.8.0
oauth2client==4.1.3
oauthlib==3.2.2
openstacksdk==3.1.0
os-client-config==2.1.0
os-service-types==1.7.0
osc-lib==3.0.1
oslo.config==9.4.0
oslo.context==5.5.0
oslo.i18n==6.3.0
oslo.log==5.5.1
oslo.serialization==5.4.0
oslo.utils==7.1.0
packaging==24.0
pbr==6.0.0
platformdirs==4.2.0
prettytable==3.10.0
pyasn1==0.6.0
pyasn1_modules==0.4.0
pycparser==2.22
pygerrit2==2.0.15
PyGithub==2.3.0
pyinotify==0.9.6
PyJWT==2.8.0
PyNaCl==1.5.0
pyparsing==2.4.7
pyperclip==1.8.2
pyrsistent==0.20.0
python-cinderclient==9.5.0
python-dateutil==2.9.0.post0
python-heatclient==3.5.0
python-jenkins==1.8.2
python-keystoneclient==5.4.0
python-magnumclient==4.4.0
python-novaclient==18.6.0
python-openstackclient==6.6.0
python-swiftclient==4.5.0
PyYAML==6.0.1
referencing==0.34.0
requests==2.31.0
requests-oauthlib==2.0.0
requestsexceptions==1.4.0
rfc3986==2.0.0
rpds-py==0.18.0
rsa==4.9
ruamel.yaml==0.18.6
ruamel.yaml.clib==0.2.8
s3transfer==0.10.1
simplejson==3.19.2
six==1.16.0
smmap==5.0.1
soupsieve==2.5
stevedore==5.2.0
tabulate==0.9.0
toml==0.10.2
tomlkit==0.12.4
tqdm==4.66.2
typing_extensions==4.11.0
tzdata==2024.1
urllib3==1.26.18
virtualenv==20.25.3
wcwidth==0.2.13
websocket-client==1.7.0
wrapt==1.16.0
xdg==6.0.0
xmltodict==0.13.0
yq==3.4.1
[EnvInject] - Injecting environment variables from a build step.
[EnvInject] - Injecting as environment variables the properties content
SET_JDK_VERSION=openjdk17
GIT_URL="git://cloud.onap.org/mirror"
[EnvInject] - Variables injected successfully.
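The EnvInject steps above load `KEY=VALUE` properties into the build environment. EnvInject itself is a Jenkins plugin; a minimal shell sketch of the same behavior (the helper name `inject_props` is illustrative, not part of the job):

```shell
#!/usr/bin/env bash
set -euo pipefail

inject_props() {
    # Export KEY=VALUE pairs from a properties file, skipping blank
    # lines and comments -- roughly what the EnvInject step does with
    # the properties content shown in the log (SET_JDK_VERSION etc.).
    local file="$1" line
    while IFS= read -r line; do
        case "$line" in
            ''|\#*) continue ;;
        esac
        export "${line?}"
    done < "$file"
}
```

This is a sketch only: the real plugin also handles property interpolation, which the loop above does not attempt.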
[policy-pap-master-project-csit-pap] $ /bin/sh /tmp/jenkins15546175447872111146.sh
---> update-java-alternatives.sh
---> Updating Java version
---> Ubuntu/Debian system detected
update-alternatives: using /usr/lib/jvm/java-17-openjdk-amd64/bin/java to provide /usr/bin/java (java) in manual mode
update-alternatives: using /usr/lib/jvm/java-17-openjdk-amd64/bin/javac to provide /usr/bin/javac (javac) in manual mode
update-alternatives: using /usr/lib/jvm/java-17-openjdk-amd64 to provide /usr/lib/jvm/java-openjdk (java_sdk_openjdk) in manual mode
openjdk version "17.0.4" 2022-07-19
OpenJDK Runtime Environment (build 17.0.4+8-Ubuntu-118.04)
OpenJDK 64-Bit Server VM (build 17.0.4+8-Ubuntu-118.04, mixed mode, sharing)
JAVA_HOME=/usr/lib/jvm/java-17-openjdk-amd64
[EnvInject] - Injecting environment variables from a build step.
[EnvInject] - Injecting as environment variables the properties file path '/tmp/java.env'
[EnvInject] - Variables injected successfully.
[policy-pap-master-project-csit-pap] $ /bin/sh -xe /tmp/jenkins279420822397733023.sh
+ /w/workspace/policy-pap-master-project-csit-pap/csit/run-project-csit.sh pap
+ set +u
+ save_set
+ RUN_CSIT_SAVE_SET=ehxB
+ RUN_CSIT_SHELLOPTS=braceexpand:errexit:hashall:interactive-comments:pipefail:xtrace
+ '[' 1 -eq 0 ']'
+ '[' -z /w/workspace/policy-pap-master-project-csit-pap ']'
+ export PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin
+ PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin
+ export SCRIPTS=/w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts
+ SCRIPTS=/w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts
+ export ROBOT_VARIABLES=
+ ROBOT_VARIABLES=
+ export PROJECT=pap
+ PROJECT=pap
+ cd /w/workspace/policy-pap-master-project-csit-pap
+ rm -rf /w/workspace/policy-pap-master-project-csit-pap/csit/archives/pap
+ mkdir -p /w/workspace/policy-pap-master-project-csit-pap/csit/archives/pap
+ source_safely /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/prepare-robot-env.sh
+ '[' -z /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/prepare-robot-env.sh ']'
+ relax_set
+ set +e
+ set +o pipefail
+ . /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/prepare-robot-env.sh
++ '[' -z /w/workspace/policy-pap-master-project-csit-pap ']'
+++ mktemp -d
++ ROBOT_VENV=/tmp/tmp.yGPf2tn4SQ
++ echo ROBOT_VENV=/tmp/tmp.yGPf2tn4SQ
+++ python3 --version
++ echo 'Python version is: Python 3.6.9'
Python version is: Python 3.6.9
++ python3 -m venv --clear /tmp/tmp.yGPf2tn4SQ
++ source /tmp/tmp.yGPf2tn4SQ/bin/activate
+++ deactivate nondestructive
+++ '[' -n '' ']'
+++ '[' -n '' ']'
+++ '[' -n /bin/bash -o -n '' ']'
+++ hash -r
+++ '[' -n '' ']'
+++ unset VIRTUAL_ENV
+++ '[' '!' nondestructive = nondestructive ']'
+++ VIRTUAL_ENV=/tmp/tmp.yGPf2tn4SQ
+++ export VIRTUAL_ENV
+++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin
+++ PATH=/tmp/tmp.yGPf2tn4SQ/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin
+++ export PATH
+++ '[' -n '' ']'
+++ '[' -z '' ']'
+++ _OLD_VIRTUAL_PS1=
+++ '[' 'x(tmp.yGPf2tn4SQ) ' '!=' x ']'
+++ PS1='(tmp.yGPf2tn4SQ) '
+++ export PS1
+++ '[' -n /bin/bash -o -n '' ']'
+++ hash -r
++ set -exu
++ python3 -m pip install -qq --upgrade 'pip<=23.0' 'setuptools<=66.1.1'
++ echo 'Installing Python Requirements'
Installing Python Requirements
++ python3 -m pip install -qq -r /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/pylibs.txt
++ python3 -m pip -qq freeze
bcrypt==4.0.1
beautifulsoup4==4.12.3
bitarray==2.9.2
certifi==2024.2.2
cffi==1.15.1
charset-normalizer==2.0.12
cryptography==40.0.2
decorator==5.1.1
elasticsearch==7.17.9
elasticsearch-dsl==7.4.1
enum34==1.1.10
idna==3.7
importlib-resources==5.4.0
ipaddr==2.2.0
isodate==0.6.1
jmespath==0.10.0
jsonpatch==1.32
jsonpath-rw==1.4.0
jsonpointer==2.3
lxml==5.2.1
netaddr==0.8.0
netifaces==0.11.0
odltools==0.1.28
paramiko==3.4.0
pkg_resources==0.0.0
ply==3.11
pyang==2.6.0
pyangbind==0.8.1
pycparser==2.21
pyhocon==0.3.60
PyNaCl==1.5.0
pyparsing==3.1.2
python-dateutil==2.9.0.post0
regex==2023.8.8
requests==2.27.1
robotframework==6.1.1
robotframework-httplibrary==0.4.2
robotframework-pythonlibcore==3.0.0
robotframework-requests==0.9.4
robotframework-selenium2library==3.0.0
robotframework-seleniumlibrary==5.1.3
robotframework-sshlibrary==3.8.0
scapy==2.5.0
scp==0.14.5
selenium==3.141.0
six==1.16.0
soupsieve==2.3.2.post1
urllib3==1.26.18
waitress==2.0.0
WebOb==1.8.7
WebTest==3.0.0
zipp==3.6.0
++ mkdir -p /tmp/tmp.yGPf2tn4SQ/src/onap
++ rm -rf /tmp/tmp.yGPf2tn4SQ/src/onap/testsuite
++ python3 -m pip install -qq --upgrade --extra-index-url=https://nexus3.onap.org/repository/PyPi.staging/simple 'robotframework-onap==0.6.0.*' --pre
++ echo 'Installing python confluent-kafka library'
Installing python confluent-kafka library
++ python3 -m pip install -qq confluent-kafka
++ echo 'Uninstall docker-py and reinstall docker.'
Uninstall docker-py and reinstall docker.
++ python3 -m pip uninstall -y -qq docker
++ python3 -m pip install -U -qq docker
++ python3 -m pip -qq freeze
bcrypt==4.0.1
beautifulsoup4==4.12.3
bitarray==2.9.2
certifi==2024.2.2
cffi==1.15.1
charset-normalizer==2.0.12
confluent-kafka==2.3.0
cryptography==40.0.2
decorator==5.1.1
deepdiff==5.7.0
dnspython==2.2.1
docker==5.0.3
elasticsearch==7.17.9
elasticsearch-dsl==7.4.1
enum34==1.1.10
future==1.0.0
idna==3.7
importlib-resources==5.4.0
ipaddr==2.2.0
isodate==0.6.1
Jinja2==3.0.3
jmespath==0.10.0
jsonpatch==1.32
jsonpath-rw==1.4.0
jsonpointer==2.3
kafka-python==2.0.2
lxml==5.2.1
MarkupSafe==2.0.1
more-itertools==5.0.0
netaddr==0.8.0
netifaces==0.11.0
odltools==0.1.28
ordered-set==4.0.2
paramiko==3.4.0
pbr==6.0.0
pkg_resources==0.0.0
ply==3.11
protobuf==3.19.6
pyang==2.6.0
pyangbind==0.8.1
pycparser==2.21
pyhocon==0.3.60
PyNaCl==1.5.0
pyparsing==3.1.2
python-dateutil==2.9.0.post0
PyYAML==6.0.1
regex==2023.8.8
requests==2.27.1
robotframework==6.1.1
robotframework-httplibrary==0.4.2
robotframework-onap==0.6.0.dev105
robotframework-pythonlibcore==3.0.0
robotframework-requests==0.9.4
robotframework-selenium2library==3.0.0
robotframework-seleniumlibrary==5.1.3
robotframework-sshlibrary==3.8.0
robotlibcore-temp==1.0.2
scapy==2.5.0
scp==0.14.5
selenium==3.141.0
six==1.16.0
soupsieve==2.3.2.post1
urllib3==1.26.18
waitress==2.0.0
WebOb==1.8.7
websocket-client==1.3.1
WebTest==3.0.0
zipp==3.6.0
++ uname
++ grep -q Linux
++ sudo apt-get -y -qq install libxml2-utils
+ load_set
+ _setopts=ehuxB
++ echo braceexpand:hashall:interactive-comments:nounset:xtrace
++ tr : ' '
+ for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
+ set +o braceexpand
+ for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
+ set +o hashall
+ for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
+ set +o interactive-comments
+ for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
+ set +o nounset
+ for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
+ set +o xtrace
++ echo ehuxB
++ sed 's/./& /g'
+ for i in $(echo "$_setopts" | sed 's/./& /g')
+ set +e
+ for i in $(echo "$_setopts" | sed 's/./& /g')
+ set +h
+ for i in $(echo "$_setopts" | sed 's/./& /g')
+ set +u
+ for i in $(echo "$_setopts" | sed 's/./& /g')
+ set +x
+ source_safely /tmp/tmp.yGPf2tn4SQ/bin/activate
+ '[' -z /tmp/tmp.yGPf2tn4SQ/bin/activate ']'
+ relax_set
+ set +e
+ set +o pipefail
+ . /tmp/tmp.yGPf2tn4SQ/bin/activate
++ deactivate nondestructive
++ '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin ']'
++ PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin
++ export PATH
++ unset _OLD_VIRTUAL_PATH
++ '[' -n '' ']'
++ '[' -n /bin/bash -o -n '' ']'
++ hash -r
++ '[' -n '' ']'
++ unset VIRTUAL_ENV
++ '[' '!' nondestructive = nondestructive ']'
++ VIRTUAL_ENV=/tmp/tmp.yGPf2tn4SQ
++ export VIRTUAL_ENV
++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin
++ PATH=/tmp/tmp.yGPf2tn4SQ/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin
++ export PATH
++ '[' -n '' ']'
++ '[' -z '' ']'
++ _OLD_VIRTUAL_PS1='(tmp.yGPf2tn4SQ) '
++ '[' 'x(tmp.yGPf2tn4SQ) ' '!=' x ']'
++ PS1='(tmp.yGPf2tn4SQ) (tmp.yGPf2tn4SQ) '
++ export PS1
++ '[' -n /bin/bash -o -n '' ']'
++ hash -r
+ load_set
+ _setopts=hxB
++ echo braceexpand:hashall:interactive-comments:xtrace
++ tr : ' '
+ for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
+ set +o braceexpand
+ for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
+ set +o hashall
+ for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
+ set +o interactive-comments
+ for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
+ set +o xtrace
++ echo hxB
++ sed 's/./& /g'
+ for i in $(echo "$_setopts" | sed 's/./& /g')
+ set +h
+ for i in $(echo "$_setopts" | sed 's/./& /g')
+ set +x
+ export TEST_PLAN_DIR=/w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests
+ TEST_PLAN_DIR=/w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests
+ export TEST_OPTIONS=
+ TEST_OPTIONS=
++ mktemp -d
+ WORKDIR=/tmp/tmp.K0sYyH3Udx
+ cd /tmp/tmp.K0sYyH3Udx
+ docker login -u docker -p docker nexus3.onap.org:10001
WARNING! Using --password via the CLI is insecure. Use --password-stdin.
WARNING! Your password will be stored unencrypted in /home/jenkins/.docker/config.json.
Configure a credential helper to remove this warning.
See https://docs.docker.com/engine/reference/commandline/login/#credentials-store
Login Succeeded
+ SETUP=/w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/setup-pap.sh
+ '[' -f /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/setup-pap.sh ']'
+ echo 'Running setup script /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/setup-pap.sh'
Running setup script /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/setup-pap.sh
+ source_safely /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/setup-pap.sh
+ '[' -z /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/setup-pap.sh ']'
+ relax_set
+ set +e
+ set +o pipefail
+ . /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/setup-pap.sh
++ source /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/node-templates.sh
+++ '[' -z /w/workspace/policy-pap-master-project-csit-pap ']'
++++ awk -F= '$1 == "defaultbranch" { print $2 }' /w/workspace/policy-pap-master-project-csit-pap/.gitreview
+++ GERRIT_BRANCH=master
+++ echo GERRIT_BRANCH=master
GERRIT_BRANCH=master
+++ rm -rf /w/workspace/policy-pap-master-project-csit-pap/models
+++ mkdir /w/workspace/policy-pap-master-project-csit-pap/models
+++ git clone -b master --single-branch https://github.com/onap/policy-models.git /w/workspace/policy-pap-master-project-csit-pap/models
Cloning into '/w/workspace/policy-pap-master-project-csit-pap/models'...
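The node-templates helper above derives the checkout branch from the repo's `.gitreview` file with a one-line awk. That extraction works standalone; a sketch (the sample file content below is illustrative):

```shell
#!/usr/bin/env bash
set -euo pipefail

default_branch() {
    # Extract the "defaultbranch" value from a .gitreview file, exactly
    # as the setup script does:
    #   awk -F= '$1 == "defaultbranch" { print $2 }' .gitreview
    awk -F= '$1 == "defaultbranch" { print $2 }' "$1"
}
```

The `-F=` field separator makes awk treat each `key=value` line as two fields, so only the line whose key is literally `defaultbranch` prints its value.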
+++ export DATA=/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies
+++ DATA=/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies
+++ export NODETEMPLATES=/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/nodetemplates
+++ NODETEMPLATES=/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/nodetemplates
+++ sed -e 's!Measurement_vGMUX!ADifferentValue!' /w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies/vCPE.policy.monitoring.input.tosca.json
+++ sed -e 's!"version": "1.0.0"!"version": "2.0.0"!' -e 's!"policy-version": 1!"policy-version": 2!' /w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies/vCPE.policy.monitoring.input.tosca.json
++ source /w/workspace/policy-pap-master-project-csit-pap/compose/start-compose.sh apex-pdp --grafana
+++ '[' -z /w/workspace/policy-pap-master-project-csit-pap ']'
+++ COMPOSE_FOLDER=/w/workspace/policy-pap-master-project-csit-pap/compose
+++ grafana=false
+++ gui=false
+++ [[ 2 -gt 0 ]]
+++ key=apex-pdp
+++ case $key in
+++ echo apex-pdp
apex-pdp
+++ component=apex-pdp
+++ shift
+++ [[ 1 -gt 0 ]]
+++ key=--grafana
+++ case $key in
+++ grafana=true
+++ shift
+++ [[ 0 -gt 0 ]]
+++ cd /w/workspace/policy-pap-master-project-csit-pap/compose
+++ echo 'Configuring docker compose...'
Configuring docker compose...
+++ source export-ports.sh
+++ source get-versions.sh
+++ '[' -z pap ']'
+++ '[' -n apex-pdp ']'
+++ '[' apex-pdp == logs ']'
+++ '[' true = true ']'
+++ echo 'Starting apex-pdp application with Grafana'
Starting apex-pdp application with Grafana
+++ docker-compose up -d apex-pdp grafana
Creating network "compose_default" with the default driver
Pulling prometheus (nexus3.onap.org:10001/prom/prometheus:latest)...
latest: Pulling from prom/prometheus
Digest: sha256:4f6c47e39a9064028766e8c95890ed15690c30f00c4ba14e7ce6ae1ded0295b1
Status: Downloaded newer image for nexus3.onap.org:10001/prom/prometheus:latest
Pulling grafana (nexus3.onap.org:10001/grafana/grafana:latest)...
latest: Pulling from grafana/grafana
Digest: sha256:7d5faae481a4c6f436c99e98af11534f7fd5e8d3e35213552dd1dd02bc393d2e
Status: Downloaded newer image for nexus3.onap.org:10001/grafana/grafana:latest
Pulling mariadb (nexus3.onap.org:10001/mariadb:10.10.2)...
10.10.2: Pulling from mariadb
Digest: sha256:bfc25a68e113de43d0d112f5a7126df8e278579c3224e3923359e1c1d8d5ce6e
Status: Downloaded newer image for nexus3.onap.org:10001/mariadb:10.10.2
Pulling simulator (nexus3.onap.org:10001/onap/policy-models-simulator:3.1.2-SNAPSHOT)...
3.1.2-SNAPSHOT: Pulling from onap/policy-models-simulator
Digest: sha256:d8f1d8ae67fc0b53114a44577cb43c90a3a3281908d2f2418d7fbd203413bd6a
Status: Downloaded newer image for nexus3.onap.org:10001/onap/policy-models-simulator:3.1.2-SNAPSHOT
Pulling zookeeper (confluentinc/cp-zookeeper:latest)...
latest: Pulling from confluentinc/cp-zookeeper
Digest: sha256:4dc780642bfc5ec3a2d4901e2ff1f9ddef7f7c5c0b793e1e2911cbfb4e3a3214
Status: Downloaded newer image for confluentinc/cp-zookeeper:latest
Pulling kafka (confluentinc/cp-kafka:latest)...
latest: Pulling from confluentinc/cp-kafka
Digest: sha256:620734d9fc0bb1f9886932e5baf33806074469f40e3fe246a3fdbb59309535fa
Status: Downloaded newer image for confluentinc/cp-kafka:latest
Pulling policy-db-migrator (nexus3.onap.org:10001/onap/policy-db-migrator:3.1.2-SNAPSHOT)...
3.1.2-SNAPSHOT: Pulling from onap/policy-db-migrator
Digest: sha256:76f202a4ce3fb449efc5539e6f77655fea2bbfecb1fbc1342810b45a9f33c637
Status: Downloaded newer image for nexus3.onap.org:10001/onap/policy-db-migrator:3.1.2-SNAPSHOT
Pulling api (nexus3.onap.org:10001/onap/policy-api:3.1.2-SNAPSHOT)...
3.1.2-SNAPSHOT: Pulling from onap/policy-api
Digest: sha256:0e8cbccfee833c5b2be68d71dd51902b884e77df24bbbac2751693f58bdc20ce
Status: Downloaded newer image for nexus3.onap.org:10001/onap/policy-api:3.1.2-SNAPSHOT
Pulling pap (nexus3.onap.org:10001/onap/policy-pap:3.1.2-SNAPSHOT)...
3.1.2-SNAPSHOT: Pulling from onap/policy-pap
Digest: sha256:4424490684da433df5069c1f1dbbafe83fffd4c8b6a174807fb10d6443ecef06
Status: Downloaded newer image for nexus3.onap.org:10001/onap/policy-pap:3.1.2-SNAPSHOT
Pulling apex-pdp (nexus3.onap.org:10001/onap/policy-apex-pdp:3.1.2-SNAPSHOT)...
3.1.2-SNAPSHOT: Pulling from onap/policy-apex-pdp
Digest: sha256:75a74a87b7345e553563fbe2ececcd2285ed9500fd91489d9968ae81123b9982
Status: Downloaded newer image for nexus3.onap.org:10001/onap/policy-apex-pdp:3.1.2-SNAPSHOT
Creating mariadb ...
Creating prometheus ...
Creating simulator ...
Creating zookeeper ...
Creating mariadb ... done
Creating policy-db-migrator ...
Creating policy-db-migrator ... done
Creating policy-api ...
Creating policy-api ... done
Creating simulator ... done
Creating prometheus ... done
Creating grafana ...
Creating zookeeper ... done
Creating kafka ...
Creating grafana ... done
Creating kafka ... done
Creating policy-pap ...
Creating policy-pap ... done
Creating policy-apex-pdp ...
Creating policy-apex-pdp ... done
+++ echo 'Prometheus server: http://localhost:30259'
Prometheus server: http://localhost:30259
+++ echo 'Grafana server: http://localhost:30269'
Grafana server: http://localhost:30269
+++ cd /w/workspace/policy-pap-master-project-csit-pap
++ sleep 10
++ unset http_proxy https_proxy
++ bash /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/wait_for_rest.sh localhost 30003
Waiting for REST to come up on localhost port 30003...
NAMES              STATUS
policy-apex-pdp    Up 10 seconds
policy-pap         Up 11 seconds
kafka              Up 12 seconds
grafana            Up 13 seconds
policy-api         Up 16 seconds
zookeeper          Up 14 seconds
simulator          Up 16 seconds
mariadb            Up 18 seconds
prometheus         Up 15 seconds
NAMES              STATUS
policy-apex-pdp    Up 15 seconds
policy-pap         Up 16 seconds
kafka              Up 17 seconds
grafana            Up 18 seconds
policy-api         Up 21 seconds
zookeeper          Up 19 seconds
simulator          Up 21 seconds
mariadb            Up 23 seconds
prometheus         Up 20 seconds
NAMES              STATUS
policy-apex-pdp    Up 20 seconds
policy-pap         Up 21 seconds
kafka              Up 22 seconds
grafana            Up 23 seconds
policy-api         Up 26 seconds
zookeeper          Up 24 seconds
simulator          Up 26 seconds
mariadb            Up 28 seconds
prometheus         Up 25 seconds
NAMES              STATUS
policy-apex-pdp    Up 25 seconds
policy-pap         Up 26 seconds
kafka              Up 27 seconds
grafana            Up 28 seconds
policy-api         Up 31 seconds
zookeeper          Up 29 seconds
simulator          Up 31 seconds
mariadb            Up 33 seconds
prometheus         Up 30 seconds
NAMES              STATUS
policy-apex-pdp    Up 30 seconds
policy-pap         Up 31 seconds
kafka              Up 32 seconds
grafana            Up 33 seconds
policy-api         Up 37 seconds
zookeeper          Up 34 seconds
simulator          Up 36 seconds
mariadb            Up 38 seconds
prometheus         Up 35 seconds
++ export 'SUITES=pap-test.robot pap-slas.robot'
++ SUITES='pap-test.robot pap-slas.robot'
++ ROBOT_VARIABLES='-v POLICY_PAP_IP:localhost:30003 -v POLICY_API_IP:localhost:30002 -v PROMETHEUS_IP:localhost:30259 -v DATA:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies -v NODETEMPLATES:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/nodetemplates'
+ load_set
+ _setopts=hxB
++ echo braceexpand:hashall:interactive-comments:xtrace
++ tr : ' '
+ for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
+ set +o braceexpand
+ for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
+ set +o hashall
+ for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
+ set +o interactive-comments
+ for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
+ set +o xtrace
++ echo hxB
++ sed 's/./& /g'
+ for i in $(echo "$_setopts" | sed 's/./& /g')
+ set +h
+ for i in $(echo "$_setopts" | sed 's/./& /g')
+ set +x
+ docker_stats
++ uname -s
+ tee /w/workspace/policy-pap-master-project-csit-pap/csit/archives/pap/_sysinfo-1-after-setup.txt
+ '[' Linux == Darwin ']'
+ sh -c 'top -bn1 | head -3'
top - 23:14:55 up 4 min, 0 users, load average: 2.82, 1.25, 0.50
Tasks: 207 total, 1 running, 130 sleeping, 0 stopped, 0 zombie
%Cpu(s): 13.0 us, 2.8 sy, 0.0 ni, 78.6 id, 5.5 wa, 0.0 hi, 0.1 si, 0.1 st
+ echo
+ sh -c 'free -h'
+ echo
       total   used   free   shared   buff/cache   available
Mem:   31G     2.6G   22G    1.3M     6.2G         28G
Swap:  1.0G    0B     1.0G
+ docker ps --format 'table {{ .Names }}\t{{ .Status }}'
NAMES              STATUS
policy-apex-pdp    Up 30 seconds
policy-pap         Up 31 seconds
kafka              Up 32 seconds
grafana            Up 33 seconds
policy-api         Up 37 seconds
zookeeper          Up 34 seconds
simulator          Up 37 seconds
mariadb            Up 39 seconds
prometheus         Up 35 seconds
+ echo
+ docker stats --no-stream
CONTAINER ID   NAME              CPU %     MEM USAGE / LIMIT     MEM %     NET I/O           BLOCK I/O     PIDS
0f05ca022347   policy-apex-pdp   141.83%   191.1MiB / 31.41GiB   0.59%     7.12kB / 6.8kB    0B / 0B       48
cd968b74ff1a   policy-pap        2.52%     513.1MiB / 31.41GiB   1.59%     31kB / 32.8kB     0B / 149MB    63
9fd3e2c8b405   kafka             0.77%     384.8MiB / 31.41GiB   1.20%     71.9kB / 73.7kB   0B / 500kB    83
b218cb6f2c5a   grafana           0.03%     54.37MiB / 31.41GiB   0.17%     19.2kB / 3.58kB   0B / 24.9MB   14
98c23f0e8294   policy-api        0.11%     466.3MiB / 31.41GiB   1.45%     988kB / 646kB     0B / 0B       52
00ec3651d3eb   zookeeper         0.09%     101.2MiB / 31.41GiB   0.31%     56.5kB / 51.2kB   0B / 479kB    60
1bff2dcb8737   simulator         0.07%     121.3MiB / 31.41GiB   0.38%     1.27kB / 0B       0B / 0B       76
3da3a3116834   mariadb           0.02%     102MiB / 31.41GiB     0.32%     934kB / 1.18MB    11MB / 57MB   40
b174b6fc7f4a   prometheus        0.22%     18.21MiB / 31.41GiB   0.06%     1.52kB / 432B     225kB / 0B    12
+ echo
+ cd /tmp/tmp.K0sYyH3Udx
+ echo 'Reading the testplan:'
Reading the testplan:
+ echo 'pap-test.robot pap-slas.robot'
+ egrep -v '(^[[:space:]]*#|^[[:space:]]*$)'
+ sed 's|^|/w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/|'
+ cat testplan.txt
/w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-test.robot
/w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-slas.robot
++ xargs
+ SUITES='/w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-test.robot /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-slas.robot'
+ echo 'ROBOT_VARIABLES=-v POLICY_PAP_IP:localhost:30003 -v POLICY_API_IP:localhost:30002 -v PROMETHEUS_IP:localhost:30259 -v DATA:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies -v NODETEMPLATES:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/nodetemplates'
ROBOT_VARIABLES=-v POLICY_PAP_IP:localhost:30003 -v POLICY_API_IP:localhost:30002 -v PROMETHEUS_IP:localhost:30259 -v DATA:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies -v NODETEMPLATES:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/nodetemplates
+ echo 'Starting Robot test suites /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-test.robot /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-slas.robot ...'
Starting Robot test suites /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-test.robot /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-slas.robot ...
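The testplan expansion above — strip comment and blank lines, prefix each suite with the tests directory, then collapse everything onto one line with `xargs` — can be reproduced in isolation. A sketch of that pipeline (the function name and the sample testplan contents are illustrative; `grep -E` is used in place of the log's equivalent `egrep`):

```shell
#!/usr/bin/env bash
set -euo pipefail

expand_testplan() {
    # $1: testplan file, $2: directory holding the .robot suites.
    # Mirrors the log's pipeline:
    #   egrep -v '(^[[:space:]]*#|^[[:space:]]*$)' | sed 's|^|DIR/|' | xargs
    grep -E -v '(^[[:space:]]*#|^[[:space:]]*$)' "$1" \
        | sed "s|^|$2/|" \
        | xargs
}
```

`xargs` with no command defaults to `echo`, which is what joins the per-line suite paths into the single space-separated `SUITES` string the harness passes to Robot.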
+ relax_set + set +e + set +o pipefail + python3 -m robot.run -N pap -v WORKSPACE:/tmp -v POLICY_PAP_IP:localhost:30003 -v POLICY_API_IP:localhost:30002 -v PROMETHEUS_IP:localhost:30259 -v DATA:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies -v NODETEMPLATES:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/nodetemplates /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-test.robot /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-slas.robot ============================================================================== pap ============================================================================== pap.Pap-Test ============================================================================== LoadPolicy :: Create a policy named 'onap.restart.tca' and version... | PASS | ------------------------------------------------------------------------------ LoadPolicyWithMetadataSet :: Create a policy named 'operational.ap... | PASS | ------------------------------------------------------------------------------ LoadNodeTemplates :: Create node templates in database using speci... | PASS | ------------------------------------------------------------------------------ Healthcheck :: Verify policy pap health check | PASS | ------------------------------------------------------------------------------ Consolidated Healthcheck :: Verify policy consolidated health check | PASS | ------------------------------------------------------------------------------ Metrics :: Verify policy pap is exporting prometheus metrics | PASS | ------------------------------------------------------------------------------ AddPdpGroup :: Add a new PdpGroup named 'testGroup' in the policy ... 
| PASS | ------------------------------------------------------------------------------ QueryPdpGroupsBeforeActivation :: Verify PdpGroups before activation | PASS | ------------------------------------------------------------------------------ ActivatePdpGroup :: Change the state of PdpGroup named 'testGroup'... | PASS | ------------------------------------------------------------------------------ QueryPdpGroupsAfterActivation :: Verify PdpGroups after activation | PASS | ------------------------------------------------------------------------------ DeployPdpGroups :: Deploy policies in PdpGroups | PASS | ------------------------------------------------------------------------------ QueryPdpGroupsAfterDeploy :: Verify PdpGroups after undeploy | PASS | ------------------------------------------------------------------------------ QueryPolicyAuditAfterDeploy :: Verify policy audit record after de... | PASS | ------------------------------------------------------------------------------ QueryPolicyAuditWithMetadataSetAfterDeploy :: Verify policy audit ... | PASS | ------------------------------------------------------------------------------ UndeployPolicy :: Undeploy a policy named 'onap.restart.tca' from ... | PASS | ------------------------------------------------------------------------------ UndeployPolicyWithMetadataSet :: Undeploy a policy named 'operatio... | PASS | ------------------------------------------------------------------------------ QueryPdpGroupsAfterUndeploy :: Verify PdpGroups after undeploy | PASS | ------------------------------------------------------------------------------ QueryPolicyAuditAfterUnDeploy :: Verify policy audit record after ... | PASS | ------------------------------------------------------------------------------ QueryPolicyAuditWithMetadataSetAfterUnDeploy :: Verify policy audi... 
| PASS | ------------------------------------------------------------------------------ DeactivatePdpGroup :: Change the state of PdpGroup named 'testGrou... | PASS | ------------------------------------------------------------------------------ DeletePdpGroups :: Delete the PdpGroup named 'testGroup' from poli... | PASS | ------------------------------------------------------------------------------ QueryPdpGroupsAfterDelete :: Verify PdpGroups after delete | PASS | ------------------------------------------------------------------------------ pap.Pap-Test | PASS | 22 tests, 22 passed, 0 failed ============================================================================== pap.Pap-Slas ============================================================================== WaitForPrometheusServer :: Wait for Prometheus server to gather al... | PASS | ------------------------------------------------------------------------------ ValidateResponseTimeForHealthcheck :: Validate component healthche... | PASS | ------------------------------------------------------------------------------ ValidateResponseTimeForSystemHealthcheck :: Validate if system hea... | PASS | ------------------------------------------------------------------------------ ValidateResponseTimeQueryPolicyAudit :: Validate query audits resp... | PASS | ------------------------------------------------------------------------------ ValidateResponseTimeUpdateGroup :: Validate pdps/group response time | PASS | ------------------------------------------------------------------------------ ValidatePolicyDeploymentTime :: Check if deployment of policy is u... | PASS | ------------------------------------------------------------------------------ ValidateResponseTimeDeletePolicy :: Check if undeployment of polic... | PASS | ------------------------------------------------------------------------------ ValidateResponseTimeDeleteGroup :: Validate delete group response ... 
| PASS |
------------------------------------------------------------------------------
pap.Pap-Slas                                                          | PASS |
8 tests, 8 passed, 0 failed
==============================================================================
pap                                                                   | PASS |
30 tests, 30 passed, 0 failed
==============================================================================
Output:  /tmp/tmp.K0sYyH3Udx/output.xml
Log:     /tmp/tmp.K0sYyH3Udx/log.html
Report:  /tmp/tmp.K0sYyH3Udx/report.html
+ RESULT=0
+ load_set
+ _setopts=hxB
++ echo braceexpand:hashall:interactive-comments:xtrace
++ tr : ' '
+ for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
+ set +o braceexpand
+ for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
+ set +o hashall
+ for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
+ set +o interactive-comments
+ for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
+ set +o xtrace
++ echo hxB
++ sed 's/./& /g'
+ for i in $(echo "$_setopts" | sed 's/./& /g')
+ set +h
+ for i in $(echo "$_setopts" | sed 's/./& /g')
+ set +x
+ echo 'RESULT: 0'
RESULT: 0
+ exit 0
+ on_exit
+ rc=0
+ [[ -n /w/workspace/policy-pap-master-project-csit-pap ]]
+ docker ps --format 'table {{ .Names }}\t{{ .Status }}'
NAMES              STATUS
policy-apex-pdp    Up 2 minutes
policy-pap         Up 2 minutes
kafka              Up 2 minutes
grafana            Up 2 minutes
policy-api         Up 2 minutes
zookeeper          Up 2 minutes
simulator          Up 2 minutes
mariadb            Up 2 minutes
prometheus         Up 2 minutes
+ docker_stats
++ uname -s
+ '[' Linux == Darwin ']'
+ sh -c 'top -bn1 | head -3'
top - 23:16:45 up 6 min, 0 users, load average: 0.59, 0.92, 0.46
Tasks: 196 total, 1 running, 128 sleeping, 0 stopped, 0 zombie
%Cpu(s): 10.7 us, 2.1 sy, 0.0 ni, 83.1 id, 3.9 wa, 0.0 hi, 0.1 si, 0.1 st
+ echo
+ sh -c 'free -h'
        total   used   free   shared   buff/cache   available
Mem:      31G   2.7G    22G     1.3M         6.2G         28G
Swap:    1.0G     0B   1.0G
+ echo
+ docker ps --format 'table {{ .Names }}\t{{ .Status }}'
NAMES              STATUS
policy-apex-pdp    Up 2 minutes
policy-pap         Up 2 minutes
kafka              Up 2 minutes
grafana            Up 2 minutes
policy-api         Up 2 minutes
zookeeper          Up 2 minutes
simulator          Up 2 minutes
mariadb            Up 2 minutes
prometheus         Up 2 minutes
+ echo
+ docker stats --no-stream
CONTAINER ID   NAME              CPU %   MEM USAGE / LIMIT     MEM %   NET I/O           BLOCK I/O       PIDS
0f05ca022347   policy-apex-pdp   0.39%   179.9MiB / 31.41GiB   0.56%   55.2kB / 79kB     0B / 0B         52
cd968b74ff1a   policy-pap        0.67%   501.7MiB / 31.41GiB   1.56%   2.47MB / 1.05MB   0B / 149MB      67
9fd3e2c8b405   kafka             1.14%   407.3MiB / 31.41GiB   1.27%   241kB / 216kB     0B / 606kB      85
b218cb6f2c5a   grafana           0.06%   57.4MiB / 31.41GiB    0.18%   20kB / 4.53kB     0B / 24.9MB     14
98c23f0e8294   policy-api        0.10%   471.7MiB / 31.41GiB   1.47%   2.45MB / 1.1MB    0B / 0B         55
00ec3651d3eb   zookeeper         0.10%   101.2MiB / 31.41GiB   0.31%   59.3kB / 52.7kB   0B / 479kB      60
1bff2dcb8737   simulator         0.07%   121.5MiB / 31.41GiB   0.38%   1.5kB / 0B        0B / 0B         78
3da3a3116834   mariadb           0.02%   103.2MiB / 31.41GiB   0.32%   2.02MB / 4.87MB   11MB / 57.2MB   28
b174b6fc7f4a   prometheus        0.00%   24.77MiB / 31.41GiB   0.08%   180kB / 10.1kB    225kB / 0B      12
+ echo
+ source_safely /w/workspace/policy-pap-master-project-csit-pap/compose/stop-compose.sh
+ '[' -z /w/workspace/policy-pap-master-project-csit-pap/compose/stop-compose.sh ']'
+ relax_set
+ set +e
+ set +o pipefail
+ . /w/workspace/policy-pap-master-project-csit-pap/compose/stop-compose.sh
++ echo 'Shut down started!'
Shut down started!
++ '[' -z /w/workspace/policy-pap-master-project-csit-pap ']'
++ COMPOSE_FOLDER=/w/workspace/policy-pap-master-project-csit-pap/compose
++ cd /w/workspace/policy-pap-master-project-csit-pap/compose
++ source export-ports.sh
++ source get-versions.sh
++ echo 'Collecting logs from docker compose containers...'
Collecting logs from docker compose containers...
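The `docker stats --no-stream` snapshot above is plain whitespace-separated text. As an aside, a minimal Python sketch (the helper name `parse_docker_stats` is illustrative, not part of the CSIT suite) shows how such rows can be turned into records for post-run resource checks:

```python
# Parse data rows of a `docker stats --no-stream` table into dicts.
# Assumed column layout (as in the log above):
#   CONTAINER ID, NAME, CPU %, MEM USAGE / LIMIT, MEM %, NET I/O, BLOCK I/O, PIDS
def parse_docker_stats(lines):
    rows = []
    for line in lines:
        parts = line.split()
        # e.g. ['0f05ca022347', 'policy-apex-pdp', '0.39%', '179.9MiB', '/',
        #       '31.41GiB', '0.56%', '55.2kB', '/', '79kB', '0B', '/', '0B', '52']
        rows.append({
            "id": parts[0],
            "name": parts[1],
            "cpu_pct": float(parts[2].rstrip("%")),
            "mem_usage": parts[3],
            "mem_limit": parts[5],
            "mem_pct": float(parts[6].rstrip("%")),
            "pids": int(parts[-1]),
        })
    return rows

# Two rows copied from the stats table above.
sample = [
    "0f05ca022347 policy-apex-pdp 0.39% 179.9MiB / 31.41GiB 0.56% 55.2kB / 79kB 0B / 0B 52",
    "cd968b74ff1a policy-pap 0.67% 501.7MiB / 31.41GiB 1.56% 2.47MB / 1.05MB 0B / 149MB 67",
]
stats = parse_docker_stats(sample)
print(stats[1]["name"], stats[1]["mem_usage"])  # policy-pap 501.7MiB
```

Positional splitting like this works only for the default stats format; `docker stats --format` with a Go template would give machine-readable output directly.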
++ docker-compose logs ++ cat docker_compose.log Attaching to policy-apex-pdp, policy-pap, kafka, grafana, policy-api, policy-db-migrator, zookeeper, simulator, mariadb, prometheus grafana | logger=settings t=2024-04-18T23:14:21.948593789Z level=info msg="Starting Grafana" version=10.4.2 commit=701c851be7a930e04fbc6ebb1cd4254da80edd4c branch=v10.4.x compiled=2024-04-18T23:14:21Z grafana | logger=settings t=2024-04-18T23:14:21.948825262Z level=info msg="Config loaded from" file=/usr/share/grafana/conf/defaults.ini grafana | logger=settings t=2024-04-18T23:14:21.948831762Z level=info msg="Config loaded from" file=/etc/grafana/grafana.ini grafana | logger=settings t=2024-04-18T23:14:21.948836412Z level=info msg="Config overridden from command line" arg="default.paths.data=/var/lib/grafana" grafana | logger=settings t=2024-04-18T23:14:21.948839403Z level=info msg="Config overridden from command line" arg="default.paths.logs=/var/log/grafana" grafana | logger=settings t=2024-04-18T23:14:21.948842043Z level=info msg="Config overridden from command line" arg="default.paths.plugins=/var/lib/grafana/plugins" grafana | logger=settings t=2024-04-18T23:14:21.948844843Z level=info msg="Config overridden from command line" arg="default.paths.provisioning=/etc/grafana/provisioning" grafana | logger=settings t=2024-04-18T23:14:21.948847703Z level=info msg="Config overridden from command line" arg="default.log.mode=console" grafana | logger=settings t=2024-04-18T23:14:21.948869924Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_DATA=/var/lib/grafana" grafana | logger=settings t=2024-04-18T23:14:21.948874775Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_LOGS=/var/log/grafana" grafana | logger=settings t=2024-04-18T23:14:21.948877955Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_PLUGINS=/var/lib/grafana/plugins" grafana | logger=settings t=2024-04-18T23:14:21.948880905Z level=info msg="Config 
overridden from Environment variable" var="GF_PATHS_PROVISIONING=/etc/grafana/provisioning" grafana | logger=settings t=2024-04-18T23:14:21.948884275Z level=info msg=Target target=[all] grafana | logger=settings t=2024-04-18T23:14:21.948895036Z level=info msg="Path Home" path=/usr/share/grafana grafana | logger=settings t=2024-04-18T23:14:21.948899316Z level=info msg="Path Data" path=/var/lib/grafana grafana | logger=settings t=2024-04-18T23:14:21.948902576Z level=info msg="Path Logs" path=/var/log/grafana grafana | logger=settings t=2024-04-18T23:14:21.948905786Z level=info msg="Path Plugins" path=/var/lib/grafana/plugins grafana | logger=settings t=2024-04-18T23:14:21.948909897Z level=info msg="Path Provisioning" path=/etc/grafana/provisioning grafana | logger=settings t=2024-04-18T23:14:21.948917917Z level=info msg="App mode production" grafana | logger=sqlstore t=2024-04-18T23:14:21.949205683Z level=info msg="Connecting to DB" dbtype=sqlite3 grafana | logger=sqlstore t=2024-04-18T23:14:21.949225784Z level=info msg="Creating SQLite database file" path=/var/lib/grafana/grafana.db grafana | logger=migrator t=2024-04-18T23:14:21.949846288Z level=info msg="Starting DB migrations" grafana | logger=migrator t=2024-04-18T23:14:21.950749728Z level=info msg="Executing migration" id="create migration_log table" grafana | logger=migrator t=2024-04-18T23:14:21.951558762Z level=info msg="Migration successfully executed" id="create migration_log table" duration=808.674µs grafana | logger=migrator t=2024-04-18T23:14:21.955916922Z level=info msg="Executing migration" id="create user table" grafana | logger=migrator t=2024-04-18T23:14:21.956919867Z level=info msg="Migration successfully executed" id="create user table" duration=1.002735ms grafana | logger=migrator t=2024-04-18T23:14:21.962104352Z level=info msg="Executing migration" id="add unique index user.login" grafana | logger=migrator t=2024-04-18T23:14:21.963251635Z level=info msg="Migration successfully executed" id="add 
unique index user.login" duration=1.143183ms grafana | logger=migrator t=2024-04-18T23:14:21.968471732Z level=info msg="Executing migration" id="add unique index user.email" grafana | logger=migrator t=2024-04-18T23:14:21.969632935Z level=info msg="Migration successfully executed" id="add unique index user.email" duration=1.160124ms grafana | logger=migrator t=2024-04-18T23:14:21.981243163Z level=info msg="Executing migration" id="drop index UQE_user_login - v1" grafana | logger=migrator t=2024-04-18T23:14:21.9822732Z level=info msg="Migration successfully executed" id="drop index UQE_user_login - v1" duration=1.029017ms grafana | logger=migrator t=2024-04-18T23:14:21.988995719Z level=info msg="Executing migration" id="drop index UQE_user_email - v1" grafana | logger=migrator t=2024-04-18T23:14:21.989702908Z level=info msg="Migration successfully executed" id="drop index UQE_user_email - v1" duration=706.879µs grafana | logger=migrator t=2024-04-18T23:14:21.993655185Z level=info msg="Executing migration" id="Rename table user to user_v1 - v1" grafana | logger=migrator t=2024-04-18T23:14:21.996227497Z level=info msg="Migration successfully executed" id="Rename table user to user_v1 - v1" duration=2.571832ms grafana | logger=migrator t=2024-04-18T23:14:22.000743775Z level=info msg="Executing migration" id="create user table v2" grafana | logger=migrator t=2024-04-18T23:14:22.0015625Z level=info msg="Migration successfully executed" id="create user table v2" duration=818.555µs grafana | logger=migrator t=2024-04-18T23:14:22.007260372Z level=info msg="Executing migration" id="create index UQE_user_login - v2" grafana | logger=migrator t=2024-04-18T23:14:22.008342346Z level=info msg="Migration successfully executed" id="create index UQE_user_login - v2" duration=1.081474ms grafana | logger=migrator t=2024-04-18T23:14:22.016248729Z level=info msg="Executing migration" id="create index UQE_user_email - v2" grafana | logger=migrator t=2024-04-18T23:14:22.017380795Z 
level=info msg="Migration successfully executed" id="create index UQE_user_email - v2" duration=1.139216ms grafana | logger=migrator t=2024-04-18T23:14:22.023295894Z level=info msg="Executing migration" id="copy data_source v1 to v2" grafana | logger=migrator t=2024-04-18T23:14:22.023785842Z level=info msg="Migration successfully executed" id="copy data_source v1 to v2" duration=488.708µs grafana | logger=migrator t=2024-04-18T23:14:22.027546268Z level=info msg="Executing migration" id="Drop old table user_v1" grafana | logger=migrator t=2024-04-18T23:14:22.028008644Z level=info msg="Migration successfully executed" id="Drop old table user_v1" duration=462.307µs grafana | logger=migrator t=2024-04-18T23:14:22.035502214Z level=info msg="Executing migration" id="Add column help_flags1 to user table" grafana | logger=migrator t=2024-04-18T23:14:22.03718292Z level=info msg="Migration successfully executed" id="Add column help_flags1 to user table" duration=1.680246ms grafana | logger=migrator t=2024-04-18T23:14:22.050798101Z level=info msg="Executing migration" id="Update user table charset" grafana | logger=migrator t=2024-04-18T23:14:22.050846373Z level=info msg="Migration successfully executed" id="Update user table charset" duration=49.432µs grafana | logger=migrator t=2024-04-18T23:14:22.057483124Z level=info msg="Executing migration" id="Add last_seen_at column to user" grafana | logger=migrator t=2024-04-18T23:14:22.058658481Z level=info msg="Migration successfully executed" id="Add last_seen_at column to user" duration=1.175107ms grafana | logger=migrator t=2024-04-18T23:14:22.063394383Z level=info msg="Executing migration" id="Add missing user data" grafana | logger=migrator t=2024-04-18T23:14:22.063642507Z level=info msg="Migration successfully executed" id="Add missing user data" duration=252.374µs grafana | logger=migrator t=2024-04-18T23:14:22.068087322Z level=info msg="Executing migration" id="Add is_disabled column to user" kafka | ===> User kafka | 
uid=1000(appuser) gid=1000(appuser) groups=1000(appuser) kafka | ===> Configuring ... kafka | Running in Zookeeper mode... kafka | ===> Running preflight checks ... kafka | ===> Check if /var/lib/kafka/data is writable ... kafka | ===> Check if Zookeeper is healthy ... kafka | [2024-04-18 23:14:27,292] INFO Client environment:zookeeper.version=3.8.3-6ad6d364c7c0bcf0de452d54ebefa3058098ab56, built on 2023-10-05 10:34 UTC (org.apache.zookeeper.ZooKeeper) kafka | [2024-04-18 23:14:27,293] INFO Client environment:host.name=9fd3e2c8b405 (org.apache.zookeeper.ZooKeeper) kafka | [2024-04-18 23:14:27,293] INFO Client environment:java.version=11.0.22 (org.apache.zookeeper.ZooKeeper) kafka | [2024-04-18 23:14:27,293] INFO Client environment:java.vendor=Azul Systems, Inc. (org.apache.zookeeper.ZooKeeper) kafka | [2024-04-18 23:14:27,293] INFO Client environment:java.home=/usr/lib/jvm/java-11-zulu-openjdk-ca (org.apache.zookeeper.ZooKeeper) kafka | [2024-04-18 23:14:27,293] INFO Client environment:java.class.path=/usr/share/java/cp-base-new/scala-logging_2.13-3.9.4.jar:/usr/share/java/cp-base-new/jackson-datatype-jdk8-2.14.2.jar:/usr/share/java/cp-base-new/kafka-storage-api-7.6.1-ccs.jar:/usr/share/java/cp-base-new/commons-validator-1.7.jar:/usr/share/java/cp-base-new/logredactor-1.0.12.jar:/usr/share/java/cp-base-new/jolokia-core-1.7.1.jar:/usr/share/java/cp-base-new/re2j-1.6.jar:/usr/share/java/cp-base-new/scala-library-2.13.10.jar:/usr/share/java/cp-base-new/commons-io-2.11.0.jar:/usr/share/java/cp-base-new/kafka-clients-7.6.1-ccs.jar:/usr/share/java/cp-base-new/utility-belt-7.6.1.jar:/usr/share/java/cp-base-new/commons-cli-1.4.jar:/usr/share/java/cp-base-new/slf4j-reload4j-1.7.36.jar:/usr/share/java/cp-base-new/kafka-server-common-7.6.1-ccs.jar:/usr/share/java/cp-base-new/kafka-group-coordinator-7.6.1-ccs.jar:/usr/share/java/cp-base-new/common-utils-7.6.1.jar:/usr/share/java/cp-base-new/jackson-annotations-2.14.2.jar:/usr/share/java/cp-base-new/json-simple-1.1.1.jar:/usr/sh
are/java/cp-base-new/jackson-module-scala_2.13-2.14.2.jar:/usr/share/java/cp-base-new/scala-java8-compat_2.13-1.0.2.jar:/usr/share/java/cp-base-new/minimal-json-0.9.5.jar:/usr/share/java/cp-base-new/kafka-storage-7.6.1-ccs.jar:/usr/share/java/cp-base-new/zstd-jni-1.5.5-1.jar:/usr/share/java/cp-base-new/jackson-dataformat-yaml-2.14.2.jar:/usr/share/java/cp-base-new/slf4j-api-1.7.36.jar:/usr/share/java/cp-base-new/paranamer-2.8.jar:/usr/share/java/cp-base-new/commons-beanutils-1.9.4.jar:/usr/share/java/cp-base-new/jmx_prometheus_javaagent-0.18.0.jar:/usr/share/java/cp-base-new/reload4j-1.2.25.jar:/usr/share/java/cp-base-new/jackson-core-2.14.2.jar:/usr/share/java/cp-base-new/disk-usage-agent-7.6.1.jar:/usr/share/java/cp-base-new/commons-digester-2.1.jar:/usr/share/java/cp-base-new/argparse4j-0.7.0.jar:/usr/share/java/cp-base-new/audience-annotations-0.12.0.jar:/usr/share/java/cp-base-new/kafka-raft-7.6.1-ccs.jar:/usr/share/java/cp-base-new/gson-2.9.0.jar:/usr/share/java/cp-base-new/snakeyaml-2.0.jar:/usr/share/java/cp-base-new/jopt-simple-5.0.4.jar:/usr/share/java/cp-base-new/error_prone_annotations-2.10.0.jar:/usr/share/java/cp-base-new/lz4-java-1.8.0.jar:/usr/share/java/cp-base-new/logredactor-metrics-1.0.12.jar:/usr/share/java/cp-base-new/checker-qual-3.19.0.jar:/usr/share/java/cp-base-new/kafka-metadata-7.6.1-ccs.jar:/usr/share/java/cp-base-new/pcollections-4.0.1.jar:/usr/share/java/cp-base-new/commons-logging-1.2.jar:/usr/share/java/cp-base-new/scala-reflect-2.13.10.jar:/usr/share/java/cp-base-new/scala-collection-compat_2.13-2.10.0.jar:/usr/share/java/cp-base-new/metrics-core-2.2.0.jar:/usr/share/java/cp-base-new/zookeeper-jute-3.8.3.jar:/usr/share/java/cp-base-new/jackson-dataformat-csv-2.14.2.jar:/usr/share/java/cp-base-new/commons-collections-3.2.2.jar:/usr/share/java/cp-base-new/caffeine-2.9.3.jar:/usr/share/java/cp-base-new/jolokia-jvm-1.7.1.jar:/usr/share/java/cp-base-new/kafka-tools-api-7.6.1-ccs.jar:/usr/share/java/cp-base-new/zookeeper-3.8.3.jar:/usr/sh
are/java/cp-base-new/metrics-core-4.1.12.1.jar:/usr/share/java/cp-base-new/kafka_2.13-7.6.1-ccs.jar:/usr/share/java/cp-base-new/jackson-databind-2.14.2.jar:/usr/share/java/cp-base-new/snappy-java-1.1.10.5.jar:/usr/share/java/cp-base-new/jose4j-0.9.5.jar (org.apache.zookeeper.ZooKeeper) kafka | [2024-04-18 23:14:27,293] INFO Client environment:java.library.path=/usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.ZooKeeper) kafka | [2024-04-18 23:14:27,293] INFO Client environment:java.io.tmpdir=/tmp (org.apache.zookeeper.ZooKeeper) kafka | [2024-04-18 23:14:27,293] INFO Client environment:java.compiler= (org.apache.zookeeper.ZooKeeper) kafka | [2024-04-18 23:14:27,293] INFO Client environment:os.name=Linux (org.apache.zookeeper.ZooKeeper) kafka | [2024-04-18 23:14:27,293] INFO Client environment:os.arch=amd64 (org.apache.zookeeper.ZooKeeper) kafka | [2024-04-18 23:14:27,293] INFO Client environment:os.version=4.15.0-192-generic (org.apache.zookeeper.ZooKeeper) kafka | [2024-04-18 23:14:27,293] INFO Client environment:user.name=appuser (org.apache.zookeeper.ZooKeeper) kafka | [2024-04-18 23:14:27,293] INFO Client environment:user.home=/home/appuser (org.apache.zookeeper.ZooKeeper) kafka | [2024-04-18 23:14:27,293] INFO Client environment:user.dir=/home/appuser (org.apache.zookeeper.ZooKeeper) kafka | [2024-04-18 23:14:27,293] INFO Client environment:os.memory.free=493MB (org.apache.zookeeper.ZooKeeper) kafka | [2024-04-18 23:14:27,293] INFO Client environment:os.memory.max=8042MB (org.apache.zookeeper.ZooKeeper) kafka | [2024-04-18 23:14:27,294] INFO Client environment:os.memory.total=504MB (org.apache.zookeeper.ZooKeeper) kafka | [2024-04-18 23:14:27,296] INFO Initiating client connection, connectString=zookeeper:2181 sessionTimeout=40000 watcher=io.confluent.admin.utils.ZookeeperConnectionWatcher@b7f23d9 (org.apache.zookeeper.ZooKeeper) kafka | [2024-04-18 23:14:27,299] INFO Setting -D jdk.tls.rejectClientInitiatedRenegotiation=true to 
disable client-initiated TLS renegotiation (org.apache.zookeeper.common.X509Util) kafka | [2024-04-18 23:14:27,303] INFO jute.maxbuffer value is 1048575 Bytes (org.apache.zookeeper.ClientCnxnSocket) kafka | [2024-04-18 23:14:27,310] INFO zookeeper.request.timeout value is 0. feature enabled=false (org.apache.zookeeper.ClientCnxn) kafka | [2024-04-18 23:14:27,326] INFO Opening socket connection to server zookeeper/172.17.0.5:2181. (org.apache.zookeeper.ClientCnxn) kafka | [2024-04-18 23:14:27,327] INFO SASL config status: Will not attempt to authenticate using SASL (unknown error) (org.apache.zookeeper.ClientCnxn) kafka | [2024-04-18 23:14:27,335] INFO Socket connection established, initiating session, client: /172.17.0.9:44464, server: zookeeper/172.17.0.5:2181 (org.apache.zookeeper.ClientCnxn) kafka | [2024-04-18 23:14:27,372] INFO Session establishment complete on server zookeeper/172.17.0.5:2181, session id = 0x1000003d8b60000, negotiated timeout = 40000 (org.apache.zookeeper.ClientCnxn) kafka | [2024-04-18 23:14:27,492] INFO Session: 0x1000003d8b60000 closed (org.apache.zookeeper.ZooKeeper) kafka | [2024-04-18 23:14:27,492] INFO EventThread shut down for session: 0x1000003d8b60000 (org.apache.zookeeper.ClientCnxn) kafka | Using log4j config /etc/kafka/log4j.properties kafka | ===> Launching ... kafka | ===> Launching kafka ... 
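The ZooKeeper client lines above report session establishment with a session id and a negotiated timeout. A small regex sketch (hypothetical helper, not part of this job) shows how those two fields can be pulled out of such log lines when auditing connectivity:

```python
import re

# Extract session id and negotiated timeout from a ZooKeeper
# "Session establishment complete" log line.
SESSION_RE = re.compile(r"session id = (0x[0-9a-f]+), negotiated timeout = (\d+)")

def parse_session(line):
    m = SESSION_RE.search(line)
    if not m:
        return None
    return {"session_id": m.group(1), "timeout_ms": int(m.group(2))}

# Line copied from the kafka preflight check above.
line = ("[2024-04-18 23:14:27,372] INFO Session establishment complete on server "
        "zookeeper/172.17.0.5:2181, session id = 0x1000003d8b60000, "
        "negotiated timeout = 40000 (org.apache.zookeeper.ClientCnxn)")
print(parse_session(line))  # {'session_id': '0x1000003d8b60000', 'timeout_ms': 40000}
```

Note the negotiated timeout (40000 ms here) can differ from the requested `sessionTimeout`; the broker's own session a few lines below negotiates 18000 ms.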
kafka | [2024-04-18 23:14:28,190] INFO Registered kafka:type=kafka.Log4jController MBean (kafka.utils.Log4jControllerRegistration$) kafka | [2024-04-18 23:14:28,511] INFO Setting -D jdk.tls.rejectClientInitiatedRenegotiation=true to disable client-initiated TLS renegotiation (org.apache.zookeeper.common.X509Util) kafka | [2024-04-18 23:14:28,579] INFO Registered signal handlers for TERM, INT, HUP (org.apache.kafka.common.utils.LoggingSignalHandler) kafka | [2024-04-18 23:14:28,580] INFO starting (kafka.server.KafkaServer) kafka | [2024-04-18 23:14:28,580] INFO Connecting to zookeeper on zookeeper:2181 (kafka.server.KafkaServer) kafka | [2024-04-18 23:14:28,592] INFO [ZooKeeperClient Kafka server] Initializing a new session to zookeeper:2181. (kafka.zookeeper.ZooKeeperClient) kafka | [2024-04-18 23:14:28,596] INFO Client environment:zookeeper.version=3.8.4-9316c2a7a97e1666d8f4593f34dd6fc36ecc436c, built on 2024-02-12 22:16 UTC (org.apache.zookeeper.ZooKeeper) kafka | [2024-04-18 23:14:28,596] INFO Client environment:host.name=9fd3e2c8b405 (org.apache.zookeeper.ZooKeeper) kafka | [2024-04-18 23:14:28,596] INFO Client environment:java.version=11.0.22 (org.apache.zookeeper.ZooKeeper) kafka | [2024-04-18 23:14:28,596] INFO Client environment:java.vendor=Azul Systems, Inc. 
(org.apache.zookeeper.ZooKeeper) kafka | [2024-04-18 23:14:28,596] INFO Client environment:java.home=/usr/lib/jvm/java-11-zulu-openjdk-ca (org.apache.zookeeper.ZooKeeper) kafka | [2024-04-18 23:14:28,596] INFO Client environment:java.class.path=/usr/bin/../share/java/kafka/scala-logging_2.13-3.9.4.jar:/usr/bin/../share/java/kafka/jersey-common-2.39.1.jar:/usr/bin/../share/java/kafka/swagger-annotations-2.2.8.jar:/usr/bin/../share/java/kafka/connect-json-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/kafka-storage-api-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/commons-validator-1.7.jar:/usr/bin/../share/java/kafka/javax.servlet-api-3.1.0.jar:/usr/bin/../share/java/kafka/aopalliance-repackaged-2.6.1.jar:/usr/bin/../share/java/kafka/netty-transport-4.1.100.Final.jar:/usr/bin/../share/java/kafka/rocksdbjni-7.9.2.jar:/usr/bin/../share/java/kafka/jetty-http-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/kafka-log4j-appender-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jackson-annotations-2.13.5.jar:/usr/bin/../share/java/kafka/commons-io-2.11.0.jar:/usr/bin/../share/java/kafka/kafka-clients-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/javax.activation-api-1.2.0.jar:/usr/bin/../share/java/kafka/jetty-security-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/commons-cli-1.4.jar:/usr/bin/../share/java/kafka/jetty-server-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/slf4j-reload4j-1.7.36.jar:/usr/bin/../share/java/kafka/scala-reflect-2.13.11.jar:/usr/bin/../share/java/kafka/kafka-server-common-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/kafka-group-coordinator-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/netty-buffer-4.1.100.Final.jar:/usr/bin/../share/java/kafka/connect-runtime-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/trogdor-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/connect-api-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jakarta.ws.rs-api-2.1.6.jar:/usr/bin/../share/java/kafka/jakarta.annotation-api-1.3.5.jar:/usr/bin/../share/java/kafka/kafka-streams-7.6.1-ccs.jar:/usr/bin/
../share/java/kafka/scala-java8-compat_2.13-1.0.2.jar:/usr/bin/../share/java/kafka/javax.ws.rs-api-2.1.1.jar:/usr/bin/../share/java/kafka/zookeeper-jute-3.8.4.jar:/usr/bin/../share/java/kafka/netty-resolver-4.1.100.Final.jar:/usr/bin/../share/java/kafka/hk2-api-2.6.1.jar:/usr/bin/../share/java/kafka/kafka-storage-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/zstd-jni-1.5.5-1.jar:/usr/bin/../share/java/kafka/jackson-dataformat-csv-2.13.5.jar:/usr/bin/../share/java/kafka/kafka.jar:/usr/bin/../share/java/kafka/scala-library-2.13.11.jar:/usr/bin/../share/java/kafka/jakarta.inject-2.6.1.jar:/usr/bin/../share/java/kafka/jakarta.xml.bind-api-2.3.3.jar:/usr/bin/../share/java/kafka/jetty-continuation-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/connect-transforms-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jose4j-0.9.4.jar:/usr/bin/../share/java/kafka/hk2-locator-2.6.1.jar:/usr/bin/../share/java/kafka/reflections-0.10.2.jar:/usr/bin/../share/java/kafka/slf4j-api-1.7.36.jar:/usr/bin/../share/java/kafka/paranamer-2.8.jar:/usr/bin/../share/java/kafka/commons-beanutils-1.9.4.jar:/usr/bin/../share/java/kafka/jaxb-api-2.3.1.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-2.39.1.jar:/usr/bin/../share/java/kafka/hk2-utils-2.6.1.jar:/usr/bin/../share/java/kafka/jackson-module-scala_2.13-2.13.5.jar:/usr/bin/../share/java/kafka/reload4j-1.2.25.jar:/usr/bin/../share/java/kafka/connect-mirror-client-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jetty-servlet-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/jackson-core-2.13.5.jar:/usr/bin/../share/java/kafka/jersey-hk2-2.39.1.jar:/usr/bin/../share/java/kafka/jackson-databind-2.13.5.jar:/usr/bin/../share/java/kafka/jetty-servlets-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/jersey-client-2.39.1.jar:/usr/bin/../share/java/kafka/osgi-resource-locator-1.0.3.jar:/usr/bin/../share/java/kafka/commons-digester-2.1.jar:/usr/bin/../share/java/kafka/netty-transport-native-epoll-4.1.100.Final.jar:/usr/bin/../share/java/kafka/argparse4j
-0.7.0.jar:/usr/bin/../share/java/kafka/connect-mirror-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jackson-datatype-jdk8-2.13.5.jar:/usr/bin/../share/java/kafka/audience-annotations-0.12.0.jar:/usr/bin/../share/java/kafka/jackson-module-jaxb-annotations-2.13.5.jar:/usr/bin/../share/java/kafka/kafka-raft-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/javax.annotation-api-1.3.2.jar:/usr/bin/../share/java/kafka/maven-artifact-3.8.8.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-json-provider-2.13.5.jar:/usr/bin/../share/java/kafka/jakarta.validation-api-2.0.2.jar:/usr/bin/../share/java/kafka/zookeeper-3.8.4.jar:/usr/bin/../share/java/kafka/jersey-server-2.39.1.jar:/usr/bin/../share/java/kafka/commons-lang3-3.8.1.jar:/usr/bin/../share/java/kafka/jopt-simple-5.0.4.jar:/usr/bin/../share/java/kafka/error_prone_annotations-2.10.0.jar:/usr/bin/../share/java/kafka/lz4-java-1.8.0.jar:/usr/bin/../share/java/kafka/jetty-util-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/kafka-tools-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jakarta.activation-api-1.2.2.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-core-2.39.1.jar:/usr/bin/../share/java/kafka/checker-qual-3.19.0.jar:/usr/bin/../share/java/kafka/kafka-metadata-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/pcollections-4.0.1.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-base-2.13.5.jar:/usr/bin/../share/java/kafka/commons-logging-1.2.jar:/usr/bin/../share/java/kafka/jsr305-3.0.2.jar:/usr/bin/../share/java/kafka/jetty-io-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/netty-codec-4.1.100.Final.jar:/usr/bin/../share/java/kafka/netty-transport-native-unix-common-4.1.100.Final.jar:/usr/bin/../share/java/kafka/scala-collection-compat_2.13-2.10.0.jar:/usr/bin/../share/java/kafka/metrics-core-2.2.0.jar:/usr/bin/../share/java/kafka/kafka-streams-test-utils-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/kafka-streams-examples-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/netty-handler-4.1.100.Final.jar:/usr/bin/../share/java/kafka/commo
ns-collections-3.2.2.jar:/usr/bin/../share/java/kafka/javassist-3.29.2-GA.jar:/usr/bin/../share/java/kafka/caffeine-2.9.3.jar:/usr/bin/../share/java/kafka/plexus-utils-3.3.1.jar:/usr/bin/../share/java/kafka/kafka-tools-api-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/activation-1.1.1.jar:/usr/bin/../share/java/kafka/netty-common-4.1.100.Final.jar:/usr/bin/../share/java/kafka/kafka-streams-scala_2.13-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/metrics-core-4.1.12.1.jar:/usr/bin/../share/java/kafka/jline-3.25.1.jar:/usr/bin/../share/java/kafka/jetty-client-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/kafka_2.13-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/connect-basic-auth-extension-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jetty-util-ajax-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/snappy-java-1.1.10.5.jar:/usr/bin/../share/java/kafka/kafka-shell-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/netty-transport-classes-epoll-4.1.100.Final.jar:/usr/bin/../share/java/confluent-telemetry/* (org.apache.zookeeper.ZooKeeper) kafka | [2024-04-18 23:14:28,596] INFO Client environment:java.library.path=/usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.ZooKeeper) kafka | [2024-04-18 23:14:28,596] INFO Client environment:java.io.tmpdir=/tmp (org.apache.zookeeper.ZooKeeper) kafka | [2024-04-18 23:14:28,596] INFO Client environment:java.compiler= (org.apache.zookeeper.ZooKeeper) kafka | [2024-04-18 23:14:28,596] INFO Client environment:os.name=Linux (org.apache.zookeeper.ZooKeeper) kafka | [2024-04-18 23:14:28,596] INFO Client environment:os.arch=amd64 (org.apache.zookeeper.ZooKeeper) kafka | [2024-04-18 23:14:28,596] INFO Client environment:os.version=4.15.0-192-generic (org.apache.zookeeper.ZooKeeper) kafka | [2024-04-18 23:14:28,596] INFO Client environment:user.name=appuser (org.apache.zookeeper.ZooKeeper) kafka | [2024-04-18 23:14:28,596] INFO Client environment:user.home=/home/appuser (org.apache.zookeeper.ZooKeeper) kafka | [2024-04-18 
23:14:28,596] INFO Client environment:user.dir=/home/appuser (org.apache.zookeeper.ZooKeeper) kafka | [2024-04-18 23:14:28,596] INFO Client environment:os.memory.free=1008MB (org.apache.zookeeper.ZooKeeper) kafka | [2024-04-18 23:14:28,596] INFO Client environment:os.memory.max=1024MB (org.apache.zookeeper.ZooKeeper) kafka | [2024-04-18 23:14:28,597] INFO Client environment:os.memory.total=1024MB (org.apache.zookeeper.ZooKeeper) kafka | [2024-04-18 23:14:28,598] INFO Initiating client connection, connectString=zookeeper:2181 sessionTimeout=18000 watcher=kafka.zookeeper.ZooKeeperClient$ZooKeeperClientWatcher$@66746f57 (org.apache.zookeeper.ZooKeeper) kafka | [2024-04-18 23:14:28,602] INFO jute.maxbuffer value is 4194304 Bytes (org.apache.zookeeper.ClientCnxnSocket) kafka | [2024-04-18 23:14:28,608] INFO zookeeper.request.timeout value is 0. feature enabled=false (org.apache.zookeeper.ClientCnxn) kafka | [2024-04-18 23:14:28,609] INFO [ZooKeeperClient Kafka server] Waiting until connected. (kafka.zookeeper.ZooKeeperClient) kafka | [2024-04-18 23:14:28,614] INFO Opening socket connection to server zookeeper/172.17.0.5:2181. (org.apache.zookeeper.ClientCnxn) kafka | [2024-04-18 23:14:28,623] INFO Socket connection established, initiating session, client: /172.17.0.9:44466, server: zookeeper/172.17.0.5:2181 (org.apache.zookeeper.ClientCnxn) kafka | [2024-04-18 23:14:28,635] INFO Session establishment complete on server zookeeper/172.17.0.5:2181, session id = 0x1000003d8b60001, negotiated timeout = 18000 (org.apache.zookeeper.ClientCnxn) kafka | [2024-04-18 23:14:28,639] INFO [ZooKeeperClient Kafka server] Connected. 
(kafka.zookeeper.ZooKeeperClient)
kafka | [2024-04-18 23:14:28,941] INFO Cluster ID = 3CcxO9QMSqWFRVbl82UfdQ (kafka.server.KafkaServer)
kafka | [2024-04-18 23:14:28,946] WARN No meta.properties file under dir /var/lib/kafka/data/meta.properties (kafka.server.BrokerMetadataCheckpoint)
kafka | [2024-04-18 23:14:28,997] INFO KafkaConfig values:
kafka | 	advertised.listeners = PLAINTEXT://kafka:9092,PLAINTEXT_HOST://localhost:29092
kafka | 	alter.config.policy.class.name = null
kafka | 	alter.log.dirs.replication.quota.window.num = 11
kafka | 	alter.log.dirs.replication.quota.window.size.seconds = 1
kafka | 	authorizer.class.name =
kafka | 	auto.create.topics.enable = true
kafka | 	auto.include.jmx.reporter = true
kafka | 	auto.leader.rebalance.enable = true
kafka | 	background.threads = 10
kafka | 	broker.heartbeat.interval.ms = 2000
kafka | 	broker.id = 1
kafka | 	broker.id.generation.enable = true
kafka | 	broker.rack = null
kafka | 	broker.session.timeout.ms = 9000
kafka | 	client.quota.callback.class = null
kafka | 	compression.type = producer
kafka | 	connection.failed.authentication.delay.ms = 100
kafka | 	connections.max.idle.ms = 600000
kafka | 	connections.max.reauth.ms = 0
kafka | 	control.plane.listener.name = null
kafka | 	controlled.shutdown.enable = true
kafka | 	controlled.shutdown.max.retries = 3
kafka | 	controlled.shutdown.retry.backoff.ms = 5000
kafka | 	controller.listener.names = null
kafka | 	controller.quorum.append.linger.ms = 25
kafka | 	controller.quorum.election.backoff.max.ms = 1000
kafka | 	controller.quorum.election.timeout.ms = 1000
kafka | 	controller.quorum.fetch.timeout.ms = 2000
kafka | 	controller.quorum.request.timeout.ms = 2000
kafka | 	controller.quorum.retry.backoff.ms = 20
kafka | 	controller.quorum.voters = []
kafka | 	controller.quota.window.num = 11
kafka | 	controller.quota.window.size.seconds = 1
kafka | 	controller.socket.timeout.ms = 30000
kafka | 	create.topic.policy.class.name = null
kafka | 	default.replication.factor = 1
kafka | 	delegation.token.expiry.check.interval.ms = 3600000
kafka | 	delegation.token.expiry.time.ms = 86400000
kafka | 	delegation.token.master.key = null
kafka | 	delegation.token.max.lifetime.ms = 604800000
kafka | 	delegation.token.secret.key = null
kafka | 	delete.records.purgatory.purge.interval.requests = 1
kafka | 	delete.topic.enable = true
kafka | 	early.start.listeners = null
kafka | 	fetch.max.bytes = 57671680
kafka | 	fetch.purgatory.purge.interval.requests = 1000
kafka | 	group.consumer.assignors = [org.apache.kafka.coordinator.group.assignor.RangeAssignor]
kafka | 	group.consumer.heartbeat.interval.ms = 5000
kafka | 	group.consumer.max.heartbeat.interval.ms = 15000
kafka | 	group.consumer.max.session.timeout.ms = 60000
kafka | 	group.consumer.max.size = 2147483647
kafka | 	group.consumer.min.heartbeat.interval.ms = 5000
kafka | 	group.consumer.min.session.timeout.ms = 45000
kafka | 	group.consumer.session.timeout.ms = 45000
kafka | 	group.coordinator.new.enable = false
kafka | 	group.coordinator.threads = 1
kafka | 	group.initial.rebalance.delay.ms = 3000
kafka | 	group.max.session.timeout.ms = 1800000
kafka | 	group.max.size = 2147483647
kafka | 	group.min.session.timeout.ms = 6000
kafka | 	initial.broker.registration.timeout.ms = 60000
kafka | 	inter.broker.listener.name = PLAINTEXT
kafka | 	inter.broker.protocol.version = 3.6-IV2
kafka | 	kafka.metrics.polling.interval.secs = 10
kafka | 	kafka.metrics.reporters = []
kafka | 	leader.imbalance.check.interval.seconds = 300
kafka | 	leader.imbalance.per.broker.percentage = 10
kafka | 	listener.security.protocol.map = PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
kafka | 	listeners = PLAINTEXT://0.0.0.0:9092,PLAINTEXT_HOST://0.0.0.0:29092
kafka | 	log.cleaner.backoff.ms = 15000
kafka | 	log.cleaner.dedupe.buffer.size = 134217728
kafka | 	log.cleaner.delete.retention.ms = 86400000
kafka | 	log.cleaner.enable = true
kafka | 	log.cleaner.io.buffer.load.factor = 0.9
kafka | 	log.cleaner.io.buffer.size = 524288
kafka | 	log.cleaner.io.max.bytes.per.second = 1.7976931348623157E308
kafka | 	log.cleaner.max.compaction.lag.ms = 9223372036854775807
kafka | 	log.cleaner.min.cleanable.ratio = 0.5
kafka | 	log.cleaner.min.compaction.lag.ms = 0
kafka | 	log.cleaner.threads = 1
kafka | 	log.cleanup.policy = [delete]
kafka | 	log.dir = /tmp/kafka-logs
kafka | 	log.dirs = /var/lib/kafka/data
kafka | 	log.flush.interval.messages = 9223372036854775807
kafka | 	log.flush.interval.ms = null
kafka | 	log.flush.offset.checkpoint.interval.ms = 60000
kafka | 	log.flush.scheduler.interval.ms = 9223372036854775807
kafka | 	log.flush.start.offset.checkpoint.interval.ms = 60000
kafka | 	log.index.interval.bytes = 4096
kafka | 	log.index.size.max.bytes = 10485760
kafka | 	log.local.retention.bytes = -2
kafka | 	log.local.retention.ms = -2
kafka | 	log.message.downconversion.enable = true
kafka | 	log.message.format.version = 3.0-IV1
kafka | 	log.message.timestamp.after.max.ms = 9223372036854775807
kafka | 	log.message.timestamp.before.max.ms = 9223372036854775807
kafka | 	log.message.timestamp.difference.max.ms = 9223372036854775807
kafka | 	log.message.timestamp.type = CreateTime
kafka | 	log.preallocate = false
kafka | 	log.retention.bytes = -1
kafka | 	log.retention.check.interval.ms = 300000
kafka | 	log.retention.hours = 168
kafka | 	log.retention.minutes = null
kafka | 	log.retention.ms = null
kafka | 	log.roll.hours = 168
kafka | 	log.roll.jitter.hours = 0
kafka | 	log.roll.jitter.ms = null
kafka | 	log.roll.ms = null
kafka | 	log.segment.bytes = 1073741824
kafka | 	log.segment.delete.delay.ms = 60000
kafka | 	max.connection.creation.rate = 2147483647
kafka | 	max.connections = 2147483647
kafka | 	max.connections.per.ip = 2147483647
kafka | 	max.connections.per.ip.overrides =
kafka | 	max.incremental.fetch.session.cache.slots = 1000
kafka | 	message.max.bytes = 1048588
kafka | 	metadata.log.dir = null
kafka | 	metadata.log.max.record.bytes.between.snapshots = 20971520
kafka | 	metadata.log.max.snapshot.interval.ms = 3600000
kafka | 	metadata.log.segment.bytes = 1073741824
kafka | 	metadata.log.segment.min.bytes = 8388608
kafka | 	metadata.log.segment.ms = 604800000
kafka | 	metadata.max.idle.interval.ms = 500
kafka | 	metadata.max.retention.bytes = 104857600
kafka | 	metadata.max.retention.ms = 604800000
kafka | 	metric.reporters = []
kafka | 	metrics.num.samples = 2
kafka | 	metrics.recording.level = INFO
kafka | 	metrics.sample.window.ms = 30000
kafka | 	min.insync.replicas = 1
kafka | 	node.id = 1
kafka | 	num.io.threads = 8
kafka | 	num.network.threads = 3
kafka | 	num.partitions = 1
kafka | 	num.recovery.threads.per.data.dir = 1
kafka | 	num.replica.alter.log.dirs.threads = null
kafka | 	num.replica.fetchers = 1
kafka | 	offset.metadata.max.bytes = 4096
kafka | 	offsets.commit.required.acks = -1
kafka | 	offsets.commit.timeout.ms = 5000
kafka | 	offsets.load.buffer.size = 5242880
kafka | 	offsets.retention.check.interval.ms = 600000
kafka | 	offsets.retention.minutes = 10080
kafka | 	offsets.topic.compression.codec = 0
kafka | 	offsets.topic.num.partitions = 50
kafka | 	offsets.topic.replication.factor = 1
kafka | 	offsets.topic.segment.bytes = 104857600
kafka | 	password.encoder.cipher.algorithm = AES/CBC/PKCS5Padding
kafka | 	password.encoder.iterations = 4096
kafka | 	password.encoder.key.length = 128
kafka | 	password.encoder.keyfactory.algorithm = null
kafka | 	password.encoder.old.secret = null
kafka | 	password.encoder.secret = null
kafka | 	principal.builder.class = class org.apache.kafka.common.security.authenticator.DefaultKafkaPrincipalBuilder
kafka | 	process.roles = []
kafka | 	producer.id.expiration.check.interval.ms = 600000
kafka | 	producer.id.expiration.ms = 86400000
kafka | 	producer.purgatory.purge.interval.requests = 1000
kafka | 	queued.max.request.bytes = -1
kafka | 	queued.max.requests = 500
mariadb | 2024-04-18 23:14:16+00:00 [Note] [Entrypoint]: Entrypoint script for MariaDB Server 1:10.10.2+maria~ubu2204 started.
policy-apex-pdp | Waiting for mariadb port 3306...
kafka | quota.window.num = 11
grafana | logger=migrator t=2024-04-18T23:14:22.069335684Z level=info msg="Migration successfully executed" id="Add is_disabled column to user" duration=1.247791ms
grafana | logger=migrator t=2024-04-18T23:14:22.074188232Z level=info msg="Executing migration" id="Add index user.login/user.email"
policy-apex-pdp | mariadb (172.17.0.2:3306) open
policy-db-migrator | Waiting for mariadb port 3306...
kafka | quota.window.size.seconds = 1
policy-api | Waiting for mariadb port 3306...
grafana | logger=migrator t=2024-04-18T23:14:22.075167158Z level=info msg="Migration successfully executed" id="Add index user.login/user.email" duration=977.516µs
mariadb | 2024-04-18 23:14:16+00:00 [Note] [Entrypoint]: Switching to dedicated user 'mysql'
policy-apex-pdp | Waiting for kafka port 9092...
policy-pap | Waiting for mariadb port 3306...
policy-db-migrator | nc: connect to mariadb (172.17.0.2) port 3306 (tcp) failed: Connection refused
kafka | remote.log.index.file.cache.total.size.bytes = 1073741824
policy-api | mariadb (172.17.0.2:3306) open
policy-api | Waiting for policy-db-migrator port 6824...
grafana | logger=migrator t=2024-04-18T23:14:22.079366649Z level=info msg="Executing migration" id="Add is_service_account column to user"
simulator | Policy simulator config file: /opt/app/policy/simulators/etc/mounted/simParameters.json
zookeeper | ===> User
mariadb | 2024-04-18 23:14:16+00:00 [Note] [Entrypoint]: Entrypoint script for MariaDB Server 1:10.10.2+maria~ubu2204 started.
policy-apex-pdp | kafka (172.17.0.9:9092) open
policy-pap | mariadb (172.17.0.2:3306) open
policy-db-migrator | nc: connect to mariadb (172.17.0.2) port 3306 (tcp) failed: Connection refused
kafka | remote.log.manager.task.interval.ms = 30000
policy-api | policy-db-migrator (172.17.0.6:6824) open
policy-api | Policy api config file: /opt/app/policy/api/etc/apiParameters.yaml
grafana | logger=migrator t=2024-04-18T23:14:22.081153001Z level=info msg="Migration successfully executed" id="Add is_service_account column to user" duration=1.785602ms
simulator | overriding logback.xml
zookeeper | uid=1000(appuser) gid=1000(appuser) groups=1000(appuser)
mariadb | 2024-04-18 23:14:16+00:00 [Note] [Entrypoint]: Initializing database files
policy-apex-pdp | Waiting for pap port 6969...
policy-pap | Waiting for kafka port 9092...
policy-db-migrator | nc: connect to mariadb (172.17.0.2) port 3306 (tcp) failed: Connection refused
kafka | remote.log.manager.task.retry.backoff.max.ms = 30000
policy-api |
policy-api | . ____ _ __ _ _
grafana | logger=migrator t=2024-04-18T23:14:22.089298648Z level=info msg="Executing migration" id="Update is_service_account column to nullable"
simulator | 2024-04-18 23:14:19,090 INFO replacing 'HOST_NAME' with simulator in /opt/app/policy/simulators/etc/mounted/simParameters.json
zookeeper | ===> Configuring ...
mariadb | 2024-04-18 23:14:17 0 [Warning] mariadbd: io_uring_queue_init() failed with ENOSYS: check seccomp filters, and the kernel version (newer than 5.1 required)
policy-pap | kafka (172.17.0.9:9092) open
policy-db-migrator | nc: connect to mariadb (172.17.0.2) port 3306 (tcp) failed: Connection refused
kafka | remote.log.manager.task.retry.backoff.ms = 500
policy-api | /\\ / ___'_ __ _ _(_)_ __ __ _ \ \ \ \
prometheus | ts=2024-04-18T23:14:19.736Z caller=main.go:573 level=info msg="No time or size retention was set so using the default time retention" duration=15d
grafana | logger=migrator t=2024-04-18T23:14:22.102495885Z level=info msg="Migration successfully executed" id="Update is_service_account column to nullable" duration=13.197527ms
simulator | 2024-04-18 23:14:19,158 INFO org.onap.policy.models.simulators starting
zookeeper | ===> Running preflight checks ...
mariadb | 2024-04-18 23:14:17 0 [Warning] InnoDB: liburing disabled: falling back to innodb_use_native_aio=OFF
policy-pap | Waiting for api port 6969...
policy-db-migrator | nc: connect to mariadb (172.17.0.2) port 3306 (tcp) failed: Connection refused
kafka | remote.log.manager.task.retry.jitter = 0.2
policy-api | ( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \
prometheus | ts=2024-04-18T23:14:19.736Z caller=main.go:617 level=info msg="Starting Prometheus Server" mode=server version="(version=2.51.2, branch=HEAD, revision=b4c0ab52c3e9b940ab803581ddae9b3d9a452337)"
grafana | logger=migrator t=2024-04-18T23:14:22.114738997Z level=info msg="Executing migration" id="Add uid column to user"
simulator | 2024-04-18 23:14:19,159 INFO org.onap.policy.models.simulators starting CDS gRPC Server Properties
zookeeper | ===> Check if /var/lib/zookeeper/data is writable ...
mariadb | 2024-04-18 23:14:17 0 [Warning] 'default-authentication-plugin' is MySQL 5.6 / 5.7 compatible option. To be implemented in later versions.
policy-pap | api (172.17.0.7:6969) open
policy-db-migrator | nc: connect to mariadb (172.17.0.2) port 3306 (tcp) failed: Connection refused
kafka | remote.log.manager.thread.pool.size = 10
policy-api | \\/ ___)| |_)| | | | | || (_| | ) ) ) )
prometheus | ts=2024-04-18T23:14:19.736Z caller=main.go:622 level=info build_context="(go=go1.22.2, platform=linux/amd64, user=root@b63f02a423d9, date=20240410-14:05:54, tags=netgo,builtinassets,stringlabels)"
grafana | logger=migrator t=2024-04-18T23:14:22.116500048Z level=info msg="Migration successfully executed" id="Add uid column to user" duration=1.760911ms
simulator | 2024-04-18 23:14:19,346 INFO org.onap.policy.models.simulators starting org.onap.policy.simulators.AaiSimulatorJaxRs_RESOURCE_LOCATION
zookeeper | ===> Check if /var/lib/zookeeper/log is writable ...
mariadb |
policy-db-migrator | Connection to mariadb (172.17.0.2) 3306 port [tcp/mysql] succeeded!
policy-api | ' |____| .__|_| |_|_| |_\__, | / / / /
prometheus | ts=2024-04-18T23:14:19.736Z caller=main.go:623 level=info host_details="(Linux 4.15.0-192-generic #203-Ubuntu SMP Wed Aug 10 17:40:03 UTC 2022 x86_64 prometheus (none))"
grafana | logger=migrator t=2024-04-18T23:14:22.123789406Z level=info msg="Executing migration" id="Update uid column values for users"
simulator | 2024-04-18 23:14:19,347 INFO org.onap.policy.models.simulators starting A&AI simulator
zookeeper | ===> Launching ...
mariadb |
mariadb | PLEASE REMEMBER TO SET A PASSWORD FOR THE MariaDB root USER !
mariadb | To do so, start the server, then issue the following command:
policy-apex-pdp | pap (172.17.0.10:6969) open
policy-db-migrator | 321 blocks
policy-api | =========|_|==============|___/=/_/_/_/
prometheus | ts=2024-04-18T23:14:19.736Z caller=main.go:624 level=info fd_limits="(soft=1048576, hard=1048576)"
grafana | logger=migrator t=2024-04-18T23:14:22.123964766Z level=info msg="Migration successfully executed" id="Update uid column values for users" duration=175.39µs
simulator | 2024-04-18 23:14:19,447 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-2e61d218==org.glassfish.jersey.servlet.ServletContainer@60d118f1{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=A&AI simulator, host=0.0.0.0, port=6666, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@64a8c844{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@3f6db3fb{/,null,STOPPED}, connector=A&AI simulator@6f152006{HTTP/1.1, (http/1.1)}{0.0.0.0:6666}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-2e61d218==org.glassfish.jersey.servlet.ServletContainer@60d118f1{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: WAITED-START
zookeeper | ===> Launching zookeeper ...
policy-pap | Policy pap config file: /opt/app/policy/pap/etc/papParameters.yaml
mariadb |
mariadb | '/usr/bin/mysql_secure_installation'
policy-apex-pdp | apexApps.sh: running application 'onappf' with command 'java -Dlogback.configurationFile=/opt/app/policy/apex-pdp/etc/logback.xml -cp /opt/app/policy/apex-pdp/etc:/opt/app/policy/apex-pdp/etc/hazelcast:/opt/app/policy/apex-pdp/etc/infinispan:/opt/app/policy/apex-pdp/lib/* -Djavax.net.ssl.keyStore=/opt/app/policy/apex-pdp/etc/ssl/policy-keystore -Djavax.net.ssl.keyStorePassword=Pol1cy_0nap -Djavax.net.ssl.trustStore=/opt/app/policy/apex-pdp/etc/ssl/policy-truststore -Djavax.net.ssl.trustStorePassword=Pol1cy_0nap -Dlogback.configurationFile=/opt/app/policy/apex-pdp/etc/logback.xml -Dhazelcast.config=/opt/app/policy/apex-pdp/etc/hazelcast.xml -Dhazelcast.mancenter.enabled=false org.onap.policy.apex.services.onappf.ApexStarterMain -c /opt/app/policy/apex-pdp/etc/onappf/config/OnapPfConfig.json'
policy-db-migrator | Preparing upgrade release version: 0800
policy-api | :: Spring Boot :: (v3.1.10)
prometheus | ts=2024-04-18T23:14:19.736Z caller=main.go:625 level=info vm_limits="(soft=unlimited, hard=unlimited)"
grafana | logger=migrator t=2024-04-18T23:14:22.128243371Z level=info msg="Executing migration" id="Add unique index user_uid"
simulator | 2024-04-18 23:14:19,457 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-2e61d218==org.glassfish.jersey.servlet.ServletContainer@60d118f1{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=A&AI simulator, host=0.0.0.0, port=6666, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@64a8c844{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@3f6db3fb{/,null,STOPPED}, connector=A&AI simulator@6f152006{HTTP/1.1, (http/1.1)}{0.0.0.0:6666}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-2e61d218==org.glassfish.jersey.servlet.ServletContainer@60d118f1{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING
zookeeper | [2024-04-18 23:14:24,543] INFO Reading configuration from: /etc/kafka/zookeeper.properties (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
policy-pap | PDP group configuration file: /opt/app/policy/pap/etc/mounted/groups.json
mariadb |
mariadb | which will also give you the option of removing the test
policy-apex-pdp | [2024-04-18T23:14:55.895+00:00|INFO|ApexStarterMain|main] In ApexStarter with parameters [-c, /opt/app/policy/apex-pdp/etc/onappf/config/OnapPfConfig.json]
policy-apex-pdp | [2024-04-18T23:14:56.042+00:00|INFO|ConsumerConfig|main] ConsumerConfig values:
policy-api |
prometheus | ts=2024-04-18T23:14:19.742Z caller=web.go:568 level=info component=web msg="Start listening for connections" address=0.0.0.0:9090
grafana | logger=migrator t=2024-04-18T23:14:22.129077449Z level=info msg="Migration successfully executed" id="Add unique index user_uid" duration=828.578µs
simulator | 2024-04-18 23:14:19,460 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-2e61d218==org.glassfish.jersey.servlet.ServletContainer@60d118f1{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=A&AI simulator, host=0.0.0.0, port=6666, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@64a8c844{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@3f6db3fb{/,null,STOPPED}, connector=A&AI simulator@6f152006{HTTP/1.1, (http/1.1)}{0.0.0.0:6666}, jettyThread=Thread[A&AI simulator-6666,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-2e61d218==org.glassfish.jersey.servlet.ServletContainer@60d118f1{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING
zookeeper | [2024-04-18 23:14:24,551] INFO clientPortAddress is 0.0.0.0:2181 (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
policy-pap |
mariadb | databases and anonymous user created by default. This is
mariadb | strongly recommended for production servers.
policy-db-migrator | Preparing upgrade release version: 0900
policy-apex-pdp | allow.auto.create.topics = true
policy-api | [2024-04-18T23:14:31.694+00:00|INFO|Version|background-preinit] HV000001: Hibernate Validator 8.0.1.Final
prometheus | ts=2024-04-18T23:14:19.743Z caller=main.go:1129 level=info msg="Starting TSDB ..."
grafana | logger=migrator t=2024-04-18T23:14:22.134080066Z level=info msg="Executing migration" id="update login field with orgid to allow for multiple service accounts with same name across orgs"
simulator | 2024-04-18 23:14:19,464 INFO jetty-11.0.20; built: 2024-01-29T21:04:22.394Z; git: 922f8dc188f7011e60d0361de585fd4ac4d63064; jvm 17.0.10+7-alpine-r0
zookeeper | [2024-04-18 23:14:24,551] INFO secureClientPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
policy-pap | . ____ _ __ _ _
kafka | remote.log.metadata.custom.metadata.max.bytes = 128
mariadb |
policy-db-migrator | Preparing upgrade release version: 1000
policy-apex-pdp | auto.commit.interval.ms = 5000
policy-api | [2024-04-18T23:14:31.759+00:00|INFO|PolicyApiApplication|main] Starting PolicyApiApplication using Java 17.0.10 with PID 23 (/app/api.jar started by policy in /opt/app/policy/api/bin)
prometheus | ts=2024-04-18T23:14:19.744Z caller=tls_config.go:313 level=info component=web msg="Listening on" address=[::]:9090
grafana | logger=migrator t=2024-04-18T23:14:22.134595925Z level=info msg="Migration successfully executed" id="update login field with orgid to allow for multiple service accounts with same name across orgs" duration=509.149µs
simulator | 2024-04-18 23:14:19,523 INFO Session workerName=node0
zookeeper | [2024-04-18 23:14:24,551] INFO observerMasterPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
policy-pap | /\\ / ___'_ __ _ _(_)_ __ __ _ \ \ \ \
kafka | remote.log.metadata.manager.class.name = org.apache.kafka.server.log.remote.metadata.storage.TopicBasedRemoteLogMetadataManager
mariadb | See the MariaDB Knowledgebase at https://mariadb.com/kb
policy-db-migrator | Preparing upgrade release version: 1100
policy-apex-pdp | auto.include.jmx.reporter = true
policy-api | [2024-04-18T23:14:31.760+00:00|INFO|PolicyApiApplication|main] No active profile set, falling back to 1 default profile: "default"
prometheus | ts=2024-04-18T23:14:19.744Z caller=tls_config.go:316 level=info component=web msg="TLS is disabled." http2=false address=[::]:9090
grafana | logger=migrator t=2024-04-18T23:14:22.138257395Z level=info msg="Executing migration" id="create temp user table v1-7"
simulator | 2024-04-18 23:14:20,080 INFO Using GSON for REST calls
zookeeper | [2024-04-18 23:14:24,551] INFO metricsProvider.className is org.apache.zookeeper.metrics.impl.DefaultMetricsProvider (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
policy-pap | ( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \
kafka | remote.log.metadata.manager.class.path = null
mariadb |
policy-db-migrator | Preparing upgrade release version: 1200
policy-apex-pdp | auto.offset.reset = latest
policy-api | [2024-04-18T23:14:33.688+00:00|INFO|RepositoryConfigurationDelegate|main] Bootstrapping Spring Data JPA repositories in DEFAULT mode.
prometheus | ts=2024-04-18T23:14:19.747Z caller=head.go:616 level=info component=tsdb msg="Replaying on-disk memory mappable chunks if any"
grafana | logger=migrator t=2024-04-18T23:14:22.139230691Z level=info msg="Migration successfully executed" id="create temp user table v1-7" duration=973.006µs
simulator | 2024-04-18 23:14:20,173 INFO Started o.e.j.s.ServletContextHandler@3f6db3fb{/,null,AVAILABLE}
zookeeper | [2024-04-18 23:14:24,553] INFO autopurge.snapRetainCount set to 3 (org.apache.zookeeper.server.DatadirCleanupManager)
policy-pap | \\/ ___)| |_)| | | | | || (_| | ) ) ) )
kafka | remote.log.metadata.manager.impl.prefix = rlmm.config.
mariadb | Please report any problems at https://mariadb.org/jira
policy-db-migrator | Preparing upgrade release version: 1300
policy-apex-pdp | bootstrap.servers = [kafka:9092]
policy-api | [2024-04-18T23:14:33.770+00:00|INFO|RepositoryConfigurationDelegate|main] Finished Spring Data repository scanning in 72 ms. Found 6 JPA repository interfaces.
grafana | logger=migrator t=2024-04-18T23:14:22.143622483Z level=info msg="Executing migration" id="create index IDX_temp_user_email - v1-7"
simulator | 2024-04-18 23:14:20,182 INFO Started A&AI simulator@6f152006{HTTP/1.1, (http/1.1)}{0.0.0.0:6666}
zookeeper | [2024-04-18 23:14:24,553] INFO autopurge.purgeInterval set to 0 (org.apache.zookeeper.server.DatadirCleanupManager)
policy-pap | ' |____| .__|_| |_|_| |_\__, | / / / /
kafka | remote.log.metadata.manager.listener.name = null
mariadb |
policy-db-migrator | Done
policy-apex-pdp | check.crcs = true
prometheus | ts=2024-04-18T23:14:19.748Z caller=head.go:698 level=info component=tsdb msg="On-disk memory mappable chunks replay completed" duration=2.911µs
policy-api | [2024-04-18T23:14:34.219+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.api.main.exception.ServiceExceptionHandler
grafana | logger=migrator t=2024-04-18T23:14:22.144706445Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_email - v1-7" duration=1.083522ms
simulator | 2024-04-18 23:14:20,191 INFO Started Server@64a8c844{STARTING}[11.0.20,sto=0] @1552ms
zookeeper | [2024-04-18 23:14:24,553] INFO Purge task is not scheduled. (org.apache.zookeeper.server.DatadirCleanupManager)
policy-pap | =========|_|==============|___/=/_/_/_/
kafka | remote.log.reader.max.pending.tasks = 100
mariadb | The latest information about MariaDB is available at https://mariadb.org/.
policy-db-migrator | name version
policy-apex-pdp | client.dns.lookup = use_all_dns_ips
prometheus | ts=2024-04-18T23:14:19.748Z caller=head.go:706 level=info component=tsdb msg="Replaying WAL, this may take a while"
policy-api | [2024-04-18T23:14:34.220+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.api.main.exception.ServiceExceptionHandler
grafana | logger=migrator t=2024-04-18T23:14:22.152136951Z level=info msg="Executing migration" id="create index IDX_temp_user_org_id - v1-7"
simulator | 2024-04-18 23:14:20,192 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-2e61d218==org.glassfish.jersey.servlet.ServletContainer@60d118f1{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}}, swaggerId=null, toString()=JettyServletServer(name=A&AI simulator, host=0.0.0.0, port=6666, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@64a8c844{STARTED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@3f6db3fb{/,null,AVAILABLE}, connector=A&AI simulator@6f152006{HTTP/1.1, (http/1.1)}{0.0.0.0:6666}, jettyThread=Thread[A&AI simulator-6666,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-2e61d218==org.glassfish.jersey.servlet.ServletContainer@60d118f1{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}})]: pending time is 4268 ms.
zookeeper | [2024-04-18 23:14:24,553] WARN Either no config or no quorum defined in config, running in standalone mode (org.apache.zookeeper.server.quorum.QuorumPeerMain)
policy-pap | :: Spring Boot :: (v3.1.10)
kafka | remote.log.reader.threads = 10
mariadb |
policy-db-migrator | policyadmin 0
policy-apex-pdp | client.id = consumer-dbe3acf0-ba50-4571-9b48-e58d24ad2dc5-1
prometheus | ts=2024-04-18T23:14:19.753Z caller=head.go:778 level=info component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0
policy-api | [2024-04-18T23:14:34.846+00:00|INFO|TomcatWebServer|main] Tomcat initialized with port(s): 6969 (http)
grafana | logger=migrator t=2024-04-18T23:14:22.154549879Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_org_id - v1-7" duration=2.414878ms
simulator | 2024-04-18 23:14:20,197 INFO org.onap.policy.models.simulators starting SDNC simulator
zookeeper | [2024-04-18 23:14:24,554] INFO Log4j 1.2 jmx support not found; jmx disabled. (org.apache.zookeeper.jmx.ManagedUtil)
policy-pap |
kafka | remote.log.storage.manager.class.name = null
mariadb | Consider joining MariaDB's strong and vibrant community:
policy-db-migrator | policyadmin: upgrade available: 0 -> 1300
policy-apex-pdp | client.rack =
prometheus | ts=2024-04-18T23:14:19.753Z caller=head.go:815 level=info component=tsdb msg="WAL replay completed" checkpoint_replay_duration=138.018µs wal_replay_duration=4.941193ms wbl_replay_duration=310ns total_replay_duration=5.106483ms
policy-api | [2024-04-18T23:14:34.857+00:00|INFO|Http11NioProtocol|main] Initializing ProtocolHandler ["http-nio-6969"]
grafana | logger=migrator t=2024-04-18T23:14:22.160077096Z level=info msg="Executing migration" id="create index IDX_temp_user_code - v1-7"
simulator | 2024-04-18 23:14:20,199 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-bf1ec20==org.glassfish.jersey.servlet.ServletContainer@636d8afb{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SDNC simulator, host=0.0.0.0, port=6668, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@70efb718{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@b70da4c{/,null,STOPPED}, connector=SDNC simulator@c5ee75e{HTTP/1.1, (http/1.1)}{0.0.0.0:6668}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-bf1ec20==org.glassfish.jersey.servlet.ServletContainer@636d8afb{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: WAITED-START
zookeeper | [2024-04-18 23:14:24,555] INFO Reading configuration from: /etc/kafka/zookeeper.properties (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
policy-pap | [2024-04-18T23:14:44.945+00:00|INFO|Version|background-preinit] HV000001: Hibernate Validator 8.0.1.Final
kafka | remote.log.storage.manager.class.path = null
mariadb | https://mariadb.org/get-involved/
policy-db-migrator | upgrade: 0 -> 1300
policy-apex-pdp | connections.max.idle.ms = 540000
prometheus | ts=2024-04-18T23:14:19.756Z caller=main.go:1150 level=info fs_type=EXT4_SUPER_MAGIC
policy-api | [2024-04-18T23:14:34.859+00:00|INFO|StandardService|main] Starting service [Tomcat]
grafana | logger=migrator t=2024-04-18T23:14:22.161509699Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_code - v1-7" duration=1.431382ms
simulator | 2024-04-18 23:14:20,200 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-bf1ec20==org.glassfish.jersey.servlet.ServletContainer@636d8afb{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SDNC simulator, host=0.0.0.0, port=6668, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@70efb718{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@b70da4c{/,null,STOPPED}, connector=SDNC simulator@c5ee75e{HTTP/1.1, (http/1.1)}{0.0.0.0:6668}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-bf1ec20==org.glassfish.jersey.servlet.ServletContainer@636d8afb{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING
zookeeper | [2024-04-18 23:14:24,555] INFO clientPortAddress is 0.0.0.0:2181 (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
policy-pap | [2024-04-18T23:14:44.997+00:00|INFO|PolicyPapApplication|main] Starting PolicyPapApplication using Java 17.0.10 with PID 29 (/app/pap.jar started by policy in /opt/app/policy/pap/bin)
kafka | remote.log.storage.manager.impl.prefix = rsm.config.
mariadb |
policy-db-migrator |
policy-apex-pdp | default.api.timeout.ms = 60000
prometheus | ts=2024-04-18T23:14:19.756Z caller=main.go:1153 level=info msg="TSDB started"
policy-api | [2024-04-18T23:14:34.859+00:00|INFO|StandardEngine|main] Starting Servlet engine: [Apache Tomcat/10.1.19]
grafana | logger=migrator t=2024-04-18T23:14:22.16675988Z level=info msg="Executing migration" id="create index IDX_temp_user_status - v1-7"
simulator | 2024-04-18 23:14:20,200 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-bf1ec20==org.glassfish.jersey.servlet.ServletContainer@636d8afb{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SDNC simulator, host=0.0.0.0, port=6668, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@70efb718{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@b70da4c{/,null,STOPPED}, connector=SDNC simulator@c5ee75e{HTTP/1.1, (http/1.1)}{0.0.0.0:6668}, jettyThread=Thread[SDNC simulator-6668,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-bf1ec20==org.glassfish.jersey.servlet.ServletContainer@636d8afb{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING
zookeeper | [2024-04-18 23:14:24,555] INFO secureClientPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
policy-pap | [2024-04-18T23:14:44.998+00:00|INFO|PolicyPapApplication|main] No active profile set, falling back to 1 default profile: "default"
kafka | remote.log.storage.system.enable = false
mariadb | 2024-04-18 23:14:18+00:00 [Note] [Entrypoint]: Database files initialized
policy-db-migrator | > upgrade 0100-jpapdpgroup_properties.sql
policy-apex-pdp | enable.auto.commit = true
prometheus | ts=2024-04-18T23:14:19.756Z caller=main.go:1335 level=info msg="Loading configuration file" filename=/etc/prometheus/prometheus.yml
policy-api | [2024-04-18T23:14:34.959+00:00|INFO|[/policy/api/v1]|main] Initializing Spring embedded WebApplicationContext
grafana | logger=migrator t=2024-04-18T23:14:22.167535254Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_status - v1-7" duration=775.615µs
simulator | 2024-04-18 23:14:20,201 INFO jetty-11.0.20; built: 2024-01-29T21:04:22.394Z; git: 922f8dc188f7011e60d0361de585fd4ac4d63064; jvm 17.0.10+7-alpine-r0
zookeeper | [2024-04-18 23:14:24,555] INFO observerMasterPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
policy-pap | [2024-04-18T23:14:47.035+00:00|INFO|RepositoryConfigurationDelegate|main] Bootstrapping Spring Data JPA repositories in DEFAULT mode.
kafka | replica.fetch.backoff.ms = 1000 mariadb | 2024-04-18 23:14:18+00:00 [Note] [Entrypoint]: Starting temporary server policy-db-migrator | -------------- policy-apex-pdp | exclude.internal.topics = true prometheus | ts=2024-04-18T23:14:19.757Z caller=main.go:1372 level=info msg="Completed loading of configuration file" filename=/etc/prometheus/prometheus.yml totalDuration=853.828µs db_storage=1.751µs remote_storage=2.35µs web_handler=710ns query_engine=840ns scrape=250.214µs scrape_sd=137.677µs notify=23.822µs notify_sd=8.09µs rules=2.4µs tracing=5.571µs policy-api | [2024-04-18T23:14:34.960+00:00|INFO|ServletWebServerApplicationContext|main] Root WebApplicationContext: initialization completed in 3124 ms grafana | logger=migrator t=2024-04-18T23:14:22.171989869Z level=info msg="Executing migration" id="Update temp_user table charset" simulator | 2024-04-18 23:14:20,212 INFO Session workerName=node0 zookeeper | [2024-04-18 23:14:24,555] INFO metricsProvider.className is org.apache.zookeeper.metrics.impl.DefaultMetricsProvider (org.apache.zookeeper.server.quorum.QuorumPeerConfig) policy-pap | [2024-04-18T23:14:47.151+00:00|INFO|RepositoryConfigurationDelegate|main] Finished Spring Data repository scanning in 106 ms. Found 7 JPA repository interfaces. kafka | replica.fetch.max.bytes = 1048576 mariadb | 2024-04-18 23:14:18+00:00 [Note] [Entrypoint]: Waiting for server startup policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpgroup_properties (name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, PROPERTIES VARCHAR(255) DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) DEFAULT NULL) policy-apex-pdp | fetch.max.bytes = 52428800 prometheus | ts=2024-04-18T23:14:19.757Z caller=main.go:1114 level=info msg="Server is ready to receive web requests." 
policy-api | [2024-04-18T23:14:35.395+00:00|INFO|LogHelper|main] HHH000204: Processing PersistenceUnitInfo [name: default]
grafana | logger=migrator t=2024-04-18T23:14:22.172036782Z level=info msg="Migration successfully executed" id="Update temp_user table charset" duration=53.563µs
simulator | 2024-04-18 23:14:20,279 INFO Using GSON for REST calls
zookeeper | [2024-04-18 23:14:24,555] INFO Starting server (org.apache.zookeeper.server.ZooKeeperServerMain)
policy-pap | [2024-04-18T23:14:47.623+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.exception.ServiceExceptionHandler
kafka | replica.fetch.min.bytes = 1
mariadb | 2024-04-18 23:14:18 0 [Note] mariadbd (server 10.10.2-MariaDB-1:10.10.2+maria~ubu2204) starting as process 101 ...
policy-db-migrator | --------------
policy-apex-pdp | fetch.max.wait.ms = 500
prometheus | ts=2024-04-18T23:14:19.757Z caller=manager.go:163 level=info component="rule manager" msg="Starting rule manager..."
policy-api | [2024-04-18T23:14:35.467+00:00|INFO|Version|main] HHH000412: Hibernate ORM core version 6.3.2.Final
grafana | logger=migrator t=2024-04-18T23:14:22.181576189Z level=info msg="Executing migration" id="drop index IDX_temp_user_email - v1"
simulator | 2024-04-18 23:14:20,290 INFO Started o.e.j.s.ServletContextHandler@b70da4c{/,null,AVAILABLE}
zookeeper | [2024-04-18 23:14:24,569] INFO ServerMetrics initialized with provider org.apache.zookeeper.metrics.impl.DefaultMetricsProvider@77eca502 (org.apache.zookeeper.server.ServerMetrics)
policy-pap | [2024-04-18T23:14:47.624+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.exception.ServiceExceptionHandler
kafka | replica.fetch.response.max.bytes = 10485760
mariadb | 2024-04-18 23:14:18 0 [Note] InnoDB: Compressed tables use zlib 1.2.11
policy-db-migrator |
policy-apex-pdp | fetch.min.bytes = 1
policy-api | [2024-04-18T23:14:35.513+00:00|INFO|RegionFactoryInitiator|main] HHH000026: Second-level cache disabled
grafana | logger=migrator t=2024-04-18T23:14:22.182907385Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_email - v1" duration=1.363188ms
simulator | 2024-04-18 23:14:20,291 INFO Started SDNC simulator@c5ee75e{HTTP/1.1, (http/1.1)}{0.0.0.0:6668}
zookeeper | [2024-04-18 23:14:24,572] INFO ACL digest algorithm is: SHA1 (org.apache.zookeeper.server.auth.DigestAuthenticationProvider)
policy-pap | [2024-04-18T23:14:48.227+00:00|INFO|TomcatWebServer|main] Tomcat initialized with port(s): 6969 (http)
kafka | replica.fetch.wait.max.ms = 500
mariadb | 2024-04-18 23:14:18 0 [Note] InnoDB: Number of transaction pools: 1
policy-db-migrator |
policy-apex-pdp | group.id = dbe3acf0-ba50-4571-9b48-e58d24ad2dc5
policy-api | [2024-04-18T23:14:35.809+00:00|INFO|SpringPersistenceUnitInfo|main] No LoadTimeWeaver setup: ignoring JPA class transformer
grafana | logger=migrator t=2024-04-18T23:14:22.189284681Z level=info msg="Executing migration" id="drop index IDX_temp_user_org_id - v1"
simulator | 2024-04-18 23:14:20,291 INFO Started Server@70efb718{STARTING}[11.0.20,sto=0] @1652ms
zookeeper | [2024-04-18 23:14:24,572] INFO zookeeper.DigestAuthenticationProvider.enabled = true (org.apache.zookeeper.server.auth.DigestAuthenticationProvider)
policy-pap | [2024-04-18T23:14:48.237+00:00|INFO|Http11NioProtocol|main] Initializing ProtocolHandler ["http-nio-6969"]
mariadb | 2024-04-18 23:14:18 0 [Note] InnoDB: Using crc32 + pclmulqdq instructions
kafka | replica.high.watermark.checkpoint.interval.ms = 5000
kafka | replica.lag.time.max.ms = 30000
policy-apex-pdp | group.instance.id = null
policy-api | [2024-04-18T23:14:35.839+00:00|INFO|HikariDataSource|main] HikariPool-1 - Starting...
grafana | logger=migrator t=2024-04-18T23:14:22.190057875Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_org_id - v1" duration=773.244µs
grafana | logger=migrator t=2024-04-18T23:14:22.193034056Z level=info msg="Executing migration" id="drop index IDX_temp_user_code - v1"
simulator | 2024-04-18 23:14:20,291 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-bf1ec20==org.glassfish.jersey.servlet.ServletContainer@636d8afb{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}}, swaggerId=null, toString()=JettyServletServer(name=SDNC simulator, host=0.0.0.0, port=6668, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@70efb718{STARTED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@b70da4c{/,null,AVAILABLE}, connector=SDNC simulator@c5ee75e{HTTP/1.1, (http/1.1)}{0.0.0.0:6668}, jettyThread=Thread[SDNC simulator-6668,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-bf1ec20==org.glassfish.jersey.servlet.ServletContainer@636d8afb{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}})]: pending time is 4909 ms.
policy-pap | [2024-04-18T23:14:48.239+00:00|INFO|StandardService|main] Starting service [Tomcat] mariadb | 2024-04-18 23:14:18 0 [Note] mariadbd: O_TMPFILE is not supported on /tmp (disabling future attempts) kafka | replica.selector.class = null kafka | replica.socket.receive.buffer.bytes = 65536 policy-apex-pdp | heartbeat.interval.ms = 3000 policy-api | [2024-04-18T23:14:35.942+00:00|INFO|HikariPool|main] HikariPool-1 - Added connection org.mariadb.jdbc.Connection@1f11f64e grafana | logger=migrator t=2024-04-18T23:14:22.193755057Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_code - v1" duration=722.461µs grafana | logger=migrator t=2024-04-18T23:14:22.198373472Z level=info msg="Executing migration" id="drop index IDX_temp_user_status - v1" simulator | 2024-04-18 23:14:20,292 INFO org.onap.policy.models.simulators starting SO simulator policy-pap | [2024-04-18T23:14:48.239+00:00|INFO|StandardEngine|main] Starting Servlet engine: [Apache Tomcat/10.1.19] mariadb | 2024-04-18 23:14:18 0 [Warning] mariadbd: io_uring_queue_init() failed with ENOSYS: check seccomp filters, and the kernel version (newer than 5.1 required) kafka | replica.socket.timeout.ms = 30000 kafka | replication.quota.window.num = 11 policy-apex-pdp | interceptor.classes = [] policy-api | [2024-04-18T23:14:35.944+00:00|INFO|HikariDataSource|main] HikariPool-1 - Start completed. 
grafana | logger=migrator t=2024-04-18T23:14:22.19904209Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_status - v1" duration=668.518µs grafana | logger=migrator t=2024-04-18T23:14:22.201392445Z level=info msg="Executing migration" id="Rename table temp_user to temp_user_tmp_qwerty - v1" simulator | 2024-04-18 23:14:20,294 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-4e70a728==org.glassfish.jersey.servlet.ServletContainer@3238e994{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SO simulator, host=0.0.0.0, port=6669, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@b7838a9{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@696f0212{/,null,STOPPED}, connector=SO simulator@4e858e0a{HTTP/1.1, (http/1.1)}{0.0.0.0:6669}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-4e70a728==org.glassfish.jersey.servlet.ServletContainer@3238e994{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: WAITED-START policy-pap | [2024-04-18T23:14:48.344+00:00|INFO|[/policy/pap/v1]|main] Initializing Spring embedded WebApplicationContext mariadb | 2024-04-18 23:14:18 0 [Warning] InnoDB: liburing disabled: falling back to innodb_use_native_aio=OFF kafka | replication.quota.window.size.seconds = 1 kafka | request.timeout.ms = 30000 policy-apex-pdp | internal.leave.group.on.close = true policy-api | [2024-04-18T23:14:37.897+00:00|INFO|JtaPlatformInitiator|main] HHH000489: No JTA platform available (set 'hibernate.transaction.jta.platform' to enable JTA platform integration) grafana | logger=migrator t=2024-04-18T23:14:22.204673713Z level=info msg="Migration successfully executed" id="Rename table temp_user to temp_user_tmp_qwerty - v1" duration=3.280168ms grafana | logger=migrator t=2024-04-18T23:14:22.207899928Z level=info msg="Executing migration" id="create 
temp_user v2" simulator | 2024-04-18 23:14:20,295 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-4e70a728==org.glassfish.jersey.servlet.ServletContainer@3238e994{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SO simulator, host=0.0.0.0, port=6669, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@b7838a9{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@696f0212{/,null,STOPPED}, connector=SO simulator@4e858e0a{HTTP/1.1, (http/1.1)}{0.0.0.0:6669}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-4e70a728==org.glassfish.jersey.servlet.ServletContainer@3238e994{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING policy-pap | [2024-04-18T23:14:48.344+00:00|INFO|ServletWebServerApplicationContext|main] Root WebApplicationContext: initialization completed in 3275 ms mariadb | 2024-04-18 23:14:18 0 [Note] InnoDB: Initializing buffer pool, total size = 128.000MiB, chunk size = 2.000MiB policy-db-migrator | > upgrade 0110-jpapdpstatistics_enginestats.sql policy-db-migrator | -------------- policy-apex-pdp | internal.throw.on.fetch.stable.offset.unsupported = false policy-api | [2024-04-18T23:14:37.900+00:00|INFO|LocalContainerEntityManagerFactoryBean|main] Initialized JPA EntityManagerFactory for persistence unit 'default' grafana | logger=migrator t=2024-04-18T23:14:22.208754317Z level=info msg="Migration successfully executed" id="create temp_user v2" duration=854.529µs grafana | logger=migrator t=2024-04-18T23:14:22.216193754Z level=info msg="Executing migration" id="create index IDX_temp_user_email - v2" simulator | 2024-04-18 23:14:20,295 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-4e70a728==org.glassfish.jersey.servlet.ServletContainer@3238e994{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, 
swaggerId=null, toString()=JettyServletServer(name=SO simulator, host=0.0.0.0, port=6669, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@b7838a9{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@696f0212{/,null,STOPPED}, connector=SO simulator@4e858e0a{HTTP/1.1, (http/1.1)}{0.0.0.0:6669}, jettyThread=Thread[SO simulator-6669,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-4e70a728==org.glassfish.jersey.servlet.ServletContainer@3238e994{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING policy-pap | [2024-04-18T23:14:48.737+00:00|INFO|LogHelper|main] HHH000204: Processing PersistenceUnitInfo [name: default] mariadb | 2024-04-18 23:14:18 0 [Note] InnoDB: Completed initialization of buffer pool policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpstatistics_enginestats (AVERAGEEXECUTIONTIME DOUBLE DEFAULT NULL, ENGINEID VARCHAR(255) DEFAULT NULL, ENGINETIMESTAMP BIGINT DEFAULT NULL, ENGINEWORKERSTATE INT DEFAULT NULL, EVENTCOUNT BIGINT DEFAULT NULL, LASTENTERTIME BIGINT DEFAULT NULL, LASTEXECUTIONTIME BIGINT DEFAULT NULL, LASTSTART BIGINT DEFAULT NULL, UPTIME BIGINT DEFAULT NULL, timeStamp datetime DEFAULT NULL, name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL) policy-db-migrator | -------------- policy-apex-pdp | isolation.level = read_uncommitted policy-api | [2024-04-18T23:14:38.908+00:00|WARN|ApiDatabaseInitializer|main] Detected multi-versioned type: policytypes/onap.policies.monitoring.tcagen2.v2.yaml grafana | logger=migrator t=2024-04-18T23:14:22.217475347Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_email - v2" duration=1.270963ms grafana | logger=migrator t=2024-04-18T23:14:22.22467172Z level=info msg="Executing migration" id="create index IDX_temp_user_org_id - v2" simulator | 2024-04-18 23:14:20,296 INFO jetty-11.0.20; built: 2024-01-29T21:04:22.394Z; git: 922f8dc188f7011e60d0361de585fd4ac4d63064; jvm 
17.0.10+7-alpine-r0 policy-pap | [2024-04-18T23:14:48.789+00:00|INFO|Version|main] HHH000412: Hibernate ORM core version 5.6.15.Final mariadb | 2024-04-18 23:14:18 0 [Note] InnoDB: File system buffers for log disabled (block size=512 bytes) policy-db-migrator | policy-db-migrator | policy-apex-pdp | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-api | [2024-04-18T23:14:39.758+00:00|INFO|ApiDatabaseInitializer|main] Multi-versioned Service Template [onap.policies.Monitoring, onap.policies.monitoring.tcagen2] grafana | logger=migrator t=2024-04-18T23:14:22.225941223Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_org_id - v2" duration=1.269713ms grafana | logger=migrator t=2024-04-18T23:14:22.232975336Z level=info msg="Executing migration" id="create index IDX_temp_user_code - v2" simulator | 2024-04-18 23:14:20,302 INFO Session workerName=node0 policy-pap | [2024-04-18T23:14:49.143+00:00|INFO|HikariDataSource|main] HikariPool-1 - Starting... mariadb | 2024-04-18 23:14:18 0 [Note] InnoDB: 128 rollback segments are active. policy-db-migrator | > upgrade 0120-jpapdpsubgroup_policies.sql policy-db-migrator | -------------- policy-apex-pdp | max.partition.fetch.bytes = 1048576 policy-api | [2024-04-18T23:14:40.909+00:00|WARN|JpaBaseConfiguration$JpaWebConfiguration|main] spring.jpa.open-in-view is enabled by default. Therefore, database queries may be performed during view rendering. 
Explicitly configure spring.jpa.open-in-view to disable this warning grafana | logger=migrator t=2024-04-18T23:14:22.234200596Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_code - v2" duration=1.22508ms grafana | logger=migrator t=2024-04-18T23:14:22.238680463Z level=info msg="Executing migration" id="create index IDX_temp_user_status - v2" simulator | 2024-04-18 23:14:20,359 INFO Using GSON for REST calls policy-pap | [2024-04-18T23:14:49.238+00:00|INFO|HikariPool|main] HikariPool-1 - Added connection org.mariadb.jdbc.Connection@14982a82 mariadb | 2024-04-18 23:14:18 0 [Note] InnoDB: Setting file './ibtmp1' size to 12.000MiB. Physically writing the file full; Please wait ... policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpsubgroup_policies (name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, parentLocalName VARCHAR(120) DEFAULT NULL, localName VARCHAR(120) DEFAULT NULL, parentKeyVersion VARCHAR(15) DEFAULT NULL, parentKeyName VARCHAR(120) DEFAULT NULL) policy-db-migrator | -------------- policy-apex-pdp | max.poll.interval.ms = 300000 policy-api | [2024-04-18T23:14:41.117+00:00|INFO|DefaultSecurityFilterChain|main] Will secure any request with [org.springframework.security.web.session.DisableEncodeUrlFilter@2e5f860b, org.springframework.security.web.context.request.async.WebAsyncManagerIntegrationFilter@42ca6733, org.springframework.security.web.context.SecurityContextHolderFilter@16d52e51, org.springframework.security.web.header.HeaderWriterFilter@6d5934f6, org.springframework.security.web.authentication.logout.LogoutFilter@59ea4ca5, org.springframework.security.web.authentication.www.BasicAuthenticationFilter@2489ee11, org.springframework.security.web.savedrequest.RequestCacheAwareFilter@2f643a10, org.springframework.security.web.servletapi.SecurityContextHolderAwareRequestFilter@315a9738, org.springframework.security.web.authentication.AnonymousAuthenticationFilter@452d71e5, 
org.springframework.security.web.access.ExceptionTranslationFilter@46756a5b, org.springframework.security.web.access.intercept.AuthorizationFilter@1c537671] grafana | logger=migrator t=2024-04-18T23:14:22.239877192Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_status - v2" duration=1.195169ms grafana | logger=migrator t=2024-04-18T23:14:22.244527548Z level=info msg="Executing migration" id="copy temp_user v1 to v2" simulator | 2024-04-18 23:14:20,371 INFO Started o.e.j.s.ServletContextHandler@696f0212{/,null,AVAILABLE} policy-pap | [2024-04-18T23:14:49.240+00:00|INFO|HikariDataSource|main] HikariPool-1 - Start completed. mariadb | 2024-04-18 23:14:18 0 [Note] InnoDB: File './ibtmp1' size is now 12.000MiB. policy-db-migrator | policy-db-migrator | policy-apex-pdp | max.poll.records = 500 policy-api | [2024-04-18T23:14:41.965+00:00|INFO|EndpointLinksResolver|main] Exposing 3 endpoint(s) beneath base path '' grafana | logger=migrator t=2024-04-18T23:14:22.245172715Z level=info msg="Migration successfully executed" id="copy temp_user v1 to v2" duration=644.807µs grafana | logger=migrator t=2024-04-18T23:14:22.248244292Z level=info msg="Executing migration" id="drop temp_user_tmp_qwerty" simulator | 2024-04-18 23:14:20,372 INFO Started SO simulator@4e858e0a{HTTP/1.1, (http/1.1)}{0.0.0.0:6669} policy-pap | [2024-04-18T23:14:49.269+00:00|INFO|Dialect|main] HHH000400: Using dialect: org.hibernate.dialect.MariaDB106Dialect mariadb | 2024-04-18 23:14:18 0 [Note] InnoDB: log sequence number 46590; transaction id 14 policy-db-migrator | > upgrade 0130-jpapdpsubgroup_properties.sql policy-db-migrator | -------------- policy-apex-pdp | metadata.max.age.ms = 300000 policy-api | [2024-04-18T23:14:42.065+00:00|INFO|Http11NioProtocol|main] Starting ProtocolHandler ["http-nio-6969"] grafana | logger=migrator t=2024-04-18T23:14:22.248758671Z level=info msg="Migration successfully executed" id="drop temp_user_tmp_qwerty" duration=514.21µs grafana | 
logger=migrator t=2024-04-18T23:14:22.251556942Z level=info msg="Executing migration" id="Set created for temp users that will otherwise prematurely expire" simulator | 2024-04-18 23:14:20,372 INFO Started Server@b7838a9{STARTING}[11.0.20,sto=0] @1734ms policy-pap | [2024-04-18T23:14:50.759+00:00|INFO|JtaPlatformInitiator|main] HHH000490: Using JtaPlatform implementation: [org.hibernate.engine.transaction.jta.platform.internal.NoJtaPlatform] mariadb | 2024-04-18 23:14:18 0 [Note] Plugin 'FEEDBACK' is disabled. policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpsubgroup_properties (parentLocalName VARCHAR(120) DEFAULT NULL, localName VARCHAR(120) DEFAULT NULL, parentKeyVersion VARCHAR(15) DEFAULT NULL, parentKeyName VARCHAR(120) DEFAULT NULL, PROPERTIES VARCHAR(255) DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) DEFAULT NULL) policy-db-migrator | -------------- policy-apex-pdp | metric.reporters = [] policy-api | [2024-04-18T23:14:42.111+00:00|INFO|TomcatWebServer|main] Tomcat started on port(s): 6969 (http) with context path '/policy/api/v1' grafana | logger=migrator t=2024-04-18T23:14:22.251969345Z level=info msg="Migration successfully executed" id="Set created for temp users that will otherwise prematurely expire" duration=412.514µs grafana | logger=migrator t=2024-04-18T23:14:22.256382328Z level=info msg="Executing migration" id="create star table" simulator | 2024-04-18 23:14:20,372 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-4e70a728==org.glassfish.jersey.servlet.ServletContainer@3238e994{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}}, swaggerId=null, toString()=JettyServletServer(name=SO simulator, host=0.0.0.0, port=6669, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@b7838a9{STARTED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@696f0212{/,null,AVAILABLE}, connector=SO simulator@4e858e0a{HTTP/1.1, (http/1.1)}{0.0.0.0:6669}, jettyThread=Thread[SO 
simulator-6669,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-4e70a728==org.glassfish.jersey.servlet.ServletContainer@3238e994{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}})]: pending time is 4923 ms. policy-pap | [2024-04-18T23:14:50.770+00:00|INFO|LocalContainerEntityManagerFactoryBean|main] Initialized JPA EntityManagerFactory for persistence unit 'default' mariadb | 2024-04-18 23:14:18 0 [Warning] 'default-authentication-plugin' is MySQL 5.6 / 5.7 compatible option. To be implemented in later versions. policy-db-migrator | policy-db-migrator | policy-apex-pdp | metrics.num.samples = 2 policy-api | [2024-04-18T23:14:42.129+00:00|INFO|PolicyApiApplication|main] Started PolicyApiApplication in 11.119 seconds (process running for 11.74) grafana | logger=migrator t=2024-04-18T23:14:22.257506703Z level=info msg="Migration successfully executed" id="create star table" duration=1.113524ms grafana | logger=migrator t=2024-04-18T23:14:22.260836004Z level=info msg="Executing migration" id="add unique index star.user_id_dashboard_id" simulator | 2024-04-18 23:14:20,373 INFO org.onap.policy.models.simulators starting VFC simulator policy-pap | [2024-04-18T23:14:51.285+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.repository.PdpGroupRepository mariadb | 2024-04-18 23:14:18 0 [Warning] 'user' entry 'root@mariadb' ignored in --skip-name-resolve mode. 
policy-db-migrator | > upgrade 0140-jpapdpsubgroup_supportedpolicytypes.sql policy-db-migrator | -------------- policy-apex-pdp | metrics.recording.level = INFO policy-api | [2024-04-18T23:14:58.655+00:00|INFO|[/policy/api/v1]|http-nio-6969-exec-2] Initializing Spring DispatcherServlet 'dispatcherServlet' grafana | logger=migrator t=2024-04-18T23:14:22.26217279Z level=info msg="Migration successfully executed" id="add unique index star.user_id_dashboard_id" duration=1.335057ms grafana | logger=migrator t=2024-04-18T23:14:22.265063006Z level=info msg="Executing migration" id="create org table v1" simulator | 2024-04-18 23:14:20,375 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-47a86fbb==org.glassfish.jersey.servlet.ServletContainer@f8e3b478{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=VFC simulator, host=0.0.0.0, port=6670, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@f478a81{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@19553973{/,null,STOPPED}, connector=VFC simulator@1c3146bc{HTTP/1.1, (http/1.1)}{0.0.0.0:6670}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-47a86fbb==org.glassfish.jersey.servlet.ServletContainer@f8e3b478{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: WAITED-START policy-pap | [2024-04-18T23:14:51.701+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.repository.PolicyStatusRepository mariadb | 2024-04-18 23:14:18 0 [Warning] 'proxies_priv' entry '@% root@mariadb' ignored in --skip-name-resolve mode. 
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpsubgroup_supportedpolicytypes (name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, parentLocalName VARCHAR(120) DEFAULT NULL, localName VARCHAR(120) DEFAULT NULL, parentKeyVersion VARCHAR(15) DEFAULT NULL, parentKeyName VARCHAR(120) DEFAULT NULL)
policy-db-migrator | --------------
policy-apex-pdp | metrics.sample.window.ms = 30000
policy-api | [2024-04-18T23:14:58.655+00:00|INFO|DispatcherServlet|http-nio-6969-exec-2] Initializing Servlet 'dispatcherServlet'
grafana | logger=migrator t=2024-04-18T23:14:22.265863062Z level=info msg="Migration successfully executed" id="create org table v1" duration=799.456µs
grafana | logger=migrator t=2024-04-18T23:14:22.270683988Z level=info msg="Executing migration" id="create index UQE_org_name - v1"
simulator | 2024-04-18 23:14:20,376 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-47a86fbb==org.glassfish.jersey.servlet.ServletContainer@f8e3b478{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=VFC simulator, host=0.0.0.0, port=6670, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@f478a81{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@19553973{/,null,STOPPED}, connector=VFC simulator@1c3146bc{HTTP/1.1, (http/1.1)}{0.0.0.0:6670}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-47a86fbb==org.glassfish.jersey.servlet.ServletContainer@f8e3b478{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING
policy-pap | [2024-04-18T23:14:51.843+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.repository.PolicyAuditRepository
mariadb | 2024-04-18 23:14:18 0 [Note] mariadbd: ready for connections.
policy-db-migrator |
policy-db-migrator |
policy-apex-pdp | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor]
policy-api | [2024-04-18T23:14:58.657+00:00|INFO|DispatcherServlet|http-nio-6969-exec-2] Completed initialization in 2 ms
grafana | logger=migrator t=2024-04-18T23:14:22.271598591Z level=info msg="Migration successfully executed" id="create index UQE_org_name - v1" duration=912.742µs
grafana | logger=migrator t=2024-04-18T23:14:22.27508334Z level=info msg="Executing migration" id="create org_user table v1"
simulator | 2024-04-18 23:14:20,377 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-47a86fbb==org.glassfish.jersey.servlet.ServletContainer@f8e3b478{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=VFC simulator, host=0.0.0.0, port=6670, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@f478a81{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@19553973{/,null,STOPPED}, connector=VFC simulator@1c3146bc{HTTP/1.1, (http/1.1)}{0.0.0.0:6670}, jettyThread=Thread[VFC simulator-6670,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-47a86fbb==org.glassfish.jersey.servlet.ServletContainer@f8e3b478{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING
policy-pap | [2024-04-18T23:14:52.157+00:00|INFO|ConsumerConfig|main] ConsumerConfig values:
mariadb | Version: '10.10.2-MariaDB-1:10.10.2+maria~ubu2204' socket: '/run/mysqld/mysqld.sock' port: 0 mariadb.org binary distribution
policy-db-migrator | > upgrade 0150-jpatoscacapabilityassignment_attributes.sql
policy-db-migrator | -------------- policy-apex-pdp | receive.buffer.bytes = 65536 policy-api | [2024-04-18T23:14:59.023+00:00|INFO|OrderedServiceImpl|http-nio-6969-exec-2] ***** OrderedServiceImpl implementers: grafana | logger=migrator t=2024-04-18T23:14:22.276342363Z level=info msg="Migration successfully executed" id="create org_user table v1" duration=1.257642ms grafana | logger=migrator t=2024-04-18T23:14:22.27961576Z level=info msg="Executing migration" id="create index IDX_org_user_org_id - v1" simulator | 2024-04-18 23:14:20,378 INFO jetty-11.0.20; built: 2024-01-29T21:04:22.394Z; git: 922f8dc188f7011e60d0361de585fd4ac4d63064; jvm 17.0.10+7-alpine-r0 policy-pap | allow.auto.create.topics = true mariadb | 2024-04-18 23:14:19+00:00 [Note] [Entrypoint]: Temporary server started. policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilityassignment_attributes (name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, ATTRIBUTES LONGTEXT DEFAULT NULL, ATTRIBUTES_KEY VARCHAR(255) DEFAULT NULL) policy-db-migrator | -------------- policy-apex-pdp | reconnect.backoff.max.ms = 1000 policy-api | [] grafana | logger=migrator t=2024-04-18T23:14:22.280796968Z level=info msg="Migration successfully executed" id="create index IDX_org_user_org_id - v1" duration=1.170487ms grafana | logger=migrator t=2024-04-18T23:14:22.284101337Z level=info msg="Executing migration" id="create index UQE_org_user_org_id_user_id - v1" simulator | 2024-04-18 23:14:20,385 INFO Session workerName=node0 policy-pap | auto.commit.interval.ms = 5000 mariadb | 2024-04-18 23:14:21+00:00 [Note] [Entrypoint]: Creating user policy_user policy-db-migrator | policy-db-migrator | policy-apex-pdp | reconnect.backoff.ms = 50 grafana | logger=migrator t=2024-04-18T23:14:22.284893483Z level=info msg="Migration successfully executed" id="create index UQE_org_user_org_id_user_id - v1" duration=792.296µs grafana | logger=migrator t=2024-04-18T23:14:22.295529363Z level=info msg="Executing migration" 
id="create index IDX_org_user_user_id - v1" simulator | 2024-04-18 23:14:20,430 INFO Using GSON for REST calls policy-pap | auto.include.jmx.reporter = true mariadb | 2024-04-18 23:14:21+00:00 [Note] [Entrypoint]: Securing system users (equivalent to running mysql_secure_installation) policy-db-migrator | > upgrade 0160-jpatoscacapabilityassignment_metadata.sql policy-db-migrator | -------------- policy-apex-pdp | request.timeout.ms = 30000 grafana | logger=migrator t=2024-04-18T23:14:22.297289954Z level=info msg="Migration successfully executed" id="create index IDX_org_user_user_id - v1" duration=1.756421ms grafana | logger=migrator t=2024-04-18T23:14:22.299984858Z level=info msg="Executing migration" id="Update org table charset" simulator | 2024-04-18 23:14:20,438 INFO Started o.e.j.s.ServletContextHandler@19553973{/,null,AVAILABLE} policy-pap | auto.offset.reset = latest mariadb | policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilityassignment_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) policy-db-migrator | -------------- policy-apex-pdp | retry.backoff.ms = 100 grafana | logger=migrator t=2024-04-18T23:14:22.30001188Z level=info msg="Migration successfully executed" id="Update org table charset" duration=27.562µs grafana | logger=migrator t=2024-04-18T23:14:22.304569081Z level=info msg="Executing migration" id="Update org_user table charset" simulator | 2024-04-18 23:14:20,439 INFO Started VFC simulator@1c3146bc{HTTP/1.1, (http/1.1)}{0.0.0.0:6670} policy-pap | bootstrap.servers = [kafka:9092] mariadb | policy-apex-pdp | sasl.client.callback.handler.class = null grafana | logger=migrator t=2024-04-18T23:14:22.304593742Z level=info msg="Migration successfully executed" id="Update org_user table charset" duration=29.391µs grafana | logger=migrator t=2024-04-18T23:14:22.307233154Z level=info msg="Executing migration" id="Migrate all Read Only Viewers to Viewers" simulator | 
2024-04-18 23:14:20,440 INFO Started Server@f478a81{STARTING}[11.0.20,sto=0] @1801ms policy-pap | check.crcs = true mariadb | 2024-04-18 23:14:21+00:00 [Warn] [Entrypoint]: /usr/local/bin/docker-entrypoint.sh: ignoring /docker-entrypoint-initdb.d/db.conf policy-apex-pdp | sasl.jaas.config = null grafana | logger=migrator t=2024-04-18T23:14:22.30751726Z level=info msg="Migration successfully executed" id="Migrate all Read Only Viewers to Viewers" duration=285.006µs grafana | logger=migrator t=2024-04-18T23:14:22.311657507Z level=info msg="Executing migration" id="create dashboard table" simulator | 2024-04-18 23:14:20,440 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-47a86fbb==org.glassfish.jersey.servlet.ServletContainer@f8e3b478{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}}, swaggerId=null, toString()=JettyServletServer(name=VFC simulator, host=0.0.0.0, port=6670, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@f478a81{STARTED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@19553973{/,null,AVAILABLE}, connector=VFC simulator@1c3146bc{HTTP/1.1, (http/1.1)}{0.0.0.0:6670}, jettyThread=Thread[VFC simulator-6670,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-47a86fbb==org.glassfish.jersey.servlet.ServletContainer@f8e3b478{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}})]: pending time is 4937 ms. 
policy-pap | client.dns.lookup = use_all_dns_ips mariadb | 2024-04-18 23:14:21+00:00 [Note] [Entrypoint]: /usr/local/bin/docker-entrypoint.sh: running /docker-entrypoint-initdb.d/db.sh kafka | reserved.broker.max.id = 1000 kafka | sasl.client.callback.handler.class = null policy-apex-pdp | sasl.kerberos.kinit.cmd = /usr/bin/kinit zookeeper | [2024-04-18 23:14:24,577] INFO zookeeper.snapshot.trust.empty : false (org.apache.zookeeper.server.persistence.FileTxnSnapLog) grafana | logger=migrator t=2024-04-18T23:14:22.312997294Z level=info msg="Migration successfully executed" id="create dashboard table" duration=1.342237ms simulator | 2024-04-18 23:14:20,441 INFO org.onap.policy.models.simulators started policy-pap | client.id = consumer-deefd98f-1600-442c-a15a-d2ceba267151-1 mariadb | #!/bin/bash -xv kafka | sasl.enabled.mechanisms = [GSSAPI] kafka | sasl.jaas.config = null policy-apex-pdp | sasl.kerberos.min.time.before.relogin = 60000 zookeeper | [2024-04-18 23:14:24,590] INFO (org.apache.zookeeper.server.ZooKeeperServer) grafana | logger=migrator t=2024-04-18T23:14:22.315843987Z level=info msg="Executing migration" id="add index dashboard.account_id" policy-pap | client.rack = mariadb | # Copyright 2019,2021 AT&T Intellectual Property. All rights reserved kafka | sasl.kerberos.kinit.cmd = /usr/bin/kinit kafka | sasl.kerberos.min.time.before.relogin = 60000 policy-apex-pdp | sasl.kerberos.service.name = null zookeeper | [2024-04-18 23:14:24,590] INFO ______ _ (org.apache.zookeeper.server.ZooKeeperServer) grafana | logger=migrator t=2024-04-18T23:14:22.316596561Z level=info msg="Migration successfully executed" id="add index dashboard.account_id" duration=752.584µs policy-pap | connections.max.idle.ms = 540000 mariadb | # Modifications Copyright (c) 2022 Nordix Foundation. 
kafka | sasl.kerberos.principal.to.local.rules = [DEFAULT] kafka | sasl.kerberos.service.name = null policy-apex-pdp | sasl.kerberos.ticket.renew.jitter = 0.05 zookeeper | [2024-04-18 23:14:24,590] INFO |___ / | | (org.apache.zookeeper.server.ZooKeeperServer) grafana | logger=migrator t=2024-04-18T23:14:22.319234322Z level=info msg="Executing migration" id="add unique index dashboard_account_id_slug" policy-pap | default.api.timeout.ms = 60000 mariadb | # kafka | sasl.kerberos.ticket.renew.jitter = 0.05 kafka | sasl.kerberos.ticket.renew.window.factor = 0.8 policy-apex-pdp | sasl.kerberos.ticket.renew.window.factor = 0.8 zookeeper | [2024-04-18 23:14:24,590] INFO / / ___ ___ | | __ ___ ___ _ __ ___ _ __ (org.apache.zookeeper.server.ZooKeeperServer) grafana | logger=migrator t=2024-04-18T23:14:22.320119853Z level=info msg="Migration successfully executed" id="add unique index dashboard_account_id_slug" duration=885.651µs policy-pap | enable.auto.commit = true mariadb | # Licensed under the Apache License, Version 2.0 (the "License"); kafka | sasl.login.callback.handler.class = null kafka | sasl.login.class = null policy-apex-pdp | sasl.login.callback.handler.class = null zookeeper | [2024-04-18 23:14:24,590] INFO / / / _ \ / _ \ | |/ / / _ \ / _ \ | '_ \ / _ \ | '__| (org.apache.zookeeper.server.ZooKeeperServer) grafana | logger=migrator t=2024-04-18T23:14:22.322880321Z level=info msg="Executing migration" id="create dashboard_tag table" policy-pap | exclude.internal.topics = true mariadb | # you may not use this file except in compliance with the License. 
kafka | sasl.login.connect.timeout.ms = null kafka | sasl.login.read.timeout.ms = null policy-apex-pdp | sasl.login.class = null zookeeper | [2024-04-18 23:14:24,590] INFO / /__ | (_) | | (_) | | < | __/ | __/ | |_) | | __/ | | (org.apache.zookeeper.server.ZooKeeperServer) grafana | logger=migrator t=2024-04-18T23:14:22.323515537Z level=info msg="Migration successfully executed" id="create dashboard_tag table" duration=635.226µs policy-pap | fetch.max.bytes = 52428800 mariadb | # You may obtain a copy of the License at kafka | sasl.login.refresh.buffer.seconds = 300 kafka | sasl.login.refresh.min.period.seconds = 60 policy-apex-pdp | sasl.login.connect.timeout.ms = null zookeeper | [2024-04-18 23:14:24,590] INFO /_____| \___/ \___/ |_|\_\ \___| \___| | .__/ \___| |_| (org.apache.zookeeper.server.ZooKeeperServer) grafana | logger=migrator t=2024-04-18T23:14:22.330476946Z level=info msg="Executing migration" id="add unique index dashboard_tag.dasboard_id_term" policy-pap | fetch.max.wait.ms = 500 mariadb | # kafka | sasl.login.refresh.window.factor = 0.8 kafka | sasl.login.refresh.window.jitter = 0.05 policy-apex-pdp | sasl.login.read.timeout.ms = null zookeeper | [2024-04-18 23:14:24,590] INFO | | (org.apache.zookeeper.server.ZooKeeperServer) grafana | logger=migrator t=2024-04-18T23:14:22.331255181Z level=info msg="Migration successfully executed" id="add unique index dashboard_tag.dasboard_id_term" duration=778.545µs policy-pap | fetch.min.bytes = 1 mariadb | # http://www.apache.org/licenses/LICENSE-2.0 kafka | sasl.login.retry.backoff.max.ms = 10000 kafka | sasl.login.retry.backoff.ms = 100 policy-apex-pdp | sasl.login.refresh.buffer.seconds = 300 zookeeper | [2024-04-18 23:14:24,590] INFO |_| (org.apache.zookeeper.server.ZooKeeperServer) grafana | logger=migrator t=2024-04-18T23:14:22.333900473Z level=info msg="Executing migration" id="drop index UQE_dashboard_tag_dashboard_id_term - v1" policy-pap | group.id = deefd98f-1600-442c-a15a-d2ceba267151 mariadb | # 
kafka | sasl.mechanism.controller.protocol = GSSAPI kafka | sasl.mechanism.inter.broker.protocol = GSSAPI policy-apex-pdp | sasl.login.refresh.min.period.seconds = 60 zookeeper | [2024-04-18 23:14:24,590] INFO (org.apache.zookeeper.server.ZooKeeperServer) grafana | logger=migrator t=2024-04-18T23:14:22.334996456Z level=info msg="Migration successfully executed" id="drop index UQE_dashboard_tag_dashboard_id_term - v1" duration=1.111233ms policy-pap | group.instance.id = null mariadb | # Unless required by applicable law or agreed to in writing, software kafka | sasl.oauthbearer.clock.skew.seconds = 30 kafka | sasl.oauthbearer.expected.audience = null policy-apex-pdp | sasl.login.refresh.window.factor = 0.8 zookeeper | [2024-04-18 23:14:24,592] INFO Server environment:zookeeper.version=3.8.4-9316c2a7a97e1666d8f4593f34dd6fc36ecc436c, built on 2024-02-12 22:16 UTC (org.apache.zookeeper.server.ZooKeeperServer) grafana | logger=migrator t=2024-04-18T23:14:22.337844689Z level=info msg="Executing migration" id="Rename table dashboard to dashboard_v1 - v1" policy-pap | heartbeat.interval.ms = 3000 mariadb | # distributed under the License is distributed on an "AS IS" BASIS, kafka | sasl.oauthbearer.expected.issuer = null kafka | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 policy-apex-pdp | sasl.login.refresh.window.jitter = 0.05 zookeeper | [2024-04-18 23:14:24,592] INFO Server environment:host.name=00ec3651d3eb (org.apache.zookeeper.server.ZooKeeperServer) grafana | logger=migrator t=2024-04-18T23:14:22.342935991Z level=info msg="Migration successfully executed" id="Rename table dashboard to dashboard_v1 - v1" duration=5.096893ms policy-pap | interceptor.classes = [] mariadb | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
kafka | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 kafka | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 policy-apex-pdp | sasl.login.retry.backoff.max.ms = 10000 zookeeper | [2024-04-18 23:14:24,592] INFO Server environment:java.version=11.0.22 (org.apache.zookeeper.server.ZooKeeperServer) grafana | logger=migrator t=2024-04-18T23:14:22.345586153Z level=info msg="Executing migration" id="create dashboard v2" policy-pap | internal.leave.group.on.close = true mariadb | # See the License for the specific language governing permissions and kafka | sasl.oauthbearer.jwks.endpoint.url = null kafka | sasl.oauthbearer.scope.claim.name = scope policy-apex-pdp | sasl.login.retry.backoff.ms = 100 zookeeper | [2024-04-18 23:14:24,592] INFO Server environment:java.vendor=Azul Systems, Inc. (org.apache.zookeeper.server.ZooKeeperServer) grafana | logger=migrator t=2024-04-18T23:14:22.346337996Z level=info msg="Migration successfully executed" id="create dashboard v2" duration=751.573µs policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false mariadb | # limitations under the License. 
kafka | sasl.oauthbearer.sub.claim.name = sub kafka | sasl.oauthbearer.token.endpoint.url = null policy-apex-pdp | sasl.mechanism = GSSAPI zookeeper | [2024-04-18 23:14:24,592] INFO Server environment:java.home=/usr/lib/jvm/java-11-zulu-openjdk-ca (org.apache.zookeeper.server.ZooKeeperServer) grafana | logger=migrator t=2024-04-18T23:14:22.350798782Z level=info msg="Executing migration" id="create index IDX_dashboard_org_id - v2" policy-pap | isolation.level = read_uncommitted mariadb | kafka | sasl.server.callback.handler.class = null kafka | sasl.server.max.receive.size = 524288 policy-apex-pdp | sasl.oauthbearer.clock.skew.seconds = 30 grafana | logger=migrator t=2024-04-18T23:14:22.351501202Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_org_id - v2" duration=702.31µs zookeeper | [2024-04-18 23:14:24,592] INFO Server environment:java.class.path=/usr/bin/../share/java/kafka/scala-logging_2.13-3.9.4.jar:/usr/bin/../share/java/kafka/jersey-common-2.39.1.jar:/usr/bin/../share/java/kafka/swagger-annotations-2.2.8.jar:/usr/bin/../share/java/kafka/connect-json-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/kafka-storage-api-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/commons-validator-1.7.jar:/usr/bin/../share/java/kafka/javax.servlet-api-3.1.0.jar:/usr/bin/../share/java/kafka/aopalliance-repackaged-2.6.1.jar:/usr/bin/../share/java/kafka/netty-transport-4.1.100.Final.jar:/usr/bin/../share/java/kafka/rocksdbjni-7.9.2.jar:/usr/bin/../share/java/kafka/jetty-http-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/kafka-log4j-appender-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jackson-annotations-2.13.5.jar:/usr/bin/../share/java/kafka/commons-io-2.11.0.jar:/usr/bin/../share/java/kafka/kafka-clients-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/javax.activation-api-1.2.0.jar:/usr/bin/../share/java/kafka/jetty-security-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/commons-cli-1.4.jar:/usr/bin/../share/java/kafka/jetty-server-9.4.54.v20240208.jar:
/usr/bin/../share/java/kafka/slf4j-reload4j-1.7.36.jar:/usr/bin/../share/java/kafka/scala-reflect-2.13.11.jar:/usr/bin/../share/java/kafka/kafka-server-common-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/kafka-group-coordinator-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/netty-buffer-4.1.100.Final.jar:/usr/bin/../share/java/kafka/connect-runtime-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/trogdor-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/connect-api-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jakarta.ws.rs-api-2.1.6.jar:/usr/bin/../share/java/kafka/jakarta.annotation-api-1.3.5.jar:/usr/bin/../share/java/kafka/kafka-streams-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/scala-java8-compat_2.13-1.0.2.jar:/usr/bin/../share/java/kafka/javax.ws.rs-api-2.1.1.jar:/usr/bin/../share/java/kafka/zookeeper-jute-3.8.4.jar:/usr/bin/../share/java/kafka/netty-resolver-4.1.100.Final.jar:/usr/bin/../share/java/kafka/hk2-api-2.6.1.jar:/usr/bin/../share/java/kafka/kafka-storage-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/zstd-jni-1.5.5-1.jar:/usr/bin/../share/java/kafka/jackson-dataformat-csv-2.13.5.jar:/usr/bin/../share/java/kafka/kafka.jar:/usr/bin/../share/java/kafka/scala-library-2.13.11.jar:/usr/bin/../share/java/kafka/jakarta.inject-2.6.1.jar:/usr/bin/../share/java/kafka/jakarta.xml.bind-api-2.3.3.jar:/usr/bin/../share/java/kafka/jetty-continuation-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/connect-transforms-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jose4j-0.9.4.jar:/usr/bin/../share/java/kafka/hk2-locator-2.6.1.jar:/usr/bin/../share/java/kafka/reflections-0.10.2.jar:/usr/bin/../share/java/kafka/slf4j-api-1.7.36.jar:/usr/bin/../share/java/kafka/paranamer-2.8.jar:/usr/bin/../share/java/kafka/commons-beanutils-1.9.4.jar:/usr/bin/../share/java/kafka/jaxb-api-2.3.1.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-2.39.1.jar:/usr/bin/../share/java/kafka/hk2-utils-2.6.1.jar:/usr/bin/../share/java/kafka/jackson-module-scala_2.13-2.13.5.jar:/usr/bin/../share/java/kafka/reload4j-
1.2.25.jar:/usr/bin/../share/java/kafka/connect-mirror-client-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jetty-servlet-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/jackson-core-2.13.5.jar:/usr/bin/../share/java/kafka/jersey-hk2-2.39.1.jar:/usr/bin/../share/java/kafka/jackson-databind-2.13.5.jar:/usr/bin/../share/java/kafka/jetty-servlets-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/jersey-client-2.39.1.jar:/usr/bin/../share/java/kafka/osgi-resource-locator-1.0.3.jar:/usr/bin/../share/java/kafka/commons-digester-2.1.jar:/usr/bin/../share/java/kafka/netty-transport-native-epoll-4.1.100.Final.jar:/usr/bin/../share/java/kafka/argparse4j-0.7.0.jar:/usr/bin/../share/java/kafka/connect-mirror-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jackson-datatype-jdk8-2.13.5.jar:/usr/bin/../share/java/kafka/audience-annotations-0.12.0.jar:/usr/bin/../share/java/kafka/jackson-module-jaxb-annotations-2.13.5.jar:/usr/bin/../share/java/kafka/kafka-raft-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/javax.annotation-api-1.3.2.jar:/usr/bin/../share/java/kafka/maven-artifact-3.8.8.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-json-provider-2.13.5.jar:/usr/bin/../share/java/kafka/jakarta.validation-api-2.0.2.jar:/usr/bin/../share/java/kafka/zookeeper-3.8.4.jar:/usr/bin/../share/java/kafka/jersey-server-2.39.1.jar:/usr/bin/../share/java/kafka/commons-lang3-3.8.1.jar:/usr/bin/../share/java/kafka/jopt-simple-5.0.4.jar:/usr/bin/../share/java/kafka/error_prone_annotations-2.10.0.jar:/usr/bin/../share/java/kafka/lz4-java-1.8.0.jar:/usr/bin/../share/java/kafka/jetty-util-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/kafka-tools-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jakarta.activation-api-1.2.2.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-core-2.39.1.jar:/usr/bin/../share/java/kafka/checker-qual-3.19.0.jar:/usr/bin/../share/java/kafka/kafka-metadata-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/pcollections-4.0.1.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-base-2.13.5.jar
:/usr/bin/../share/java/kafka/commons-logging-1.2.jar:/usr/bin/../share/java/kafka/jsr305-3.0.2.jar:/usr/bin/../share/java/kafka/jetty-io-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/netty-codec-4.1.100.Final.jar:/usr/bin/../share/java/kafka/netty-transport-native-unix-common-4.1.100.Final.jar:/usr/bin/../share/java/kafka/scala-collection-compat_2.13-2.10.0.jar:/usr/bin/../share/java/kafka/metrics-core-2.2.0.jar:/usr/bin/../share/java/kafka/kafka-streams-test-utils-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/kafka-streams-examples-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/netty-handler-4.1.100.Final.jar:/usr/bin/../share/java/kafka/commons-collections-3.2.2.jar:/usr/bin/../share/java/kafka/javassist-3.29.2-GA.jar:/usr/bin/../share/java/kafka/caffeine-2.9.3.jar:/usr/bin/../share/java/kafka/plexus-utils-3.3.1.jar:/usr/bin/../share/java/kafka/kafka-tools-api-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/activation-1.1.1.jar:/usr/bin/../share/java/kafka/netty-common-4.1.100.Final.jar:/usr/bin/../share/java/kafka/kafka-streams-scala_2.13-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/metrics-core-4.1.12.1.jar:/usr/bin/../share/java/kafka/jline-3.25.1.jar:/usr/bin/../share/java/kafka/jetty-client-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/kafka_2.13-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/connect-basic-auth-extension-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jetty-util-ajax-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/snappy-java-1.1.10.5.jar:/usr/bin/../share/java/kafka/kafka-shell-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/netty-transport-classes-epoll-4.1.100.Final.jar:/usr/bin/../share/java/confluent-telemetry/* (org.apache.zookeeper.server.ZooKeeperServer) policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer mariadb | for db in migration pooling policyadmin operationshistory clampacm policyclamp kafka | security.inter.broker.protocol = PLAINTEXT kafka | security.providers = null policy-apex-pdp | 
sasl.oauthbearer.expected.audience = null grafana | logger=migrator t=2024-04-18T23:14:22.357675636Z level=info msg="Executing migration" id="create index UQE_dashboard_org_id_slug - v2" zookeeper | [2024-04-18 23:14:24,592] INFO Server environment:java.library.path=/usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.server.ZooKeeperServer) policy-pap | max.partition.fetch.bytes = 1048576 mariadb | do kafka | server.max.startup.time.ms = 9223372036854775807 kafka | socket.connection.setup.timeout.max.ms = 30000 policy-apex-pdp | sasl.oauthbearer.expected.issuer = null grafana | logger=migrator t=2024-04-18T23:14:22.359035164Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_org_id_slug - v2" duration=1.359668ms zookeeper | [2024-04-18 23:14:24,592] INFO Server environment:java.io.tmpdir=/tmp (org.apache.zookeeper.server.ZooKeeperServer) policy-pap | max.poll.interval.ms = 300000 mariadb | mysql -uroot -p"${MYSQL_ROOT_PASSWORD}" --execute "CREATE DATABASE IF NOT EXISTS ${db};" kafka | socket.connection.setup.timeout.ms = 10000 kafka | socket.listen.backlog.size = 50 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 grafana | logger=migrator t=2024-04-18T23:14:22.363576174Z level=info msg="Executing migration" id="copy dashboard v1 to v2" zookeeper | [2024-04-18 23:14:24,592] INFO Server environment:java.compiler= (org.apache.zookeeper.server.ZooKeeperServer) policy-pap | max.poll.records = 500 mariadb | mysql -uroot -p"${MYSQL_ROOT_PASSWORD}" --execute "GRANT ALL PRIVILEGES ON \`${db}\`.* TO '${MYSQL_USER}'@'%' ;" kafka | socket.receive.buffer.bytes = 102400 kafka | socket.request.max.bytes = 104857600 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 grafana | logger=migrator t=2024-04-18T23:14:22.3641978Z level=info msg="Migration successfully executed" id="copy dashboard v1 to v2" duration=624.036µs zookeeper | [2024-04-18 23:14:24,592] INFO Server 
environment:os.name=Linux (org.apache.zookeeper.server.ZooKeeperServer) policy-pap | metadata.max.age.ms = 300000 mariadb | done kafka | socket.send.buffer.bytes = 102400 kafka | ssl.cipher.suites = [] policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 grafana | logger=migrator t=2024-04-18T23:14:22.367427635Z level=info msg="Executing migration" id="drop table dashboard_v1" zookeeper | [2024-04-18 23:14:24,592] INFO Server environment:os.arch=amd64 (org.apache.zookeeper.server.ZooKeeperServer) policy-pap | metric.reporters = [] mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.url = null grafana | logger=migrator t=2024-04-18T23:14:22.368091023Z level=info msg="Migration successfully executed" id="drop table dashboard_v1" duration=663.348µs zookeeper | [2024-04-18 23:14:24,592] INFO Server environment:os.version=4.15.0-192-generic (org.apache.zookeeper.server.ZooKeeperServer) policy-pap | metrics.num.samples = 2 mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS migration;' policy-apex-pdp | sasl.oauthbearer.scope.claim.name = scope grafana | logger=migrator t=2024-04-18T23:14:22.371768364Z level=info msg="Executing migration" id="alter dashboard.data to mediumtext v1" zookeeper | [2024-04-18 23:14:24,592] INFO Server environment:user.name=appuser (org.apache.zookeeper.server.ZooKeeperServer) policy-pap | metrics.recording.level = INFO mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `migration`.* TO '\''policy_user'\''@'\''%'\'' ;' policy-apex-pdp | sasl.oauthbearer.sub.claim.name = sub grafana | logger=migrator t=2024-04-18T23:14:22.371820187Z level=info msg="Migration successfully executed" id="alter dashboard.data to mediumtext v1" duration=52.223µs zookeeper | [2024-04-18 23:14:24,592] INFO Server environment:user.home=/home/appuser (org.apache.zookeeper.server.ZooKeeperServer) policy-pap | 
metrics.sample.window.ms = 30000 mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp policy-apex-pdp | sasl.oauthbearer.token.endpoint.url = null grafana | logger=migrator t=2024-04-18T23:14:22.374389474Z level=info msg="Executing migration" id="Add column updated_by in dashboard - v2" zookeeper | [2024-04-18 23:14:24,592] INFO Server environment:user.dir=/home/appuser (org.apache.zookeeper.server.ZooKeeperServer) policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS pooling;' policy-apex-pdp | security.protocol = PLAINTEXT grafana | logger=migrator t=2024-04-18T23:14:22.376216999Z level=info msg="Migration successfully executed" id="Add column updated_by in dashboard - v2" duration=1.825735ms zookeeper | [2024-04-18 23:14:24,592] INFO Server environment:os.memory.free=491MB (org.apache.zookeeper.server.ZooKeeperServer) policy-pap | receive.buffer.bytes = 65536 mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `pooling`.* TO '\''policy_user'\''@'\''%'\'' ;' policy-apex-pdp | security.providers = null grafana | logger=migrator t=2024-04-18T23:14:22.379695228Z level=info msg="Executing migration" id="Add column created_by in dashboard - v2" zookeeper | [2024-04-18 23:14:24,592] INFO Server environment:os.memory.max=512MB (org.apache.zookeeper.server.ZooKeeperServer) policy-pap | reconnect.backoff.max.ms = 1000 mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp policy-apex-pdp | send.buffer.bytes = 131072 zookeeper | [2024-04-18 23:14:24,592] INFO Server environment:os.memory.total=512MB (org.apache.zookeeper.server.ZooKeeperServer) policy-pap | reconnect.backoff.ms = 50 grafana | logger=migrator t=2024-04-18T23:14:22.383056211Z level=info msg="Migration successfully executed" id="Add 
column created_by in dashboard - v2" duration=3.359583ms mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS policyadmin;' policy-apex-pdp | session.timeout.ms = 45000 zookeeper | [2024-04-18 23:14:24,592] INFO zookeeper.enableEagerACLCheck = false (org.apache.zookeeper.server.ZooKeeperServer) policy-pap | request.timeout.ms = 30000 grafana | logger=migrator t=2024-04-18T23:14:22.387855776Z level=info msg="Executing migration" id="Add column gnetId in dashboard" mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `policyadmin`.* TO '\''policy_user'\''@'\''%'\'' ;' policy-apex-pdp | socket.connection.setup.timeout.max.ms = 30000 zookeeper | [2024-04-18 23:14:24,592] INFO zookeeper.digest.enabled = true (org.apache.zookeeper.server.ZooKeeperServer) policy-pap | retry.backoff.ms = 100 grafana | logger=migrator t=2024-04-18T23:14:22.389125959Z level=info msg="Migration successfully executed" id="Add column gnetId in dashboard" duration=1.270293ms mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp policy-apex-pdp | socket.connection.setup.timeout.ms = 10000 zookeeper | [2024-04-18 23:14:24,592] INFO zookeeper.closeSessionTxn.enabled = true (org.apache.zookeeper.server.ZooKeeperServer) policy-pap | sasl.client.callback.handler.class = null grafana | logger=migrator t=2024-04-18T23:14:22.391692986Z level=info msg="Executing migration" id="Add index for gnetId in dashboard" mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS operationshistory;' policy-apex-pdp | ssl.cipher.suites = null zookeeper | [2024-04-18 23:14:24,592] INFO zookeeper.flushDelay = 0 ms (org.apache.zookeeper.server.ZooKeeperServer) policy-pap | sasl.jaas.config = null grafana | logger=migrator t=2024-04-18T23:14:22.392245498Z level=info msg="Migration successfully executed" id="Add index for gnetId in dashboard" duration=552.822µs mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON 
`operationshistory`.* TO '\''policy_user'\''@'\''%'\'' ;' policy-apex-pdp | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] zookeeper | [2024-04-18 23:14:24,592] INFO zookeeper.maxWriteQueuePollTime = 0 ms (org.apache.zookeeper.server.ZooKeeperServer) policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit grafana | logger=migrator t=2024-04-18T23:14:22.395517676Z level=info msg="Executing migration" id="Add column plugin_id in dashboard" mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp policy-apex-pdp | ssl.endpoint.identification.algorithm = https zookeeper | [2024-04-18 23:14:24,592] INFO zookeeper.maxBatchSize=1000 (org.apache.zookeeper.server.ZooKeeperServer) policy-pap | sasl.kerberos.min.time.before.relogin = 60000 grafana | logger=migrator t=2024-04-18T23:14:22.398662066Z level=info msg="Migration successfully executed" id="Add column plugin_id in dashboard" duration=3.143771ms mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS clampacm;' policy-apex-pdp | ssl.engine.factory.class = null zookeeper | [2024-04-18 23:14:24,593] INFO zookeeper.intBufferStartingSizeBytes = 1024 (org.apache.zookeeper.server.ZooKeeperServer) policy-pap | sasl.kerberos.service.name = null grafana | logger=migrator t=2024-04-18T23:14:22.40308992Z level=info msg="Executing migration" id="Add index for plugin_id in dashboard" mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `clampacm`.* TO '\''policy_user'\''@'\''%'\'' ;' policy-apex-pdp | ssl.key.password = null zookeeper | [2024-04-18 23:14:24,593] INFO Weighed connection throttling is disabled (org.apache.zookeeper.server.BlueThrottle) policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 grafana | logger=migrator t=2024-04-18T23:14:22.403867274Z level=info msg="Migration successfully executed" id="Add index for plugin_id in dashboard" duration=776.594µs mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp 
policy-apex-pdp | ssl.keymanager.algorithm = SunX509 zookeeper | [2024-04-18 23:14:24,595] INFO minSessionTimeout set to 4000 ms (org.apache.zookeeper.server.ZooKeeperServer) policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 grafana | logger=migrator t=2024-04-18T23:14:22.406964102Z level=info msg="Executing migration" id="Add index for dashboard_id in dashboard_tag" mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS policyclamp;' policy-apex-pdp | ssl.keystore.certificate.chain = null zookeeper | [2024-04-18 23:14:24,595] INFO maxSessionTimeout set to 40000 ms (org.apache.zookeeper.server.ZooKeeperServer) policy-pap | sasl.login.callback.handler.class = null grafana | logger=migrator t=2024-04-18T23:14:22.407710375Z level=info msg="Migration successfully executed" id="Add index for dashboard_id in dashboard_tag" duration=750.613µs mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `policyclamp`.* TO '\''policy_user'\''@'\''%'\'' ;' policy-apex-pdp | ssl.keystore.key = null zookeeper | [2024-04-18 23:14:24,596] INFO getData response cache size is initialized with value 400. (org.apache.zookeeper.server.ResponseCache) policy-pap | sasl.login.class = null grafana | logger=migrator t=2024-04-18T23:14:22.410463253Z level=info msg="Executing migration" id="Update dashboard table charset" mariadb | policy-db-migrator | policy-apex-pdp | ssl.keystore.location = null zookeeper | [2024-04-18 23:14:24,596] INFO getChildren response cache size is initialized with value 400. 
(org.apache.zookeeper.server.ResponseCache) policy-pap | sasl.login.connect.timeout.ms = null grafana | logger=migrator t=2024-04-18T23:14:22.410488444Z level=info msg="Migration successfully executed" id="Update dashboard table charset" duration=26.382µs kafka | ssl.client.auth = none mariadb | mysql -uroot -p"${MYSQL_ROOT_PASSWORD}" --execute "FLUSH PRIVILEGES;" policy-db-migrator | policy-apex-pdp | ssl.keystore.password = null zookeeper | [2024-04-18 23:14:24,596] INFO zookeeper.pathStats.slotCapacity = 60 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) policy-pap | sasl.login.read.timeout.ms = null grafana | logger=migrator t=2024-04-18T23:14:22.414836233Z level=info msg="Executing migration" id="Update dashboard_tag table charset" kafka | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] mariadb | + mysql -uroot -psecret --execute 'FLUSH PRIVILEGES;' policy-db-migrator | > upgrade 0170-jpatoscacapabilityassignment_occurrences.sql policy-apex-pdp | ssl.keystore.type = JKS zookeeper | [2024-04-18 23:14:24,596] INFO zookeeper.pathStats.slotDuration = 15 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) policy-pap | sasl.login.refresh.buffer.seconds = 300 grafana | logger=migrator t=2024-04-18T23:14:22.414918858Z level=info msg="Migration successfully executed" id="Update dashboard_tag table charset" duration=83.945µs kafka | ssl.endpoint.identification.algorithm = https mariadb | mysql -u"${MYSQL_USER}" -p"${MYSQL_PASSWORD}" -f policyclamp < /tmp/policy-clamp-create-tables.sql policy-db-migrator | -------------- policy-apex-pdp | ssl.protocol = TLSv1.3 zookeeper | [2024-04-18 23:14:24,597] INFO zookeeper.pathStats.maxDepth = 6 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) policy-pap | sasl.login.refresh.min.period.seconds = 60 grafana | logger=migrator t=2024-04-18T23:14:22.417231291Z level=info msg="Executing migration" id="Add column folder_id in dashboard" kafka | ssl.engine.factory.class = null mariadb | + mysql 
-upolicy_user -ppolicy_user -f policyclamp
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilityassignment_occurrences (name VARCHAR(120) NULL, version VARCHAR(20) NULL, OCCURRENCES INT DEFAULT NULL)
policy-apex-pdp | ssl.provider = null
zookeeper | [2024-04-18 23:14:24,597] INFO zookeeper.pathStats.initialDelay = 5 (org.apache.zookeeper.server.util.RequestPathMetricsCollector)
policy-pap | sasl.login.refresh.window.factor = 0.8
grafana | logger=migrator t=2024-04-18T23:14:22.420395712Z level=info msg="Migration successfully executed" id="Add column folder_id in dashboard" duration=3.163561ms
kafka | ssl.key.password = null
mariadb |
policy-db-migrator | --------------
policy-apex-pdp | ssl.secure.random.implementation = null
policy-apex-pdp | ssl.trustmanager.algorithm = PKIX
grafana | logger=migrator t=2024-04-18T23:14:22.42348988Z level=info msg="Executing migration" id="Add column isFolder in dashboard"
zookeeper | [2024-04-18 23:14:24,597] INFO zookeeper.pathStats.delay = 5 (org.apache.zookeeper.server.util.RequestPathMetricsCollector)
kafka | ssl.keymanager.algorithm = SunX509
mariadb | 2024-04-18 23:14:22+00:00 [Note] [Entrypoint]: Stopping temporary server
policy-db-migrator |
policy-pap | sasl.login.refresh.window.jitter = 0.05
policy-apex-pdp | ssl.truststore.certificates = null
grafana | logger=migrator t=2024-04-18T23:14:22.426144982Z level=info msg="Migration successfully executed" id="Add column isFolder in dashboard" duration=2.654883ms
zookeeper | [2024-04-18 23:14:24,597] INFO zookeeper.pathStats.enabled = false (org.apache.zookeeper.server.util.RequestPathMetricsCollector)
kafka | ssl.keystore.certificate.chain = null
mariadb | 2024-04-18 23:14:22 0 [Note] mariadbd (initiated by: unknown): Normal shutdown
policy-db-migrator |
policy-pap | sasl.login.retry.backoff.max.ms = 10000
policy-apex-pdp | ssl.truststore.location = null
grafana | logger=migrator t=2024-04-18T23:14:22.429124883Z level=info msg="Executing migration" id="Add column has_acl in dashboard"
zookeeper | [2024-04-18 23:14:24,599] INFO The max bytes for all large requests are set to 104857600 (org.apache.zookeeper.server.ZooKeeperServer)
kafka | ssl.keystore.key = null
mariadb | 2024-04-18 23:14:22 0 [Note] InnoDB: FTS optimize thread exiting.
policy-db-migrator | > upgrade 0180-jpatoscacapabilityassignment_properties.sql
policy-pap | sasl.login.retry.backoff.ms = 100
policy-apex-pdp | ssl.truststore.password = null
grafana | logger=migrator t=2024-04-18T23:14:22.431147599Z level=info msg="Migration successfully executed" id="Add column has_acl in dashboard" duration=2.021805ms
zookeeper | [2024-04-18 23:14:24,599] INFO The large request threshold is set to -1 (org.apache.zookeeper.server.ZooKeeperServer)
kafka | ssl.keystore.location = null
mariadb | 2024-04-18 23:14:22 0 [Note] InnoDB: Starting shutdown...
policy-db-migrator | --------------
policy-pap | sasl.mechanism = GSSAPI
policy-apex-pdp | ssl.truststore.type = JKS
grafana | logger=migrator t=2024-04-18T23:14:22.43728094Z level=info msg="Executing migration" id="Add column uid in dashboard"
zookeeper | [2024-04-18 23:14:24,600] INFO zookeeper.enforce.auth.enabled = false (org.apache.zookeeper.server.AuthenticationHelper)
kafka | ssl.keystore.password = null
mariadb | 2024-04-18 23:14:22 0 [Note] InnoDB: Dumping buffer pool(s) to /var/lib/mysql/ib_buffer_pool
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilityassignment_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGTEXT NULL, PROPERTIES_KEY VARCHAR(255) NULL)
policy-pap | sasl.oauthbearer.clock.skew.seconds = 30
policy-apex-pdp | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
grafana | logger=migrator t=2024-04-18T23:14:22.439303526Z level=info msg="Migration successfully executed" id="Add column uid in dashboard" duration=2.022666ms
zookeeper | [2024-04-18 23:14:24,600] INFO zookeeper.enforce.auth.schemes = [] (org.apache.zookeeper.server.AuthenticationHelper)
kafka | ssl.keystore.type = JKS
mariadb | 2024-04-18 23:14:22 0 [Note] InnoDB: Buffer pool(s) dump completed at 240418 23:14:22
policy-db-migrator | --------------
policy-pap | sasl.oauthbearer.expected.audience = null
policy-apex-pdp |
grafana | logger=migrator t=2024-04-18T23:14:22.450151438Z level=info msg="Executing migration" id="Update uid column values in dashboard"
zookeeper | [2024-04-18 23:14:24,600] INFO Created server with tickTime 2000 ms minSessionTimeout 4000 ms maxSessionTimeout 40000 ms clientPortListenBacklog -1 datadir /var/lib/zookeeper/log/version-2 snapdir /var/lib/zookeeper/data/version-2 (org.apache.zookeeper.server.ZooKeeperServer)
kafka | ssl.principal.mapping.rules = DEFAULT
mariadb | 2024-04-18 23:14:22 0 [Note] InnoDB: Removed temporary tablespace data file: "./ibtmp1"
policy-db-migrator |
policy-apex-pdp | [2024-04-18T23:14:56.204+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1
grafana | logger=migrator t=2024-04-18T23:14:22.450606974Z level=info msg="Migration successfully executed" id="Update uid column values in dashboard" duration=460.786µs
grafana | logger=migrator t=2024-04-18T23:14:22.460160922Z level=info msg="Executing migration" id="Add unique index dashboard_org_id_uid"
zookeeper | [2024-04-18 23:14:24,620] INFO Logging initialized @617ms to org.eclipse.jetty.util.log.Slf4jLog (org.eclipse.jetty.util.log)
kafka | ssl.protocol = TLSv1.3
mariadb | 2024-04-18 23:14:22 0 [Note] InnoDB: Shutdown completed; log sequence number 328781; transaction id 298
policy-db-migrator |
policy-apex-pdp | [2024-04-18T23:14:56.204+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5
policy-pap | sasl.oauthbearer.expected.issuer = null
grafana | logger=migrator t=2024-04-18T23:14:22.461778345Z level=info msg="Migration successfully executed" id="Add unique index dashboard_org_id_uid" duration=1.621583ms
zookeeper | [2024-04-18 23:14:24,718] WARN o.e.j.s.ServletContextHandler@6d5620ce{/,null,STOPPED} contextPath ends with /* (org.eclipse.jetty.server.handler.ContextHandler)
kafka | ssl.provider = null
mariadb | 2024-04-18 23:14:22 0 [Note] mariadbd: Shutdown complete
policy-db-migrator | > upgrade 0190-jpatoscacapabilitytype_metadata.sql
policy-apex-pdp | [2024-04-18T23:14:56.204+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1713482096202
policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
grafana | logger=migrator t=2024-04-18T23:14:22.473215811Z level=info msg="Executing migration" id="Remove unique index org_id_slug"
zookeeper | [2024-04-18 23:14:24,718] WARN Empty contextPath (org.eclipse.jetty.server.handler.ContextHandler)
kafka | ssl.secure.random.implementation = null
mariadb |
policy-db-migrator | --------------
policy-apex-pdp | [2024-04-18T23:14:56.206+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-dbe3acf0-ba50-4571-9b48-e58d24ad2dc5-1, groupId=dbe3acf0-ba50-4571-9b48-e58d24ad2dc5] Subscribed to topic(s): policy-pdp-pap
policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
grafana | logger=migrator t=2024-04-18T23:14:22.474916838Z level=info msg="Migration successfully executed" id="Remove unique index org_id_slug" duration=1.701598ms
zookeeper | [2024-04-18 23:14:24,738] INFO jetty-9.4.54.v20240208; built: 2024-02-08T19:42:39.027Z; git: cef3fbd6d736a21e7d541a5db490381d95a2047d; jvm 11.0.22+7-LTS (org.eclipse.jetty.server.Server)
kafka | ssl.trustmanager.algorithm = PKIX
mariadb | 2024-04-18 23:14:22+00:00 [Note] [Entrypoint]: Temporary server stopped
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilitytype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL)
policy-apex-pdp | [2024-04-18T23:14:56.217+00:00|INFO|ServiceManager|main] service manager starting
policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
grafana | logger=migrator t=2024-04-18T23:14:22.480408153Z level=info msg="Executing migration" id="Update dashboard title length"
zookeeper | [2024-04-18 23:14:24,768] INFO DefaultSessionIdManager workerName=node0 (org.eclipse.jetty.server.session)
kafka | ssl.truststore.certificates = null
mariadb |
policy-db-migrator | --------------
policy-apex-pdp | [2024-04-18T23:14:56.217+00:00|INFO|ServiceManager|main] service manager starting topics
policy-pap | sasl.oauthbearer.jwks.endpoint.url = null
grafana | logger=migrator t=2024-04-18T23:14:22.480442815Z level=info msg="Migration successfully executed" id="Update dashboard title length" duration=36.2µs
zookeeper | [2024-04-18 23:14:24,769] INFO No SessionScavenger set, using defaults (org.eclipse.jetty.server.session)
kafka | ssl.truststore.location = null
mariadb | 2024-04-18 23:14:22+00:00 [Note] [Entrypoint]: MariaDB init process done. Ready for start up.
policy-db-migrator |
policy-apex-pdp | [2024-04-18T23:14:56.219+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=dbe3acf0-ba50-4571-9b48-e58d24ad2dc5, consumerInstance=policy-apex-pdp, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: starting
policy-pap | sasl.oauthbearer.scope.claim.name = scope
grafana | logger=migrator t=2024-04-18T23:14:22.488819435Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_title_folder_id"
zookeeper | [2024-04-18 23:14:24,770] INFO node0 Scavenging every 600000ms (org.eclipse.jetty.server.session)
kafka | ssl.truststore.password = null
mariadb |
policy-db-migrator |
policy-apex-pdp | [2024-04-18T23:14:56.244+00:00|INFO|ConsumerConfig|main] ConsumerConfig values:
policy-pap | sasl.oauthbearer.sub.claim.name = sub
grafana | logger=migrator t=2024-04-18T23:14:22.490341233Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_title_folder_id" duration=1.522087ms
zookeeper | [2024-04-18 23:14:24,773] WARN ServletContext@o.e.j.s.ServletContextHandler@6d5620ce{/,null,STARTING} has uncovered http methods for path: /* (org.eclipse.jetty.security.SecurityHandler)
kafka | ssl.truststore.type = JKS
mariadb | 2024-04-18 23:14:22 0 [Note] mariadbd (server 10.10.2-MariaDB-1:10.10.2+maria~ubu2204) starting as process 1 ...
policy-db-migrator | > upgrade 0200-jpatoscacapabilitytype_properties.sql
policy-apex-pdp | allow.auto.create.topics = true
policy-pap | sasl.oauthbearer.token.endpoint.url = null
grafana | logger=migrator t=2024-04-18T23:14:22.498046304Z level=info msg="Executing migration" id="create dashboard_provisioning"
zookeeper | [2024-04-18 23:14:24,782] INFO Started o.e.j.s.ServletContextHandler@6d5620ce{/,null,AVAILABLE} (org.eclipse.jetty.server.handler.ContextHandler)
kafka | transaction.abort.timed.out.transaction.cleanup.interval.ms = 10000
mariadb | 2024-04-18 23:14:22 0 [Note] InnoDB: Compressed tables use zlib 1.2.11
policy-db-migrator | --------------
policy-apex-pdp | auto.commit.interval.ms = 5000
policy-pap | security.protocol = PLAINTEXT
grafana | logger=migrator t=2024-04-18T23:14:22.500663504Z level=info msg="Migration successfully executed" id="create dashboard_provisioning" duration=2.6073ms
zookeeper | [2024-04-18 23:14:24,798] INFO Started ServerConnector@4d1bf319{HTTP/1.1, (http/1.1)}{0.0.0.0:8080} (org.eclipse.jetty.server.AbstractConnector)
kafka | transaction.max.timeout.ms = 900000
mariadb | 2024-04-18 23:14:22 0 [Note] InnoDB: Number of transaction pools: 1
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilitytype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL)
policy-apex-pdp | auto.include.jmx.reporter = true
policy-pap | security.providers = null
grafana | logger=migrator t=2024-04-18T23:14:22.575690026Z level=info msg="Executing migration" id="Rename table dashboard_provisioning to dashboard_provisioning_tmp_qwerty - v1"
zookeeper | [2024-04-18 23:14:24,798] INFO Started @795ms (org.eclipse.jetty.server.Server)
kafka | transaction.partition.verification.enable = true
mariadb | 2024-04-18 23:14:22 0 [Note] InnoDB: Using crc32 + pclmulqdq instructions
policy-db-migrator | --------------
policy-apex-pdp | auto.offset.reset = latest
policy-pap | send.buffer.bytes = 131072
grafana | logger=migrator t=2024-04-18T23:14:22.585482398Z level=info msg="Migration successfully executed" id="Rename table dashboard_provisioning to dashboard_provisioning_tmp_qwerty - v1" duration=9.791922ms
zookeeper | [2024-04-18 23:14:24,798] INFO Started AdminServer on address 0.0.0.0, port 8080 and command URL /commands (org.apache.zookeeper.server.admin.JettyAdminServer)
kafka | transaction.remove.expired.transaction.cleanup.interval.ms = 3600000
mariadb | 2024-04-18 23:14:22 0 [Note] mariadbd: O_TMPFILE is not supported on /tmp (disabling future attempts)
policy-db-migrator |
policy-apex-pdp | bootstrap.servers = [kafka:9092]
policy-pap | session.timeout.ms = 45000
grafana | logger=migrator t=2024-04-18T23:14:22.591444689Z level=info msg="Executing migration" id="create dashboard_provisioning v2"
zookeeper | [2024-04-18 23:14:24,802] INFO Using org.apache.zookeeper.server.NIOServerCnxnFactory as server connection factory (org.apache.zookeeper.server.ServerCnxnFactory)
kafka | transaction.state.log.load.buffer.size = 5242880
mariadb | 2024-04-18 23:14:22 0 [Warning] mariadbd: io_uring_queue_init() failed with ENOSYS: check seccomp filters, and the kernel version (newer than 5.1 required)
policy-db-migrator |
policy-apex-pdp | check.crcs = true
policy-pap | socket.connection.setup.timeout.max.ms = 30000
grafana | logger=migrator t=2024-04-18T23:14:22.592134849Z level=info msg="Migration successfully executed" id="create dashboard_provisioning v2" duration=684.96µs
zookeeper | [2024-04-18 23:14:24,804] WARN maxCnxns is not configured, using default value 0. (org.apache.zookeeper.server.ServerCnxnFactory)
kafka | transaction.state.log.min.isr = 2
mariadb | 2024-04-18 23:14:22 0 [Warning] InnoDB: liburing disabled: falling back to innodb_use_native_aio=OFF
policy-db-migrator | > upgrade 0210-jpatoscadatatype_constraints.sql
policy-apex-pdp | client.dns.lookup = use_all_dns_ips
policy-pap | socket.connection.setup.timeout.ms = 10000
grafana | logger=migrator t=2024-04-18T23:14:22.595786198Z level=info msg="Executing migration" id="create index IDX_dashboard_provisioning_dashboard_id - v2"
zookeeper | [2024-04-18 23:14:24,805] INFO Configuring NIO connection handler with 10s sessionless connection timeout, 2 selector thread(s), 16 worker threads, and 64 kB direct buffers. (org.apache.zookeeper.server.NIOServerCnxnFactory)
kafka | transaction.state.log.num.partitions = 50
mariadb | 2024-04-18 23:14:22 0 [Note] InnoDB: Initializing buffer pool, total size = 128.000MiB, chunk size = 2.000MiB
policy-db-migrator | --------------
policy-apex-pdp | client.id = consumer-dbe3acf0-ba50-4571-9b48-e58d24ad2dc5-2
policy-pap | ssl.cipher.suites = null
grafana | logger=migrator t=2024-04-18T23:14:22.597224531Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_provisioning_dashboard_id - v2" duration=1.437493ms
zookeeper | [2024-04-18 23:14:24,807] INFO binding to port 0.0.0.0/0.0.0.0:2181 (org.apache.zookeeper.server.NIOServerCnxnFactory)
kafka | transaction.state.log.replication.factor = 3
mariadb | 2024-04-18 23:14:22 0 [Note] InnoDB: Completed initialization of buffer pool
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscadatatype_constraints (name VARCHAR(120) NULL, version VARCHAR(20) NULL, CONSTRAINTS VARCHAR(255) NULL)
policy-apex-pdp | client.rack =
policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
grafana | logger=migrator t=2024-04-18T23:14:22.601381989Z level=info msg="Executing migration" id="create index IDX_dashboard_provisioning_dashboard_id_name - v2"
zookeeper | [2024-04-18 23:14:24,823] INFO Using org.apache.zookeeper.server.watch.WatchManager as watch manager (org.apache.zookeeper.server.watch.WatchManagerFactory)
kafka | transaction.state.log.segment.bytes = 104857600
mariadb | 2024-04-18 23:14:22 0 [Note] InnoDB: File system buffers for log disabled (block size=512 bytes)
policy-db-migrator | --------------
policy-pap | ssl.endpoint.identification.algorithm = https
grafana | logger=migrator t=2024-04-18T23:14:22.60279171Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_provisioning_dashboard_id_name - v2" duration=1.409761ms
policy-apex-pdp | connections.max.idle.ms = 540000
zookeeper | [2024-04-18 23:14:24,823] INFO Using org.apache.zookeeper.server.watch.WatchManager as watch manager (org.apache.zookeeper.server.watch.WatchManagerFactory)
kafka | transactional.id.expiration.ms = 604800000
mariadb | 2024-04-18 23:14:22 0 [Note] InnoDB: 128 rollback segments are active.
policy-db-migrator |
policy-pap | ssl.engine.factory.class = null
grafana | logger=migrator t=2024-04-18T23:14:22.607406025Z level=info msg="Executing migration" id="copy dashboard_provisioning v1 to v2"
policy-apex-pdp | default.api.timeout.ms = 60000
zookeeper | [2024-04-18 23:14:24,824] INFO zookeeper.snapshotSizeFactor = 0.33 (org.apache.zookeeper.server.ZKDatabase)
kafka | unclean.leader.election.enable = false
mariadb | 2024-04-18 23:14:22 0 [Note] InnoDB: Setting file './ibtmp1' size to 12.000MiB. Physically writing the file full; Please wait ...
policy-db-migrator |
policy-pap | ssl.key.password = null
grafana | logger=migrator t=2024-04-18T23:14:22.607758035Z level=info msg="Migration successfully executed" id="copy dashboard_provisioning v1 to v2" duration=351.521µs
policy-apex-pdp | enable.auto.commit = true
zookeeper | [2024-04-18 23:14:24,824] INFO zookeeper.commitLogCount=500 (org.apache.zookeeper.server.ZKDatabase)
kafka | unstable.api.versions.enable = false
mariadb | 2024-04-18 23:14:22 0 [Note] InnoDB: File './ibtmp1' size is now 12.000MiB.
policy-db-migrator | > upgrade 0220-jpatoscadatatype_metadata.sql
policy-pap | ssl.keymanager.algorithm = SunX509
grafana | logger=migrator t=2024-04-18T23:14:22.612256603Z level=info msg="Executing migration" id="drop dashboard_provisioning_tmp_qwerty"
policy-apex-pdp | exclude.internal.topics = true
zookeeper | [2024-04-18 23:14:24,829] INFO zookeeper.snapshot.compression.method = CHECKED (org.apache.zookeeper.server.persistence.SnapStream)
kafka | zookeeper.clientCnxnSocket = null
mariadb | 2024-04-18 23:14:22 0 [Note] InnoDB: log sequence number 328781; transaction id 299
policy-db-migrator | --------------
policy-pap | ssl.keystore.certificate.chain = null
grafana | logger=migrator t=2024-04-18T23:14:22.6134358Z level=info msg="Migration successfully executed" id="drop dashboard_provisioning_tmp_qwerty" duration=1.177747ms
policy-apex-pdp | fetch.max.bytes = 52428800
zookeeper | [2024-04-18 23:14:24,829] INFO Snapshotting: 0x0 to /var/lib/zookeeper/data/version-2/snapshot.0 (org.apache.zookeeper.server.persistence.FileTxnSnapLog)
kafka | zookeeper.connect = zookeeper:2181
kafka | zookeeper.connection.timeout.ms = null
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscadatatype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL)
grafana | logger=migrator t=2024-04-18T23:14:22.618192983Z level=info msg="Executing migration" id="Add check_sum column"
policy-apex-pdp | fetch.max.wait.ms = 500
zookeeper | [2024-04-18 23:14:24,833] INFO Snapshot loaded in 8 ms, highest zxid is 0x0, digest is 1371985504 (org.apache.zookeeper.server.ZKDatabase)
mariadb | 2024-04-18 23:14:22 0 [Note] InnoDB: Loading buffer pool(s) from /var/lib/mysql/ib_buffer_pool
policy-pap | ssl.keystore.key = null
kafka | zookeeper.max.in.flight.requests = 10
policy-db-migrator | --------------
grafana | logger=migrator t=2024-04-18T23:14:22.621766108Z level=info msg="Migration successfully executed" id="Add check_sum column" duration=3.573765ms
policy-apex-pdp | fetch.min.bytes = 1
zookeeper | [2024-04-18 23:14:24,833] INFO Snapshotting: 0x0 to /var/lib/zookeeper/data/version-2/snapshot.0 (org.apache.zookeeper.server.persistence.FileTxnSnapLog)
mariadb | 2024-04-18 23:14:22 0 [Note] Plugin 'FEEDBACK' is disabled.
policy-pap | ssl.keystore.location = null
kafka | zookeeper.metadata.migration.enable = false
policy-db-migrator |
grafana | logger=migrator t=2024-04-18T23:14:22.625373695Z level=info msg="Executing migration" id="Add index for dashboard_title"
policy-apex-pdp | group.id = dbe3acf0-ba50-4571-9b48-e58d24ad2dc5
zookeeper | [2024-04-18 23:14:24,834] INFO Snapshot taken in 1 ms (org.apache.zookeeper.server.ZooKeeperServer)
mariadb | 2024-04-18 23:14:22 0 [Warning] 'default-authentication-plugin' is MySQL 5.6 / 5.7 compatible option. To be implemented in later versions.
policy-pap | ssl.keystore.password = null
kafka | zookeeper.metadata.migration.min.batch.size = 200
policy-db-migrator |
grafana | logger=migrator t=2024-04-18T23:14:22.626362622Z level=info msg="Migration successfully executed" id="Add index for dashboard_title" duration=988.646µs
policy-apex-pdp | group.instance.id = null
zookeeper | [2024-04-18 23:14:24,843] INFO zookeeper.request_throttler.shutdownTimeout = 10000 ms (org.apache.zookeeper.server.RequestThrottler)
mariadb | 2024-04-18 23:14:22 0 [Warning] You need to use --log-bin to make --expire-logs-days or --binlog-expire-logs-seconds work.
policy-pap | ssl.keystore.type = JKS
kafka | zookeeper.session.timeout.ms = 18000
policy-db-migrator | > upgrade 0230-jpatoscadatatype_properties.sql
policy-apex-pdp | heartbeat.interval.ms = 3000
grafana | logger=migrator t=2024-04-18T23:14:22.631502646Z level=info msg="Executing migration" id="delete tags for deleted dashboards"
zookeeper | [2024-04-18 23:14:24,843] INFO PrepRequestProcessor (sid:0) started, reconfigEnabled=false (org.apache.zookeeper.server.PrepRequestProcessor)
mariadb | 2024-04-18 23:14:22 0 [Note] Server socket created on IP: '0.0.0.0'.
policy-pap | ssl.protocol = TLSv1.3
kafka | zookeeper.set.acl = false
policy-db-migrator | --------------
policy-apex-pdp | interceptor.classes = []
grafana | logger=migrator t=2024-04-18T23:14:22.631771442Z level=info msg="Migration successfully executed" id="delete tags for deleted dashboards" duration=269.196µs
zookeeper | [2024-04-18 23:14:24,856] INFO Using checkIntervalMs=60000 maxPerMinute=10000 maxNeverUsedIntervalMs=0 (org.apache.zookeeper.server.ContainerManager)
mariadb | 2024-04-18 23:14:22 0 [Note] Server socket created on IP: '::'.
policy-pap | ssl.provider = null
kafka | zookeeper.ssl.cipher.suites = null
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscadatatype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL)
policy-apex-pdp | internal.leave.group.on.close = true
grafana | logger=migrator t=2024-04-18T23:14:22.635007557Z level=info msg="Executing migration" id="delete stars for deleted dashboards"
zookeeper | [2024-04-18 23:14:24,857] INFO ZooKeeper audit is disabled. (org.apache.zookeeper.audit.ZKAuditProvider)
mariadb | 2024-04-18 23:14:22 0 [Note] mariadbd: ready for connections.
policy-pap | ssl.secure.random.implementation = null
kafka | zookeeper.ssl.client.enable = false
policy-db-migrator | --------------
policy-apex-pdp | internal.throw.on.fetch.stable.offset.unsupported = false
grafana | logger=migrator t=2024-04-18T23:14:22.635210519Z level=info msg="Migration successfully executed" id="delete stars for deleted dashboards" duration=203.112µs
zookeeper | [2024-04-18 23:14:27,353] INFO Creating new log file: log.1 (org.apache.zookeeper.server.persistence.FileTxnLog)
mariadb | Version: '10.10.2-MariaDB-1:10.10.2+maria~ubu2204' socket: '/run/mysqld/mysqld.sock' port: 3306 mariadb.org binary distribution
policy-pap | ssl.trustmanager.algorithm = PKIX
kafka | zookeeper.ssl.crl.enable = false
policy-db-migrator |
policy-apex-pdp | isolation.level = read_uncommitted
grafana | logger=migrator t=2024-04-18T23:14:22.638253463Z level=info msg="Executing migration" id="Add index for dashboard_is_folder"
mariadb | 2024-04-18 23:14:22 0 [Note] InnoDB: Buffer pool(s) load completed at 240418 23:14:22
policy-pap | ssl.truststore.certificates = null
kafka | zookeeper.ssl.enabled.protocols = null
policy-db-migrator |
policy-apex-pdp | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
grafana | logger=migrator t=2024-04-18T23:14:22.639175016Z level=info msg="Migration successfully executed" id="Add index for dashboard_is_folder" duration=921.433µs
mariadb | 2024-04-18 23:14:23 3 [Warning] Aborted connection 3 to db: 'unconnected' user: 'unauthenticated' host: '172.17.0.7' (This connection closed normally without authentication)
policy-pap | ssl.truststore.location = null
kafka | zookeeper.ssl.endpoint.identification.algorithm = HTTPS
policy-db-migrator | > upgrade 0240-jpatoscanodetemplate_metadata.sql
policy-apex-pdp | max.partition.fetch.bytes = 1048576
grafana | logger=migrator t=2024-04-18T23:14:22.643379607Z level=info msg="Executing migration" id="Add isPublic for dashboard"
mariadb | 2024-04-18 23:14:23 4 [Warning] Aborted connection 4 to db: 'unconnected' user: 'unauthenticated' host: '172.17.0.6' (This connection closed normally without authentication)
policy-pap | ssl.truststore.password = null
kafka | zookeeper.ssl.keystore.location = null
policy-db-migrator | --------------
policy-apex-pdp | max.poll.interval.ms = 300000
grafana | logger=migrator t=2024-04-18T23:14:22.645719801Z level=info msg="Migration successfully executed" id="Add isPublic for dashboard" duration=2.339324ms
mariadb | 2024-04-18 23:14:24 35 [Warning] Aborted connection 35 to db: 'unconnected' user: 'unauthenticated' host: '172.17.0.10' (This connection closed normally without authentication)
policy-pap | ssl.truststore.type = JKS
kafka | zookeeper.ssl.keystore.password = null
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscanodetemplate_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL)
policy-apex-pdp | max.poll.records = 500
grafana | logger=migrator t=2024-04-18T23:14:22.663787917Z level=info msg="Executing migration" id="create data_source table"
mariadb | 2024-04-18 23:14:25 82 [Warning] Aborted connection 82 to db: 'unconnected' user: 'unauthenticated' host: '172.17.0.11' (This connection closed normally without authentication)
policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
kafka | zookeeper.ssl.keystore.type = null
policy-db-migrator | --------------
policy-apex-pdp | metadata.max.age.ms = 300000
grafana | logger=migrator t=2024-04-18T23:14:22.665854996Z level=info msg="Migration successfully executed" id="create data_source table" duration=2.071159ms
policy-pap |
kafka | zookeeper.ssl.ocsp.enable = false
policy-db-migrator |
policy-apex-pdp | metric.reporters = []
grafana | logger=migrator t=2024-04-18T23:14:22.670079748Z level=info msg="Executing migration" id="add index data_source.account_id"
policy-pap | [2024-04-18T23:14:52.340+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1
kafka | zookeeper.ssl.protocol = TLSv1.2
policy-db-migrator |
policy-apex-pdp | metrics.num.samples = 2
grafana | logger=migrator t=2024-04-18T23:14:22.670750417Z level=info msg="Migration successfully executed" id="add index data_source.account_id" duration=669.548µs
policy-pap | [2024-04-18T23:14:52.340+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5
kafka | zookeeper.ssl.truststore.location = null
policy-db-migrator | > upgrade 0250-jpatoscanodetemplate_properties.sql
policy-apex-pdp | metrics.recording.level = INFO
grafana | logger=migrator t=2024-04-18T23:14:22.673061129Z level=info msg="Executing migration" id="add unique index data_source.account_id_name"
policy-pap | [2024-04-18T23:14:52.340+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1713482092339
kafka | zookeeper.ssl.truststore.password = null
policy-db-migrator | --------------
policy-apex-pdp | metrics.sample.window.ms = 30000
grafana | logger=migrator t=2024-04-18T23:14:22.674236976Z level=info msg="Migration successfully executed" id="add unique index data_source.account_id_name" duration=1.173767ms
policy-pap | [2024-04-18T23:14:52.343+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-deefd98f-1600-442c-a15a-d2ceba267151-1, groupId=deefd98f-1600-442c-a15a-d2ceba267151] Subscribed to topic(s): policy-pdp-pap
kafka | zookeeper.ssl.truststore.type = null
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscanodetemplate_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGTEXT NULL, PROPERTIES_KEY VARCHAR(255) NULL)
policy-apex-pdp | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor]
policy-pap | [2024-04-18T23:14:52.344+00:00|INFO|ConsumerConfig|main] ConsumerConfig values:
kafka | (kafka.server.KafkaConfig)
policy-db-migrator | --------------
grafana | logger=migrator t=2024-04-18T23:14:22.703463192Z level=info msg="Executing migration" id="drop index IDX_data_source_account_id - v1"
policy-apex-pdp | receive.buffer.bytes = 65536
policy-pap | allow.auto.create.topics = true
kafka | [2024-04-18 23:14:29,030] INFO [ThrottledChannelReaper-Fetch]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
policy-db-migrator |
grafana | logger=migrator t=2024-04-18T23:14:22.704787168Z level=info msg="Migration successfully executed" id="drop index IDX_data_source_account_id - v1" duration=1.315805ms
policy-apex-pdp | reconnect.backoff.max.ms = 1000
policy-pap | auto.commit.interval.ms = 5000
kafka | [2024-04-18 23:14:29,031] INFO [ThrottledChannelReaper-Produce]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
policy-db-migrator |
grafana | logger=migrator t=2024-04-18T23:14:22.739471097Z level=info msg="Executing migration" id="drop index UQE_data_source_account_id_name - v1"
policy-apex-pdp | reconnect.backoff.ms = 50
policy-pap | auto.include.jmx.reporter = true
kafka | [2024-04-18 23:14:29,032] INFO [ThrottledChannelReaper-Request]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
policy-db-migrator | > upgrade 0260-jpatoscanodetype_metadata.sql
grafana | logger=migrator t=2024-04-18T23:14:22.7411051Z level=info msg="Migration successfully executed" id="drop index UQE_data_source_account_id_name - v1" duration=1.639134ms
policy-apex-pdp | request.timeout.ms = 30000
policy-pap | auto.offset.reset = latest
kafka | [2024-04-18 23:14:29,036] INFO [ThrottledChannelReaper-ControllerMutation]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
policy-db-migrator | --------------
grafana | logger=migrator t=2024-04-18T23:14:22.746635238Z level=info msg="Executing migration" id="Rename table data_source to data_source_v1 - v1"
policy-apex-pdp | retry.backoff.ms = 100
policy-pap | bootstrap.servers = [kafka:9092]
kafka | [2024-04-18 23:14:29,070] INFO Loading logs from log dirs ArraySeq(/var/lib/kafka/data) (kafka.log.LogManager)
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscanodetype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL)
grafana | logger=migrator t=2024-04-18T23:14:22.75329989Z level=info msg="Migration successfully executed" id="Rename table data_source to data_source_v1 - v1" duration=6.664022ms
policy-apex-pdp | sasl.client.callback.handler.class = null
policy-pap | check.crcs = true
kafka | [2024-04-18 23:14:29,076] INFO No logs found to be loaded in /var/lib/kafka/data (kafka.log.LogManager)
policy-db-migrator | --------------
grafana | logger=migrator t=2024-04-18T23:14:22.756751058Z level=info msg="Executing migration" id="create data_source table v2"
policy-apex-pdp | sasl.jaas.config = null
policy-pap | client.dns.lookup = use_all_dns_ips
kafka | [2024-04-18 23:14:29,086] INFO Loaded 0 logs in 16ms (kafka.log.LogManager)
policy-db-migrator |
grafana | logger=migrator t=2024-04-18T23:14:22.757916364Z level=info msg="Migration successfully executed" id="create data_source table v2" duration=1.163177ms
policy-apex-pdp | sasl.kerberos.kinit.cmd = /usr/bin/kinit
policy-pap | client.id = consumer-policy-pap-2
kafka | [2024-04-18 23:14:29,087] INFO Starting log cleanup with a period of 300000 ms. (kafka.log.LogManager)
policy-db-migrator |
grafana | logger=migrator t=2024-04-18T23:14:22.760614629Z level=info msg="Executing migration" id="create index IDX_data_source_org_id - v2"
policy-apex-pdp | sasl.kerberos.min.time.before.relogin = 60000
policy-pap | client.rack =
kafka | [2024-04-18 23:14:29,089] INFO Starting log flusher with a default period of 9223372036854775807 ms. (kafka.log.LogManager)
policy-db-migrator | > upgrade 0270-jpatoscanodetype_properties.sql
grafana | logger=migrator t=2024-04-18T23:14:22.761256886Z level=info msg="Migration successfully executed" id="create index IDX_data_source_org_id - v2" duration=642.087µs
policy-apex-pdp | sasl.kerberos.service.name = null
policy-pap | connections.max.idle.ms = 540000
kafka | [2024-04-18 23:14:29,099] INFO Starting the log cleaner (kafka.log.LogCleaner)
policy-db-migrator | --------------
grafana | logger=migrator t=2024-04-18T23:14:22.766931961Z level=info msg="Executing migration" id="create index UQE_data_source_org_id_name - v2"
policy-apex-pdp | sasl.kerberos.ticket.renew.jitter = 0.05
policy-pap | default.api.timeout.ms = 60000
kafka | [2024-04-18 23:14:29,144] INFO [kafka-log-cleaner-thread-0]: Starting (kafka.log.LogCleaner$CleanerThread)
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscanodetype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL)
grafana | logger=migrator t=2024-04-18T23:14:22.767875905Z level=info msg="Migration successfully executed" id="create index UQE_data_source_org_id_name - v2" duration=942.784µs
policy-apex-pdp | sasl.kerberos.ticket.renew.window.factor = 0.8
policy-pap | enable.auto.commit = true
kafka | [2024-04-18 23:14:29,162] INFO [feature-zk-node-event-process-thread]: Starting (kafka.server.FinalizedFeatureChangeListener$ChangeNotificationProcessorThread)
policy-db-migrator | --------------
grafana | logger=migrator t=2024-04-18T23:14:22.771071339Z level=info msg="Executing migration" id="Drop old table data_source_v1 #2"
policy-apex-pdp | sasl.login.callback.handler.class = null
policy-pap | exclude.internal.topics = true
kafka | [2024-04-18 23:14:29,181] INFO Feature ZK node at path: /feature does not exist (kafka.server.FinalizedFeatureChangeListener)
policy-db-migrator |
grafana | logger=migrator t=2024-04-18T23:14:22.771690174Z level=info msg="Migration successfully executed" id="Drop old table data_source_v1 #2" duration=618.026µs
policy-apex-pdp | sasl.login.class = null
policy-pap | fetch.max.bytes = 52428800
kafka | [2024-04-18 23:14:29,221] INFO [zk-broker-1-to-controller-forwarding-channel-manager]: Starting (kafka.server.BrokerToControllerRequestThread)
policy-db-migrator |
grafana | logger=migrator t=2024-04-18T23:14:22.774280623Z level=info msg="Executing migration" id="Add column with_credentials"
policy-apex-pdp | sasl.login.connect.timeout.ms = null
policy-pap | fetch.max.wait.ms = 500
kafka | [2024-04-18 23:14:29,549] INFO Updated connection-accept-rate max connection creation rate to 2147483647 (kafka.network.ConnectionQuotas)
policy-db-migrator | > upgrade 0280-jpatoscapolicy_metadata.sql
grafana | logger=migrator t=2024-04-18T23:14:22.776736043Z level=info msg="Migration successfully executed" id="Add column with_credentials" duration=2.455041ms
policy-apex-pdp | sasl.login.read.timeout.ms = null
policy-pap | fetch.min.bytes = 1
kafka | [2024-04-18 23:14:29,576] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1] Created data-plane acceptor and processors for endpoint : ListenerName(PLAINTEXT) (kafka.network.SocketServer)
grafana | logger=migrator t=2024-04-18T23:14:22.783091988Z level=info msg="Executing migration" id="Add secure json data column"
policy-apex-pdp | sasl.login.refresh.buffer.seconds = 300
policy-pap | group.id = policy-pap
policy-db-migrator | --------------
kafka | [2024-04-18 23:14:29,577] INFO Updated connection-accept-rate max connection creation rate to 2147483647 (kafka.network.ConnectionQuotas)
grafana | logger=migrator t=2024-04-18T23:14:22.787502881Z level=info msg="Migration successfully executed" id="Add secure json data column" duration=4.409523ms
policy-apex-pdp | sasl.login.refresh.min.period.seconds = 60
policy-pap | group.instance.id = null
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicy_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL)
grafana | logger=migrator t=2024-04-18T23:14:22.791084136Z level=info msg="Executing migration" id="Update data_source table charset"
kafka | [2024-04-18 23:14:29,582] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1] Created data-plane acceptor and processors for endpoint : ListenerName(PLAINTEXT_HOST) (kafka.network.SocketServer)
policy-apex-pdp | sasl.login.refresh.window.factor = 0.8
policy-pap | heartbeat.interval.ms = 3000
policy-db-migrator | --------------
grafana | logger=migrator t=2024-04-18T23:14:22.791115748Z level=info msg="Migration successfully executed" id="Update data_source table charset" duration=31.742µs
kafka | [2024-04-18 23:14:29,587] INFO [zk-broker-1-to-controller-alter-partition-channel-manager]: Starting (kafka.server.BrokerToControllerRequestThread)
policy-apex-pdp | sasl.login.refresh.window.jitter = 0.05
policy-pap | interceptor.classes = []
policy-db-migrator |
grafana | logger=migrator t=2024-04-18T23:14:22.794773108Z level=info msg="Executing migration" id="Update initial version to 1"
kafka | [2024-04-18 23:14:29,612] INFO [ExpirationReaper-1-Produce]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
policy-apex-pdp | sasl.login.retry.backoff.max.ms = 10000
policy-pap | internal.leave.group.on.close = true
policy-db-migrator |
grafana | logger=migrator t=2024-04-18T23:14:22.795364541Z level=info msg="Migration successfully executed" id="Update initial version to 1" duration=598.424µs
kafka | [2024-04-18 23:14:29,614] INFO [ExpirationReaper-1-Fetch]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
policy-apex-pdp | sasl.login.retry.backoff.ms = 100
policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false
policy-db-migrator | > upgrade 0290-jpatoscapolicy_properties.sql
grafana | logger=migrator t=2024-04-18T23:14:22.798907425Z level=info msg="Executing migration" id="Add read_only data column"
kafka | [2024-04-18 23:14:29,615] INFO [ExpirationReaper-1-DeleteRecords]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
policy-apex-pdp | sasl.mechanism = GSSAPI
policy-pap | isolation.level = read_uncommitted
policy-db-migrator | --------------
grafana | logger=migrator t=2024-04-18T23:14:22.801681434Z level=info msg="Migration successfully executed" id="Add read_only data column" duration=2.775039ms
kafka | [2024-04-18 23:14:29,618] INFO [ExpirationReaper-1-ElectLeader]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
policy-apex-pdp | sasl.oauthbearer.clock.skew.seconds = 30
policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicy_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGTEXT NULL, PROPERTIES_KEY VARCHAR(255) NULL)
grafana | logger=migrator t=2024-04-18T23:14:22.806128549Z level=info msg="Executing migration" id="Migrate logging ds to loki ds"
kafka | [2024-04-18 23:14:29,621] INFO [ExpirationReaper-1-RemoteFetch]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
policy-apex-pdp | sasl.oauthbearer.expected.audience = null
policy-pap | max.partition.fetch.bytes = 1048576
policy-db-migrator | --------------
grafana | logger=migrator t=2024-04-18T23:14:22.806366932Z level=info msg="Migration successfully executed" id="Migrate logging ds to loki ds" duration=238.303µs
kafka | [2024-04-18 23:14:29,635] INFO [LogDirFailureHandler]: Starting (kafka.server.ReplicaManager$LogDirFailureHandler)
policy-apex-pdp | sasl.oauthbearer.expected.issuer = null
policy-pap | max.poll.interval.ms = 300000
policy-db-migrator |
grafana | logger=migrator t=2024-04-18T23:14:22.808516556Z level=info msg="Executing migration" id="Update json_data with nulls"
kafka | [2024-04-18 23:14:29,641] INFO [AddPartitionsToTxnSenderThread-1]: Starting
(kafka.server.AddPartitionsToTxnManager) policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 policy-pap | max.poll.records = 500 policy-db-migrator | grafana | logger=migrator t=2024-04-18T23:14:22.808697996Z level=info msg="Migration successfully executed" id="Update json_data with nulls" duration=185.581µs policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 policy-pap | metadata.max.age.ms = 300000 policy-db-migrator | > upgrade 0300-jpatoscapolicy_targets.sql grafana | logger=migrator t=2024-04-18T23:14:22.811615283Z level=info msg="Executing migration" id="Add uid column" kafka | [2024-04-18 23:14:29,664] INFO Creating /brokers/ids/1 (is it secure? false) (kafka.zk.KafkaZkClient) policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 policy-pap | metric.reporters = [] policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-18T23:14:22.813985759Z level=info msg="Migration successfully executed" id="Add uid column" duration=2.369736ms kafka | [2024-04-18 23:14:29,691] INFO Stat of the created znode at /brokers/ids/1 is: 27,27,1713482069680,1713482069680,1,0,0,72057610558636033,258,0,27 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.url = null policy-pap | metrics.num.samples = 2 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicy_targets (name VARCHAR(120) NULL, version VARCHAR(20) NULL) grafana | logger=migrator t=2024-04-18T23:14:22.818910301Z level=info msg="Executing migration" id="Update uid value" kafka | (kafka.zk.KafkaZkClient) policy-apex-pdp | sasl.oauthbearer.scope.claim.name = scope policy-pap | metrics.recording.level = INFO policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-18T23:14:22.819137125Z level=info msg="Migration successfully executed" id="Update uid value" duration=226.703µs kafka | [2024-04-18 23:14:29,692] INFO Registered broker 1 at path /brokers/ids/1 with addresses: PLAINTEXT://kafka:9092,PLAINTEXT_HOST://localhost:29092, 
czxid (broker epoch): 27 (kafka.zk.KafkaZkClient) policy-apex-pdp | sasl.oauthbearer.sub.claim.name = sub policy-pap | metrics.sample.window.ms = 30000 policy-db-migrator | grafana | logger=migrator t=2024-04-18T23:14:22.820934858Z level=info msg="Executing migration" id="Add unique index datasource_org_id_uid" kafka | [2024-04-18 23:14:29,746] INFO [ControllerEventThread controllerId=1] Starting (kafka.controller.ControllerEventManager$ControllerEventThread) policy-apex-pdp | sasl.oauthbearer.token.endpoint.url = null policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] policy-db-migrator | grafana | logger=migrator t=2024-04-18T23:14:22.82184732Z level=info msg="Migration successfully executed" id="Add unique index datasource_org_id_uid" duration=907.903µs kafka | [2024-04-18 23:14:29,752] INFO [ExpirationReaper-1-topic]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) policy-apex-pdp | security.protocol = PLAINTEXT policy-pap | receive.buffer.bytes = 65536 policy-db-migrator | > upgrade 0310-jpatoscapolicytype_metadata.sql grafana | logger=migrator t=2024-04-18T23:14:22.82482138Z level=info msg="Executing migration" id="add unique index datasource_org_id_is_default" kafka | [2024-04-18 23:14:29,760] INFO [ExpirationReaper-1-Heartbeat]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) policy-apex-pdp | security.providers = null policy-pap | reconnect.backoff.max.ms = 1000 policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-18T23:14:22.82569461Z level=info msg="Migration successfully executed" id="add unique index datasource_org_id_is_default" duration=872.69µs kafka | [2024-04-18 23:14:29,760] INFO [ExpirationReaper-1-Rebalance]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) policy-apex-pdp | send.buffer.bytes = 131072 policy-pap | reconnect.backoff.ms = 
50 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicytype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) grafana | logger=migrator t=2024-04-18T23:14:22.830934321Z level=info msg="Executing migration" id="create api_key table" kafka | [2024-04-18 23:14:29,773] INFO Successfully created /controller_epoch with initial epoch 0 (kafka.zk.KafkaZkClient) policy-apex-pdp | session.timeout.ms = 45000 policy-pap | request.timeout.ms = 30000 policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-18T23:14:22.831755658Z level=info msg="Migration successfully executed" id="create api_key table" duration=820.847µs kafka | [2024-04-18 23:14:29,775] INFO [GroupCoordinator 1]: Starting up. (kafka.coordinator.group.GroupCoordinator) policy-apex-pdp | socket.connection.setup.timeout.max.ms = 30000 policy-pap | retry.backoff.ms = 100 policy-db-migrator | grafana | logger=migrator t=2024-04-18T23:14:22.834618712Z level=info msg="Executing migration" id="add index api_key.account_id" kafka | [2024-04-18 23:14:29,786] INFO [Controller id=1] 1 successfully elected as the controller. Epoch incremented to 1 and epoch zk version is now 1 (kafka.controller.KafkaController) policy-apex-pdp | socket.connection.setup.timeout.ms = 10000 policy-pap | sasl.client.callback.handler.class = null policy-db-migrator | grafana | logger=migrator t=2024-04-18T23:14:22.835487922Z level=info msg="Migration successfully executed" id="add index api_key.account_id" duration=866.43µs kafka | [2024-04-18 23:14:29,787] INFO [GroupCoordinator 1]: Startup complete. 
(kafka.coordinator.group.GroupCoordinator) policy-apex-pdp | ssl.cipher.suites = null policy-pap | sasl.jaas.config = null policy-db-migrator | > upgrade 0320-jpatoscapolicytype_properties.sql kafka | [2024-04-18 23:14:29,793] INFO [Controller id=1] Creating FeatureZNode at path: /feature with contents: FeatureZNode(2,Enabled,Map()) (kafka.controller.KafkaController) grafana | logger=migrator t=2024-04-18T23:14:22.838297593Z level=info msg="Executing migration" id="add index api_key.key" policy-apex-pdp | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit policy-db-migrator | -------------- kafka | [2024-04-18 23:14:29,798] INFO Feature ZK node created at path: /feature (kafka.server.FinalizedFeatureChangeListener) grafana | logger=migrator t=2024-04-18T23:14:22.839139471Z level=info msg="Migration successfully executed" id="add index api_key.key" duration=841.238µs policy-apex-pdp | ssl.endpoint.identification.algorithm = https policy-pap | sasl.kerberos.min.time.before.relogin = 60000 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicytype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL) kafka | [2024-04-18 23:14:29,819] INFO [TransactionCoordinator id=1] Starting up. 
(kafka.coordinator.transaction.TransactionCoordinator) grafana | logger=migrator t=2024-04-18T23:14:22.844722871Z level=info msg="Executing migration" id="add index api_key.account_id_name" policy-apex-pdp | ssl.engine.factory.class = null policy-pap | sasl.kerberos.service.name = null policy-db-migrator | -------------- kafka | [2024-04-18 23:14:29,823] INFO [TxnMarkerSenderThread-1]: Starting (kafka.coordinator.transaction.TransactionMarkerChannelManager) grafana | logger=migrator t=2024-04-18T23:14:22.845690117Z level=info msg="Migration successfully executed" id="add index api_key.account_id_name" duration=929.504µs policy-apex-pdp | ssl.key.password = null policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 policy-db-migrator | kafka | [2024-04-18 23:14:29,823] INFO [TransactionCoordinator id=1] Startup complete. (kafka.coordinator.transaction.TransactionCoordinator) grafana | logger=migrator t=2024-04-18T23:14:22.852555071Z level=info msg="Executing migration" id="drop index IDX_api_key_account_id - v1" policy-apex-pdp | ssl.keymanager.algorithm = SunX509 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 policy-db-migrator | kafka | [2024-04-18 23:14:29,826] INFO [MetadataCache brokerId=1] Updated cache from existing None to latest Features(version=3.6-IV2, finalizedFeatures={}, finalizedFeaturesEpoch=0). 
(kafka.server.metadata.ZkMetadataCache) grafana | logger=migrator t=2024-04-18T23:14:22.85341627Z level=info msg="Migration successfully executed" id="drop index IDX_api_key_account_id - v1" duration=862.83µs policy-apex-pdp | ssl.keystore.certificate.chain = null policy-pap | sasl.login.callback.handler.class = null policy-db-migrator | > upgrade 0330-jpatoscapolicytype_targets.sql kafka | [2024-04-18 23:14:29,826] INFO [Controller id=1] Registering handlers (kafka.controller.KafkaController) grafana | logger=migrator t=2024-04-18T23:14:22.858399056Z level=info msg="Executing migration" id="drop index UQE_api_key_key - v1" policy-apex-pdp | ssl.keystore.key = null policy-pap | sasl.login.class = null policy-db-migrator | -------------- kafka | [2024-04-18 23:14:29,837] INFO [Controller id=1] Deleting log dir event notifications (kafka.controller.KafkaController) grafana | logger=migrator t=2024-04-18T23:14:22.859602855Z level=info msg="Migration successfully executed" id="drop index UQE_api_key_key - v1" duration=1.202109ms policy-apex-pdp | ssl.keystore.location = null policy-pap | sasl.login.connect.timeout.ms = null policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicytype_targets (name VARCHAR(120) NULL, version VARCHAR(20) NULL) kafka | [2024-04-18 23:14:29,842] INFO [Controller id=1] Deleting isr change notifications (kafka.controller.KafkaController) grafana | logger=migrator t=2024-04-18T23:14:22.869335263Z level=info msg="Executing migration" id="drop index UQE_api_key_account_id_name - v1" policy-apex-pdp | ssl.keystore.password = null policy-pap | sasl.login.read.timeout.ms = null policy-db-migrator | -------------- kafka | [2024-04-18 23:14:29,846] INFO [Controller id=1] Initializing controller context (kafka.controller.KafkaController) grafana | logger=migrator t=2024-04-18T23:14:22.870955586Z level=info msg="Migration successfully executed" id="drop index UQE_api_key_account_id_name - v1" duration=1.627443ms policy-apex-pdp | 
ssl.keystore.type = JKS policy-pap | sasl.login.refresh.buffer.seconds = 300 policy-db-migrator | kafka | [2024-04-18 23:14:29,860] INFO [ExpirationReaper-1-AlterAcls]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) grafana | logger=migrator t=2024-04-18T23:14:22.874439775Z level=info msg="Executing migration" id="Rename table api_key to api_key_v1 - v1" policy-apex-pdp | ssl.protocol = TLSv1.3 policy-pap | sasl.login.refresh.min.period.seconds = 60 policy-db-migrator | kafka | [2024-04-18 23:14:29,867] INFO [Controller id=1] Initialized broker epochs cache: HashMap(1 -> 27) (kafka.controller.KafkaController) grafana | logger=migrator t=2024-04-18T23:14:22.881715263Z level=info msg="Migration successfully executed" id="Rename table api_key to api_key_v1 - v1" duration=7.278767ms policy-apex-pdp | ssl.provider = null policy-pap | sasl.login.refresh.window.factor = 0.8 policy-db-migrator | > upgrade 0340-jpatoscapolicytype_triggers.sql kafka | [2024-04-18 23:14:29,874] DEBUG [Controller id=1] Register BrokerModifications handler for Set(1) (kafka.controller.KafkaController) grafana | logger=migrator t=2024-04-18T23:14:22.88499407Z level=info msg="Executing migration" id="create api_key table v2" policy-apex-pdp | ssl.secure.random.implementation = null policy-pap | sasl.login.refresh.window.jitter = 0.05 policy-db-migrator | -------------- kafka | [2024-04-18 23:14:29,881] DEBUG [Channel manager on controller 1]: Controller 1 trying to connect to broker 1 (kafka.controller.ControllerChannelManager) grafana | logger=migrator t=2024-04-18T23:14:22.885906303Z level=info msg="Migration successfully executed" id="create api_key table v2" duration=912.073µs policy-apex-pdp | ssl.trustmanager.algorithm = PKIX policy-pap | sasl.login.retry.backoff.max.ms = 10000 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicytype_triggers (name VARCHAR(120) NULL, version VARCHAR(20) NULL, TRIGGERS VARCHAR(255) NULL) kafka | [2024-04-18 23:14:29,886] 
INFO [/config/changes-event-process-thread]: Starting (kafka.common.ZkNodeChangeNotificationListener$ChangeEventProcessThread) grafana | logger=migrator t=2024-04-18T23:14:22.890421662Z level=info msg="Executing migration" id="create index IDX_api_key_org_id - v2" policy-apex-pdp | ssl.truststore.certificates = null policy-pap | sasl.login.retry.backoff.ms = 100 policy-db-migrator | -------------- kafka | [2024-04-18 23:14:29,893] INFO [RequestSendThread controllerId=1] Starting (kafka.controller.RequestSendThread) grafana | logger=migrator t=2024-04-18T23:14:22.891327394Z level=info msg="Migration successfully executed" id="create index IDX_api_key_org_id - v2" duration=906.122µs policy-apex-pdp | ssl.truststore.location = null policy-pap | sasl.mechanism = GSSAPI policy-db-migrator | kafka | [2024-04-18 23:14:29,894] INFO [Controller id=1] Currently active brokers in the cluster: Set(1) (kafka.controller.KafkaController) grafana | logger=migrator t=2024-04-18T23:14:22.895115071Z level=info msg="Executing migration" id="create index UQE_api_key_key - v2" policy-apex-pdp | ssl.truststore.password = null policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 policy-db-migrator | kafka | [2024-04-18 23:14:29,895] INFO [Controller id=1] Currently shutting brokers in the cluster: HashSet() (kafka.controller.KafkaController) grafana | logger=migrator t=2024-04-18T23:14:22.896719393Z level=info msg="Migration successfully executed" id="create index UQE_api_key_key - v2" duration=1.604542ms policy-apex-pdp | ssl.truststore.type = JKS policy-pap | sasl.oauthbearer.expected.audience = null policy-db-migrator | > upgrade 0350-jpatoscaproperty_constraints.sql grafana | logger=migrator t=2024-04-18T23:14:22.900563853Z level=info msg="Executing migration" id="create index UQE_api_key_org_id_name - v2" policy-apex-pdp | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-pap | sasl.oauthbearer.expected.issuer = null policy-db-migrator | 
-------------- kafka | [2024-04-18 23:14:29,895] INFO [Controller id=1] Current list of topics in the cluster: HashSet() (kafka.controller.KafkaController) grafana | logger=migrator t=2024-04-18T23:14:22.901475806Z level=info msg="Migration successfully executed" id="create index UQE_api_key_org_id_name - v2" duration=911.182µs policy-apex-pdp | policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscaproperty_constraints (parentLocalName VARCHAR(120) NULL, localName VARCHAR(120) NULL, parentKeyVersion VARCHAR(15) NULL, parentKeyName VARCHAR(120) NULL, CONSTRAINTS VARCHAR(255) NULL) kafka | [2024-04-18 23:14:29,895] INFO [Controller id=1] Fetching topic deletions in progress (kafka.controller.KafkaController) grafana | logger=migrator t=2024-04-18T23:14:22.905324466Z level=info msg="Executing migration" id="copy api_key v1 to v2" policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 policy-db-migrator | -------------- kafka | [2024-04-18 23:14:29,898] INFO [Controller id=1] List of topics to be deleted: (kafka.controller.KafkaController) policy-apex-pdp | [2024-04-18T23:14:56.252+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 grafana | logger=migrator t=2024-04-18T23:14:22.905824925Z level=info msg="Migration successfully executed" id="copy api_key v1 to v2" duration=497.829µs policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 policy-db-migrator | kafka | [2024-04-18 23:14:29,899] INFO [Controller id=1] List of topics ineligible for deletion: (kafka.controller.KafkaController) policy-apex-pdp | [2024-04-18T23:14:56.252+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 grafana | logger=migrator t=2024-04-18T23:14:22.908818236Z level=info msg="Executing migration" id="Drop old table api_key_v1" policy-pap | sasl.oauthbearer.jwks.endpoint.url = null policy-db-migrator | kafka | [2024-04-18 23:14:29,899] INFO [Controller id=1] Initializing topic 
deletion manager (kafka.controller.KafkaController) policy-apex-pdp | [2024-04-18T23:14:56.252+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1713482096252 grafana | logger=migrator t=2024-04-18T23:14:22.909478844Z level=info msg="Migration successfully executed" id="Drop old table api_key_v1" duration=660.088µs policy-pap | sasl.oauthbearer.scope.claim.name = scope policy-db-migrator | > upgrade 0360-jpatoscaproperty_metadata.sql kafka | [2024-04-18 23:14:29,900] INFO [Topic Deletion Manager 1] Initializing manager with initial deletions: Set(), initial ineligible deletions: HashSet() (kafka.controller.TopicDeletionManager) policy-apex-pdp | [2024-04-18T23:14:56.253+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-dbe3acf0-ba50-4571-9b48-e58d24ad2dc5-2, groupId=dbe3acf0-ba50-4571-9b48-e58d24ad2dc5] Subscribed to topic(s): policy-pdp-pap grafana | logger=migrator t=2024-04-18T23:14:22.913001476Z level=info msg="Executing migration" id="Update api_key table charset" policy-pap | sasl.oauthbearer.sub.claim.name = sub policy-db-migrator | -------------- kafka | [2024-04-18 23:14:29,900] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1] Enabling request processing. 
(kafka.network.SocketServer) policy-apex-pdp | [2024-04-18T23:14:56.253+00:00|INFO|InlineBusTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=3e7ff64a-7d2c-4a3c-bce3-3be7547dab57, alive=false, publisher=null]]: starting grafana | logger=migrator t=2024-04-18T23:14:22.913136264Z level=info msg="Migration successfully executed" id="Update api_key table charset" duration=135.278µs policy-pap | sasl.oauthbearer.token.endpoint.url = null policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscaproperty_metadata (parentLocalName VARCHAR(120) NULL, localName VARCHAR(120) NULL, parentKeyVersion VARCHAR(15) NULL, parentKeyName VARCHAR(120) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) kafka | [2024-04-18 23:14:29,901] INFO [Controller id=1] Sending update metadata request (kafka.controller.KafkaController) policy-apex-pdp | [2024-04-18T23:14:56.265+00:00|INFO|ProducerConfig|main] ProducerConfig values: grafana | logger=migrator t=2024-04-18T23:14:22.91934124Z level=info msg="Executing migration" id="Add expires to api_key table" policy-pap | security.protocol = PLAINTEXT policy-db-migrator | -------------- kafka | [2024-04-18 23:14:29,904] INFO Awaiting socket connections on 0.0.0.0:29092. 
(kafka.network.DataPlaneAcceptor) policy-apex-pdp | acks = -1 grafana | logger=migrator t=2024-04-18T23:14:22.925355035Z level=info msg="Migration successfully executed" id="Add expires to api_key table" duration=6.011054ms policy-pap | security.providers = null policy-db-migrator | kafka | [2024-04-18 23:14:29,905] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet(1) for 0 partitions (state.change.logger) policy-apex-pdp | auto.include.jmx.reporter = true grafana | logger=migrator t=2024-04-18T23:14:22.92894512Z level=info msg="Executing migration" id="Add service account foreign key" policy-pap | send.buffer.bytes = 131072 policy-db-migrator | kafka | [2024-04-18 23:14:29,909] INFO Awaiting socket connections on 0.0.0.0:9092. (kafka.network.DataPlaneAcceptor) policy-apex-pdp | batch.size = 16384 grafana | logger=migrator t=2024-04-18T23:14:22.931575161Z level=info msg="Migration successfully executed" id="Add service account foreign key" duration=2.626391ms policy-pap | session.timeout.ms = 45000 policy-db-migrator | > upgrade 0370-jpatoscarelationshiptype_metadata.sql kafka | [2024-04-18 23:14:29,918] INFO [ReplicaStateMachine controllerId=1] Initializing replica state (kafka.controller.ZkReplicaStateMachine) policy-apex-pdp | bootstrap.servers = [kafka:9092] grafana | logger=migrator t=2024-04-18T23:14:22.934841649Z level=info msg="Executing migration" id="set service account foreign key to nil if 0" policy-pap | socket.connection.setup.timeout.max.ms = 30000 policy-db-migrator | -------------- policy-apex-pdp | buffer.memory = 33554432 policy-pap | socket.connection.setup.timeout.ms = 10000 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarelationshiptype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) kafka | [2024-04-18 23:14:29,920] INFO Kafka version: 7.6.1-ccs (org.apache.kafka.common.utils.AppInfoParser) grafana | logger=migrator 
t=2024-04-18T23:14:22.935084382Z level=info msg="Migration successfully executed" id="set service account foreign key to nil if 0" duration=243.914µs policy-apex-pdp | client.dns.lookup = use_all_dns_ips policy-pap | ssl.cipher.suites = null policy-db-migrator | -------------- kafka | [2024-04-18 23:14:29,920] INFO Kafka commitId: 11e81ad2a49db00b1d2b8c731409cd09e563de67 (org.apache.kafka.common.utils.AppInfoParser) grafana | logger=migrator t=2024-04-18T23:14:22.939305195Z level=info msg="Executing migration" id="Add last_used_at to api_key table" policy-apex-pdp | client.id = producer-1 policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] policy-db-migrator | kafka | [2024-04-18 23:14:29,920] INFO Kafka startTimeMs: 1713482069914 (org.apache.kafka.common.utils.AppInfoParser) grafana | logger=migrator t=2024-04-18T23:14:22.941964577Z level=info msg="Migration successfully executed" id="Add last_used_at to api_key table" duration=2.658773ms policy-apex-pdp | compression.type = none policy-pap | ssl.endpoint.identification.algorithm = https policy-db-migrator | kafka | [2024-04-18 23:14:29,920] INFO [ReplicaStateMachine controllerId=1] Triggering online replica state changes (kafka.controller.ZkReplicaStateMachine) grafana | logger=migrator t=2024-04-18T23:14:22.945417415Z level=info msg="Executing migration" id="Add is_revoked column to api_key table" policy-apex-pdp | connections.max.idle.ms = 540000 policy-pap | ssl.engine.factory.class = null policy-db-migrator | > upgrade 0380-jpatoscarelationshiptype_properties.sql kafka | [2024-04-18 23:14:29,925] INFO [KafkaServer id=1] started (kafka.server.KafkaServer) grafana | logger=migrator t=2024-04-18T23:14:22.948022314Z level=info msg="Migration successfully executed" id="Add is_revoked column to api_key table" duration=2.603839ms policy-pap | ssl.key.password = null policy-db-migrator | -------------- kafka | [2024-04-18 23:14:29,929] INFO [ReplicaStateMachine controllerId=1] Triggering offline replica state 
changes (kafka.controller.ZkReplicaStateMachine) grafana | logger=migrator t=2024-04-18T23:14:22.951504994Z level=info msg="Executing migration" id="create dashboard_snapshot table v4" policy-apex-pdp | delivery.timeout.ms = 120000 policy-pap | ssl.keymanager.algorithm = SunX509 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarelationshiptype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL) grafana | logger=migrator t=2024-04-18T23:14:22.952405646Z level=info msg="Migration successfully executed" id="create dashboard_snapshot table v4" duration=900.561µs policy-apex-pdp | enable.idempotence = true kafka | [2024-04-18 23:14:29,929] DEBUG [ReplicaStateMachine controllerId=1] Started replica state machine with initial state -> HashMap() (kafka.controller.ZkReplicaStateMachine) policy-pap | ssl.keystore.certificate.chain = null policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-18T23:14:22.956505031Z level=info msg="Executing migration" id="drop table dashboard_snapshot_v4 #1" policy-apex-pdp | interceptor.classes = [] kafka | [2024-04-18 23:14:29,930] INFO [PartitionStateMachine controllerId=1] Initializing partition state (kafka.controller.ZkPartitionStateMachine) policy-pap | ssl.keystore.key = null policy-db-migrator | grafana | logger=migrator t=2024-04-18T23:14:22.957150328Z level=info msg="Migration successfully executed" id="drop table dashboard_snapshot_v4 #1" duration=644.756µs policy-apex-pdp | key.serializer = class org.apache.kafka.common.serialization.StringSerializer kafka | [2024-04-18 23:14:29,932] INFO [PartitionStateMachine controllerId=1] Triggering online partition state changes (kafka.controller.ZkPartitionStateMachine) policy-pap | ssl.keystore.location = null policy-db-migrator | grafana | logger=migrator t=2024-04-18T23:14:22.960619927Z level=info msg="Executing migration" id="create dashboard_snapshot table v5 #2" policy-apex-pdp | 
linger.ms = 0 kafka | [2024-04-18 23:14:29,935] INFO [RequestSendThread controllerId=1] Controller 1 connected to kafka:9092 (id: 1 rack: null) for sending state change requests (kafka.controller.RequestSendThread) policy-pap | ssl.keystore.password = null policy-db-migrator | > upgrade 0390-jpatoscarequirement_metadata.sql policy-apex-pdp | max.block.ms = 60000 kafka | [2024-04-18 23:14:29,938] DEBUG [PartitionStateMachine controllerId=1] Started partition state machine with initial state -> HashMap() (kafka.controller.ZkPartitionStateMachine) policy-pap | ssl.keystore.type = JKS policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-18T23:14:22.961778803Z level=info msg="Migration successfully executed" id="create dashboard_snapshot table v5 #2" duration=1.156617ms policy-apex-pdp | max.in.flight.requests.per.connection = 5 kafka | [2024-04-18 23:14:29,938] INFO [Controller id=1] Ready to serve as the new controller with epoch 1 (kafka.controller.KafkaController) policy-pap | ssl.protocol = TLSv1.3 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarequirement_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) grafana | logger=migrator t=2024-04-18T23:14:22.965023979Z level=info msg="Executing migration" id="create index UQE_dashboard_snapshot_key - v5" policy-apex-pdp | max.request.size = 1048576 kafka | [2024-04-18 23:14:29,946] INFO [Controller id=1] Partitions undergoing preferred replica election: (kafka.controller.KafkaController) policy-pap | ssl.provider = null policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-18T23:14:22.965911Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_snapshot_key - v5" duration=886.831µs policy-apex-pdp | metadata.max.age.ms = 300000 kafka | [2024-04-18 23:14:29,946] INFO [Controller id=1] Partitions that completed preferred replica election: (kafka.controller.KafkaController) 
policy-pap | ssl.secure.random.implementation = null policy-db-migrator | grafana | logger=migrator t=2024-04-18T23:14:22.97079167Z level=info msg="Executing migration" id="create index UQE_dashboard_snapshot_delete_key - v5" policy-apex-pdp | metadata.max.idle.ms = 300000 kafka | [2024-04-18 23:14:29,947] INFO [Controller id=1] Skipping preferred replica election for partitions due to topic deletion: (kafka.controller.KafkaController) policy-pap | ssl.trustmanager.algorithm = PKIX policy-db-migrator | grafana | logger=migrator t=2024-04-18T23:14:22.971728364Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_snapshot_delete_key - v5" duration=938.063µs policy-apex-pdp | metric.reporters = [] kafka | [2024-04-18 23:14:29,947] INFO [Controller id=1] Resuming preferred replica election for partitions: (kafka.controller.KafkaController) policy-pap | ssl.truststore.certificates = null policy-db-migrator | > upgrade 0400-jpatoscarequirement_occurrences.sql grafana | logger=migrator t=2024-04-18T23:14:22.975119778Z level=info msg="Executing migration" id="create index IDX_dashboard_snapshot_user_id - v5" policy-apex-pdp | metrics.num.samples = 2 kafka | [2024-04-18 23:14:29,949] INFO [Controller id=1] Starting replica leader election (PREFERRED) for partitions triggered by ZkTriggered (kafka.controller.KafkaController) policy-pap | ssl.truststore.location = null policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-18T23:14:22.975984348Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_snapshot_user_id - v5" duration=864.229µs policy-apex-pdp | metrics.recording.level = INFO kafka | [2024-04-18 23:14:29,964] INFO [Controller id=1] Starting the controller scheduler (kafka.controller.KafkaController) policy-pap | ssl.truststore.password = null policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarequirement_occurrences (name VARCHAR(120) NULL, version VARCHAR(20) NULL, OCCURRENCES INT DEFAULT 
NULL) grafana | logger=migrator t=2024-04-18T23:14:22.979738493Z level=info msg="Executing migration" id="alter dashboard_snapshot to mediumtext v2" policy-apex-pdp | metrics.sample.window.ms = 30000 kafka | [2024-04-18 23:14:29,994] INFO [zk-broker-1-to-controller-alter-partition-channel-manager]: Recorded new controller, from now on will use node kafka:9092 (id: 1 rack: null) (kafka.server.BrokerToControllerRequestThread) policy-pap | ssl.truststore.type = JKS policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-18T23:14:22.979907312Z level=info msg="Migration successfully executed" id="alter dashboard_snapshot to mediumtext v2" duration=170.029µs policy-apex-pdp | partitioner.adaptive.partitioning.enable = true kafka | [2024-04-18 23:14:29,998] TRACE [Controller id=1 epoch=1] Received response UpdateMetadataResponseData(errorCode=0) for request UPDATE_METADATA with correlation id 0 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger) policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-db-migrator | grafana | logger=migrator t=2024-04-18T23:14:22.983503749Z level=info msg="Executing migration" id="Update dashboard_snapshot table charset" policy-apex-pdp | partitioner.availability.timeout.ms = 0 kafka | [2024-04-18 23:14:30,033] INFO [zk-broker-1-to-controller-forwarding-channel-manager]: Recorded new controller, from now on will use node kafka:9092 (id: 1 rack: null) (kafka.server.BrokerToControllerRequestThread) policy-pap | policy-db-migrator | grafana | logger=migrator t=2024-04-18T23:14:22.983593504Z level=info msg="Migration successfully executed" id="Update dashboard_snapshot table charset" duration=88.686µs policy-apex-pdp | partitioner.class = null kafka | [2024-04-18 23:14:34,967] INFO [Controller id=1] Processing automatic preferred replica leader election (kafka.controller.KafkaController) policy-pap | [2024-04-18T23:14:52.349+00:00|INFO|AppInfoParser|main] Kafka 
version: 3.6.1 policy-db-migrator | > upgrade 0410-jpatoscarequirement_properties.sql policy-apex-pdp | partitioner.ignore.keys = false kafka | [2024-04-18 23:14:34,967] TRACE [Controller id=1] Checking need to trigger auto leader balancing (kafka.controller.KafkaController) grafana | logger=migrator t=2024-04-18T23:14:22.988298704Z level=info msg="Executing migration" id="Add column external_delete_url to dashboard_snapshots table" policy-pap | [2024-04-18T23:14:52.349+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 policy-db-migrator | -------------- policy-apex-pdp | receive.buffer.bytes = 32768 kafka | [2024-04-18 23:14:54,784] DEBUG [Controller id=1] There is no producerId block yet (Zk path version 0), creating the first block (kafka.controller.KafkaController) grafana | logger=migrator t=2024-04-18T23:14:22.992683595Z level=info msg="Migration successfully executed" id="Add column external_delete_url to dashboard_snapshots table" duration=4.384982ms policy-pap | [2024-04-18T23:14:52.349+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1713482092349 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarequirement_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGTEXT NULL, PROPERTIES_KEY VARCHAR(255) NULL) policy-apex-pdp | reconnect.backoff.max.ms = 1000 kafka | [2024-04-18 23:14:54,790] INFO Creating topic policy-pdp-pap with configuration {} and initial partition assignment HashMap(0 -> ArrayBuffer(1)) (kafka.zk.AdminZkClient) grafana | logger=migrator t=2024-04-18T23:14:22.996361516Z level=info msg="Executing migration" id="Add encrypted dashboard json column" policy-pap | [2024-04-18T23:14:52.350+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-policy-pap-2, groupId=policy-pap] Subscribed to topic(s): policy-pdp-pap policy-db-migrator | -------------- policy-apex-pdp | reconnect.backoff.ms = 50 kafka | [2024-04-18 23:14:54,793] INFO Creating topic __consumer_offsets with configuration 
{compression.type=producer, cleanup.policy=compact, segment.bytes=104857600} and initial partition assignment HashMap(0 -> ArrayBuffer(1), 1 -> ArrayBuffer(1), 2 -> ArrayBuffer(1), 3 -> ArrayBuffer(1), 4 -> ArrayBuffer(1), 5 -> ArrayBuffer(1), 6 -> ArrayBuffer(1), 7 -> ArrayBuffer(1), 8 -> ArrayBuffer(1), 9 -> ArrayBuffer(1), 10 -> ArrayBuffer(1), 11 -> ArrayBuffer(1), 12 -> ArrayBuffer(1), 13 -> ArrayBuffer(1), 14 -> ArrayBuffer(1), 15 -> ArrayBuffer(1), 16 -> ArrayBuffer(1), 17 -> ArrayBuffer(1), 18 -> ArrayBuffer(1), 19 -> ArrayBuffer(1), 20 -> ArrayBuffer(1), 21 -> ArrayBuffer(1), 22 -> ArrayBuffer(1), 23 -> ArrayBuffer(1), 24 -> ArrayBuffer(1), 25 -> ArrayBuffer(1), 26 -> ArrayBuffer(1), 27 -> ArrayBuffer(1), 28 -> ArrayBuffer(1), 29 -> ArrayBuffer(1), 30 -> ArrayBuffer(1), 31 -> ArrayBuffer(1), 32 -> ArrayBuffer(1), 33 -> ArrayBuffer(1), 34 -> ArrayBuffer(1), 35 -> ArrayBuffer(1), 36 -> ArrayBuffer(1), 37 -> ArrayBuffer(1), 38 -> ArrayBuffer(1), 39 -> ArrayBuffer(1), 40 -> ArrayBuffer(1), 41 -> ArrayBuffer(1), 42 -> ArrayBuffer(1), 43 -> ArrayBuffer(1), 44 -> ArrayBuffer(1), 45 -> ArrayBuffer(1), 46 -> ArrayBuffer(1), 47 -> ArrayBuffer(1), 48 -> ArrayBuffer(1), 49 -> ArrayBuffer(1)) (kafka.zk.AdminZkClient) grafana | logger=migrator t=2024-04-18T23:14:22.999157186Z level=info msg="Migration successfully executed" id="Add encrypted dashboard json column" duration=2.79338ms policy-pap | [2024-04-18T23:14:52.805+00:00|INFO|PapDatabaseInitializer|main] Created initial pdpGroup in DB - PdpGroups(groups=[PdpGroup(name=defaultGroup, description=The default group that registers all supported policy types and pdps., pdpGroupState=ACTIVE, properties=null, pdpSubgroups=[PdpSubGroup(pdpType=apex, supportedPolicyTypes=[onap.policies.controlloop.operational.common.Apex 1.0.0, onap.policies.native.Apex 1.0.0], policies=[], currentInstanceCount=0, desiredInstanceCount=1, properties=null, pdpInstances=null)])]) from /opt/app/policy/pap/etc/mounted/groups.json 
policy-db-migrator | policy-apex-pdp | request.timeout.ms = 30000 kafka | [2024-04-18 23:14:54,795] INFO [Controller id=1] Acquired new producerId block ProducerIdsBlock(assignedBrokerId=1, firstProducerId=0, size=1000) by writing to Zk with path version 1 (kafka.controller.KafkaController) grafana | logger=migrator t=2024-04-18T23:14:23.002674578Z level=info msg="Executing migration" id="Change dashboard_encrypted column to MEDIUMBLOB" policy-db-migrator | policy-pap | [2024-04-18T23:14:52.974+00:00|WARN|JpaBaseConfiguration$JpaWebConfiguration|main] spring.jpa.open-in-view is enabled by default. Therefore, database queries may be performed during view rendering. Explicitly configure spring.jpa.open-in-view to disable this warning policy-apex-pdp | retries = 2147483647 grafana | logger=migrator t=2024-04-18T23:14:23.002826497Z level=info msg="Migration successfully executed" id="Change dashboard_encrypted column to MEDIUMBLOB" duration=151.888µs kafka | [2024-04-18 23:14:54,825] INFO [Controller id=1] New topics: [Set(policy-pdp-pap, __consumer_offsets)], deleted topics: [HashSet()], new partition replica assignment [Set(TopicIdReplicaAssignment(policy-pdp-pap,Some(idJrMUf2Q6auoCOWuYUphA),Map(policy-pdp-pap-0 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=))), TopicIdReplicaAssignment(__consumer_offsets,Some(Ri5cls-BQlq9q6kFJBomtA),HashMap(__consumer_offsets-22 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-30 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-25 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-35 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-37 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-38 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-13 -> ReplicaAssignment(replicas=1, 
addingReplicas=, removingReplicas=), __consumer_offsets-8 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-21 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-4 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-27 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-7 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-9 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-46 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-41 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-33 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-23 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-49 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-47 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-16 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-28 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-31 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-36 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-42 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-3 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-18 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-15 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-24 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-17 -> 
ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-48 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-19 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-11 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-2 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-43 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-6 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-14 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-20 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-0 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-44 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-39 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-12 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-45 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-1 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-5 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-26 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-29 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-34 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-10 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-32 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-40 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=))))] 
(kafka.controller.KafkaController) policy-db-migrator | > upgrade 0420-jpatoscaservicetemplate_metadata.sql policy-pap | [2024-04-18T23:14:53.238+00:00|INFO|DefaultSecurityFilterChain|main] Will secure any request with [org.springframework.security.web.session.DisableEncodeUrlFilter@cea67b1, org.springframework.security.web.context.request.async.WebAsyncManagerIntegrationFilter@5d98364c, org.springframework.security.web.context.SecurityContextHolderFilter@76105ac0, org.springframework.security.web.header.HeaderWriterFilter@42805abe, org.springframework.security.web.authentication.logout.LogoutFilter@1870b9b8, org.springframework.security.web.authentication.www.BasicAuthenticationFilter@2aeb7c4c, org.springframework.security.web.savedrequest.RequestCacheAwareFilter@30cb223b, org.springframework.security.web.servletapi.SecurityContextHolderAwareRequestFilter@50e24ea4, org.springframework.security.web.authentication.AnonymousAuthenticationFilter@23d23d98, org.springframework.security.web.access.ExceptionTranslationFilter@20f99c18, org.springframework.security.web.access.intercept.AuthorizationFilter@4fd63c43] policy-apex-pdp | retry.backoff.ms = 100 grafana | logger=migrator t=2024-04-18T23:14:23.007412345Z level=info msg="Executing migration" id="create quota table v1" kafka | [2024-04-18 23:14:54,827] INFO [Controller id=1] New partition creation callback for 
__consumer_offsets-22,__consumer_offsets-30,__consumer_offsets-25,__consumer_offsets-35,__consumer_offsets-38,__consumer_offsets-13,__consumer_offsets-8,__consumer_offsets-21,__consumer_offsets-4,__consumer_offsets-27,__consumer_offsets-7,__consumer_offsets-9,__consumer_offsets-46,__consumer_offsets-41,__consumer_offsets-33,__consumer_offsets-23,__consumer_offsets-49,__consumer_offsets-47,__consumer_offsets-16,__consumer_offsets-28,__consumer_offsets-31,__consumer_offsets-36,__consumer_offsets-42,__consumer_offsets-3,__consumer_offsets-18,__consumer_offsets-37,policy-pdp-pap-0,__consumer_offsets-15,__consumer_offsets-24,__consumer_offsets-17,__consumer_offsets-48,__consumer_offsets-19,__consumer_offsets-11,__consumer_offsets-2,__consumer_offsets-43,__consumer_offsets-6,__consumer_offsets-14,__consumer_offsets-20,__consumer_offsets-0,__consumer_offsets-44,__consumer_offsets-39,__consumer_offsets-12,__consumer_offsets-45,__consumer_offsets-1,__consumer_offsets-5,__consumer_offsets-26,__consumer_offsets-29,__consumer_offsets-34,__consumer_offsets-10,__consumer_offsets-32,__consumer_offsets-40 (kafka.controller.KafkaController) policy-db-migrator | -------------- policy-pap | [2024-04-18T23:14:54.119+00:00|INFO|EndpointLinksResolver|main] Exposing 3 endpoint(s) beneath base path '' policy-apex-pdp | sasl.client.callback.handler.class = null kafka | [2024-04-18 23:14:54,830] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-22 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-pap | [2024-04-18T23:14:54.221+00:00|INFO|Http11NioProtocol|main] Starting ProtocolHandler ["http-nio-6969"] policy-apex-pdp | sasl.jaas.config = null grafana | logger=migrator t=2024-04-18T23:14:23.0082226Z level=info msg="Migration successfully executed" id="create quota table v1" duration=810.335µs policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscaservicetemplate_metadata (name VARCHAR(120) NULL, version VARCHAR(20) 
NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) kafka | [2024-04-18 23:14:54,830] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-30 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-pap | [2024-04-18T23:14:54.243+00:00|INFO|TomcatWebServer|main] Tomcat started on port(s): 6969 (http) with context path '/policy/pap/v1' policy-apex-pdp | sasl.kerberos.kinit.cmd = /usr/bin/kinit grafana | logger=migrator t=2024-04-18T23:14:23.011784037Z level=info msg="Executing migration" id="create index UQE_quota_org_id_user_id_target - v1" policy-db-migrator | -------------- kafka | [2024-04-18 23:14:54,830] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-25 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-pap | [2024-04-18T23:14:54.264+00:00|INFO|ServiceManager|main] Policy PAP starting policy-apex-pdp | sasl.kerberos.min.time.before.relogin = 60000 grafana | logger=migrator t=2024-04-18T23:14:23.013259679Z level=info msg="Migration successfully executed" id="create index UQE_quota_org_id_user_id_target - v1" duration=1.474532ms policy-db-migrator | kafka | [2024-04-18 23:14:54,830] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-35 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-pap | [2024-04-18T23:14:54.264+00:00|INFO|ServiceManager|main] Policy PAP starting Meter Registry policy-apex-pdp | sasl.kerberos.service.name = null grafana | logger=migrator t=2024-04-18T23:14:23.017223769Z level=info msg="Executing migration" id="Update quota table charset" policy-db-migrator | kafka | [2024-04-18 23:14:54,831] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-38 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-pap | [2024-04-18T23:14:54.265+00:00|INFO|ServiceManager|main] Policy 
PAP starting PAP parameters policy-apex-pdp | sasl.kerberos.ticket.renew.jitter = 0.05 grafana | logger=migrator t=2024-04-18T23:14:23.017382027Z level=info msg="Migration successfully executed" id="Update quota table charset" duration=159.329µs policy-db-migrator | > upgrade 0430-jpatoscatopologytemplate_inputs.sql kafka | [2024-04-18 23:14:54,831] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-13 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-pap | [2024-04-18T23:14:54.266+00:00|INFO|ServiceManager|main] Policy PAP starting Pdp Heartbeat Listener grafana | logger=migrator t=2024-04-18T23:14:23.021001418Z level=info msg="Executing migration" id="create plugin_setting table" policy-db-migrator | -------------- kafka | [2024-04-18 23:14:54,831] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-8 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-pap | [2024-04-18T23:14:54.266+00:00|INFO|ServiceManager|main] Policy PAP starting Response Request ID Dispatcher policy-apex-pdp | sasl.kerberos.ticket.renew.window.factor = 0.8 grafana | logger=migrator t=2024-04-18T23:14:23.021888787Z level=info msg="Migration successfully executed" id="create plugin_setting table" duration=886.899µs policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscatopologytemplate_inputs (parentLocalName VARCHAR(120) NULL, localName VARCHAR(120) NULL, parentKeyVersion VARCHAR(15) NULL, parentKeyName VARCHAR(120) NULL, INPUTS LONGBLOB DEFAULT NULL, INPUTS_KEY VARCHAR(255) NULL) kafka | [2024-04-18 23:14:54,831] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-21 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-pap | [2024-04-18T23:14:54.266+00:00|INFO|ServiceManager|main] Policy PAP starting Heartbeat Request ID Dispatcher policy-apex-pdp | sasl.login.callback.handler.class = null 
grafana | logger=migrator t=2024-04-18T23:14:23.026427109Z level=info msg="Executing migration" id="create index UQE_plugin_setting_org_id_plugin_id - v1" policy-db-migrator | -------------- kafka | [2024-04-18 23:14:54,831] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-4 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-pap | [2024-04-18T23:14:54.267+00:00|INFO|ServiceManager|main] Policy PAP starting Response Message Dispatcher policy-apex-pdp | sasl.login.class = null grafana | logger=migrator t=2024-04-18T23:14:23.027403233Z level=info msg="Migration successfully executed" id="create index UQE_plugin_setting_org_id_plugin_id - v1" duration=975.885µs policy-db-migrator | kafka | [2024-04-18 23:14:54,831] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-27 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-pap | [2024-04-18T23:14:54.269+00:00|INFO|TopicBase|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=deefd98f-1600-442c-a15a-d2ceba267151, consumerInstance=policy-pap, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: registering org.onap.policy.common.endpoints.listeners.MessageTypeDispatcher@178ebac3 policy-apex-pdp | sasl.login.connect.timeout.ms = null grafana | logger=migrator t=2024-04-18T23:14:23.030915437Z level=info msg="Executing migration" id="Add column plugin_version to plugin_settings" policy-db-migrator | kafka | [2024-04-18 23:14:54,831] INFO [Controller id=1 epoch=1] Changed partition 
__consumer_offsets-7 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-pap | [2024-04-18T23:14:54.280+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=deefd98f-1600-442c-a15a-d2ceba267151, consumerInstance=policy-pap, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting policy-apex-pdp | sasl.login.read.timeout.ms = null grafana | logger=migrator t=2024-04-18T23:14:23.034083793Z level=info msg="Migration successfully executed" id="Add column plugin_version to plugin_settings" duration=3.166346ms policy-db-migrator | > upgrade 0440-pdpgroup_pdpsubgroup.sql kafka | [2024-04-18 23:14:54,831] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-9 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-apex-pdp | sasl.login.refresh.buffer.seconds = 300 grafana | logger=migrator t=2024-04-18T23:14:23.038653946Z level=info msg="Executing migration" id="Update plugin_setting table charset" policy-db-migrator | -------------- policy-pap | [2024-04-18T23:14:54.281+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: kafka | [2024-04-18 23:14:54,831] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-46 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-apex-pdp | sasl.login.refresh.min.period.seconds = 60 grafana | logger=migrator t=2024-04-18T23:14:23.038765493Z level=info msg="Migration successfully executed" id="Update 
plugin_setting table charset" duration=111.606µs policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpgroup_pdpsubgroup (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_PDPGROUP_PDPSUBGROUP (name, version, parentLocalName, localName, parentKeyVersion, parentKeyName)) policy-pap | allow.auto.create.topics = true kafka | [2024-04-18 23:14:54,831] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-41 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-apex-pdp | sasl.login.refresh.window.factor = 0.8 grafana | logger=migrator t=2024-04-18T23:14:23.053348901Z level=info msg="Executing migration" id="create session table" policy-db-migrator | -------------- policy-pap | auto.commit.interval.ms = 5000 kafka | [2024-04-18 23:14:54,831] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-33 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-apex-pdp | sasl.login.refresh.window.jitter = 0.05 grafana | logger=migrator t=2024-04-18T23:14:23.05513132Z level=info msg="Migration successfully executed" id="create session table" duration=1.786869ms policy-db-migrator | policy-pap | auto.include.jmx.reporter = true kafka | [2024-04-18 23:14:54,831] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-23 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-apex-pdp | sasl.login.retry.backoff.max.ms = 10000 grafana | logger=migrator t=2024-04-18T23:14:23.05909823Z level=info msg="Executing migration" id="Drop old table playlist table" policy-db-migrator | policy-pap | auto.offset.reset = latest kafka | [2024-04-18 23:14:54,831] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-49 state from 
NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-apex-pdp | sasl.login.retry.backoff.ms = 100 grafana | logger=migrator t=2024-04-18T23:14:23.059230767Z level=info msg="Migration successfully executed" id="Drop old table playlist table" duration=133.498µs policy-db-migrator | > upgrade 0450-pdpgroup.sql policy-pap | bootstrap.servers = [kafka:9092] kafka | [2024-04-18 23:14:54,831] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-47 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-apex-pdp | sasl.mechanism = GSSAPI grafana | logger=migrator t=2024-04-18T23:14:23.063049859Z level=info msg="Executing migration" id="Drop old table playlist_item table" policy-db-migrator | -------------- policy-pap | check.crcs = true kafka | [2024-04-18 23:14:54,831] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-16 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-apex-pdp | sasl.oauthbearer.clock.skew.seconds = 30 grafana | logger=migrator t=2024-04-18T23:14:23.063182656Z level=info msg="Migration successfully executed" id="Drop old table playlist_item table" duration=134.898µs policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpgroup (`DESCRIPTION` VARCHAR(255) NULL, PDPGROUPSTATE INT DEFAULT NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_PDPGROUP (name, version)) policy-pap | client.dns.lookup = use_all_dns_ips kafka | [2024-04-18 23:14:54,831] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-28 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-apex-pdp | sasl.oauthbearer.expected.audience = null grafana | logger=migrator t=2024-04-18T23:14:23.066261147Z level=info msg="Executing migration" id="create playlist table v2" policy-db-migrator | -------------- policy-pap | client.id = 
consumer-deefd98f-1600-442c-a15a-d2ceba267151-3 kafka | [2024-04-18 23:14:54,832] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-31 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-apex-pdp | sasl.oauthbearer.expected.issuer = null grafana | logger=migrator t=2024-04-18T23:14:23.067019169Z level=info msg="Migration successfully executed" id="create playlist table v2" duration=757.032µs policy-db-migrator | policy-pap | client.rack = kafka | [2024-04-18 23:14:54,832] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-36 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 grafana | logger=migrator t=2024-04-18T23:14:23.069830884Z level=info msg="Executing migration" id="create playlist item table v2" policy-db-migrator | policy-pap | connections.max.idle.ms = 540000 kafka | [2024-04-18 23:14:54,832] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-42 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 grafana | logger=migrator t=2024-04-18T23:14:23.070636559Z level=info msg="Migration successfully executed" id="create playlist item table v2" duration=805.495µs policy-db-migrator | > upgrade 0460-pdppolicystatus.sql policy-pap | default.api.timeout.ms = 60000 kafka | [2024-04-18 23:14:54,832] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-3 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 grafana | logger=migrator t=2024-04-18T23:14:23.073851677Z level=info msg="Executing migration" id="Update playlist table charset" policy-db-migrator | -------------- policy-pap | enable.auto.commit = true 
kafka | [2024-04-18 23:14:54,832] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-18 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.url = null
grafana | logger=migrator t=2024-04-18T23:14:23.073884969Z level=info msg="Migration successfully executed" id="Update playlist table charset" duration=33.392µs
policy-db-migrator | CREATE TABLE IF NOT EXISTS pdppolicystatus (DEPLOY BOOLEAN DEFAULT 0, PDPGROUP VARCHAR(255) DEFAULT NULL, PDPTYPE VARCHAR(255) DEFAULT NULL, STATE INT DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, PRIMARY KEY PK_PDPPOLICYSTATUS (parentLocalName, localName, parentKeyVersion, parentKeyName))
policy-pap | exclude.internal.topics = true
policy-apex-pdp | sasl.oauthbearer.scope.claim.name = scope
kafka | [2024-04-18 23:14:54,832] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-37 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
grafana | logger=migrator t=2024-04-18T23:14:23.078384589Z level=info msg="Executing migration" id="Update playlist_item table charset"
policy-db-migrator | --------------
policy-pap | fetch.max.bytes = 52428800
policy-apex-pdp | sasl.oauthbearer.sub.claim.name = sub
kafka | [2024-04-18 23:14:54,832] INFO [Controller id=1 epoch=1] Changed partition policy-pdp-pap-0 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
grafana | logger=migrator t=2024-04-18T23:14:23.078421851Z level=info msg="Migration successfully executed" id="Update playlist_item table charset" duration=37.593µs
policy-db-migrator |
policy-pap | fetch.max.wait.ms = 500
policy-apex-pdp | sasl.oauthbearer.token.endpoint.url = null
kafka | [2024-04-18 23:14:54,832] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-15 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
grafana | logger=migrator t=2024-04-18T23:14:23.081532393Z level=info msg="Executing migration" id="Add playlist column created_at"
policy-db-migrator |
policy-pap | fetch.min.bytes = 1
kafka | [2024-04-18 23:14:54,832] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-24 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
policy-apex-pdp | security.protocol = PLAINTEXT
grafana | logger=migrator t=2024-04-18T23:14:23.087294292Z level=info msg="Migration successfully executed" id="Add playlist column created_at" duration=5.760249ms
policy-db-migrator | > upgrade 0470-pdp.sql
policy-pap | group.id = deefd98f-1600-442c-a15a-d2ceba267151
kafka | [2024-04-18 23:14:54,832] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-17 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
policy-apex-pdp | security.providers = null
grafana | logger=migrator t=2024-04-18T23:14:23.09048836Z level=info msg="Executing migration" id="Add playlist column updated_at"
policy-db-migrator | --------------
policy-pap | group.instance.id = null
kafka | [2024-04-18 23:14:54,832] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-48 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
policy-apex-pdp | send.buffer.bytes = 131072
grafana | logger=migrator t=2024-04-18T23:14:23.092954496Z level=info msg="Migration successfully executed" id="Add playlist column updated_at" duration=2.463997ms
policy-db-migrator | CREATE TABLE IF NOT EXISTS pdp (HEALTHY INT DEFAULT NULL, MESSAGE VARCHAR(255) DEFAULT NULL, PDPSTATE INT DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_PDP (parentLocalName, localName, parentKeyVersion, parentKeyName))
policy-pap | heartbeat.interval.ms = 3000
kafka | [2024-04-18 23:14:54,832] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-19 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
policy-apex-pdp | socket.connection.setup.timeout.max.ms = 30000
grafana | logger=migrator t=2024-04-18T23:14:23.097399733Z level=info msg="Executing migration" id="drop preferences table v2"
policy-db-migrator | --------------
policy-pap | interceptor.classes = []
kafka | [2024-04-18 23:14:54,833] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-11 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
policy-apex-pdp | socket.connection.setup.timeout.ms = 10000
grafana | logger=migrator t=2024-04-18T23:14:23.097462046Z level=info msg="Migration successfully executed" id="drop preferences table v2" duration=62.843µs
policy-db-migrator |
policy-pap | internal.leave.group.on.close = true
kafka | [2024-04-18 23:14:54,833] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-2 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
policy-apex-pdp | ssl.cipher.suites = null
grafana | logger=migrator t=2024-04-18T23:14:23.100746558Z level=info msg="Executing migration" id="drop preferences table v3"
policy-db-migrator |
policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false
kafka | [2024-04-18 23:14:54,833] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-43 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
policy-apex-pdp | ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
grafana | logger=migrator t=2024-04-18T23:14:23.100817842Z level=info msg="Migration successfully executed" id="drop preferences table v3" duration=71.634µs
policy-db-migrator | > upgrade 0480-pdpstatistics.sql
policy-pap | isolation.level = read_uncommitted
kafka | [2024-04-18 23:14:54,833] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-6 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
policy-apex-pdp | ssl.endpoint.identification.algorithm = https
grafana | logger=migrator t=2024-04-18T23:14:23.103605367Z level=info msg="Executing migration" id="create preferences table v3"
policy-db-migrator | --------------
policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
kafka | [2024-04-18 23:14:54,833] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-14 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
policy-apex-pdp | ssl.engine.factory.class = null
grafana | logger=migrator t=2024-04-18T23:14:23.104543029Z level=info msg="Migration successfully executed" id="create preferences table v3" duration=937.752µs
policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpstatistics (PDPGROUPNAME VARCHAR(120) NULL, PDPSUBGROUPNAME VARCHAR(120) NULL, POLICYDEPLOYCOUNT BIGINT DEFAULT NULL, POLICYDEPLOYFAILCOUNT BIGINT DEFAULT NULL, POLICYDEPLOYSUCCESSCOUNT BIGINT DEFAULT NULL, POLICYEXECUTEDCOUNT BIGINT DEFAULT NULL, POLICYEXECUTEDFAILCOUNT BIGINT DEFAULT NULL, POLICYEXECUTEDSUCCESSCOUNT BIGINT DEFAULT NULL, timeStamp datetime NOT NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_PDPSTATISTICS (timeStamp, name, version))
policy-pap | max.partition.fetch.bytes = 1048576
kafka | [2024-04-18 23:14:54,833] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-20 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
policy-apex-pdp | ssl.key.password = null
grafana | logger=migrator t=2024-04-18T23:14:23.110182021Z level=info msg="Executing migration" id="Update preferences table charset"
policy-db-migrator | --------------
policy-pap | max.poll.interval.ms = 300000
kafka | [2024-04-18 23:14:54,833] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-0 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
policy-apex-pdp | ssl.keymanager.algorithm = SunX509
grafana | logger=migrator t=2024-04-18T23:14:23.110204922Z level=info msg="Migration successfully executed" id="Update preferences table charset" duration=22.971µs
policy-db-migrator |
policy-pap | max.poll.records = 500
kafka | [2024-04-18 23:14:54,833] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-44 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
policy-apex-pdp | ssl.keystore.certificate.chain = null
grafana | logger=migrator t=2024-04-18T23:14:23.113831653Z level=info msg="Executing migration" id="Add column team_id in preferences"
policy-db-migrator |
policy-pap | metadata.max.age.ms = 300000
kafka | [2024-04-18 23:14:54,833] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-39 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
policy-apex-pdp | ssl.keystore.key = null
grafana | logger=migrator t=2024-04-18T23:14:23.119552411Z level=info msg="Migration successfully executed" id="Add column team_id in preferences" duration=5.650493ms
policy-db-migrator | > upgrade 0490-pdpsubgroup_pdp.sql
policy-pap | metric.reporters = []
kafka | [2024-04-18 23:14:54,834] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-12 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
policy-apex-pdp | ssl.keystore.location = null
grafana | logger=migrator t=2024-04-18T23:14:23.123275567Z level=info msg="Executing migration" id="Update team_id column values in preferences"
policy-db-migrator | --------------
policy-pap | metrics.num.samples = 2
kafka | [2024-04-18 23:14:54,834] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-45 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
policy-apex-pdp | ssl.keystore.password = null
grafana | logger=migrator t=2024-04-18T23:14:23.123661498Z level=info msg="Migration successfully executed" id="Update team_id column values in preferences" duration=382.031µs
policy-pap | metrics.recording.level = INFO
policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpsubgroup_pdp (pdpParentKeyName VARCHAR(120) NOT NULL, pdpParentKeyVersion VARCHAR(15) NOT NULL, pdpParentLocalName VARCHAR(120) NOT NULL, pdpLocalName VARCHAR(120) NOT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_PDPSUBGROUP_PDP (pdpParentKeyName, pdpParentKeyVersion, pdpParentLocalName, pdpLocalName, parentLocalName, localName, parentKeyVersion, parentKeyName))
kafka | [2024-04-18 23:14:54,834] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-1 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
policy-apex-pdp | ssl.keystore.type = JKS
grafana | logger=migrator t=2024-04-18T23:14:23.127148922Z level=info msg="Executing migration" id="Add column week_start in preferences"
policy-pap | metrics.sample.window.ms = 30000
policy-db-migrator | --------------
kafka | [2024-04-18 23:14:54,834] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-5 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
policy-apex-pdp | ssl.protocol = TLSv1.3
grafana | logger=migrator t=2024-04-18T23:14:23.129908005Z level=info msg="Migration successfully executed" id="Add column week_start in preferences" duration=2.755643ms
policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor]
policy-db-migrator |
kafka | [2024-04-18 23:14:54,834] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-26 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
policy-apex-pdp | ssl.provider = null
grafana | logger=migrator t=2024-04-18T23:14:23.132806565Z level=info msg="Executing migration" id="Add column preferences.json_data"
policy-pap | receive.buffer.bytes = 65536
policy-db-migrator |
kafka | [2024-04-18 23:14:54,834] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-29 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
policy-apex-pdp | ssl.secure.random.implementation = null
policy-pap | reconnect.backoff.max.ms = 1000
policy-db-migrator | > upgrade 0500-pdpsubgroup.sql
kafka | [2024-04-18 23:14:54,834] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-34 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
policy-apex-pdp | ssl.trustmanager.algorithm = PKIX
grafana | logger=migrator t=2024-04-18T23:14:23.139735559Z level=info msg="Migration successfully executed" id="Add column preferences.json_data" duration=6.927784ms
policy-pap | reconnect.backoff.ms = 50
policy-db-migrator | --------------
policy-apex-pdp | ssl.truststore.certificates = null
kafka | [2024-04-18 23:14:54,834] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-10 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
policy-pap | request.timeout.ms = 30000
policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpsubgroup (CURRENTINSTANCECOUNT INT DEFAULT NULL, DESIREDINSTANCECOUNT INT DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_PDPSUBGROUP (parentLocalName, localName, parentKeyVersion, parentKeyName))
grafana | logger=migrator t=2024-04-18T23:14:23.143767503Z level=info msg="Executing migration" id="alter preferences.json_data to mediumtext v1"
policy-apex-pdp | ssl.truststore.location = null
kafka | [2024-04-18 23:14:54,834] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-32 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
policy-pap | retry.backoff.ms = 100
policy-db-migrator | --------------
grafana | logger=migrator t=2024-04-18T23:14:23.143864398Z level=info msg="Migration successfully executed" id="alter preferences.json_data to mediumtext v1" duration=97.175µs
policy-apex-pdp | ssl.truststore.password = null
kafka | [2024-04-18 23:14:54,834] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-40 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
policy-pap | sasl.client.callback.handler.class = null
policy-db-migrator |
grafana | logger=migrator t=2024-04-18T23:14:23.15164746Z level=info msg="Executing migration" id="Add preferences index org_id"
policy-apex-pdp | ssl.truststore.type = JKS
kafka | [2024-04-18 23:14:54,835] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger)
policy-pap | sasl.jaas.config = null
policy-db-migrator |
grafana | logger=migrator t=2024-04-18T23:14:23.152617573Z level=info msg="Migration successfully executed" id="Add preferences index org_id" duration=975.794µs
policy-apex-pdp | transaction.timeout.ms = 60000
kafka | [2024-04-18 23:14:54,840] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-32 from NonExistentReplica to NewReplica (state.change.logger)
policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit
policy-db-migrator | > upgrade 0510-toscacapabilityassignment.sql
grafana | logger=migrator t=2024-04-18T23:14:23.157222179Z level=info msg="Executing migration" id="Add preferences index user_id"
policy-apex-pdp | transactional.id = null
kafka | [2024-04-18 23:14:54,840] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-5 from NonExistentReplica to NewReplica (state.change.logger)
policy-pap | sasl.kerberos.min.time.before.relogin = 60000
policy-db-migrator | --------------
grafana | logger=migrator t=2024-04-18T23:14:23.157854474Z level=info msg="Migration successfully executed" id="Add preferences index user_id" duration=632.265µs
policy-apex-pdp | value.serializer = class org.apache.kafka.common.serialization.StringSerializer
kafka | [2024-04-18 23:14:54,841] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-44 from NonExistentReplica to NewReplica (state.change.logger)
policy-pap | sasl.kerberos.service.name = null
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilityassignment (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, type_name VARCHAR(255) NULL, type_version VARCHAR(255) NULL, PRIMARY KEY PK_TOSCACAPABILITYASSIGNMENT(name, version))
grafana | logger=migrator t=2024-04-18T23:14:23.160820768Z level=info msg="Executing migration" id="create alert table v1"
policy-apex-pdp |
kafka | [2024-04-18 23:14:54,841] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-48 from NonExistentReplica to NewReplica (state.change.logger)
policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05
policy-db-migrator | --------------
grafana | logger=migrator t=2024-04-18T23:14:23.161965401Z level=info msg="Migration successfully executed" id="create alert table v1" duration=1.144313ms
policy-apex-pdp | [2024-04-18T23:14:56.273+00:00|INFO|KafkaProducer|main] [Producer clientId=producer-1] Instantiated an idempotent producer.
kafka | [2024-04-18 23:14:54,841] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-46 from NonExistentReplica to NewReplica (state.change.logger)
grafana | logger=migrator t=2024-04-18T23:14:23.165032751Z level=info msg="Executing migration" id="add index alert org_id & id "
policy-apex-pdp | [2024-04-18T23:14:56.289+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1
kafka | [2024-04-18 23:14:54,841] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-20 from NonExistentReplica to NewReplica (state.change.logger)
policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8
policy-db-migrator |
grafana | logger=migrator t=2024-04-18T23:14:23.165954973Z level=info msg="Migration successfully executed" id="add index alert org_id & id " duration=921.431µs
policy-apex-pdp | [2024-04-18T23:14:56.289+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5
kafka | [2024-04-18 23:14:54,841] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition policy-pdp-pap-0 from NonExistentReplica to NewReplica (state.change.logger)
policy-pap | sasl.login.callback.handler.class = null
policy-db-migrator |
grafana | logger=migrator t=2024-04-18T23:14:23.171712482Z level=info msg="Executing migration" id="add index alert state"
policy-apex-pdp | [2024-04-18T23:14:56.289+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1713482096289
kafka | [2024-04-18 23:14:54,841] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-43 from NonExistentReplica to NewReplica (state.change.logger)
policy-pap | sasl.login.class = null
policy-db-migrator | > upgrade 0520-toscacapabilityassignments.sql
grafana | logger=migrator t=2024-04-18T23:14:23.172542638Z level=info msg="Migration successfully executed" id="add index alert state" duration=827.265µs
policy-apex-pdp | [2024-04-18T23:14:56.289+00:00|INFO|InlineKafkaTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=3e7ff64a-7d2c-4a3c-bce3-3be7547dab57, alive=false, publisher=KafkaPublisherWrapper []]]: KAFKA SINK created
kafka | [2024-04-18 23:14:54,841] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-24 from NonExistentReplica to NewReplica (state.change.logger)
policy-pap | sasl.login.connect.timeout.ms = null
policy-db-migrator | --------------
grafana | logger=migrator t=2024-04-18T23:14:23.178550931Z level=info msg="Executing migration" id="add index alert dashboard_id"
policy-apex-pdp | [2024-04-18T23:14:56.289+00:00|INFO|ServiceManager|main] service manager starting set alive
kafka | [2024-04-18 23:14:54,841] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-6 from NonExistentReplica to NewReplica (state.change.logger)
policy-pap | sasl.login.read.timeout.ms = null
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilityassignments (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCACAPABILITYASSIGNMENTS (name, version))
grafana | logger=migrator t=2024-04-18T23:14:23.179648772Z level=info msg="Migration successfully executed" id="add index alert dashboard_id" duration=1.101011ms
policy-apex-pdp | [2024-04-18T23:14:56.289+00:00|INFO|ServiceManager|main] service manager starting register pdp status context object
kafka | [2024-04-18 23:14:54,841] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-18 from NonExistentReplica to NewReplica (state.change.logger)
policy-pap | sasl.login.refresh.buffer.seconds = 300
policy-db-migrator | --------------
grafana | logger=migrator t=2024-04-18T23:14:23.18484281Z level=info msg="Executing migration" id="Create alert_rule_tag table v1"
policy-apex-pdp | [2024-04-18T23:14:56.291+00:00|INFO|ServiceManager|main] service manager starting topic sinks
kafka | [2024-04-18 23:14:54,841] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-21 from NonExistentReplica to NewReplica (state.change.logger)
policy-pap | sasl.login.refresh.min.period.seconds = 60
policy-db-migrator |
grafana | logger=migrator t=2024-04-18T23:14:23.185543138Z level=info msg="Migration successfully executed" id="Create alert_rule_tag table v1" duration=700.659µs
policy-apex-pdp | [2024-04-18T23:14:56.291+00:00|INFO|ServiceManager|main] service manager starting Pdp Status publisher
kafka | [2024-04-18 23:14:54,841] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-1 from NonExistentReplica to NewReplica (state.change.logger)
policy-pap | sasl.login.refresh.window.factor = 0.8
policy-db-migrator |
grafana | logger=migrator t=2024-04-18T23:14:23.189109896Z level=info msg="Executing migration" id="Add unique index alert_rule_tag.alert_id_tag_id"
policy-apex-pdp | [2024-04-18T23:14:56.293+00:00|INFO|ServiceManager|main] service manager starting Register pdp update listener
policy-pap | sasl.login.refresh.window.jitter = 0.05
kafka | [2024-04-18 23:14:54,842] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-14 from NonExistentReplica to NewReplica (state.change.logger)
policy-db-migrator | > upgrade 0530-toscacapabilityassignments_toscacapabilityassignment.sql
grafana | logger=migrator t=2024-04-18T23:14:23.189965053Z level=info msg="Migration successfully executed" id="Add unique index alert_rule_tag.alert_id_tag_id" duration=855.287µs
policy-apex-pdp | [2024-04-18T23:14:56.293+00:00|INFO|ServiceManager|main] service manager starting Register pdp state change request dispatcher
policy-pap | sasl.login.retry.backoff.max.ms = 10000
kafka | [2024-04-18 23:14:54,842] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-34 from NonExistentReplica to NewReplica (state.change.logger)
policy-db-migrator | --------------
grafana | logger=migrator t=2024-04-18T23:14:23.193190182Z level=info msg="Executing migration" id="drop index UQE_alert_rule_tag_alert_id_tag_id - v1"
policy-apex-pdp | [2024-04-18T23:14:56.293+00:00|INFO|ServiceManager|main] service manager starting Message Dispatcher
policy-pap | sasl.login.retry.backoff.ms = 100
kafka | [2024-04-18 23:14:54,842] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-16 from NonExistentReplica to NewReplica (state.change.logger)
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilityassignments_toscacapabilityassignment (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCACAPABILITYASSIGNMENTS_TOSCACAPABILITYASSIGNMENT (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion))
grafana | logger=migrator t=2024-04-18T23:14:23.193994327Z level=info msg="Migration successfully executed" id="drop index UQE_alert_rule_tag_alert_id_tag_id - v1" duration=803.965µs
policy-apex-pdp | [2024-04-18T23:14:56.293+00:00|INFO|TopicBase|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=dbe3acf0-ba50-4571-9b48-e58d24ad2dc5, consumerInstance=policy-apex-pdp, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-pdp-pap,5,main], topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: registering org.onap.policy.common.endpoints.listeners.MessageTypeDispatcher@607fbe09
policy-pap | sasl.mechanism = GSSAPI
policy-pap | sasl.oauthbearer.clock.skew.seconds = 30
policy-db-migrator | --------------
grafana | logger=migrator t=2024-04-18T23:14:23.199169284Z level=info msg="Executing migration" id="Rename table alert_rule_tag to alert_rule_tag_v1 - v1"
policy-apex-pdp | [2024-04-18T23:14:56.293+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=dbe3acf0-ba50-4571-9b48-e58d24ad2dc5, consumerInstance=policy-apex-pdp, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-pdp-pap,5,main], topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: register: start not attempted
kafka | [2024-04-18 23:14:54,842] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-29 from NonExistentReplica to NewReplica (state.change.logger)
policy-pap | sasl.oauthbearer.expected.audience = null
policy-db-migrator |
grafana | logger=migrator t=2024-04-18T23:14:23.209237362Z level=info msg="Migration successfully executed" id="Rename table alert_rule_tag to alert_rule_tag_v1 - v1" duration=10.067588ms
policy-apex-pdp | [2024-04-18T23:14:56.293+00:00|INFO|ServiceManager|main] service manager starting Create REST server
kafka | [2024-04-18 23:14:54,842] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-11 from NonExistentReplica to NewReplica (state.change.logger)
policy-pap | sasl.oauthbearer.expected.issuer = null
policy-db-migrator |
grafana | logger=migrator t=2024-04-18T23:14:23.216039929Z level=info msg="Executing migration" id="Create alert_rule_tag table v2"
policy-apex-pdp | [2024-04-18T23:14:56.308+00:00|INFO|OrderedServiceImpl|Timer-0] ***** OrderedServiceImpl implementers:
kafka | [2024-04-18 23:14:54,842] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-0 from NonExistentReplica to NewReplica (state.change.logger)
policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
policy-db-migrator | > upgrade 0540-toscacapabilitytype.sql
grafana | logger=migrator t=2024-04-18T23:14:23.216551997Z level=info msg="Migration successfully executed" id="Create alert_rule_tag table v2" duration=512.768µs
policy-apex-pdp | []
kafka | [2024-04-18 23:14:54,842] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-22 from NonExistentReplica to NewReplica (state.change.logger)
policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
policy-db-migrator | --------------
grafana | logger=migrator t=2024-04-18T23:14:23.221515062Z level=info msg="Executing migration" id="create index UQE_alert_rule_tag_alert_id_tag_id - Add unique index alert_rule_tag.alert_id_tag_id V2"
policy-apex-pdp | [2024-04-18T23:14:56.311+00:00|INFO|network|Timer-0] [OUT|KAFKA|policy-pdp-pap]
kafka | [2024-04-18 23:14:54,842] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-47 from NonExistentReplica to NewReplica (state.change.logger)
policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilitytype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCACAPABILITYTYPE (name, version))
grafana | logger=migrator t=2024-04-18T23:14:23.222346078Z level=info msg="Migration successfully executed" id="create index UQE_alert_rule_tag_alert_id_tag_id - Add unique index alert_rule_tag.alert_id_tag_id V2" duration=831.276µs
policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"46ccc948-34e0-4af2-90d0-f747053a8608","timestampMs":1713482096295,"name":"apex-d14c33eb-7783-4d4b-98e1-4d65e14a94e4","pdpGroup":"defaultGroup"}
kafka | [2024-04-18 23:14:54,842] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-36 from NonExistentReplica to NewReplica (state.change.logger)
policy-pap | sasl.oauthbearer.jwks.endpoint.url = null
policy-db-migrator | --------------
grafana | logger=migrator t=2024-04-18T23:14:23.226985745Z level=info msg="Executing migration" id="copy alert_rule_tag v1 to v2"
policy-apex-pdp | [2024-04-18T23:14:56.453+00:00|INFO|ServiceManager|main] service manager starting Rest Server
kafka | [2024-04-18 23:14:54,842] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-28 from NonExistentReplica to NewReplica (state.change.logger)
policy-pap | sasl.oauthbearer.scope.claim.name = scope
policy-db-migrator |
grafana | logger=migrator t=2024-04-18T23:14:23.227260901Z level=info msg="Migration successfully executed" id="copy alert_rule_tag v1 to v2" duration=274.936µs
policy-apex-pdp | [2024-04-18T23:14:56.453+00:00|INFO|ServiceManager|main] service manager starting
kafka | [2024-04-18 23:14:54,842] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-42 from NonExistentReplica to NewReplica (state.change.logger)
policy-pap | sasl.oauthbearer.sub.claim.name = sub
policy-db-migrator |
grafana | logger=migrator t=2024-04-18T23:14:23.232802828Z level=info msg="Executing migration" id="drop table alert_rule_tag_v1"
policy-apex-pdp | [2024-04-18T23:14:56.453+00:00|INFO|ServiceManager|main] service manager starting REST RestServerParameters
kafka | [2024-04-18 23:14:54,842] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-9 from NonExistentReplica to NewReplica (state.change.logger)
policy-pap | sasl.oauthbearer.token.endpoint.url = null
policy-db-migrator | > upgrade 0550-toscacapabilitytypes.sql
grafana | logger=migrator t=2024-04-18T23:14:23.233338388Z level=info msg="Migration successfully executed" id="drop table alert_rule_tag_v1" duration=533.039µs
policy-apex-pdp | [2024-04-18T23:14:56.453+00:00|INFO|JettyServletServer|main] JettyJerseyServer [Jerseyservlets={/metrics=io.prometheus.client.servlet.jakarta.exporter.MetricsServlet-21694e53==io.prometheus.client.servlet.jakarta.exporter.MetricsServlet@2326051b{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:null,STOPPED}, /*=org.glassfish.jersey.servlet.ServletContainer-46074492==org.glassfish.jersey.servlet.ServletContainer@705041b0{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=swagger-6969, toString()=JettyServletServer(name=RestServerParameters, host=0.0.0.0, port=6969, sniHostCheck=false, user=policyadmin, password=zb!XztG34, contextPath=/, jettyServer=Server@5aabbb29{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@72c927f1{/,null,STOPPED}, connector=RestServerParameters@53ab0286{HTTP/1.1, (http/1.1)}{0.0.0.0:6969}, jettyThread=null, servlets={/metrics=io.prometheus.client.servlet.jakarta.exporter.MetricsServlet-21694e53==io.prometheus.client.servlet.jakarta.exporter.MetricsServlet@2326051b{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:null,STOPPED}, /*=org.glassfish.jersey.servlet.ServletContainer-46074492==org.glassfish.jersey.servlet.ServletContainer@705041b0{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING
kafka | [2024-04-18 23:14:54,843] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-37 from NonExistentReplica to NewReplica (state.change.logger)
policy-pap | security.protocol = PLAINTEXT
policy-db-migrator | --------------
grafana | logger=migrator t=2024-04-18T23:14:23.236283091Z level=info msg="Executing migration" id="create alert_notification table v1"
policy-apex-pdp | [2024-04-18T23:14:56.463+00:00|INFO|ServiceManager|main] service manager started
kafka | [2024-04-18 23:14:54,843] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-13 from NonExistentReplica to NewReplica (state.change.logger)
policy-pap | security.providers = null
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilitytypes (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCACAPABILITYTYPES (name, version))
grafana | logger=migrator t=2024-04-18T23:14:23.237101566Z level=info msg="Migration successfully executed" id="create alert_notification table v1" duration=817.825µs
policy-apex-pdp | [2024-04-18T23:14:56.463+00:00|INFO|ServiceManager|main] service manager started
kafka | [2024-04-18 23:14:54,843] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-30 from NonExistentReplica to NewReplica (state.change.logger)
policy-pap | send.buffer.bytes = 131072
policy-db-migrator | --------------
grafana | logger=migrator t=2024-04-18T23:14:23.241194533Z level=info msg="Executing migration" id="Add column is_default"
policy-apex-pdp | [2024-04-18T23:14:56.463+00:00|INFO|ApexStarterMain|main] Started policy-apex-pdp service successfully.
kafka | [2024-04-18 23:14:54,843] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-35 from NonExistentReplica to NewReplica (state.change.logger)
policy-pap | session.timeout.ms = 45000
policy-db-migrator |
grafana | logger=migrator t=2024-04-18T23:14:23.244687487Z level=info msg="Migration successfully executed" id="Add column is_default" duration=3.491163ms
kafka | [2024-04-18 23:14:54,843] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-39 from NonExistentReplica to NewReplica (state.change.logger)
policy-apex-pdp | [2024-04-18T23:14:56.463+00:00|INFO|JettyServletServer|RestServerParameters-6969] JettyJerseyServer [Jerseyservlets={/metrics=io.prometheus.client.servlet.jakarta.exporter.MetricsServlet-21694e53==io.prometheus.client.servlet.jakarta.exporter.MetricsServlet@2326051b{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:null,STOPPED}, /*=org.glassfish.jersey.servlet.ServletContainer-46074492==org.glassfish.jersey.servlet.ServletContainer@705041b0{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=swagger-6969, toString()=JettyServletServer(name=RestServerParameters, host=0.0.0.0, port=6969, sniHostCheck=false, user=policyadmin, password=zb!XztG34, contextPath=/, jettyServer=Server@5aabbb29{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@72c927f1{/,null,STOPPED}, connector=RestServerParameters@53ab0286{HTTP/1.1, (http/1.1)}{0.0.0.0:6969}, jettyThread=Thread[RestServerParameters-6969,5,main], servlets={/metrics=io.prometheus.client.servlet.jakarta.exporter.MetricsServlet-21694e53==io.prometheus.client.servlet.jakarta.exporter.MetricsServlet@2326051b{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:null,STOPPED}, /*=org.glassfish.jersey.servlet.ServletContainer-46074492==org.glassfish.jersey.servlet.ServletContainer@705041b0{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING
policy-pap | socket.connection.setup.timeout.max.ms = 30000
policy-db-migrator |
grafana | logger=migrator t=2024-04-18T23:14:23.249323134Z level=info msg="Executing migration" id="Add column frequency"
kafka | [2024-04-18 23:14:54,843] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-12 from NonExistentReplica to NewReplica (state.change.logger)
policy-apex-pdp | [2024-04-18T23:14:56.615+00:00|INFO|Metadata|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-dbe3acf0-ba50-4571-9b48-e58d24ad2dc5-2, groupId=dbe3acf0-ba50-4571-9b48-e58d24ad2dc5] Cluster ID: 3CcxO9QMSqWFRVbl82UfdQ
policy-pap | socket.connection.setup.timeout.ms = 10000
policy-db-migrator | > upgrade 0560-toscacapabilitytypes_toscacapabilitytype.sql
grafana | logger=migrator t=2024-04-18T23:14:23.252768175Z level=info msg="Migration successfully executed" id="Add column frequency" duration=3.445301ms
kafka | [2024-04-18 23:14:54,843] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-27 from NonExistentReplica to NewReplica (state.change.logger)
policy-apex-pdp | [2024-04-18T23:14:56.615+00:00|INFO|Metadata|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Cluster ID: 3CcxO9QMSqWFRVbl82UfdQ
policy-pap | ssl.cipher.suites = null
policy-db-migrator | --------------
grafana | logger=migrator t=2024-04-18T23:14:23.256375745Z level=info msg="Executing migration" id="Add column send_reminder"
kafka | [2024-04-18 23:14:54,843] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-45 from NonExistentReplica to NewReplica (state.change.logger)
policy-apex-pdp | [2024-04-18T23:14:56.617+00:00|INFO|TransactionManager|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] ProducerId set to 2 with epoch 0
policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilitytypes_toscacapabilitytype (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCACAPABILITYTYPES_TOSCACAPABILITYTYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion))
grafana | logger=migrator t=2024-04-18T23:14:23.259855307Z level=info msg="Migration successfully executed" id="Add column send_reminder" duration=3.476513ms
kafka | [2024-04-18 23:14:54,843] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-19 from NonExistentReplica to NewReplica (state.change.logger)
policy-apex-pdp | [2024-04-18T23:14:56.617+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-dbe3acf0-ba50-4571-9b48-e58d24ad2dc5-2, groupId=dbe3acf0-ba50-4571-9b48-e58d24ad2dc5] Discovered group coordinator kafka:9092 (id: 2147483646 rack: null)
policy-pap | ssl.endpoint.identification.algorithm = https
policy-db-migrator | --------------
grafana | logger=migrator t=2024-04-18T23:14:23.263657868Z level=info msg="Executing migration" id="Add column disable_resolve_message"
kafka | [2024-04-18 23:14:54,843] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-49 from NonExistentReplica to NewReplica (state.change.logger)
policy-apex-pdp | [2024-04-18T23:14:56.628+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-dbe3acf0-ba50-4571-9b48-e58d24ad2dc5-2, groupId=dbe3acf0-ba50-4571-9b48-e58d24ad2dc5] (Re-)joining group
policy-pap | ssl.engine.factory.class = null
policy-db-migrator |
grafana | logger=migrator t=2024-04-18T23:14:23.267130861Z level=info msg="Migration successfully executed" id="Add column disable_resolve_message" duration=3.480023ms
kafka | [2024-04-18 23:14:54,843] TRACE [Controller id=1 epoch=1] Changed state of
replica 1 for partition __consumer_offsets-40 from NonExistentReplica to NewReplica (state.change.logger) policy-apex-pdp | [2024-04-18T23:14:56.643+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-dbe3acf0-ba50-4571-9b48-e58d24ad2dc5-2, groupId=dbe3acf0-ba50-4571-9b48-e58d24ad2dc5] Request joining group due to: need to re-join with the given member-id: consumer-dbe3acf0-ba50-4571-9b48-e58d24ad2dc5-2-b078aa9d-088f-42bc-9dfa-8245e5a83776 policy-pap | ssl.key.password = null policy-db-migrator | grafana | logger=migrator t=2024-04-18T23:14:23.27018991Z level=info msg="Executing migration" id="add index alert_notification org_id & name" kafka | [2024-04-18 23:14:54,843] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-41 from NonExistentReplica to NewReplica (state.change.logger) policy-apex-pdp | [2024-04-18T23:14:56.644+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-dbe3acf0-ba50-4571-9b48-e58d24ad2dc5-2, groupId=dbe3acf0-ba50-4571-9b48-e58d24ad2dc5] Request joining group due to: rebalance failed due to 'The group member needs to have a valid member id before actually entering a consumer group.' 
(MemberIdRequiredException) policy-pap | ssl.keymanager.algorithm = SunX509 policy-db-migrator | > upgrade 0570-toscadatatype.sql grafana | logger=migrator t=2024-04-18T23:14:23.271014726Z level=info msg="Migration successfully executed" id="add index alert_notification org_id & name" duration=824.616µs kafka | [2024-04-18 23:14:54,844] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-38 from NonExistentReplica to NewReplica (state.change.logger) policy-apex-pdp | [2024-04-18T23:14:56.644+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-dbe3acf0-ba50-4571-9b48-e58d24ad2dc5-2, groupId=dbe3acf0-ba50-4571-9b48-e58d24ad2dc5] (Re-)joining group policy-pap | ssl.keystore.certificate.chain = null policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-18T23:14:23.274949314Z level=info msg="Executing migration" id="Update alert table charset" kafka | [2024-04-18 23:14:54,844] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-8 from NonExistentReplica to NewReplica (state.change.logger) policy-apex-pdp | [2024-04-18T23:14:57.082+00:00|INFO|YamlMessageBodyHandler|RestServerParameters-6969] Accepting YAML for REST calls policy-pap | ssl.keystore.key = null policy-db-migrator | CREATE TABLE IF NOT EXISTS toscadatatype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCADATATYPE (name, version)) grafana | logger=migrator t=2024-04-18T23:14:23.274979666Z level=info msg="Migration successfully executed" id="Update alert table charset" duration=31.532µs kafka | [2024-04-18 23:14:54,844] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-7 from NonExistentReplica to NewReplica (state.change.logger) policy-apex-pdp | 
[2024-04-18T23:14:57.082+00:00|INFO|GsonMessageBodyHandler|RestServerParameters-6969] Using GSON for REST calls policy-pap | ssl.keystore.location = null policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-18T23:14:23.279260413Z level=info msg="Executing migration" id="Update alert_notification table charset" kafka | [2024-04-18 23:14:54,844] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-33 from NonExistentReplica to NewReplica (state.change.logger) policy-apex-pdp | [2024-04-18T23:14:59.649+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-dbe3acf0-ba50-4571-9b48-e58d24ad2dc5-2, groupId=dbe3acf0-ba50-4571-9b48-e58d24ad2dc5] Successfully joined group with generation Generation{generationId=1, memberId='consumer-dbe3acf0-ba50-4571-9b48-e58d24ad2dc5-2-b078aa9d-088f-42bc-9dfa-8245e5a83776', protocol='range'} policy-pap | ssl.keystore.password = null policy-db-migrator | grafana | logger=migrator t=2024-04-18T23:14:23.279285854Z level=info msg="Migration successfully executed" id="Update alert_notification table charset" duration=26.621µs kafka | [2024-04-18 23:14:54,844] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-25 from NonExistentReplica to NewReplica (state.change.logger) policy-apex-pdp | [2024-04-18T23:14:59.659+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-dbe3acf0-ba50-4571-9b48-e58d24ad2dc5-2, groupId=dbe3acf0-ba50-4571-9b48-e58d24ad2dc5] Finished assignment for group at generation 1: {consumer-dbe3acf0-ba50-4571-9b48-e58d24ad2dc5-2-b078aa9d-088f-42bc-9dfa-8245e5a83776=Assignment(partitions=[policy-pdp-pap-0])} policy-pap | ssl.keystore.type = JKS policy-db-migrator | grafana | logger=migrator t=2024-04-18T23:14:23.282678042Z level=info msg="Executing migration" id="create notification_journal table v1" kafka | [2024-04-18 23:14:54,844] TRACE [Controller id=1 epoch=1] 
Changed state of replica 1 for partition __consumer_offsets-31 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2024-04-18 23:14:54,844] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-23 from NonExistentReplica to NewReplica (state.change.logger) policy-pap | ssl.protocol = TLSv1.3 policy-db-migrator | > upgrade 0580-toscadatatypes.sql grafana | logger=migrator t=2024-04-18T23:14:23.283400132Z level=info msg="Migration successfully executed" id="create notification_journal table v1" duration=721.75µs kafka | [2024-04-18 23:14:54,844] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-10 from NonExistentReplica to NewReplica (state.change.logger) policy-apex-pdp | [2024-04-18T23:14:59.667+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-dbe3acf0-ba50-4571-9b48-e58d24ad2dc5-2, groupId=dbe3acf0-ba50-4571-9b48-e58d24ad2dc5] Successfully synced group in generation Generation{generationId=1, memberId='consumer-dbe3acf0-ba50-4571-9b48-e58d24ad2dc5-2-b078aa9d-088f-42bc-9dfa-8245e5a83776', protocol='range'} policy-pap | ssl.provider = null policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-18T23:14:23.286844793Z level=info msg="Executing migration" id="add index notification_journal org_id & alert_id & notifier_id" kafka | [2024-04-18 23:14:54,844] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-2 from NonExistentReplica to NewReplica (state.change.logger) policy-apex-pdp | [2024-04-18T23:14:59.668+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-dbe3acf0-ba50-4571-9b48-e58d24ad2dc5-2, groupId=dbe3acf0-ba50-4571-9b48-e58d24ad2dc5] Notifying assignor about the new Assignment(partitions=[policy-pdp-pap-0]) policy-pap | ssl.secure.random.implementation = null policy-db-migrator | CREATE TABLE IF NOT EXISTS toscadatatypes (name 
VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCADATATYPES (name, version)) grafana | logger=migrator t=2024-04-18T23:14:23.287773855Z level=info msg="Migration successfully executed" id="add index notification_journal org_id & alert_id & notifier_id" duration=929.282µs kafka | [2024-04-18 23:14:54,844] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-17 from NonExistentReplica to NewReplica (state.change.logger) policy-apex-pdp | [2024-04-18T23:14:59.671+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-dbe3acf0-ba50-4571-9b48-e58d24ad2dc5-2, groupId=dbe3acf0-ba50-4571-9b48-e58d24ad2dc5] Adding newly assigned partitions: policy-pdp-pap-0 policy-pap | ssl.trustmanager.algorithm = PKIX policy-db-migrator | -------------- kafka | [2024-04-18 23:14:54,844] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-4 from NonExistentReplica to NewReplica (state.change.logger) grafana | logger=migrator t=2024-04-18T23:14:23.294410993Z level=info msg="Executing migration" id="drop alert_notification_journal" policy-apex-pdp | [2024-04-18T23:14:59.678+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-dbe3acf0-ba50-4571-9b48-e58d24ad2dc5-2, groupId=dbe3acf0-ba50-4571-9b48-e58d24ad2dc5] Found no committed offset for partition policy-pdp-pap-0 policy-pap | ssl.truststore.certificates = null policy-db-migrator | kafka | [2024-04-18 23:14:54,844] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-15 from NonExistentReplica to NewReplica (state.change.logger) grafana | logger=migrator t=2024-04-18T23:14:23.295315763Z level=info msg="Migration successfully executed" id="drop alert_notification_journal" duration=911.201µs policy-apex-pdp | [2024-04-18T23:14:59.686+00:00|INFO|SubscriptionState|KAFKA-source-policy-pdp-pap] [Consumer 
clientId=consumer-dbe3acf0-ba50-4571-9b48-e58d24ad2dc5-2, groupId=dbe3acf0-ba50-4571-9b48-e58d24ad2dc5] Resetting offset for partition policy-pdp-pap-0 to position FetchPosition{offset=1, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[kafka:9092 (id: 1 rack: null)], epoch=0}}. policy-pap | ssl.truststore.location = null policy-db-migrator | kafka | [2024-04-18 23:14:54,845] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-26 from NonExistentReplica to NewReplica (state.change.logger) grafana | logger=migrator t=2024-04-18T23:14:23.299691956Z level=info msg="Executing migration" id="create alert_notification_state table v1" policy-pap | ssl.truststore.password = null policy-db-migrator | > upgrade 0590-toscadatatypes_toscadatatype.sql kafka | [2024-04-18 23:14:54,845] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-3 from NonExistentReplica to NewReplica (state.change.logger) grafana | logger=migrator t=2024-04-18T23:14:23.300376393Z level=info msg="Migration successfully executed" id="create alert_notification_state table v1" duration=680.788µs policy-apex-pdp | [2024-04-18T23:15:16.293+00:00|INFO|network|Timer-0] [OUT|KAFKA|policy-pdp-pap] policy-pap | ssl.truststore.type = JKS policy-db-migrator | -------------- kafka | [2024-04-18 23:14:54,845] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger) grafana | logger=migrator t=2024-04-18T23:14:23.302859431Z level=info msg="Executing migration" id="add index alert_notification_state org_id & alert_id & notifier_id" policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"a80741ec-ec1d-4c24-9792-e262aa00f81d","timestampMs":1713482116293,"name":"apex-d14c33eb-7783-4d4b-98e1-4d65e14a94e4","pdpGroup":"defaultGroup"} policy-pap | value.deserializer = class 
org.apache.kafka.common.serialization.StringDeserializer policy-db-migrator | CREATE TABLE IF NOT EXISTS toscadatatypes_toscadatatype (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCADATATYPES_TOSCADATATYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) kafka | [2024-04-18 23:14:55,006] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-22 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) grafana | logger=migrator t=2024-04-18T23:14:23.303497346Z level=info msg="Migration successfully executed" id="add index alert_notification_state org_id & alert_id & notifier_id" duration=638.485µs policy-apex-pdp | [2024-04-18T23:15:16.321+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-pap | policy-db-migrator | -------------- kafka | [2024-04-18 23:14:55,006] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-30 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) grafana | logger=migrator t=2024-04-18T23:14:23.30626048Z level=info msg="Executing migration" id="Add for to alert table" policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"a80741ec-ec1d-4c24-9792-e262aa00f81d","timestampMs":1713482116293,"name":"apex-d14c33eb-7783-4d4b-98e1-4d65e14a94e4","pdpGroup":"defaultGroup"} policy-pap | 
[2024-04-18T23:14:54.286+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 policy-db-migrator | grafana | logger=migrator t=2024-04-18T23:14:23.309123988Z level=info msg="Migration successfully executed" id="Add for to alert table" duration=2.863009ms policy-apex-pdp | [2024-04-18T23:15:16.323+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS kafka | [2024-04-18 23:14:55,006] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-25 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-pap | [2024-04-18T23:14:54.286+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 policy-db-migrator | grafana | logger=migrator t=2024-04-18T23:14:23.313230016Z level=info msg="Executing migration" id="Add column uid in alert_notification" policy-apex-pdp | [2024-04-18T23:15:16.517+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] kafka | [2024-04-18 23:14:55,006] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-35 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-pap | [2024-04-18T23:14:54.286+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1713482094286 policy-db-migrator | > upgrade 0600-toscanodetemplate.sql grafana | logger=migrator t=2024-04-18T23:14:23.31600368Z level=info msg="Migration successfully executed" id="Add column uid in alert_notification" duration=2.775064ms policy-apex-pdp | 
{"source":"pap-fa93d91d-c9fa-4126-a299-649d686bbaea","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"messageName":"PDP_UPDATE","requestId":"c25a74a6-c62e-4253-9df7-3b45bb0657f1","timestampMs":1713482116438,"name":"apex-d14c33eb-7783-4d4b-98e1-4d65e14a94e4","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-pap | [2024-04-18T23:14:54.286+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-deefd98f-1600-442c-a15a-d2ceba267151-3, groupId=deefd98f-1600-442c-a15a-d2ceba267151] Subscribed to topic(s): policy-pdp-pap policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-18T23:14:23.319357936Z level=info msg="Executing migration" id="Update uid column values in alert_notification" kafka | [2024-04-18 23:14:55,006] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-38 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-apex-pdp | [2024-04-18T23:15:16.533+00:00|WARN|Registry|KAFKA-source-policy-pdp-pap] replacing previously registered: object:pdp/status/publisher policy-pap | [2024-04-18T23:14:54.286+00:00|INFO|ServiceManager|main] Policy PAP starting Heartbeat Message Dispatcher policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetemplate (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, type_name VARCHAR(255) NULL, type_version VARCHAR(255) NULL, capabilitiesName VARCHAR(120) NULL, capabilitiesVersion VARCHAR(20) NULL, requirementsVersion VARCHAR(20) NULL, requirementsName VARCHAR(120) NULL, PRIMARY KEY PK_TOSCANODETEMPLATE (name, version)) grafana | logger=migrator t=2024-04-18T23:14:23.319496743Z level=info msg="Migration successfully executed" id="Update uid column values in alert_notification" duration=138.718µs 
kafka | [2024-04-18 23:14:55,006] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-13 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-apex-pdp | [2024-04-18T23:15:16.533+00:00|INFO|network|Timer-1] [OUT|KAFKA|policy-pdp-pap] policy-pap | [2024-04-18T23:14:54.286+00:00|INFO|TopicBase|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=policy-pap, consumerInstance=dcc58c8f-b414-44a2-8a46-354bb82b65f7, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-heartbeat, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: registering org.onap.policy.common.endpoints.listeners.MessageTypeDispatcher@22e95960 policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-18T23:14:23.321282732Z level=info msg="Executing migration" id="Add unique index alert_notification_org_id_uid" kafka | [2024-04-18 23:14:55,006] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-8 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"7d0c4e03-b564-45d4-9123-02a767667124","timestampMs":1713482116533,"name":"apex-d14c33eb-7783-4d4b-98e1-4d65e14a94e4","pdpGroup":"defaultGroup"} policy-pap | 
[2024-04-18T23:14:54.287+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=policy-pap, consumerInstance=dcc58c8f-b414-44a2-8a46-354bb82b65f7, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-heartbeat, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting policy-db-migrator | grafana | logger=migrator t=2024-04-18T23:14:23.321934478Z level=info msg="Migration successfully executed" id="Add unique index alert_notification_org_id_uid" duration=652.406µs kafka | [2024-04-18 23:14:55,006] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-21 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-apex-pdp | [2024-04-18T23:15:16.535+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [OUT|KAFKA|policy-pdp-pap] policy-pap | [2024-04-18T23:14:54.287+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: policy-db-migrator | grafana | logger=migrator t=2024-04-18T23:14:23.326090419Z level=info msg="Executing migration" id="Remove unique index org_id_name" kafka | [2024-04-18 23:14:55,006] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-4 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp status response 
message for PdpUpdate","policies":[],"response":{"responseTo":"c25a74a6-c62e-4253-9df7-3b45bb0657f1","responseStatus":"SUCCESS","responseMessage":"Pdp update successful."},"messageName":"PDP_STATUS","requestId":"88e33828-f496-4f68-bdce-d4632c78eedd","timestampMs":1713482116535,"name":"apex-d14c33eb-7783-4d4b-98e1-4d65e14a94e4","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-pap | allow.auto.create.topics = true policy-db-migrator | > upgrade 0610-toscanodetemplates.sql grafana | logger=migrator t=2024-04-18T23:14:23.326644519Z level=info msg="Migration successfully executed" id="Remove unique index org_id_name" duration=554.39µs kafka | [2024-04-18 23:14:55,006] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-27 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-apex-pdp | [2024-04-18T23:15:16.547+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-pap | auto.commit.interval.ms = 5000 policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-18T23:14:23.328448959Z level=info msg="Executing migration" id="Add column secure_settings in alert_notification" kafka | [2024-04-18 23:14:55,006] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-7 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"7d0c4e03-b564-45d4-9123-02a767667124","timestampMs":1713482116533,"name":"apex-d14c33eb-7783-4d4b-98e1-4d65e14a94e4","pdpGroup":"defaultGroup"} policy-pap | auto.include.jmx.reporter = true policy-db-migrator 
| CREATE TABLE IF NOT EXISTS toscanodetemplates (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCANODETEMPLATES (name, version)) grafana | logger=migrator t=2024-04-18T23:14:23.331052834Z level=info msg="Migration successfully executed" id="Add column secure_settings in alert_notification" duration=2.601855ms kafka | [2024-04-18 23:14:55,006] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-9 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-apex-pdp | [2024-04-18T23:15:16.547+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS policy-pap | auto.offset.reset = latest policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-18T23:14:23.33369005Z level=info msg="Executing migration" id="alter alert.settings to mediumtext" kafka | [2024-04-18 23:14:55,006] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-46 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-apex-pdp | [2024-04-18T23:15:16.551+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-pap | bootstrap.servers = [kafka:9092] policy-db-migrator | grafana | logger=migrator t=2024-04-18T23:14:23.333738753Z level=info msg="Migration successfully executed" id="alter alert.settings to mediumtext" duration=48.683µs kafka | [2024-04-18 23:14:55,006] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-41 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, 
partitionEpoch=0) (state.change.logger) policy-pap | check.crcs = true policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"c25a74a6-c62e-4253-9df7-3b45bb0657f1","responseStatus":"SUCCESS","responseMessage":"Pdp update successful."},"messageName":"PDP_STATUS","requestId":"88e33828-f496-4f68-bdce-d4632c78eedd","timestampMs":1713482116535,"name":"apex-d14c33eb-7783-4d4b-98e1-4d65e14a94e4","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-db-migrator | grafana | logger=migrator t=2024-04-18T23:14:23.338232532Z level=info msg="Executing migration" id="Add non-unique index alert_notification_state_alert_id" kafka | [2024-04-18 23:14:55,006] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-33 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-pap | client.dns.lookup = use_all_dns_ips policy-apex-pdp | [2024-04-18T23:15:16.552+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS policy-db-migrator | > upgrade 0620-toscanodetemplates_toscanodetemplate.sql grafana | logger=migrator t=2024-04-18T23:14:23.339623189Z level=info msg="Migration successfully executed" id="Add non-unique index alert_notification_state_alert_id" duration=1.391397ms kafka | [2024-04-18 23:14:55,006] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-23 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-pap | client.id = consumer-policy-pap-4 policy-apex-pdp | [2024-04-18T23:15:16.585+00:00|INFO|network|KAFKA-source-policy-pdp-pap] 
[IN|KAFKA|policy-pdp-pap] policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-18T23:14:23.342459106Z level=info msg="Executing migration" id="Add non-unique index alert_rule_tag_alert_id" kafka | [2024-04-18 23:14:55,006] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-49 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-pap | client.rack = policy-apex-pdp | {"source":"pap-fa93d91d-c9fa-4126-a299-649d686bbaea","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"35f314d5-4e39-45ee-b5dd-8c7ab9415862","timestampMs":1713482116438,"name":"apex-d14c33eb-7783-4d4b-98e1-4d65e14a94e4","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetemplates_toscanodetemplate (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCANODETEMPLATES_TOSCANODETEMPLATE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) grafana | logger=migrator t=2024-04-18T23:14:23.343863754Z level=info msg="Migration successfully executed" id="Add non-unique index alert_rule_tag_alert_id" duration=1.403838ms kafka | [2024-04-18 23:14:55,006] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-47 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-pap | connections.max.idle.ms = 540000 policy-apex-pdp | [2024-04-18T23:15:16.590+00:00|INFO|network|KAFKA-source-policy-pdp-pap] 
[OUT|KAFKA|policy-pdp-pap] policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-18T23:14:23.346734773Z level=info msg="Executing migration" id="Drop old annotation table v4" kafka | [2024-04-18 23:14:55,006] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-16 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-pap | default.api.timeout.ms = 60000 policy-apex-pdp | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpStateChange","policies":[],"response":{"responseTo":"35f314d5-4e39-45ee-b5dd-8c7ab9415862","responseStatus":"SUCCESS","responseMessage":"State changed to active. No policies found."},"messageName":"PDP_STATUS","requestId":"aff58f06-2f5f-434f-a796-4eda85435870","timestampMs":1713482116589,"name":"apex-d14c33eb-7783-4d4b-98e1-4d65e14a94e4","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-db-migrator | grafana | logger=migrator t=2024-04-18T23:14:23.346823938Z level=info msg="Migration successfully executed" id="Drop old annotation table v4" duration=89.325µs kafka | [2024-04-18 23:14:55,006] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-28 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-pap | enable.auto.commit = true policy-apex-pdp | [2024-04-18T23:15:16.598+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-db-migrator | grafana | logger=migrator t=2024-04-18T23:14:23.350915675Z level=info msg="Executing migration" id="create annotation table v5" kafka | [2024-04-18 23:14:55,007] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-31 from 
NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-pap | exclude.internal.topics = true policy-apex-pdp | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpStateChange","policies":[],"response":{"responseTo":"35f314d5-4e39-45ee-b5dd-8c7ab9415862","responseStatus":"SUCCESS","responseMessage":"State changed to active. No policies found."},"messageName":"PDP_STATUS","requestId":"aff58f06-2f5f-434f-a796-4eda85435870","timestampMs":1713482116589,"name":"apex-d14c33eb-7783-4d4b-98e1-4d65e14a94e4","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-db-migrator | > upgrade 0630-toscanodetype.sql grafana | logger=migrator t=2024-04-18T23:14:23.351880408Z level=info msg="Migration successfully executed" id="create annotation table v5" duration=965.043µs kafka | [2024-04-18 23:14:55,007] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-36 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-pap | fetch.max.bytes = 52428800 policy-apex-pdp | [2024-04-18T23:15:16.600+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-18T23:14:23.354785629Z level=info msg="Executing migration" id="add index annotation 0 v3" kafka | [2024-04-18 23:14:55,007] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-42 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-pap | 
fetch.max.wait.ms = 500 policy-apex-pdp | [2024-04-18T23:15:16.612+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, requirementsVersion VARCHAR(20) NULL, requirementsName VARCHAR(120) NULL, PRIMARY KEY PK_TOSCANODETYPE (name, version)) grafana | logger=migrator t=2024-04-18T23:14:23.355679089Z level=info msg="Migration successfully executed" id="add index annotation 0 v3" duration=892.89µs kafka | [2024-04-18 23:14:55,007] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-3 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-pap | fetch.min.bytes = 1 policy-apex-pdp | {"source":"pap-fa93d91d-c9fa-4126-a299-649d686bbaea","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"46108e0e-87ab-4235-a93e-b62b8e791b82","timestampMs":1713482116585,"name":"apex-d14c33eb-7783-4d4b-98e1-4d65e14a94e4","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-18T23:14:23.358294764Z level=info msg="Executing migration" id="add index annotation 1 v3" kafka | [2024-04-18 23:14:55,007] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-18 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-pap | group.id = policy-pap policy-apex-pdp | [2024-04-18T23:15:16.613+00:00|INFO|network|KAFKA-source-policy-pdp-pap] 
[OUT|KAFKA|policy-pdp-pap] policy-db-migrator | grafana | logger=migrator t=2024-04-18T23:14:23.359153001Z level=info msg="Migration successfully executed" id="add index annotation 1 v3" duration=857.847µs kafka | [2024-04-18 23:14:55,007] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-37 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-pap | group.instance.id = null policy-apex-pdp | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"46108e0e-87ab-4235-a93e-b62b8e791b82","responseStatus":"SUCCESS","responseMessage":"Pdp already updated"},"messageName":"PDP_STATUS","requestId":"27a5c9bf-e2d6-4122-aa3f-c57320e3422f","timestampMs":1713482116613,"name":"apex-d14c33eb-7783-4d4b-98e1-4d65e14a94e4","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-db-migrator | grafana | logger=migrator t=2024-04-18T23:14:23.363395987Z level=info msg="Executing migration" id="add index annotation 2 v3" policy-pap | heartbeat.interval.ms = 3000 kafka | [2024-04-18 23:14:55,007] INFO [Controller id=1 epoch=1] Changed partition policy-pdp-pap-0 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-apex-pdp | [2024-04-18T23:15:16.624+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-db-migrator | > upgrade 0640-toscanodetypes.sql grafana | logger=migrator t=2024-04-18T23:14:23.364278245Z level=info msg="Migration successfully executed" id="add index annotation 2 v3" duration=881.679µs policy-pap | interceptor.classes = [] kafka | [2024-04-18 23:14:55,007] INFO [Controller id=1 epoch=1] 
Changed partition __consumer_offsets-15 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-apex-pdp | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"46108e0e-87ab-4235-a93e-b62b8e791b82","responseStatus":"SUCCESS","responseMessage":"Pdp already updated"},"messageName":"PDP_STATUS","requestId":"27a5c9bf-e2d6-4122-aa3f-c57320e3422f","timestampMs":1713482116613,"name":"apex-d14c33eb-7783-4d4b-98e1-4d65e14a94e4","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-18T23:14:23.372090278Z level=info msg="Executing migration" id="add index annotation 3 v3" policy-pap | internal.leave.group.on.close = true kafka | [2024-04-18 23:14:55,007] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-24 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-apex-pdp | [2024-04-18T23:15:16.624+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetypes (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCANODETYPES (name, version)) grafana | logger=migrator t=2024-04-18T23:14:23.373502057Z level=info msg="Migration successfully executed" id="add index annotation 3 v3" duration=1.417029ms policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false kafka | [2024-04-18 23:14:55,007] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-17 from NewPartition to OnlinePartition with state 
LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-apex-pdp | [2024-04-18T23:15:56.162+00:00|INFO|RequestLog|qtp1863100050-32] 172.17.0.3 - policyadmin [18/Apr/2024:23:15:56 +0000] "GET /metrics HTTP/1.1" 200 10649 "-" "Prometheus/2.51.2" policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-18T23:14:23.376584528Z level=info msg="Executing migration" id="add index annotation 4 v3" policy-pap | isolation.level = read_uncommitted kafka | [2024-04-18 23:14:55,007] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-48 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-db-migrator | grafana | logger=migrator t=2024-04-18T23:14:23.378682504Z level=info msg="Migration successfully executed" id="add index annotation 4 v3" duration=2.099396ms policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer kafka | [2024-04-18 23:14:55,007] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-19 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-db-migrator | grafana | logger=migrator t=2024-04-18T23:14:23.382888537Z level=info msg="Executing migration" id="Update annotation table charset" policy-pap | max.partition.fetch.bytes = 1048576 kafka | [2024-04-18 23:14:55,007] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-11 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), 
leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-db-migrator | > upgrade 0650-toscanodetypes_toscanodetype.sql grafana | logger=migrator t=2024-04-18T23:14:23.382968841Z level=info msg="Migration successfully executed" id="Update annotation table charset" duration=81.024µs policy-pap | max.poll.interval.ms = 300000 kafka | [2024-04-18 23:14:55,007] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-2 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-18T23:14:23.385493791Z level=info msg="Executing migration" id="Add column region_id to annotation table" policy-pap | max.poll.records = 500 kafka | [2024-04-18 23:14:55,007] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-43 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetypes_toscanodetype (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCANODETYPES_TOSCANODETYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) grafana | logger=migrator t=2024-04-18T23:14:23.391264481Z level=info msg="Migration successfully executed" id="Add column region_id to annotation table" duration=5.76973ms policy-pap | metadata.max.age.ms = 300000 policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-18T23:14:23.395286254Z level=info 
msg="Executing migration" id="Drop category_id index" policy-pap | metric.reporters = [] kafka | [2024-04-18 23:14:55,007] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-6 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-db-migrator | grafana | logger=migrator t=2024-04-18T23:14:23.396146022Z level=info msg="Migration successfully executed" id="Drop category_id index" duration=860.708µs policy-pap | metrics.num.samples = 2 kafka | [2024-04-18 23:14:55,007] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-14 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-db-migrator | grafana | logger=migrator t=2024-04-18T23:14:23.399876239Z level=info msg="Executing migration" id="Add column tags to annotation table" policy-pap | metrics.recording.level = INFO kafka | [2024-04-18 23:14:55,007] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-20 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-db-migrator | > upgrade 0660-toscaparameter.sql grafana | logger=migrator t=2024-04-18T23:14:23.404025659Z level=info msg="Migration successfully executed" id="Add column tags to annotation table" duration=4.14868ms policy-pap | metrics.sample.window.ms = 30000 kafka | [2024-04-18 23:14:55,007] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-0 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, 
brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-18T23:14:23.409728875Z level=info msg="Executing migration" id="Create annotation_tag table v2" policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] kafka | [2024-04-18 23:14:55,007] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-44 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) grafana | logger=migrator t=2024-04-18T23:14:23.410875168Z level=info msg="Migration successfully executed" id="Create annotation_tag table v2" duration=1.151644ms policy-db-migrator | CREATE TABLE IF NOT EXISTS toscaparameter (VALUE VARCHAR(255) NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCAPARAMETER (parentLocalName, localName, parentKeyVersion, parentKeyName)) policy-pap | receive.buffer.bytes = 65536 kafka | [2024-04-18 23:14:55,007] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-39 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) grafana | logger=migrator t=2024-04-18T23:14:23.415012358Z level=info msg="Executing migration" id="Add unique index annotation_tag.annotation_id_tag_id" policy-db-migrator | -------------- policy-pap | reconnect.backoff.max.ms = 1000 kafka | [2024-04-18 23:14:55,007] INFO [Controller id=1 epoch=1] Changed partition 
__consumer_offsets-12 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) grafana | logger=migrator t=2024-04-18T23:14:23.415898887Z level=info msg="Migration successfully executed" id="Add unique index annotation_tag.annotation_id_tag_id" duration=886.519µs policy-db-migrator | policy-pap | reconnect.backoff.ms = 50 kafka | [2024-04-18 23:14:55,007] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-45 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) grafana | logger=migrator t=2024-04-18T23:14:23.420523583Z level=info msg="Executing migration" id="drop index UQE_annotation_tag_annotation_id_tag_id - v2" policy-db-migrator | policy-pap | request.timeout.ms = 30000 kafka | [2024-04-18 23:14:55,007] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-1 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) grafana | logger=migrator t=2024-04-18T23:14:23.421353309Z level=info msg="Migration successfully executed" id="drop index UQE_annotation_tag_annotation_id_tag_id - v2" duration=829.666µs policy-db-migrator | > upgrade 0670-toscapolicies.sql policy-pap | retry.backoff.ms = 100 kafka | [2024-04-18 23:14:55,007] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-5 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-db-migrator | 
-------------- policy-pap | sasl.client.callback.handler.class = null grafana | logger=migrator t=2024-04-18T23:14:23.424336704Z level=info msg="Executing migration" id="Rename table annotation_tag to annotation_tag_v2 - v2" kafka | [2024-04-18 23:14:55,007] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-26 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicies (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCAPOLICIES (name, version)) policy-pap | sasl.jaas.config = null grafana | logger=migrator t=2024-04-18T23:14:23.435866894Z level=info msg="Migration successfully executed" id="Rename table annotation_tag to annotation_tag_v2 - v2" duration=11.526829ms kafka | [2024-04-18 23:14:55,007] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-29 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-db-migrator | -------------- policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit grafana | logger=migrator t=2024-04-18T23:14:23.439178757Z level=info msg="Executing migration" id="Create annotation_tag table v3" kafka | [2024-04-18 23:14:55,008] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-34 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-db-migrator | policy-pap | sasl.kerberos.min.time.before.relogin = 60000 grafana | logger=migrator t=2024-04-18T23:14:23.43995272Z level=info msg="Migration successfully 
executed" id="Create annotation_tag table v3" duration=777.013µs kafka | [2024-04-18 23:14:55,008] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-10 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-db-migrator | policy-pap | sasl.kerberos.service.name = null grafana | logger=migrator t=2024-04-18T23:14:23.444232437Z level=info msg="Executing migration" id="create index UQE_annotation_tag_annotation_id_tag_id - Add unique index annotation_tag.annotation_id_tag_id V3" kafka | [2024-04-18 23:14:55,008] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-32 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-db-migrator | > upgrade 0680-toscapolicies_toscapolicy.sql policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 grafana | logger=migrator t=2024-04-18T23:14:23.445281215Z level=info msg="Migration successfully executed" id="create index UQE_annotation_tag_annotation_id_tag_id - Add unique index annotation_tag.annotation_id_tag_id V3" duration=1.048208ms kafka | [2024-04-18 23:14:55,008] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-40 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-db-migrator | -------------- policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 grafana | logger=migrator t=2024-04-18T23:14:23.44843588Z level=info msg="Executing migration" id="copy annotation_tag v2 to v3" policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicies_toscapolicy 
(conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCAPOLICIES_TOSCAPOLICY (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) kafka | [2024-04-18 23:14:55,010] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=13, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-13 (state.change.logger) policy-pap | sasl.login.callback.handler.class = null grafana | logger=migrator t=2024-04-18T23:14:23.449082526Z level=info msg="Migration successfully executed" id="copy annotation_tag v2 to v3" duration=644.976µs policy-db-migrator | -------------- kafka | [2024-04-18 23:14:55,010] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=46, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-46 (state.change.logger) policy-pap | sasl.login.class = null grafana | logger=migrator t=2024-04-18T23:14:23.453054266Z level=info msg="Executing migration" id="drop table annotation_tag_v2" policy-db-migrator | kafka | [2024-04-18 23:14:55,010] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=9, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) 
to broker 1 for partition __consumer_offsets-9 (state.change.logger) policy-pap | sasl.login.connect.timeout.ms = null grafana | logger=migrator t=2024-04-18T23:14:23.453676741Z level=info msg="Migration successfully executed" id="drop table annotation_tag_v2" duration=627.395µs policy-db-migrator | kafka | [2024-04-18 23:14:55,010] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=42, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-42 (state.change.logger) policy-pap | sasl.login.read.timeout.ms = null grafana | logger=migrator t=2024-04-18T23:14:23.456574261Z level=info msg="Executing migration" id="Update alert annotations and set TEXT to empty" policy-db-migrator | > upgrade 0690-toscapolicy.sql kafka | [2024-04-18 23:14:55,010] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=21, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-21 (state.change.logger) policy-pap | sasl.login.refresh.buffer.seconds = 300 grafana | logger=migrator t=2024-04-18T23:14:23.456847257Z level=info msg="Migration successfully executed" id="Update alert annotations and set TEXT to empty" duration=273.685µs policy-db-migrator | -------------- kafka | [2024-04-18 23:14:55,010] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=17, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to 
broker 1 for partition __consumer_offsets-17 (state.change.logger) policy-pap | sasl.login.refresh.min.period.seconds = 60 grafana | logger=migrator t=2024-04-18T23:14:23.464071257Z level=info msg="Executing migration" id="Add created time to annotation table" policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicy (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, type_name VARCHAR(255) NULL, type_version VARCHAR(255) NULL, PRIMARY KEY PK_TOSCAPOLICY (name, version)) kafka | [2024-04-18 23:14:55,010] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=30, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-30 (state.change.logger) policy-pap | sasl.login.refresh.window.factor = 0.8 grafana | logger=migrator t=2024-04-18T23:14:23.469057253Z level=info msg="Migration successfully executed" id="Add created time to annotation table" duration=4.986976ms policy-db-migrator | -------------- kafka | [2024-04-18 23:14:55,010] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=26, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-26 (state.change.logger) policy-pap | sasl.login.refresh.window.jitter = 0.05 grafana | logger=migrator t=2024-04-18T23:14:23.472378107Z level=info msg="Executing migration" id="Add updated time to annotation table" policy-db-migrator | kafka | [2024-04-18 23:14:55,011] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr 
request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=5, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-5 (state.change.logger) policy-pap | sasl.login.retry.backoff.max.ms = 10000 grafana | logger=migrator t=2024-04-18T23:14:23.476357148Z level=info msg="Migration successfully executed" id="Add updated time to annotation table" duration=3.976581ms policy-db-migrator | kafka | [2024-04-18 23:14:55,011] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=38, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-38 (state.change.logger) policy-pap | sasl.login.retry.backoff.ms = 100 grafana | logger=migrator t=2024-04-18T23:14:23.480346489Z level=info msg="Executing migration" id="Add index for created in annotation table" policy-db-migrator | > upgrade 0700-toscapolicytype.sql kafka | [2024-04-18 23:14:55,011] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=1, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-1 (state.change.logger) policy-pap | sasl.mechanism = GSSAPI grafana | logger=migrator t=2024-04-18T23:14:23.481213577Z level=info msg="Migration successfully executed" id="Add index for created in annotation table" duration=867.268µs policy-db-migrator | -------------- kafka | [2024-04-18 23:14:55,011] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request 
LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=34, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-34 (state.change.logger) policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 grafana | logger=migrator t=2024-04-18T23:14:23.486999488Z level=info msg="Executing migration" id="Add index for updated in annotation table" policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicytype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCAPOLICYTYPE (name, version)) kafka | [2024-04-18 23:14:55,011] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=16, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-16 (state.change.logger) policy-pap | sasl.oauthbearer.expected.audience = null grafana | logger=migrator t=2024-04-18T23:14:23.487846505Z level=info msg="Migration successfully executed" id="Add index for updated in annotation table" duration=847.037µs policy-db-migrator | -------------- kafka | [2024-04-18 23:14:55,011] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=45, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-45 (state.change.logger) policy-pap | sasl.oauthbearer.expected.issuer = null grafana | logger=migrator t=2024-04-18T23:14:23.491792874Z 
level=info msg="Executing migration" id="Convert existing annotations from seconds to milliseconds" policy-db-migrator | kafka | [2024-04-18 23:14:55,011] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=12, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-12 (state.change.logger) policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 grafana | logger=migrator t=2024-04-18T23:14:23.492033527Z level=info msg="Migration successfully executed" id="Convert existing annotations from seconds to milliseconds" duration=238.694µs policy-db-migrator | kafka | [2024-04-18 23:14:55,011] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=41, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-41 (state.change.logger) policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 grafana | logger=migrator t=2024-04-18T23:14:23.495607765Z level=info msg="Executing migration" id="Add epoch_end column" policy-db-migrator | > upgrade 0710-toscapolicytypes.sql kafka | [2024-04-18 23:14:55,011] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=24, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-24 (state.change.logger) policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 grafana | logger=migrator 
t=2024-04-18T23:14:23.499657799Z level=info msg="Migration successfully executed" id="Add epoch_end column" duration=4.047624ms policy-db-migrator | -------------- kafka | [2024-04-18 23:14:55,011] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=20, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-20 (state.change.logger) policy-pap | sasl.oauthbearer.jwks.endpoint.url = null grafana | logger=migrator t=2024-04-18T23:14:23.504616624Z level=info msg="Executing migration" id="Add index for epoch_end" policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicytypes (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCAPOLICYTYPES (name, version)) kafka | [2024-04-18 23:14:55,011] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=49, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-49 (state.change.logger) policy-pap | sasl.oauthbearer.scope.claim.name = scope grafana | logger=migrator t=2024-04-18T23:14:23.505484502Z level=info msg="Migration successfully executed" id="Add index for epoch_end" duration=867.808µs policy-db-migrator | -------------- policy-pap | sasl.oauthbearer.sub.claim.name = sub kafka | [2024-04-18 23:14:55,011] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to 
broker 1 for partition __consumer_offsets-0 (state.change.logger) grafana | logger=migrator t=2024-04-18T23:14:23.509427071Z level=info msg="Executing migration" id="Make epoch_end the same as epoch" policy-db-migrator | policy-pap | sasl.oauthbearer.token.endpoint.url = null kafka | [2024-04-18 23:14:55,011] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=29, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-29 (state.change.logger) grafana | logger=migrator t=2024-04-18T23:14:23.509709457Z level=info msg="Migration successfully executed" id="Make epoch_end the same as epoch" duration=283.215µs policy-db-migrator | policy-pap | security.protocol = PLAINTEXT kafka | [2024-04-18 23:14:55,011] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=25, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-25 (state.change.logger) grafana | logger=migrator t=2024-04-18T23:14:23.512721804Z level=info msg="Executing migration" id="Move region to single row" policy-db-migrator | > upgrade 0720-toscapolicytypes_toscapolicytype.sql policy-pap | security.providers = null kafka | [2024-04-18 23:14:55,011] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=8, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-8 (state.change.logger) 
grafana | logger=migrator t=2024-04-18T23:14:23.513312996Z level=info msg="Migration successfully executed" id="Move region to single row" duration=591.133µs policy-db-migrator | -------------- policy-pap | send.buffer.bytes = 131072 kafka | [2024-04-18 23:14:55,011] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=37, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-37 (state.change.logger) grafana | logger=migrator t=2024-04-18T23:14:23.516522444Z level=info msg="Executing migration" id="Remove index org_id_epoch from annotation table" policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicytypes_toscapolicytype (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCAPOLICYTYPES_TOSCAPOLICYTYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) policy-pap | session.timeout.ms = 45000 kafka | [2024-04-18 23:14:55,011] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=4, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-4 (state.change.logger) grafana | logger=migrator t=2024-04-18T23:14:23.517932332Z level=info msg="Migration successfully executed" id="Remove index org_id_epoch from annotation table" duration=1.401008ms policy-db-migrator | -------------- policy-pap | socket.connection.setup.timeout.max.ms = 30000 
kafka | [2024-04-18 23:14:55,011] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=33, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-33 (state.change.logger) grafana | logger=migrator t=2024-04-18T23:14:23.522882747Z level=info msg="Executing migration" id="Remove index org_id_dashboard_id_panel_id_epoch from annotation table" policy-db-migrator | policy-pap | socket.connection.setup.timeout.ms = 10000 kafka | [2024-04-18 23:14:55,011] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=15, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-15 (state.change.logger) grafana | logger=migrator t=2024-04-18T23:14:23.523676171Z level=info msg="Migration successfully executed" id="Remove index org_id_dashboard_id_panel_id_epoch from annotation table" duration=793.704µs policy-db-migrator | policy-pap | ssl.cipher.suites = null kafka | [2024-04-18 23:14:55,011] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=48, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-48 (state.change.logger) grafana | logger=migrator t=2024-04-18T23:14:23.526448644Z level=info msg="Executing migration" id="Add index for org_id_dashboard_id_epoch_end_epoch on annotation table" policy-db-migrator | > upgrade 0730-toscaproperty.sql policy-pap | 
ssl.enabled.protocols = [TLSv1.2, TLSv1.3] kafka | [2024-04-18 23:14:55,011] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=11, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-11 (state.change.logger) grafana | logger=migrator t=2024-04-18T23:14:23.527294601Z level=info msg="Migration successfully executed" id="Add index for org_id_dashboard_id_epoch_end_epoch on annotation table" duration=845.577µs policy-db-migrator | -------------- policy-pap | ssl.endpoint.identification.algorithm = https kafka | [2024-04-18 23:14:55,011] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=44, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-44 (state.change.logger) grafana | logger=migrator t=2024-04-18T23:14:23.532219554Z level=info msg="Executing migration" id="Add index for org_id_epoch_end_epoch on annotation table" policy-db-migrator | CREATE TABLE IF NOT EXISTS toscaproperty (DEFAULTVALUE VARCHAR(255) DEFAULT NULL, `DESCRIPTION` VARCHAR(255) DEFAULT NULL, ENTRYSCHEMA LONGBLOB DEFAULT NULL, REQUIRED BOOLEAN DEFAULT 0, STATUS INT DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, PRIMARY KEY PK_TOSCAPROPERTY (parentLocalName, localName, parentKeyVersion, parentKeyName)) policy-pap | ssl.engine.factory.class = null kafka | [2024-04-18 23:14:55,011] TRACE [Controller id=1 epoch=1] Sending 
become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=23, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-23 (state.change.logger) grafana | logger=migrator t=2024-04-18T23:14:23.533713697Z level=info msg="Migration successfully executed" id="Add index for org_id_epoch_end_epoch on annotation table" duration=1.494093ms policy-db-migrator | -------------- policy-pap | ssl.key.password = null kafka | [2024-04-18 23:14:55,012] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=19, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-19 (state.change.logger) grafana | logger=migrator t=2024-04-18T23:14:23.537838426Z level=info msg="Executing migration" id="Remove index org_id_epoch_epoch_end from annotation table" kafka | [2024-04-18 23:14:55,012] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=32, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-32 (state.change.logger) policy-db-migrator | policy-pap | ssl.keymanager.algorithm = SunX509 kafka | [2024-04-18 23:14:55,012] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=28, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, 
leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-28 (state.change.logger) policy-db-migrator | policy-pap | ssl.keystore.certificate.chain = null grafana | logger=migrator t=2024-04-18T23:14:23.538749066Z level=info msg="Migration successfully executed" id="Remove index org_id_epoch_epoch_end from annotation table" duration=915.651µs kafka | [2024-04-18 23:14:55,012] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=7, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-7 (state.change.logger) policy-db-migrator | > upgrade 0740-toscarelationshiptype.sql policy-pap | ssl.keystore.key = null grafana | logger=migrator t=2024-04-18T23:14:23.546507576Z level=info msg="Executing migration" id="Add index for alert_id on annotation table" kafka | [2024-04-18 23:14:55,012] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=40, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-40 (state.change.logger) policy-db-migrator | -------------- policy-pap | ssl.keystore.location = null grafana | logger=migrator t=2024-04-18T23:14:23.547404666Z level=info msg="Migration successfully executed" id="Add index for alert_id on annotation table" duration=896.85µs kafka | [2024-04-18 23:14:55,012] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=3, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], 
isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-3 (state.change.logger) policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarelationshiptype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCARELATIONSHIPTYPE (name, version)) policy-pap | ssl.keystore.password = null grafana | logger=migrator t=2024-04-18T23:14:23.550695328Z level=info msg="Executing migration" id="Increase tags column to length 4096" kafka | [2024-04-18 23:14:55,012] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=36, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-36 (state.change.logger) policy-db-migrator | -------------- policy-pap | ssl.keystore.type = JKS grafana | logger=migrator t=2024-04-18T23:14:23.550820615Z level=info msg="Migration successfully executed" id="Increase tags column to length 4096" duration=125.667µs policy-db-migrator | kafka | [2024-04-18 23:14:55,012] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=47, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-47 (state.change.logger) policy-pap | ssl.protocol = TLSv1.3 grafana | logger=migrator t=2024-04-18T23:14:23.554452867Z level=info msg="Executing migration" id="create test_data table" policy-db-migrator | kafka | [2024-04-18 23:14:55,012] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request 
LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=14, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-14 (state.change.logger) policy-pap | ssl.provider = null grafana | logger=migrator t=2024-04-18T23:14:23.55576987Z level=info msg="Migration successfully executed" id="create test_data table" duration=1.316713ms policy-db-migrator | > upgrade 0750-toscarelationshiptypes.sql kafka | [2024-04-18 23:14:55,012] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=43, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-43 (state.change.logger) policy-pap | ssl.secure.random.implementation = null grafana | logger=migrator t=2024-04-18T23:14:23.559837825Z level=info msg="Executing migration" id="create dashboard_version table v1" policy-db-migrator | -------------- kafka | [2024-04-18 23:14:55,012] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=10, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-10 (state.change.logger) policy-pap | ssl.trustmanager.algorithm = PKIX grafana | logger=migrator t=2024-04-18T23:14:23.560878763Z level=info msg="Migration successfully executed" id="create dashboard_version table v1" duration=1.040558ms policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarelationshiptypes (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCARELATIONSHIPTYPES 
(name, version)) kafka | [2024-04-18 23:14:55,012] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=22, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-22 (state.change.logger) policy-pap | ssl.truststore.certificates = null grafana | logger=migrator t=2024-04-18T23:14:23.563835967Z level=info msg="Executing migration" id="add index dashboard_version.dashboard_id" policy-db-migrator | -------------- kafka | [2024-04-18 23:14:55,012] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=18, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-18 (state.change.logger) policy-pap | ssl.truststore.location = null grafana | logger=migrator t=2024-04-18T23:14:23.5651581Z level=info msg="Migration successfully executed" id="add index dashboard_version.dashboard_id" duration=1.321633ms policy-db-migrator | kafka | [2024-04-18 23:14:55,012] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=31, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-31 (state.change.logger) policy-pap | ssl.truststore.password = null policy-db-migrator | grafana | logger=migrator t=2024-04-18T23:14:23.568350487Z level=info msg="Executing migration" id="add unique index dashboard_version.dashboard_id and dashboard_version.version" kafka | [2024-04-18 
23:14:55,012] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=27, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-27 (state.change.logger) policy-pap | ssl.truststore.type = JKS policy-db-migrator | > upgrade 0760-toscarelationshiptypes_toscarelationshiptype.sql grafana | logger=migrator t=2024-04-18T23:14:23.569347992Z level=info msg="Migration successfully executed" id="add unique index dashboard_version.dashboard_id and dashboard_version.version" duration=994.405µs kafka | [2024-04-18 23:14:55,012] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=39, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-39 (state.change.logger) policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-18T23:14:23.574281136Z level=info msg="Executing migration" id="Set dashboard version to 1 where 0" kafka | [2024-04-18 23:14:55,012] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=6, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-6 (state.change.logger) policy-pap | policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarelationshiptypes_toscarelationshiptype (conceptContainerMapName VARCHAR(120) NOT NULL, 
concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCARELATIONSHIPTYPES_TOSCARELATIONSHIPTYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) grafana | logger=migrator t=2024-04-18T23:14:23.574470206Z level=info msg="Migration successfully executed" id="Set dashboard version to 1 where 0" duration=189.66µs kafka | [2024-04-18 23:14:55,012] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=35, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-35 (state.change.logger) policy-pap | [2024-04-18T23:14:54.291+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-18T23:14:23.579425001Z level=info msg="Executing migration" id="save existing dashboard data in dashboard_version table v1" kafka | [2024-04-18 23:14:55,012] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='policy-pdp-pap', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition policy-pdp-pap-0 (state.change.logger) policy-pap | [2024-04-18T23:14:54.291+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 policy-db-migrator | grafana | logger=migrator t=2024-04-18T23:14:23.580251257Z level=info msg="Migration successfully executed" id="save existing dashboard data in dashboard_version table v1" duration=825.476µs kafka | [2024-04-18 23:14:55,012] TRACE [Controller id=1 
epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=2, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-2 (state.change.logger) policy-pap | [2024-04-18T23:14:54.291+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1713482094291 policy-db-migrator | grafana | logger=migrator t=2024-04-18T23:14:23.583877868Z level=info msg="Executing migration" id="alter dashboard_version.data to mediumtext v1" kafka | [2024-04-18 23:14:55,013] INFO [Controller id=1 epoch=1] Sending LeaderAndIsr request to broker 1 with 51 become-leader and 0 become-follower partitions (state.change.logger) policy-pap | [2024-04-18T23:14:54.292+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Subscribed to topic(s): policy-pdp-pap policy-db-migrator | > upgrade 0770-toscarequirement.sql grafana | logger=migrator t=2024-04-18T23:14:23.583952302Z level=info msg="Migration successfully executed" id="alter dashboard_version.data to mediumtext v1" duration=75.174µs kafka | [2024-04-18 23:14:55,016] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet(1) for 51 partitions (state.change.logger) policy-pap | [2024-04-18T23:14:54.292+00:00|INFO|ServiceManager|main] Policy PAP starting topics policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-18T23:14:23.587048613Z level=info msg="Executing migration" id="create team table" kafka | [2024-04-18 23:14:55,019] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-32 from NewReplica to OnlineReplica (state.change.logger) policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarequirement (CAPABILITY VARCHAR(255) NULL, `DESCRIPTION` VARCHAR(255) NULL, NODE VARCHAR(255) NULL, RELATIONSHIP VARCHAR(255) NULL, 
derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, type_name VARCHAR(255) NULL, type_version VARCHAR(255) NULL, PRIMARY KEY PK_TOSCAREQUIREMENT (name, version)) policy-pap | [2024-04-18T23:14:54.292+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=policy-pap, consumerInstance=dcc58c8f-b414-44a2-8a46-354bb82b65f7, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-heartbeat,5,main], topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-heartbeat, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting grafana | logger=migrator t=2024-04-18T23:14:23.587862929Z level=info msg="Migration successfully executed" id="create team table" duration=815.705µs kafka | [2024-04-18 23:14:55,021] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-5 from NewReplica to OnlineReplica (state.change.logger) policy-db-migrator | -------------- policy-pap | [2024-04-18T23:14:54.292+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=deefd98f-1600-442c-a15a-d2ceba267151, consumerInstance=policy-pap, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-pdp-pap,5,main], topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, 
#recentEvents=0, locked=false, #topicListeners=1]]]]: starting grafana | logger=migrator t=2024-04-18T23:14:23.591616327Z level=info msg="Executing migration" id="add index team.org_id" kafka | [2024-04-18 23:14:55,021] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-44 from NewReplica to OnlineReplica (state.change.logger) policy-db-migrator | policy-pap | [2024-04-18T23:14:54.292+00:00|INFO|InlineBusTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=a066a90b-9103-4d76-8165-c5999a0e1887, alive=false, publisher=null]]: starting grafana | logger=migrator t=2024-04-18T23:14:23.592641703Z level=info msg="Migration successfully executed" id="add index team.org_id" duration=1.024816ms kafka | [2024-04-18 23:14:55,021] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-48 from NewReplica to OnlineReplica (state.change.logger) policy-db-migrator | policy-pap | [2024-04-18T23:14:54.310+00:00|INFO|ProducerConfig|main] ProducerConfig values: grafana | logger=migrator t=2024-04-18T23:14:23.595808529Z level=info msg="Executing migration" id="add unique index team_org_id_name" kafka | [2024-04-18 23:14:55,022] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-46 from NewReplica to OnlineReplica (state.change.logger) policy-db-migrator | > upgrade 0780-toscarequirements.sql policy-pap | acks = -1 grafana | logger=migrator t=2024-04-18T23:14:23.596967433Z level=info msg="Migration successfully executed" id="add unique index team_org_id_name" duration=1.154474ms kafka | [2024-04-18 23:14:55,022] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-20 from NewReplica to OnlineReplica (state.change.logger) policy-db-migrator | -------------- policy-pap | auto.include.jmx.reporter = true grafana | logger=migrator t=2024-04-18T23:14:23.601042879Z level=info 
msg="Executing migration" id="Add column uid in team" kafka | [2024-04-18 23:14:55,022] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition policy-pdp-pap-0 from NewReplica to OnlineReplica (state.change.logger) policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarequirements (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCAREQUIREMENTS (name, version)) policy-pap | batch.size = 16384 grafana | logger=migrator t=2024-04-18T23:14:23.608797159Z level=info msg="Migration successfully executed" id="Add column uid in team" duration=7.75423ms kafka | [2024-04-18 23:14:55,022] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-43 from NewReplica to OnlineReplica (state.change.logger) policy-db-migrator | -------------- policy-pap | bootstrap.servers = [kafka:9092] grafana | logger=migrator t=2024-04-18T23:14:23.612040629Z level=info msg="Executing migration" id="Update uid column values in team" kafka | [2024-04-18 23:14:55,022] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-24 from NewReplica to OnlineReplica (state.change.logger) policy-db-migrator | policy-pap | buffer.memory = 33554432 grafana | logger=migrator t=2024-04-18T23:14:23.612216318Z level=info msg="Migration successfully executed" id="Update uid column values in team" duration=175.659µs kafka | [2024-04-18 23:14:55,022] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-6 from NewReplica to OnlineReplica (state.change.logger) policy-db-migrator | policy-pap | client.dns.lookup = use_all_dns_ips grafana | logger=migrator t=2024-04-18T23:14:23.615107749Z level=info msg="Executing migration" id="Add unique index team_org_id_uid" kafka | [2024-04-18 23:14:55,022] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-18 from NewReplica to OnlineReplica (state.change.logger) policy-db-migrator | > 
upgrade 0790-toscarequirements_toscarequirement.sql policy-pap | client.id = producer-1 grafana | logger=migrator t=2024-04-18T23:14:23.615997108Z level=info msg="Migration successfully executed" id="Add unique index team_org_id_uid" duration=889.789µs kafka | [2024-04-18 23:14:55,023] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-21 from NewReplica to OnlineReplica (state.change.logger) policy-db-migrator | -------------- policy-pap | compression.type = none grafana | logger=migrator t=2024-04-18T23:14:23.61927616Z level=info msg="Executing migration" id="create team member table" kafka | [2024-04-18 23:14:55,023] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-1 from NewReplica to OnlineReplica (state.change.logger) policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarequirements_toscarequirement (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCAREQUIREMENTS_TOSCAREQUIREMENT (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) policy-pap | connections.max.idle.ms = 540000 grafana | logger=migrator t=2024-04-18T23:14:23.620513748Z level=info msg="Migration successfully executed" id="create team member table" duration=1.236978ms kafka | [2024-04-18 23:14:55,023] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-14 from NewReplica to OnlineReplica (state.change.logger) policy-db-migrator | -------------- policy-pap | delivery.timeout.ms = 120000 grafana | logger=migrator t=2024-04-18T23:14:23.625126424Z level=info msg="Executing migration" id="add index team_member.org_id" policy-db-migrator | kafka | [2024-04-18 23:14:55,023] TRACE [Controller id=1 epoch=1] Changed state of replica 1 
for partition __consumer_offsets-34 from NewReplica to OnlineReplica (state.change.logger) policy-pap | enable.idempotence = true grafana | logger=migrator t=2024-04-18T23:14:23.626569174Z level=info msg="Migration successfully executed" id="add index team_member.org_id" duration=1.44455ms policy-db-migrator | kafka | [2024-04-18 23:14:55,023] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-16 from NewReplica to OnlineReplica (state.change.logger) policy-pap | interceptor.classes = [] grafana | logger=migrator t=2024-04-18T23:14:23.630710874Z level=info msg="Executing migration" id="add unique index team_member_org_id_team_id_user_id" policy-db-migrator | > upgrade 0800-toscaservicetemplate.sql kafka | [2024-04-18 23:14:55,023] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-29 from NewReplica to OnlineReplica (state.change.logger) policy-pap | key.serializer = class org.apache.kafka.common.serialization.StringSerializer grafana | logger=migrator t=2024-04-18T23:14:23.632557736Z level=info msg="Migration successfully executed" id="add unique index team_member_org_id_team_id_user_id" duration=1.846493ms policy-db-migrator | -------------- kafka | [2024-04-18 23:14:55,023] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-11 from NewReplica to OnlineReplica (state.change.logger) policy-pap | linger.ms = 0 grafana | logger=migrator t=2024-04-18T23:14:23.63659858Z level=info msg="Executing migration" id="add index team_member.team_id" policy-db-migrator | CREATE TABLE IF NOT EXISTS toscaservicetemplate (`DESCRIPTION` VARCHAR(255) NULL, TOSCADEFINITIONSVERSION VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, capabilityTypesVersion VARCHAR(20) NULL, capabilityTypesName VARCHAR(120) NULL, dataTypesName VARCHAR(120) NULL, dataTypesVersion 
VARCHAR(20) NULL, nodeTypesVersion VARCHAR(20) NULL, nodeTypesName VARCHAR(120) NULL, policyTypesName VARCHAR(120) NULL, policyTypesVersion VARCHAR(20) NULL, relationshipTypesVersion VARCHAR(20) NULL, relationshipTypesName VARCHAR(120) NULL, topologyTemplateLocalName VARCHAR(120) NULL, topologyTemplateParentKeyName VARCHAR(120) NULL, topologyTemplateParentKeyVersion VARCHAR(15) NULL, topologyTemplateParentLocalName VARCHAR(120) NULL, PRIMARY KEY PK_TOSCASERVICETEMPLATE (name, version)) kafka | [2024-04-18 23:14:55,024] INFO [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 for 51 partitions (state.change.logger) policy-pap | max.block.ms = 60000 grafana | logger=migrator t=2024-04-18T23:14:23.637772015Z level=info msg="Migration successfully executed" id="add index team_member.team_id" duration=1.172875ms policy-db-migrator | -------------- kafka | [2024-04-18 23:14:55,025] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='policy-pdp-pap', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-pap | max.in.flight.requests.per.connection = 5 grafana | logger=migrator t=2024-04-18T23:14:23.642409242Z level=info msg="Executing migration" id="Add column email to team table" policy-db-migrator | kafka | [2024-04-18 23:14:55,025] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=13, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-pap | max.request.size = 1048576 grafana | logger=migrator t=2024-04-18T23:14:23.646296728Z level=info msg="Migration successfully 
executed" id="Add column email to team table" duration=3.887795ms policy-db-migrator | kafka | [2024-04-18 23:14:55,025] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=46, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-pap | metadata.max.age.ms = 300000 grafana | logger=migrator t=2024-04-18T23:14:23.649365188Z level=info msg="Executing migration" id="Add column external to team_member table" policy-db-migrator | > upgrade 0810-toscatopologytemplate.sql kafka | [2024-04-18 23:14:55,025] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=9, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-pap | metadata.max.idle.ms = 300000 grafana | logger=migrator t=2024-04-18T23:14:23.653085484Z level=info msg="Migration successfully executed" id="Add column external to team_member table" duration=3.719446ms policy-db-migrator | -------------- kafka | [2024-04-18 23:14:55,025] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=42, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-pap | metric.reporters = [] grafana | logger=migrator t=2024-04-18T23:14:23.656296842Z level=info msg="Executing migration" id="Add column permission to team_member table" policy-db-migrator | CREATE TABLE IF NOT EXISTS toscatopologytemplate 
(`description` VARCHAR(255) NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, nodeTemplatessVersion VARCHAR(20) NULL, nodeTemplatesName VARCHAR(120) NULL, policyVersion VARCHAR(20) NULL, policyName VARCHAR(120) NULL, PRIMARY KEY PK_TOSCATOPOLOGYTEMPLATE (parentLocalName, localName, parentKeyVersion, parentKeyName)) kafka | [2024-04-18 23:14:55,025] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=21, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-pap | metrics.num.samples = 2 grafana | logger=migrator t=2024-04-18T23:14:23.661003393Z level=info msg="Migration successfully executed" id="Add column permission to team_member table" duration=4.709071ms policy-db-migrator | -------------- kafka | [2024-04-18 23:14:55,025] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=17, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-pap | metrics.recording.level = INFO grafana | logger=migrator t=2024-04-18T23:14:23.665422268Z level=info msg="Executing migration" id="create dashboard acl table" policy-db-migrator | kafka | [2024-04-18 23:14:55,025] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=30, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 
1 (state.change.logger) policy-pap | metrics.sample.window.ms = 30000 grafana | logger=migrator t=2024-04-18T23:14:23.666434354Z level=info msg="Migration successfully executed" id="create dashboard acl table" duration=1.011776ms policy-db-migrator | kafka | [2024-04-18 23:14:55,025] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=26, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-pap | partitioner.adaptive.partitioning.enable = true grafana | logger=migrator t=2024-04-18T23:14:23.669873884Z level=info msg="Executing migration" id="add index dashboard_acl_dashboard_id" kafka | [2024-04-18 23:14:55,025] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=5, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-db-migrator | > upgrade 0820-toscatrigger.sql policy-pap | partitioner.availability.timeout.ms = 0 grafana | logger=migrator t=2024-04-18T23:14:23.671036189Z level=info msg="Migration successfully executed" id="add index dashboard_acl_dashboard_id" duration=1.158424ms kafka | [2024-04-18 23:14:55,025] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=38, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-db-migrator | -------------- policy-pap | partitioner.class = null grafana | logger=migrator 
t=2024-04-18T23:14:23.675482415Z level=info msg="Executing migration" id="add unique index dashboard_acl_dashboard_id_user_id" kafka | [2024-04-18 23:14:55,025] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=1, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-db-migrator | CREATE TABLE IF NOT EXISTS toscatrigger (ACTION VARCHAR(255) NULL, toscaCondition LONGBLOB DEFAULT NULL, toscaConstraint LONGBLOB DEFAULT NULL, `DESCRIPTION` VARCHAR(255) NULL, EVALUATIONS INT DEFAULT NULL, EVENTTYPE VARCHAR(255) NULL, METHOD VARCHAR(255) NULL, `PERIOD` LONGBLOB DEFAULT NULL, SCHEDULE LONGBLOB DEFAULT NULL, TARGETFILTER LONGBLOB DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_TOSCATRIGGER (parentLocalName, localName, parentKeyVersion, parentKeyName)) policy-pap | partitioner.ignore.keys = false grafana | logger=migrator t=2024-04-18T23:14:23.677090364Z level=info msg="Migration successfully executed" id="add unique index dashboard_acl_dashboard_id_user_id" duration=1.608259ms kafka | [2024-04-18 23:14:55,025] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=34, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-db-migrator | -------------- policy-pap | receive.buffer.bytes = 32768 grafana | logger=migrator t=2024-04-18T23:14:23.68223927Z level=info msg="Executing migration" id="add unique index dashboard_acl_dashboard_id_team_id" kafka | 
[2024-04-18 23:14:55,025] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=16, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-db-migrator | policy-pap | reconnect.backoff.max.ms = 1000 grafana | logger=migrator t=2024-04-18T23:14:23.683210754Z level=info msg="Migration successfully executed" id="add unique index dashboard_acl_dashboard_id_team_id" duration=971.324µs kafka | [2024-04-18 23:14:55,025] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=45, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-db-migrator | policy-pap | reconnect.backoff.ms = 50 grafana | logger=migrator t=2024-04-18T23:14:23.687361054Z level=info msg="Executing migration" id="add index dashboard_acl_user_id" kafka | [2024-04-18 23:14:55,025] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=12, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-db-migrator | > upgrade 0830-FK_ToscaNodeTemplate_capabilitiesName.sql policy-pap | request.timeout.ms = 30000 grafana | logger=migrator t=2024-04-18T23:14:23.689086459Z level=info msg="Migration successfully executed" id="add index dashboard_acl_user_id" duration=1.725576ms kafka | [2024-04-18 23:14:55,025] TRACE [Broker id=1] Received LeaderAndIsr request 
LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=41, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-db-migrator | -------------- policy-pap | retries = 2147483647 grafana | logger=migrator t=2024-04-18T23:14:23.693642312Z level=info msg="Executing migration" id="add index dashboard_acl_team_id" kafka | [2024-04-18 23:14:55,024] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-0 from NewReplica to OnlineReplica (state.change.logger) policy-db-migrator | CREATE INDEX FK_ToscaNodeTemplate_capabilitiesName ON toscanodetemplate(capabilitiesName, capabilitiesVersion) policy-pap | retry.backoff.ms = 100 grafana | logger=migrator t=2024-04-18T23:14:23.695566989Z level=info msg="Migration successfully executed" id="add index dashboard_acl_team_id" duration=1.930587ms kafka | [2024-04-18 23:14:55,025] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=24, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-db-migrator | -------------- policy-pap | sasl.client.callback.handler.class = null grafana | logger=migrator t=2024-04-18T23:14:23.698873942Z level=info msg="Executing migration" id="add index dashboard_acl_org_id_role" policy-db-migrator | policy-pap | sasl.jaas.config = null kafka | [2024-04-18 23:14:55,026] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=20, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, 
leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) grafana | logger=migrator t=2024-04-18T23:14:23.699538109Z level=info msg="Migration successfully executed" id="add index dashboard_acl_org_id_role" duration=660.836µs policy-db-migrator | policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit kafka | [2024-04-18 23:14:55,026] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=49, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) grafana | logger=migrator t=2024-04-18T23:14:23.702133703Z level=info msg="Executing migration" id="add index dashboard_permission" policy-db-migrator | > upgrade 0840-FK_ToscaNodeTemplate_requirementsName.sql policy-pap | sasl.kerberos.min.time.before.relogin = 60000 kafka | [2024-04-18 23:14:55,026] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) grafana | logger=migrator t=2024-04-18T23:14:23.702820321Z level=info msg="Migration successfully executed" id="add index dashboard_permission" duration=686.599µs policy-db-migrator | -------------- policy-pap | sasl.kerberos.service.name = null kafka | [2024-04-18 23:14:55,026] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-22 from NewReplica to OnlineReplica (state.change.logger) grafana | logger=migrator t=2024-04-18T23:14:23.707125929Z level=info msg="Executing migration" id="save default acl rules in dashboard_acl table" policy-db-migrator | CREATE INDEX 
FK_ToscaNodeTemplate_requirementsName ON toscanodetemplate(requirementsName, requirementsVersion) policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 kafka | [2024-04-18 23:14:55,026] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=29, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) grafana | logger=migrator t=2024-04-18T23:14:23.707638438Z level=info msg="Migration successfully executed" id="save default acl rules in dashboard_acl table" duration=512.699µs policy-db-migrator | -------------- policy-db-migrator | kafka | [2024-04-18 23:14:55,026] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=25, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 grafana | logger=migrator t=2024-04-18T23:14:23.710415722Z level=info msg="Executing migration" id="delete acl rules for deleted dashboards and folders" kafka | [2024-04-18 23:14:55,026] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=8, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-db-migrator | policy-pap | sasl.login.callback.handler.class = null grafana | logger=migrator t=2024-04-18T23:14:23.710667956Z level=info msg="Migration successfully executed" id="delete acl rules for deleted dashboards and folders" 
duration=252.374µs kafka | [2024-04-18 23:14:55,026] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=37, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-db-migrator | > upgrade 0850-FK_ToscaNodeType_requirementsName.sql policy-pap | sasl.login.class = null grafana | logger=migrator t=2024-04-18T23:14:23.714471556Z level=info msg="Executing migration" id="create tag table" kafka | [2024-04-18 23:14:55,026] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=4, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-db-migrator | -------------- policy-pap | sasl.login.connect.timeout.ms = null grafana | logger=migrator t=2024-04-18T23:14:23.715215258Z level=info msg="Migration successfully executed" id="create tag table" duration=743.221µs kafka | [2024-04-18 23:14:55,026] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=33, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-db-migrator | CREATE INDEX FK_ToscaNodeType_requirementsName ON toscanodetype(requirementsName, requirementsVersion) policy-pap | sasl.login.read.timeout.ms = null grafana | logger=migrator t=2024-04-18T23:14:23.718225775Z level=info msg="Executing migration" id="add index tag.key_value" kafka | [2024-04-18 23:14:55,026] TRACE [Broker id=1] 
Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=15, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-db-migrator | -------------- policy-pap | sasl.login.refresh.buffer.seconds = 300 grafana | logger=migrator t=2024-04-18T23:14:23.719168397Z level=info msg="Migration successfully executed" id="add index tag.key_value" duration=941.763µs kafka | [2024-04-18 23:14:55,026] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=48, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-db-migrator | policy-pap | sasl.login.refresh.min.period.seconds = 60 grafana | logger=migrator t=2024-04-18T23:14:23.722282779Z level=info msg="Executing migration" id="create login attempt table" kafka | [2024-04-18 23:14:55,026] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=11, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-db-migrator | policy-pap | sasl.login.refresh.window.factor = 0.8 grafana | logger=migrator t=2024-04-18T23:14:23.723001969Z level=info msg="Migration successfully executed" id="create login attempt table" duration=719.04µs kafka | [2024-04-18 23:14:55,026] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=44, controllerEpoch=1, leader=1, leaderEpoch=0, 
isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-db-migrator | > upgrade 0860-FK_ToscaServiceTemplate_capabilityTypesName.sql policy-pap | sasl.login.refresh.window.jitter = 0.05 grafana | logger=migrator t=2024-04-18T23:14:23.726945848Z level=info msg="Executing migration" id="add index login_attempt.username" kafka | [2024-04-18 23:14:55,026] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=23, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-db-migrator | -------------- policy-pap | sasl.login.retry.backoff.max.ms = 10000 grafana | logger=migrator t=2024-04-18T23:14:23.727854948Z level=info msg="Migration successfully executed" id="add index login_attempt.username" duration=911.78µs kafka | [2024-04-18 23:14:55,026] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=19, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-db-migrator | CREATE INDEX FK_ToscaServiceTemplate_capabilityTypesName ON toscaservicetemplate(capabilityTypesName, capabilityTypesVersion) grafana | logger=migrator t=2024-04-18T23:14:23.73094693Z level=info msg="Executing migration" id="drop index IDX_login_attempt_username - v1" kafka | [2024-04-18 23:14:55,026] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=32, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], 
partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-pap | sasl.login.retry.backoff.ms = 100 policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-18T23:14:23.731907693Z level=info msg="Migration successfully executed" id="drop index IDX_login_attempt_username - v1" duration=960.634µs kafka | [2024-04-18 23:14:55,027] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=28, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-pap | sasl.mechanism = GSSAPI policy-db-migrator | grafana | logger=migrator t=2024-04-18T23:14:23.734646855Z level=info msg="Executing migration" id="Rename table login_attempt to login_attempt_tmp_qwerty - v1" kafka | [2024-04-18 23:14:55,027] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=7, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 policy-db-migrator | grafana | logger=migrator t=2024-04-18T23:14:23.747832176Z level=info msg="Migration successfully executed" id="Rename table login_attempt to login_attempt_tmp_qwerty - v1" duration=13.18453ms kafka | [2024-04-18 23:14:55,027] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=40, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, 
leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-pap | sasl.oauthbearer.expected.audience = null policy-db-migrator | > upgrade 0870-FK_ToscaServiceTemplate_dataTypesName.sql grafana | logger=migrator t=2024-04-18T23:14:23.75170945Z level=info msg="Executing migration" id="create login_attempt v2" kafka | [2024-04-18 23:14:55,027] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=3, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-pap | sasl.oauthbearer.expected.issuer = null policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-18T23:14:23.753084757Z level=info msg="Migration successfully executed" id="create login_attempt v2" duration=1.375256ms policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 kafka | [2024-04-18 23:14:55,027] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=36, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-db-migrator | CREATE INDEX FK_ToscaServiceTemplate_dataTypesName ON toscaservicetemplate(dataTypesName, dataTypesVersion) grafana | logger=migrator t=2024-04-18T23:14:23.756479335Z level=info msg="Executing migration" id="create index IDX_login_attempt_username - v2" policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 kafka | [2024-04-18 23:14:55,027] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=47, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], 
partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-18T23:14:23.758096345Z level=info msg="Migration successfully executed" id="create index IDX_login_attempt_username - v2" duration=1.616219ms policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 kafka | [2024-04-18 23:14:55,027] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=14, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-db-migrator | grafana | logger=migrator t=2024-04-18T23:14:23.761208107Z level=info msg="Executing migration" id="copy login_attempt v1 to v2" policy-pap | sasl.oauthbearer.jwks.endpoint.url = null kafka | [2024-04-18 23:14:55,027] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=43, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-db-migrator | grafana | logger=migrator t=2024-04-18T23:14:23.761500193Z level=info msg="Migration successfully executed" id="copy login_attempt v1 to v2" duration=295.406µs policy-pap | sasl.oauthbearer.scope.claim.name = scope kafka | [2024-04-18 23:14:55,027] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=10, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) 
correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-db-migrator | > upgrade 0880-FK_ToscaServiceTemplate_nodeTypesName.sql grafana | logger=migrator t=2024-04-18T23:14:23.765091512Z level=info msg="Executing migration" id="drop login_attempt_tmp_qwerty" policy-pap | sasl.oauthbearer.sub.claim.name = sub kafka | [2024-04-18 23:14:55,027] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=22, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-18T23:14:23.765722387Z level=info msg="Migration successfully executed" id="drop login_attempt_tmp_qwerty" duration=627.735µs policy-pap | sasl.oauthbearer.token.endpoint.url = null kafka | [2024-04-18 23:14:55,027] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=18, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-db-migrator | CREATE INDEX FK_ToscaServiceTemplate_nodeTypesName ON toscaservicetemplate(nodeTypesName, nodeTypesVersion) grafana | logger=migrator t=2024-04-18T23:14:23.768577645Z level=info msg="Executing migration" id="create user auth table" policy-pap | security.protocol = PLAINTEXT kafka | [2024-04-18 23:14:55,027] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=31, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from 
controller 1 epoch 1 (state.change.logger) policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-18T23:14:23.769361129Z level=info msg="Migration successfully executed" id="create user auth table" duration=783.364µs policy-pap | security.providers = null kafka | [2024-04-18 23:14:55,027] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=27, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-db-migrator | grafana | logger=migrator t=2024-04-18T23:14:23.772344654Z level=info msg="Executing migration" id="create index IDX_user_auth_auth_module_auth_id - v1" policy-pap | send.buffer.bytes = 131072 kafka | [2024-04-18 23:14:55,027] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=39, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-db-migrator | grafana | logger=migrator t=2024-04-18T23:14:23.773288257Z level=info msg="Migration successfully executed" id="create index IDX_user_auth_auth_module_auth_id - v1" duration=943.242µs policy-pap | socket.connection.setup.timeout.max.ms = 30000 kafka | [2024-04-18 23:14:55,027] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=6, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-db-migrator | > upgrade 0890-FK_ToscaServiceTemplate_policyTypesName.sql grafana 
| logger=migrator t=2024-04-18T23:14:23.777107548Z level=info msg="Executing migration" id="alter user_auth.auth_id to length 190" policy-pap | socket.connection.setup.timeout.ms = 10000 kafka | [2024-04-18 23:14:55,027] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=35, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-18T23:14:23.777177212Z level=info msg="Migration successfully executed" id="alter user_auth.auth_id to length 190" duration=69.644µs policy-pap | ssl.cipher.suites = null kafka | [2024-04-18 23:14:55,027] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=2, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-db-migrator | CREATE INDEX FK_ToscaServiceTemplate_policyTypesName ON toscaservicetemplate(policyTypesName, policyTypesVersion) grafana | logger=migrator t=2024-04-18T23:14:23.779877632Z level=info msg="Executing migration" id="Add OAuth access token to user_auth" policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] kafka | [2024-04-18 23:14:55,027] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-47 from NewReplica to OnlineReplica (state.change.logger) policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-18T23:14:23.784932152Z level=info msg="Migration successfully executed" id="Add OAuth access token to user_auth" duration=5.05367ms policy-pap | ssl.endpoint.identification.algorithm = https kafka | [2024-04-18 
23:14:55,027] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-36 from NewReplica to OnlineReplica (state.change.logger) policy-db-migrator | grafana | logger=migrator t=2024-04-18T23:14:23.787729407Z level=info msg="Executing migration" id="Add OAuth refresh token to user_auth" policy-pap | ssl.engine.factory.class = null kafka | [2024-04-18 23:14:55,028] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-28 from NewReplica to OnlineReplica (state.change.logger) policy-db-migrator | grafana | logger=migrator t=2024-04-18T23:14:23.792845781Z level=info msg="Migration successfully executed" id="Add OAuth refresh token to user_auth" duration=5.115063ms policy-pap | ssl.key.password = null kafka | [2024-04-18 23:14:55,028] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-42 from NewReplica to OnlineReplica (state.change.logger) policy-db-migrator | > upgrade 0900-FK_ToscaServiceTemplate_relationshipTypesName.sql grafana | logger=migrator t=2024-04-18T23:14:23.796668553Z level=info msg="Executing migration" id="Add OAuth token type to user_auth" policy-pap | ssl.keymanager.algorithm = SunX509 kafka | [2024-04-18 23:14:55,029] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-9 from NewReplica to OnlineReplica (state.change.logger) policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-18T23:14:23.801725763Z level=info msg="Migration successfully executed" id="Add OAuth token type to user_auth" duration=5.055691ms policy-pap | ssl.keystore.certificate.chain = null kafka | [2024-04-18 23:14:55,029] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-37 from NewReplica to OnlineReplica (state.change.logger) policy-db-migrator | CREATE INDEX FK_ToscaServiceTemplate_relationshipTypesName ON toscaservicetemplate(relationshipTypesName, 
relationshipTypesVersion) grafana | logger=migrator t=2024-04-18T23:14:23.805440909Z level=info msg="Executing migration" id="Add OAuth expiry to user_auth" policy-pap | ssl.keystore.key = null kafka | [2024-04-18 23:14:55,029] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-13 from NewReplica to OnlineReplica (state.change.logger) policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-18T23:14:23.810467747Z level=info msg="Migration successfully executed" id="Add OAuth expiry to user_auth" duration=5.025628ms policy-pap | ssl.keystore.location = null kafka | [2024-04-18 23:14:55,029] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-30 from NewReplica to OnlineReplica (state.change.logger) policy-db-migrator | grafana | logger=migrator t=2024-04-18T23:14:23.813520747Z level=info msg="Executing migration" id="Add index to user_id column in user_auth" policy-pap | ssl.keystore.password = null kafka | [2024-04-18 23:14:55,029] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-35 from NewReplica to OnlineReplica (state.change.logger) policy-db-migrator | grafana | logger=migrator t=2024-04-18T23:14:23.81448809Z level=info msg="Migration successfully executed" id="Add index to user_id column in user_auth" duration=967.364µs policy-pap | ssl.keystore.type = JKS kafka | [2024-04-18 23:14:55,029] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-39 from NewReplica to OnlineReplica (state.change.logger) policy-db-migrator | > upgrade 0910-FK_ToscaTopologyTemplate_nodeTemplatesName.sql grafana | logger=migrator t=2024-04-18T23:14:23.819116317Z level=info msg="Executing migration" id="Add OAuth ID token to user_auth" policy-pap | ssl.protocol = TLSv1.3 kafka | [2024-04-18 23:14:55,029] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-12 from 
NewReplica to OnlineReplica (state.change.logger) policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-18T23:14:23.82548004Z level=info msg="Migration successfully executed" id="Add OAuth ID token to user_auth" duration=6.361132ms policy-pap | ssl.provider = null kafka | [2024-04-18 23:14:55,029] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-27 from NewReplica to OnlineReplica (state.change.logger) policy-db-migrator | CREATE INDEX FK_ToscaTopologyTemplate_nodeTemplatesName ON toscatopologytemplate(nodeTemplatesName, nodeTemplatessVersion) grafana | logger=migrator t=2024-04-18T23:14:23.830326008Z level=info msg="Executing migration" id="create server_lock table" policy-pap | ssl.secure.random.implementation = null kafka | [2024-04-18 23:14:55,029] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-45 from NewReplica to OnlineReplica (state.change.logger) policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-18T23:14:23.831201107Z level=info msg="Migration successfully executed" id="create server_lock table" duration=874.909µs policy-pap | ssl.trustmanager.algorithm = PKIX kafka | [2024-04-18 23:14:55,029] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-19 from NewReplica to OnlineReplica (state.change.logger) policy-db-migrator | grafana | logger=migrator t=2024-04-18T23:14:23.834247125Z level=info msg="Executing migration" id="add index server_lock.operation_uid" policy-pap | ssl.truststore.certificates = null kafka | [2024-04-18 23:14:55,029] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-49 from NewReplica to OnlineReplica (state.change.logger) policy-db-migrator | grafana | logger=migrator t=2024-04-18T23:14:23.835481684Z level=info msg="Migration successfully executed" id="add index server_lock.operation_uid" duration=1.234039ms policy-pap | 
ssl.truststore.location = null kafka | [2024-04-18 23:14:55,029] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-40 from NewReplica to OnlineReplica (state.change.logger) policy-db-migrator | > upgrade 0920-FK_ToscaTopologyTemplate_policyName.sql grafana | logger=migrator t=2024-04-18T23:14:23.840138432Z level=info msg="Executing migration" id="create user auth token table" policy-pap | ssl.truststore.password = null kafka | [2024-04-18 23:14:55,029] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-41 from NewReplica to OnlineReplica (state.change.logger) policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-18T23:14:23.841060163Z level=info msg="Migration successfully executed" id="create user auth token table" duration=921.281µs policy-pap | ssl.truststore.type = JKS policy-db-migrator | CREATE INDEX FK_ToscaTopologyTemplate_policyName ON toscatopologytemplate(policyName, policyVersion) kafka | [2024-04-18 23:14:55,029] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-38 from NewReplica to OnlineReplica (state.change.logger) grafana | logger=migrator t=2024-04-18T23:14:23.845800916Z level=info msg="Executing migration" id="add unique index user_auth_token.auth_token" policy-pap | transaction.timeout.ms = 60000 policy-db-migrator | -------------- kafka | [2024-04-18 23:14:55,029] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-8 from NewReplica to OnlineReplica (state.change.logger) grafana | logger=migrator t=2024-04-18T23:14:23.846709696Z level=info msg="Migration successfully executed" id="add unique index user_auth_token.auth_token" duration=908.29µs policy-pap | transactional.id = null policy-db-migrator | kafka | [2024-04-18 23:14:55,029] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-7 from NewReplica to OnlineReplica 
(state.change.logger) grafana | logger=migrator t=2024-04-18T23:14:23.849620068Z level=info msg="Executing migration" id="add unique index user_auth_token.prev_auth_token" policy-pap | value.serializer = class org.apache.kafka.common.serialization.StringSerializer policy-db-migrator | kafka | [2024-04-18 23:14:55,029] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-33 from NewReplica to OnlineReplica (state.change.logger) grafana | logger=migrator t=2024-04-18T23:14:23.850538519Z level=info msg="Migration successfully executed" id="add unique index user_auth_token.prev_auth_token" duration=919.121µs policy-pap | policy-db-migrator | > upgrade 0940-PdpPolicyStatus_PdpGroup.sql kafka | [2024-04-18 23:14:55,029] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-25 from NewReplica to OnlineReplica (state.change.logger) grafana | logger=migrator t=2024-04-18T23:14:23.853552156Z level=info msg="Executing migration" id="add index user_auth_token.user_id" policy-pap | [2024-04-18T23:14:54.321+00:00|INFO|KafkaProducer|main] [Producer clientId=producer-1] Instantiated an idempotent producer. 
policy-db-migrator | -------------- kafka | [2024-04-18 23:14:55,029] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-31 from NewReplica to OnlineReplica (state.change.logger) grafana | logger=migrator t=2024-04-18T23:14:23.854593593Z level=info msg="Migration successfully executed" id="add index user_auth_token.user_id" duration=1.037808ms policy-pap | [2024-04-18T23:14:54.337+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 policy-db-migrator | CREATE INDEX PdpPolicyStatus_PdpGroup ON pdppolicystatus(PDPGROUP) kafka | [2024-04-18 23:14:55,029] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-23 from NewReplica to OnlineReplica (state.change.logger) policy-pap | [2024-04-18T23:14:54.337+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-18T23:14:23.859547858Z level=info msg="Executing migration" id="Add revoked_at to the user auth token" kafka | [2024-04-18 23:14:55,029] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-10 from NewReplica to OnlineReplica (state.change.logger) policy-pap | [2024-04-18T23:14:54.337+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1713482094337 policy-db-migrator | grafana | logger=migrator t=2024-04-18T23:14:23.865441085Z level=info msg="Migration successfully executed" id="Add revoked_at to the user auth token" duration=5.890216ms kafka | [2024-04-18 23:14:55,029] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-2 from NewReplica to OnlineReplica (state.change.logger) policy-pap | [2024-04-18T23:14:54.338+00:00|INFO|InlineKafkaTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=a066a90b-9103-4d76-8165-c5999a0e1887, alive=false, publisher=KafkaPublisherWrapper []]]: KAFKA SINK created policy-db-migrator | 
grafana | logger=migrator t=2024-04-18T23:14:23.871647919Z level=info msg="Executing migration" id="add index user_auth_token.revoked_at" kafka | [2024-04-18 23:14:55,029] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-17 from NewReplica to OnlineReplica (state.change.logger) policy-pap | [2024-04-18T23:14:54.338+00:00|INFO|InlineBusTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=192fff36-bd6d-4ee3-9df3-262c724178bf, alive=false, publisher=null]]: starting policy-db-migrator | > upgrade 0950-TscaServiceTemplatetopologyTemplateParentLocalName.sql grafana | logger=migrator t=2024-04-18T23:14:23.872604552Z level=info msg="Migration successfully executed" id="add index user_auth_token.revoked_at" duration=956.953µs kafka | [2024-04-18 23:14:55,029] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-4 from NewReplica to OnlineReplica (state.change.logger) policy-pap | [2024-04-18T23:14:54.339+00:00|INFO|ProducerConfig|main] ProducerConfig values: policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-18T23:14:23.877054898Z level=info msg="Executing migration" id="create cache_data table" kafka | [2024-04-18 23:14:55,029] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-15 from NewReplica to OnlineReplica (state.change.logger) policy-pap | acks = -1 policy-db-migrator | CREATE INDEX TscaServiceTemplatetopologyTemplateParentLocalName ON toscaservicetemplate(topologyTemplateParentLocalName, topologyTemplateLocalName, topologyTemplateParentKeyVersion, topologyTemplateParentKeyName) grafana | logger=migrator t=2024-04-18T23:14:23.878506289Z level=info msg="Migration successfully executed" id="create cache_data table" duration=1.450501ms kafka | [2024-04-18 23:14:55,029] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-26 
from NewReplica to OnlineReplica (state.change.logger) policy-pap | auto.include.jmx.reporter = true policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-18T23:14:23.883072682Z level=info msg="Executing migration" id="add unique index cache_data.cache_key" kafka | [2024-04-18 23:14:55,030] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-3 from NewReplica to OnlineReplica (state.change.logger) policy-pap | batch.size = 16384 policy-db-migrator | grafana | logger=migrator t=2024-04-18T23:14:23.88467001Z level=info msg="Migration successfully executed" id="add unique index cache_data.cache_key" duration=1.598838ms kafka | [2024-04-18 23:14:55,030] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger) policy-pap | bootstrap.servers = [kafka:9092] policy-db-migrator | grafana | logger=migrator t=2024-04-18T23:14:23.892292763Z level=info msg="Executing migration" id="create short_url table v1" kafka | [2024-04-18 23:14:55,067] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-3 (state.change.logger) policy-pap | buffer.memory = 33554432 policy-db-migrator | > upgrade 0960-FK_ToscaNodeTemplate_capabilitiesName.sql grafana | logger=migrator t=2024-04-18T23:14:23.893257366Z level=info msg="Migration successfully executed" id="create short_url table v1" duration=964.303µs kafka | [2024-04-18 23:14:55,067] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-18 (state.change.logger) policy-pap | client.dns.lookup = use_all_dns_ips policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-18T23:14:23.897389655Z level=info msg="Executing migration" id="add index short_url.org_id-uid" kafka | [2024-04-18 23:14:55,067] 
TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-41 (state.change.logger) policy-pap | client.id = producer-2 policy-db-migrator | ALTER TABLE toscanodetemplate ADD CONSTRAINT FK_ToscaNodeTemplate_capabilitiesName FOREIGN KEY (capabilitiesName, capabilitiesVersion) REFERENCES toscacapabilityassignments (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT grafana | logger=migrator t=2024-04-18T23:14:23.899062998Z level=info msg="Migration successfully executed" id="add index short_url.org_id-uid" duration=1.673643ms kafka | [2024-04-18 23:14:55,067] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-10 (state.change.logger) policy-pap | compression.type = none policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-18T23:14:23.902403003Z level=info msg="Executing migration" id="alter table short_url alter column created_by type to bigint" kafka | [2024-04-18 23:14:55,067] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-33 (state.change.logger) policy-pap | connections.max.idle.ms = 540000 policy-db-migrator | grafana | logger=migrator t=2024-04-18T23:14:23.90251771Z level=info msg="Migration successfully executed" id="alter table short_url alter column created_by type to bigint" duration=117.856µs kafka | [2024-04-18 23:14:55,067] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-48 (state.change.logger) policy-pap | delivery.timeout.ms = 120000 policy-db-migrator | grafana | logger=migrator t=2024-04-18T23:14:23.907376759Z level=info msg="Executing migration" id="delete alert_definition table" kafka | 
[2024-04-18 23:14:55,067] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-19 (state.change.logger) policy-pap | enable.idempotence = true policy-db-migrator | > upgrade 0970-FK_ToscaNodeTemplate_requirementsName.sql grafana | logger=migrator t=2024-04-18T23:14:23.907472804Z level=info msg="Migration successfully executed" id="delete alert_definition table" duration=96.635µs kafka | [2024-04-18 23:14:55,067] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-34 (state.change.logger) policy-pap | interceptor.classes = [] policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-18T23:14:23.915045834Z level=info msg="Executing migration" id="recreate alert_definition table" kafka | [2024-04-18 23:14:55,067] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-4 (state.change.logger) policy-pap | key.serializer = class org.apache.kafka.common.serialization.StringSerializer policy-db-migrator | ALTER TABLE toscanodetemplate ADD CONSTRAINT FK_ToscaNodeTemplate_requirementsName FOREIGN KEY (requirementsName, requirementsVersion) REFERENCES toscarequirements (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT grafana | logger=migrator t=2024-04-18T23:14:23.916536587Z level=info msg="Migration successfully executed" id="recreate alert_definition table" duration=1.490173ms kafka | [2024-04-18 23:14:55,067] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-11 (state.change.logger) policy-pap | linger.ms = 0 policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-18T23:14:23.921479841Z level=info 
msg="Executing migration" id="add index in alert_definition on org_id and title columns" kafka | [2024-04-18 23:14:55,068] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-26 (state.change.logger) policy-pap | max.block.ms = 60000 policy-db-migrator | grafana | logger=migrator t=2024-04-18T23:14:23.922781483Z level=info msg="Migration successfully executed" id="add index in alert_definition on org_id and title columns" duration=1.301252ms kafka | [2024-04-18 23:14:55,068] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-49 (state.change.logger) policy-pap | max.in.flight.requests.per.connection = 5 policy-db-migrator | grafana | logger=migrator t=2024-04-18T23:14:23.92597495Z level=info msg="Executing migration" id="add index in alert_definition on org_id and uid columns" kafka | [2024-04-18 23:14:55,068] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-39 (state.change.logger) policy-pap | max.request.size = 1048576 policy-db-migrator | > upgrade 0980-FK_ToscaNodeType_requirementsName.sql grafana | logger=migrator t=2024-04-18T23:14:23.926968545Z level=info msg="Migration successfully executed" id="add index in alert_definition on org_id and uid columns" duration=993.235µs kafka | [2024-04-18 23:14:55,068] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-9 (state.change.logger) policy-pap | metadata.max.age.ms = 300000 policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-18T23:14:23.930206684Z level=info msg="Executing migration" id="alter alert_definition table data column to mediumtext in mysql" 
kafka | [2024-04-18 23:14:55,068] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-24 (state.change.logger) policy-pap | metadata.max.idle.ms = 300000 policy-db-migrator | ALTER TABLE toscanodetype ADD CONSTRAINT FK_ToscaNodeType_requirementsName FOREIGN KEY (requirementsName, requirementsVersion) REFERENCES toscarequirements (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT grafana | logger=migrator t=2024-04-18T23:14:23.930406196Z level=info msg="Migration successfully executed" id="alter alert_definition table data column to mediumtext in mysql" duration=206.891µs kafka | [2024-04-18 23:14:55,068] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-31 (state.change.logger) policy-pap | metric.reporters = [] policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-18T23:14:23.934349724Z level=info msg="Executing migration" id="drop index in alert_definition on org_id and title columns" kafka | [2024-04-18 23:14:55,068] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-46 (state.change.logger) policy-pap | metrics.num.samples = 2 policy-db-migrator | grafana | logger=migrator t=2024-04-18T23:14:23.935858438Z level=info msg="Migration successfully executed" id="drop index in alert_definition on org_id and title columns" duration=1.508994ms kafka | [2024-04-18 23:14:55,068] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-1 (state.change.logger) policy-pap | metrics.recording.level = INFO policy-db-migrator | grafana | logger=migrator t=2024-04-18T23:14:23.939799306Z level=info msg="Executing migration" 
id="drop index in alert_definition on org_id and uid columns" kafka | [2024-04-18 23:14:55,068] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-16 (state.change.logger) policy-pap | metrics.sample.window.ms = 30000 grafana | logger=migrator t=2024-04-18T23:14:23.940681995Z level=info msg="Migration successfully executed" id="drop index in alert_definition on org_id and uid columns" duration=882.719µs kafka | [2024-04-18 23:14:55,068] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-2 (state.change.logger) policy-db-migrator | > upgrade 0990-FK_ToscaServiceTemplate_capabilityTypesName.sql grafana | logger=migrator t=2024-04-18T23:14:23.949757338Z level=info msg="Executing migration" id="add unique index in alert_definition on org_id and title columns" policy-db-migrator | -------------- policy-pap | partitioner.adaptive.partitioning.enable = true kafka | [2024-04-18 23:14:55,068] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-25 (state.change.logger) grafana | logger=migrator t=2024-04-18T23:14:23.951301504Z level=info msg="Migration successfully executed" id="add unique index in alert_definition on org_id and title columns" duration=1.544566ms policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT FK_ToscaServiceTemplate_capabilityTypesName FOREIGN KEY (capabilityTypesName, capabilityTypesVersion) REFERENCES toscacapabilitytypes (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT policy-pap | partitioner.availability.timeout.ms = 0 kafka | [2024-04-18 23:14:55,068] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition 
__consumer_offsets-40 (state.change.logger) grafana | logger=migrator t=2024-04-18T23:14:23.957068793Z level=info msg="Executing migration" id="add unique index in alert_definition on org_id and uid columns" policy-db-migrator | -------------- policy-pap | partitioner.class = null kafka | [2024-04-18 23:14:55,068] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-47 (state.change.logger) grafana | logger=migrator t=2024-04-18T23:14:23.958615049Z level=info msg="Migration successfully executed" id="add unique index in alert_definition on org_id and uid columns" duration=1.546606ms policy-db-migrator | policy-pap | partitioner.ignore.keys = false grafana | logger=migrator t=2024-04-18T23:14:23.961621206Z level=info msg="Executing migration" id="Add column paused in alert_definition" kafka | [2024-04-18 23:14:55,068] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-17 (state.change.logger) policy-db-migrator | policy-pap | receive.buffer.bytes = 32768 grafana | logger=migrator t=2024-04-18T23:14:23.967501332Z level=info msg="Migration successfully executed" id="Add column paused in alert_definition" duration=5.879395ms kafka | [2024-04-18 23:14:55,068] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-32 (state.change.logger) policy-db-migrator | > upgrade 1000-FK_ToscaServiceTemplate_dataTypesName.sql policy-pap | reconnect.backoff.max.ms = 1000 grafana | logger=migrator t=2024-04-18T23:14:23.971778299Z level=info msg="Executing migration" id="drop alert_definition table" kafka | [2024-04-18 23:14:55,068] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition 
for partition __consumer_offsets-37 (state.change.logger) policy-db-migrator | -------------- policy-pap | reconnect.backoff.ms = 50 grafana | logger=migrator t=2024-04-18T23:14:23.972812886Z level=info msg="Migration successfully executed" id="drop alert_definition table" duration=1.033997ms kafka | [2024-04-18 23:14:55,068] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-7 (state.change.logger) policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT FK_ToscaServiceTemplate_dataTypesName FOREIGN KEY (dataTypesName, dataTypesVersion) REFERENCES toscadatatypes (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT grafana | logger=migrator t=2024-04-18T23:14:23.978212345Z level=info msg="Executing migration" id="delete alert_definition_version table" policy-pap | request.timeout.ms = 30000 kafka | [2024-04-18 23:14:55,068] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-22 (state.change.logger) policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-18T23:14:23.97848456Z level=info msg="Migration successfully executed" id="delete alert_definition_version table" duration=271.655µs policy-pap | retries = 2147483647 kafka | [2024-04-18 23:14:55,068] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-29 (state.change.logger) policy-db-migrator | grafana | logger=migrator t=2024-04-18T23:14:23.981711599Z level=info msg="Executing migration" id="recreate alert_definition_version table" policy-pap | retry.backoff.ms = 100 kafka | [2024-04-18 23:14:55,068] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition 
__consumer_offsets-44 (state.change.logger) policy-db-migrator | grafana | logger=migrator t=2024-04-18T23:14:23.982744937Z level=info msg="Migration successfully executed" id="recreate alert_definition_version table" duration=1.032727ms policy-pap | sasl.client.callback.handler.class = null kafka | [2024-04-18 23:14:55,068] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-14 (state.change.logger) policy-db-migrator | > upgrade 1010-FK_ToscaServiceTemplate_nodeTypesName.sql grafana | logger=migrator t=2024-04-18T23:14:23.990594542Z level=info msg="Executing migration" id="add index in alert_definition_version table on alert_definition_id and version columns" policy-pap | sasl.jaas.config = null kafka | [2024-04-18 23:14:55,068] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-23 (state.change.logger) policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-18T23:14:23.991696363Z level=info msg="Migration successfully executed" id="add index in alert_definition_version table on alert_definition_id and version columns" duration=1.101101ms policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit kafka | [2024-04-18 23:14:55,068] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-38 (state.change.logger) policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT FK_ToscaServiceTemplate_nodeTypesName FOREIGN KEY (nodeTypesName, nodeTypesVersion) REFERENCES toscanodetypes (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT grafana | logger=migrator t=2024-04-18T23:14:23.999476964Z level=info msg="Executing migration" id="add index in alert_definition_version table on alert_definition_uid and version columns" 
policy-pap | sasl.kerberos.min.time.before.relogin = 60000 kafka | [2024-04-18 23:14:55,068] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-8 (state.change.logger) policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-18T23:14:24.001360068Z level=info msg="Migration successfully executed" id="add index in alert_definition_version table on alert_definition_uid and version columns" duration=1.881904ms policy-pap | sasl.kerberos.service.name = null kafka | [2024-04-18 23:14:55,068] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition policy-pdp-pap-0 (state.change.logger) policy-db-migrator | grafana | logger=migrator t=2024-04-18T23:14:24.004748266Z level=info msg="Executing migration" id="alter alert_definition_version table data column to mediumtext in mysql" policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 kafka | [2024-04-18 23:14:55,068] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-45 (state.change.logger) policy-db-migrator | grafana | logger=migrator t=2024-04-18T23:14:24.005093755Z level=info msg="Migration successfully executed" id="alter alert_definition_version table data column to mediumtext in mysql" duration=344.399µs policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 kafka | [2024-04-18 23:14:55,068] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-15 (state.change.logger) policy-db-migrator | > upgrade 1020-FK_ToscaServiceTemplate_policyTypesName.sql grafana | logger=migrator t=2024-04-18T23:14:24.008952309Z level=info msg="Executing migration" id="drop alert_definition_version table" 
policy-pap | sasl.login.callback.handler.class = null kafka | [2024-04-18 23:14:55,068] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-30 (state.change.logger) policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-18T23:14:24.009922652Z level=info msg="Migration successfully executed" id="drop alert_definition_version table" duration=969.763µs policy-pap | sasl.login.class = null kafka | [2024-04-18 23:14:55,068] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-0 (state.change.logger) policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT FK_ToscaServiceTemplate_policyTypesName FOREIGN KEY (policyTypesName, policyTypesVersion) REFERENCES toscapolicytypes (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT policy-pap | sasl.login.connect.timeout.ms = null grafana | logger=migrator t=2024-04-18T23:14:24.01439396Z level=info msg="Executing migration" id="create alert_instance table" kafka | [2024-04-18 23:14:55,068] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-35 (state.change.logger) policy-db-migrator | -------------- policy-pap | sasl.login.read.timeout.ms = null grafana | logger=migrator t=2024-04-18T23:14:24.015383154Z level=info msg="Migration successfully executed" id="create alert_instance table" duration=988.414µs kafka | [2024-04-18 23:14:55,068] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-5 (state.change.logger) policy-db-migrator | policy-pap | sasl.login.refresh.buffer.seconds = 300 grafana | logger=migrator t=2024-04-18T23:14:24.0185582Z level=info msg="Executing 
migration" id="add index in alert_instance table on def_org_id, def_uid and current_state columns" kafka | [2024-04-18 23:14:55,068] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-20 (state.change.logger) policy-db-migrator | policy-pap | sasl.login.refresh.min.period.seconds = 60 grafana | logger=migrator t=2024-04-18T23:14:24.019837721Z level=info msg="Migration successfully executed" id="add index in alert_instance table on def_org_id, def_uid and current_state columns" duration=1.279491ms kafka | [2024-04-18 23:14:55,068] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-27 (state.change.logger) policy-db-migrator | > upgrade 1030-FK_ToscaServiceTemplate_relationshipTypesName.sql policy-pap | sasl.login.refresh.window.factor = 0.8 grafana | logger=migrator t=2024-04-18T23:14:24.024260005Z level=info msg="Executing migration" id="add index in alert_instance table on def_org_id, current_state columns" kafka | [2024-04-18 23:14:55,068] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-42 (state.change.logger) policy-db-migrator | -------------- policy-pap | sasl.login.refresh.window.jitter = 0.05 grafana | logger=migrator t=2024-04-18T23:14:24.026562033Z level=info msg="Migration successfully executed" id="add index in alert_instance table on def_org_id, current_state columns" duration=2.303588ms kafka | [2024-04-18 23:14:55,068] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-12 (state.change.logger) policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT FK_ToscaServiceTemplate_relationshipTypesName 
FOREIGN KEY (relationshipTypesName, relationshipTypesVersion) REFERENCES toscarelationshiptypes (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT policy-pap | sasl.login.retry.backoff.max.ms = 10000 grafana | logger=migrator t=2024-04-18T23:14:24.029992873Z level=info msg="Executing migration" id="add column current_state_end to alert_instance" kafka | [2024-04-18 23:14:55,068] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-21 (state.change.logger) policy-db-migrator | -------------- policy-pap | sasl.login.retry.backoff.ms = 100 grafana | logger=migrator t=2024-04-18T23:14:24.036082529Z level=info msg="Migration successfully executed" id="add column current_state_end to alert_instance" duration=6.088497ms kafka | [2024-04-18 23:14:55,068] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-36 (state.change.logger) policy-db-migrator | policy-pap | sasl.mechanism = GSSAPI grafana | logger=migrator t=2024-04-18T23:14:24.045184133Z level=info msg="Executing migration" id="remove index def_org_id, def_uid, current_state on alert_instance" kafka | [2024-04-18 23:14:55,068] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-6 (state.change.logger) policy-db-migrator | policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 grafana | logger=migrator t=2024-04-18T23:14:24.046466034Z level=info msg="Migration successfully executed" id="remove index def_org_id, def_uid, current_state on alert_instance" duration=1.285791ms kafka | [2024-04-18 23:14:55,068] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-43 (state.change.logger) 
policy-db-migrator | > upgrade 1040-FK_ToscaTopologyTemplate_nodeTemplatesName.sql policy-pap | sasl.oauthbearer.expected.audience = null grafana | logger=migrator t=2024-04-18T23:14:24.05344973Z level=info msg="Executing migration" id="remove index def_org_id, current_state on alert_instance" kafka | [2024-04-18 23:14:55,068] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-13 (state.change.logger) policy-db-migrator | -------------- policy-pap | sasl.oauthbearer.expected.issuer = null grafana | logger=migrator t=2024-04-18T23:14:24.05453675Z level=info msg="Migration successfully executed" id="remove index def_org_id, current_state on alert_instance" duration=1.093461ms kafka | [2024-04-18 23:14:55,068] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-28 (state.change.logger) policy-db-migrator | ALTER TABLE toscatopologytemplate ADD CONSTRAINT FK_ToscaTopologyTemplate_nodeTemplatesName FOREIGN KEY (nodeTemplatesName, nodeTemplatessVersion) REFERENCES toscanodetemplates (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 grafana | logger=migrator t=2024-04-18T23:14:24.059372228Z level=info msg="Executing migration" id="rename def_org_id to rule_org_id in alert_instance" kafka | [2024-04-18 23:14:55,070] INFO [ReplicaFetcherManager on broker 1] Removed fetcher for partitions HashSet(__consumer_offsets-22, __consumer_offsets-30, __consumer_offsets-25, __consumer_offsets-35, __consumer_offsets-38, __consumer_offsets-13, __consumer_offsets-8, __consumer_offsets-21, __consumer_offsets-4, __consumer_offsets-27, __consumer_offsets-7, __consumer_offsets-9, __consumer_offsets-46, __consumer_offsets-41, __consumer_offsets-33, __consumer_offsets-23, __consumer_offsets-49, 
__consumer_offsets-47, __consumer_offsets-16, __consumer_offsets-28, __consumer_offsets-31, __consumer_offsets-36, __consumer_offsets-42, __consumer_offsets-3, __consumer_offsets-18, __consumer_offsets-37, policy-pdp-pap-0, __consumer_offsets-15, __consumer_offsets-24, __consumer_offsets-17, __consumer_offsets-48, __consumer_offsets-19, __consumer_offsets-11, __consumer_offsets-2, __consumer_offsets-43, __consumer_offsets-6, __consumer_offsets-14, __consumer_offsets-20, __consumer_offsets-0, __consumer_offsets-44, __consumer_offsets-39, __consumer_offsets-12, __consumer_offsets-45, __consumer_offsets-1, __consumer_offsets-5, __consumer_offsets-26, __consumer_offsets-29, __consumer_offsets-34, __consumer_offsets-10, __consumer_offsets-32, __consumer_offsets-40) (kafka.server.ReplicaFetcherManager) policy-db-migrator | -------------- policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 grafana | logger=migrator t=2024-04-18T23:14:24.081516822Z level=info msg="Migration successfully executed" id="rename def_org_id to rule_org_id in alert_instance" duration=22.143305ms kafka | [2024-04-18 23:14:55,070] INFO [Broker id=1] Stopped fetchers as part of LeaderAndIsr request correlationId 1 from controller 1 epoch 1 as part of the become-leader transition for 51 partitions (state.change.logger) policy-db-migrator | policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 grafana | logger=migrator t=2024-04-18T23:14:24.085695343Z level=info msg="Executing migration" id="rename def_uid to rule_uid in alert_instance" kafka | [2024-04-18 23:14:55,124] INFO [LogLoader partition=__consumer_offsets-3, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) policy-db-migrator | policy-pap | sasl.oauthbearer.jwks.endpoint.url = null grafana | logger=migrator t=2024-04-18T23:14:24.112348978Z level=info msg="Migration successfully executed" id="rename def_uid to rule_uid in alert_instance" 
duration=26.646604ms kafka | [2024-04-18 23:14:55,137] INFO Created log for partition __consumer_offsets-3 in /var/lib/kafka/data/__consumer_offsets-3 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) policy-db-migrator | > upgrade 1050-FK_ToscaTopologyTemplate_policyName.sql policy-pap | sasl.oauthbearer.scope.claim.name = scope grafana | logger=migrator t=2024-04-18T23:14:24.151754497Z level=info msg="Executing migration" id="add index rule_org_id, rule_uid, current_state on alert_instance" policy-db-migrator | -------------- kafka | [2024-04-18 23:14:55,138] INFO [Partition __consumer_offsets-3 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-3 (kafka.cluster.Partition) policy-pap | sasl.oauthbearer.sub.claim.name = sub grafana | logger=migrator t=2024-04-18T23:14:24.153547196Z level=info msg="Migration successfully executed" id="add index rule_org_id, rule_uid, current_state on alert_instance" duration=1.793649ms policy-db-migrator | ALTER TABLE toscatopologytemplate ADD CONSTRAINT FK_ToscaTopologyTemplate_policyName FOREIGN KEY (policyName, policyVersion) REFERENCES toscapolicies (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT kafka | [2024-04-18 23:14:55,142] INFO [Partition __consumer_offsets-3 broker=1] Log loaded for partition __consumer_offsets-3 with initial high watermark 0 (kafka.cluster.Partition) policy-pap | sasl.oauthbearer.token.endpoint.url = null grafana | logger=migrator t=2024-04-18T23:14:24.163748041Z level=info msg="Executing migration" id="add index rule_org_id, current_state on alert_instance" policy-db-migrator | -------------- kafka | [2024-04-18 23:14:55,143] INFO [Broker id=1] Leader __consumer_offsets-3 with topic id Some(Ri5cls-BQlq9q6kFJBomtA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . 
Previous leader None and previous leader epoch was -1. (state.change.logger) policy-pap | security.protocol = PLAINTEXT grafana | logger=migrator t=2024-04-18T23:14:24.165400492Z level=info msg="Migration successfully executed" id="add index rule_org_id, current_state on alert_instance" duration=1.653122ms policy-db-migrator | kafka | [2024-04-18 23:14:55,157] INFO [LogLoader partition=__consumer_offsets-18, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) policy-pap | security.providers = null grafana | logger=migrator t=2024-04-18T23:14:24.17187275Z level=info msg="Executing migration" id="add current_reason column related to current_state" policy-db-migrator | kafka | [2024-04-18 23:14:55,158] INFO Created log for partition __consumer_offsets-18 in /var/lib/kafka/data/__consumer_offsets-18 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) policy-pap | send.buffer.bytes = 131072 grafana | logger=migrator t=2024-04-18T23:14:24.177553734Z level=info msg="Migration successfully executed" id="add current_reason column related to current_state" duration=5.680794ms kafka | [2024-04-18 23:14:55,158] INFO [Partition __consumer_offsets-18 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-18 (kafka.cluster.Partition) policy-pap | socket.connection.setup.timeout.max.ms = 30000 grafana | logger=migrator t=2024-04-18T23:14:24.180935581Z level=info msg="Executing migration" id="add result_fingerprint column to alert_instance" policy-db-migrator | > upgrade 1060-TscaServiceTemplatetopologyTemplateParentLocalName.sql kafka | [2024-04-18 23:14:55,158] INFO [Partition __consumer_offsets-18 broker=1] Log loaded for partition __consumer_offsets-18 with initial high watermark 0 (kafka.cluster.Partition) policy-pap | socket.connection.setup.timeout.ms = 10000 grafana | logger=migrator t=2024-04-18T23:14:24.186494019Z 
level=info msg="Migration successfully executed" id="add result_fingerprint column to alert_instance" duration=5.557897ms policy-db-migrator | -------------- kafka | [2024-04-18 23:14:55,158] INFO [Broker id=1] Leader __consumer_offsets-18 with topic id Some(Ri5cls-BQlq9q6kFJBomtA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) policy-pap | ssl.cipher.suites = null grafana | logger=migrator t=2024-04-18T23:14:24.19265887Z level=info msg="Executing migration" id="create alert_rule table" policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT TscaServiceTemplatetopologyTemplateParentLocalName FOREIGN KEY (topologyTemplateParentLocalName, topologyTemplateLocalName, topologyTemplateParentKeyVersion, topologyTemplateParentKeyName) REFERENCES toscatopologytemplate (parentLocalName, localName, parentKeyVersion, parentKeyName) ON UPDATE RESTRICT ON DELETE RESTRICT kafka | [2024-04-18 23:14:55,166] INFO [LogLoader partition=__consumer_offsets-41, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] policy-db-migrator | -------------- kafka | [2024-04-18 23:14:55,166] INFO Created log for partition __consumer_offsets-41 in /var/lib/kafka/data/__consumer_offsets-41 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) grafana | logger=migrator t=2024-04-18T23:14:24.194238887Z level=info msg="Migration successfully executed" id="create alert_rule table" duration=1.583938ms policy-pap | ssl.endpoint.identification.algorithm = https policy-db-migrator | kafka | [2024-04-18 23:14:55,166] INFO [Partition __consumer_offsets-41 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-41 
(kafka.cluster.Partition) grafana | logger=migrator t=2024-04-18T23:14:24.202921847Z level=info msg="Executing migration" id="add index in alert_rule on org_id and title columns" policy-pap | ssl.engine.factory.class = null policy-db-migrator | kafka | [2024-04-18 23:14:55,166] INFO [Partition __consumer_offsets-41 broker=1] Log loaded for partition __consumer_offsets-41 with initial high watermark 0 (kafka.cluster.Partition) grafana | logger=migrator t=2024-04-18T23:14:24.206976652Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id and title columns" duration=4.053994ms policy-pap | ssl.key.password = null policy-db-migrator | > upgrade 0100-pdp.sql kafka | [2024-04-18 23:14:55,166] INFO [Broker id=1] Leader __consumer_offsets-41 with topic id Some(Ri5cls-BQlq9q6kFJBomtA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) grafana | logger=migrator t=2024-04-18T23:14:24.211985599Z level=info msg="Executing migration" id="add index in alert_rule on org_id and uid columns" policy-pap | ssl.keymanager.algorithm = SunX509 kafka | [2024-04-18 23:14:55,173] INFO [LogLoader partition=__consumer_offsets-10, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) grafana | logger=migrator t=2024-04-18T23:14:24.213124272Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id and uid columns" duration=1.139783ms policy-pap | ssl.keystore.certificate.chain = null policy-db-migrator | -------------- kafka | [2024-04-18 23:14:55,174] INFO Created log for partition __consumer_offsets-10 in /var/lib/kafka/data/__consumer_offsets-10 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) grafana | logger=migrator t=2024-04-18T23:14:24.220718522Z level=info msg="Executing migration" id="add index in alert_rule on org_id, namespace_uid, group_uid columns" policy-pap | ssl.keystore.key = null policy-db-migrator | ALTER TABLE pdp ADD COLUMN LASTUPDATE datetime NOT NULL DEFAULT CURRENT_TIMESTAMP AFTER HEALTHY kafka | [2024-04-18 23:14:55,174] INFO [Partition __consumer_offsets-10 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-10 (kafka.cluster.Partition) grafana | logger=migrator t=2024-04-18T23:14:24.222100078Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, namespace_uid, group_uid columns" duration=1.383207ms policy-pap | ssl.keystore.location = null policy-db-migrator | -------------- kafka | [2024-04-18 23:14:55,174] INFO [Partition __consumer_offsets-10 broker=1] Log loaded for partition __consumer_offsets-10 with initial high watermark 0 (kafka.cluster.Partition) grafana | logger=migrator t=2024-04-18T23:14:24.231915001Z level=info msg="Executing 
migration" id="alter alert_rule table data column to mediumtext in mysql" policy-pap | ssl.keystore.password = null policy-db-migrator | kafka | [2024-04-18 23:14:55,174] INFO [Broker id=1] Leader __consumer_offsets-10 with topic id Some(Ri5cls-BQlq9q6kFJBomtA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) grafana | logger=migrator t=2024-04-18T23:14:24.232031937Z level=info msg="Migration successfully executed" id="alter alert_rule table data column to mediumtext in mysql" duration=121.036µs policy-pap | ssl.keystore.type = JKS policy-db-migrator | kafka | [2024-04-18 23:14:55,180] INFO [LogLoader partition=__consumer_offsets-33, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) grafana | logger=migrator t=2024-04-18T23:14:24.238505735Z level=info msg="Executing migration" id="add column for to alert_rule" policy-pap | ssl.protocol = TLSv1.3 policy-db-migrator | > upgrade 0110-idx_tsidx1.sql kafka | [2024-04-18 23:14:55,191] INFO Created log for partition __consumer_offsets-33 in /var/lib/kafka/data/__consumer_offsets-33 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) grafana | logger=migrator t=2024-04-18T23:14:24.245308822Z level=info msg="Migration successfully executed" id="add column for to alert_rule" duration=6.799186ms policy-pap | ssl.provider = null policy-db-migrator | -------------- kafka | [2024-04-18 23:14:55,191] INFO [Partition __consumer_offsets-33 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-33 (kafka.cluster.Partition) grafana | logger=migrator t=2024-04-18T23:14:24.25305258Z level=info msg="Executing migration" id="add column annotations to alert_rule" policy-pap | ssl.secure.random.implementation = null 
policy-db-migrator | CREATE INDEX IDX_TSIDX1 ON pdpstatistics(timeStamp, name, version) kafka | [2024-04-18 23:14:55,191] INFO [Partition __consumer_offsets-33 broker=1] Log loaded for partition __consumer_offsets-33 with initial high watermark 0 (kafka.cluster.Partition) grafana | logger=migrator t=2024-04-18T23:14:24.259807594Z level=info msg="Migration successfully executed" id="add column annotations to alert_rule" duration=6.749713ms policy-pap | ssl.trustmanager.algorithm = PKIX kafka | [2024-04-18 23:14:55,191] INFO [Broker id=1] Leader __consumer_offsets-33 with topic id Some(Ri5cls-BQlq9q6kFJBomtA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) grafana | logger=migrator t=2024-04-18T23:14:24.270284513Z level=info msg="Executing migration" id="add column labels to alert_rule" policy-db-migrator | -------------- policy-pap | ssl.truststore.certificates = null kafka | [2024-04-18 23:14:55,201] INFO [LogLoader partition=__consumer_offsets-48, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) grafana | logger=migrator t=2024-04-18T23:14:24.276760341Z level=info msg="Migration successfully executed" id="add column labels to alert_rule" duration=6.466148ms policy-db-migrator | policy-pap | ssl.truststore.location = null kafka | [2024-04-18 23:14:55,201] INFO Created log for partition __consumer_offsets-48 in /var/lib/kafka/data/__consumer_offsets-48 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) grafana | logger=migrator t=2024-04-18T23:14:24.280472227Z level=info msg="Executing migration" id="remove unique index from alert_rule on org_id, title columns" policy-db-migrator | policy-pap | ssl.truststore.password = null kafka | [2024-04-18 23:14:55,201] INFO 
[Partition __consumer_offsets-48 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-48 (kafka.cluster.Partition) grafana | logger=migrator t=2024-04-18T23:14:24.281905686Z level=info msg="Migration successfully executed" id="remove unique index from alert_rule on org_id, title columns" duration=1.43873ms policy-db-migrator | > upgrade 0120-pk_pdpstatistics.sql policy-pap | ssl.truststore.type = JKS kafka | [2024-04-18 23:14:55,201] INFO [Partition __consumer_offsets-48 broker=1] Log loaded for partition __consumer_offsets-48 with initial high watermark 0 (kafka.cluster.Partition) grafana | logger=migrator t=2024-04-18T23:14:24.286265937Z level=info msg="Executing migration" id="add index in alert_rule on org_id, namespase_uid and title columns" policy-db-migrator | -------------- policy-pap | transaction.timeout.ms = 60000 kafka | [2024-04-18 23:14:55,201] INFO [Broker id=1] Leader __consumer_offsets-48 with topic id Some(Ri5cls-BQlq9q6kFJBomtA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) grafana | logger=migrator t=2024-04-18T23:14:24.287299594Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, namespase_uid and title columns" duration=1.033257ms policy-db-migrator | ALTER TABLE pdpstatistics DROP PRIMARY KEY policy-pap | transactional.id = null kafka | [2024-04-18 23:14:55,214] INFO [LogLoader partition=__consumer_offsets-19, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) grafana | logger=migrator t=2024-04-18T23:14:24.290882612Z level=info msg="Executing migration" id="add dashboard_uid column to alert_rule" policy-db-migrator | -------------- policy-pap | value.serializer = class org.apache.kafka.common.serialization.StringSerializer kafka | [2024-04-18 23:14:55,217] INFO Created log for partition __consumer_offsets-19 in /var/lib/kafka/data/__consumer_offsets-19 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) grafana | logger=migrator t=2024-04-18T23:14:24.298297943Z level=info msg="Migration successfully executed" id="add dashboard_uid column to alert_rule" duration=7.41475ms policy-db-migrator | policy-pap | kafka | [2024-04-18 23:14:55,218] INFO [Partition __consumer_offsets-19 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-19 (kafka.cluster.Partition) grafana | logger=migrator t=2024-04-18T23:14:24.306427212Z level=info msg="Executing migration" id="add panel_id column to alert_rule" policy-db-migrator | policy-pap | [2024-04-18T23:14:54.339+00:00|INFO|KafkaProducer|main] [Producer clientId=producer-2] Instantiated an idempotent producer. 
kafka | [2024-04-18 23:14:55,219] INFO [Partition __consumer_offsets-19 broker=1] Log loaded for partition __consumer_offsets-19 with initial high watermark 0 (kafka.cluster.Partition)
grafana | logger=migrator t=2024-04-18T23:14:24.312729561Z level=info msg="Migration successfully executed" id="add panel_id column to alert_rule" duration=6.301579ms
policy-db-migrator | > upgrade 0130-pdpstatistics.sql
policy-pap | [2024-04-18T23:14:54.342+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1
kafka | [2024-04-18 23:14:55,219] INFO [Broker id=1] Leader __consumer_offsets-19 with topic id Some(Ri5cls-BQlq9q6kFJBomtA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
policy-db-migrator | --------------
policy-pap | [2024-04-18T23:14:54.342+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5
grafana | logger=migrator t=2024-04-18T23:14:24.316717321Z level=info msg="Executing migration" id="add index in alert_rule on org_id, dashboard_uid and panel_id columns"
kafka | [2024-04-18 23:14:55,227] INFO [LogLoader partition=__consumer_offsets-34, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
policy-db-migrator | ALTER TABLE pdpstatistics ADD COLUMN POLICYUNDEPLOYCOUNT BIGINT DEFAULT NULL AFTER POLICYEXECUTEDSUCCESSCOUNT, ADD COLUMN POLICYUNDEPLOYFAILCOUNT BIGINT DEFAULT NULL, ADD COLUMN POLICYUNDEPLOYSUCCESSCOUNT BIGINT DEFAULT NULL, ADD COLUMN ID BIGINT NOT NULL
policy-pap | [2024-04-18T23:14:54.342+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1713482094342
grafana | logger=migrator t=2024-04-18T23:14:24.317831753Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, dashboard_uid and panel_id columns" duration=1.113422ms
policy-db-migrator | --------------
kafka | [2024-04-18 23:14:55,228] INFO Created log for partition __consumer_offsets-34 in /var/lib/kafka/data/__consumer_offsets-34 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
policy-pap | [2024-04-18T23:14:54.342+00:00|INFO|InlineKafkaTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=192fff36-bd6d-4ee3-9df3-262c724178bf, alive=false, publisher=KafkaPublisherWrapper []]]: KAFKA SINK created
grafana | logger=migrator t=2024-04-18T23:14:24.32138884Z level=info msg="Executing migration" id="add rule_group_idx column to alert_rule"
policy-db-migrator |
kafka | [2024-04-18 23:14:55,228] INFO [Partition __consumer_offsets-34 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-34 (kafka.cluster.Partition)
policy-pap | [2024-04-18T23:14:54.342+00:00|INFO|ServiceManager|main] Policy PAP starting PAP Activator
grafana | logger=migrator t=2024-04-18T23:14:24.325778632Z level=info msg="Migration successfully executed" id="add rule_group_idx column to alert_rule" duration=4.389402ms
policy-db-migrator |
kafka | [2024-04-18 23:14:55,228] INFO [Partition __consumer_offsets-34 broker=1] Log loaded for partition __consumer_offsets-34 with initial high watermark 0 (kafka.cluster.Partition)
policy-pap | [2024-04-18T23:14:54.342+00:00|INFO|ServiceManager|main] Policy PAP starting PDP publisher
grafana | logger=migrator t=2024-04-18T23:14:24.329796265Z level=info msg="Executing migration" id="add is_paused column to alert_rule table"
policy-db-migrator | > upgrade 0140-pk_pdpstatistics.sql
kafka | [2024-04-18 23:14:55,228] INFO [Broker id=1] Leader __consumer_offsets-34 with topic id Some(Ri5cls-BQlq9q6kFJBomtA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
policy-pap | [2024-04-18T23:14:54.344+00:00|INFO|ServiceManager|main] Policy PAP starting Policy Notification publisher
grafana | logger=migrator t=2024-04-18T23:14:24.33405273Z level=info msg="Migration successfully executed" id="add is_paused column to alert_rule table" duration=4.255745ms
policy-db-migrator | --------------
kafka | [2024-04-18 23:14:55,237] INFO [LogLoader partition=__consumer_offsets-4, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
policy-pap | [2024-04-18T23:14:54.344+00:00|INFO|ServiceManager|main] Policy PAP starting PDP update timers
grafana | logger=migrator t=2024-04-18T23:14:24.338720048Z level=info msg="Executing migration" id="fix is_paused column for alert_rule table"
policy-db-migrator | UPDATE pdpstatistics as p JOIN (SELECT name, version, timeStamp, ROW_NUMBER() OVER (ORDER BY timeStamp ASC) AS row_num FROM pdpstatistics GROUP BY name, version, timeStamp) AS t ON (p.name=t.name AND p.version=t.version AND p.timeStamp = t.timeStamp) SET p.id=t.row_num
kafka | [2024-04-18 23:14:55,238] INFO Created log for partition __consumer_offsets-4 in /var/lib/kafka/data/__consumer_offsets-4 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
policy-pap | [2024-04-18T23:14:54.347+00:00|INFO|ServiceManager|main] Policy PAP starting PDP state-change timers
grafana | logger=migrator t=2024-04-18T23:14:24.338791142Z level=info msg="Migration successfully executed" id="fix is_paused column for alert_rule table" duration=65.774µs
policy-db-migrator | --------------
kafka | [2024-04-18 23:14:55,238] INFO [Partition __consumer_offsets-4 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-4 (kafka.cluster.Partition)
policy-pap | [2024-04-18T23:14:54.348+00:00|INFO|ServiceManager|main] Policy PAP starting PDP modification lock
grafana | logger=migrator t=2024-04-18T23:14:24.342983164Z level=info msg="Executing migration" id="create alert_rule_version table"
policy-db-migrator |
kafka | [2024-04-18 23:14:55,238] INFO [Partition __consumer_offsets-4 broker=1] Log loaded for partition __consumer_offsets-4 with initial high watermark 0 (kafka.cluster.Partition)
policy-pap | [2024-04-18T23:14:54.348+00:00|INFO|ServiceManager|main] Policy PAP starting PDP modification requests
grafana | logger=migrator t=2024-04-18T23:14:24.344114257Z level=info msg="Migration successfully executed" id="create alert_rule_version table" duration=1.131102ms
policy-db-migrator | --------------
kafka | [2024-04-18 23:14:55,239] INFO [Broker id=1] Leader __consumer_offsets-4 with topic id Some(Ri5cls-BQlq9q6kFJBomtA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
policy-pap | [2024-04-18T23:14:54.349+00:00|INFO|ServiceManager|main] Policy PAP starting PDP expiration timer
grafana | logger=migrator t=2024-04-18T23:14:24.348750983Z level=info msg="Executing migration" id="add index in alert_rule_version table on rule_org_id, rule_uid and version columns"
policy-db-migrator | ALTER TABLE pdpstatistics ADD CONSTRAINT PK_PDPSTATISTICS PRIMARY KEY (ID, name, version)
kafka | [2024-04-18 23:14:55,245] INFO [LogLoader partition=__consumer_offsets-11, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
policy-pap | [2024-04-18T23:14:54.351+00:00|INFO|TimerManager|Thread-9] timer manager update started
grafana | logger=migrator t=2024-04-18T23:14:24.349866625Z level=info msg="Migration successfully executed" id="add index in alert_rule_version table on rule_org_id, rule_uid and version columns" duration=1.115262ms
policy-db-migrator | --------------
kafka | [2024-04-18 23:14:55,245] INFO Created log for partition __consumer_offsets-11 in /var/lib/kafka/data/__consumer_offsets-11 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
policy-pap | [2024-04-18T23:14:54.351+00:00|INFO|ServiceManager|main] Policy PAP started
grafana | logger=migrator t=2024-04-18T23:14:24.353864936Z level=info msg="Executing migration" id="add index in alert_rule_version table on rule_org_id, rule_namespace_uid and rule_group columns"
policy-db-migrator |
kafka | [2024-04-18 23:14:55,245] INFO [Partition __consumer_offsets-11 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-11 (kafka.cluster.Partition)
policy-pap | [2024-04-18T23:14:54.353+00:00|INFO|PolicyPapApplication|main] Started PolicyPapApplication in 10.125 seconds (process running for 10.732)
grafana | logger=migrator t=2024-04-18T23:14:24.354969237Z level=info msg="Migration successfully executed" id="add index in alert_rule_version table on rule_org_id, rule_namespace_uid and rule_group columns" duration=1.102991ms
policy-db-migrator |
kafka | [2024-04-18 23:14:55,245] INFO [Partition __consumer_offsets-11 broker=1] Log loaded for partition __consumer_offsets-11 with initial high watermark 0 (kafka.cluster.Partition)
policy-pap | [2024-04-18T23:14:54.357+00:00|INFO|TimerManager|Thread-10] timer manager state-change started
grafana | logger=migrator t=2024-04-18T23:14:24.359299677Z level=info msg="Executing migration" id="alter alert_rule_version table data column to mediumtext in mysql"
policy-db-migrator | > upgrade 0150-pdpstatistics.sql
kafka | [2024-04-18 23:14:55,245] INFO [Broker id=1] Leader __consumer_offsets-11 with topic id Some(Ri5cls-BQlq9q6kFJBomtA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
policy-pap | [2024-04-18T23:14:54.762+00:00|INFO|Metadata|kafka-producer-network-thread | producer-2] [Producer clientId=producer-2] Cluster ID: 3CcxO9QMSqWFRVbl82UfdQ
grafana | logger=migrator t=2024-04-18T23:14:24.35936664Z level=info msg="Migration successfully executed" id="alter alert_rule_version table data column to mediumtext in mysql" duration=67.394µs
policy-db-migrator | --------------
kafka | [2024-04-18 23:14:55,254] INFO [LogLoader partition=__consumer_offsets-26, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
policy-pap | [2024-04-18T23:14:54.762+00:00|INFO|Metadata|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Cluster ID: 3CcxO9QMSqWFRVbl82UfdQ
grafana | logger=migrator t=2024-04-18T23:14:24.362632401Z level=info msg="Executing migration" id="add column for to alert_rule_version"
policy-db-migrator | ALTER TABLE pdpstatistics MODIFY COLUMN timeStamp datetime(6) NULL
kafka | [2024-04-18 23:14:55,255] INFO Created log for partition __consumer_offsets-26 in /var/lib/kafka/data/__consumer_offsets-26 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
policy-pap | [2024-04-18T23:14:54.761+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 2 : {policy-pdp-pap=UNKNOWN_TOPIC_OR_PARTITION}
grafana | logger=migrator t=2024-04-18T23:14:24.369005053Z level=info msg="Migration successfully executed" id="add column for to alert_rule_version" duration=6.370582ms
policy-db-migrator | --------------
kafka | [2024-04-18 23:14:55,256] INFO [Partition __consumer_offsets-26 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-26 (kafka.cluster.Partition)
policy-pap | [2024-04-18T23:14:54.762+00:00|INFO|Metadata|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Cluster ID: 3CcxO9QMSqWFRVbl82UfdQ
grafana | logger=migrator t=2024-04-18T23:14:24.373170884Z level=info msg="Executing migration" id="add column annotations to alert_rule_version"
policy-db-migrator |
kafka | [2024-04-18 23:14:55,256] INFO [Partition __consumer_offsets-26 broker=1] Log loaded for partition __consumer_offsets-26 with initial high watermark 0 (kafka.cluster.Partition)
policy-pap | [2024-04-18T23:14:54.825+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-deefd98f-1600-442c-a15a-d2ceba267151-3, groupId=deefd98f-1600-442c-a15a-d2ceba267151] Error while fetching metadata with correlation id 2 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
grafana | logger=migrator t=2024-04-18T23:14:24.380244575Z level=info msg="Migration successfully executed" id="add column annotations to alert_rule_version" duration=7.073231ms
policy-db-migrator |
kafka | [2024-04-18 23:14:55,257] INFO [Broker id=1] Leader __consumer_offsets-26 with topic id Some(Ri5cls-BQlq9q6kFJBomtA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
policy-pap | [2024-04-18T23:14:54.826+00:00|INFO|Metadata|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-deefd98f-1600-442c-a15a-d2ceba267151-3, groupId=deefd98f-1600-442c-a15a-d2ceba267151] Cluster ID: 3CcxO9QMSqWFRVbl82UfdQ
grafana | logger=migrator t=2024-04-18T23:14:24.385922969Z level=info msg="Executing migration" id="add column labels to alert_rule_version"
policy-db-migrator | > upgrade 0160-jpapdpstatistics_enginestats.sql
kafka | [2024-04-18 23:14:55,264] INFO [LogLoader partition=__consumer_offsets-49, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
policy-pap | [2024-04-18T23:14:54.883+00:00|INFO|TransactionManager|kafka-producer-network-thread | producer-2] [Producer clientId=producer-2] ProducerId set to 0 with epoch 0
grafana | logger=migrator t=2024-04-18T23:14:24.3920841Z level=info msg="Migration successfully executed" id="add column labels to alert_rule_version" duration=6.160891ms
policy-db-migrator | --------------
kafka | [2024-04-18 23:14:55,265] INFO Created log for partition __consumer_offsets-49 in /var/lib/kafka/data/__consumer_offsets-49 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
policy-pap | [2024-04-18T23:14:54.883+00:00|INFO|TransactionManager|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] ProducerId set to 1 with epoch 0
grafana | logger=migrator t=2024-04-18T23:14:24.397387313Z level=info msg="Executing migration" id="add rule_group_idx column to alert_rule_version"
policy-db-migrator | ALTER TABLE jpapdpstatistics_enginestats ADD COLUMN ID BIGINT DEFAULT NULL AFTER UPTIME
kafka | [2024-04-18 23:14:55,265] INFO [Partition __consumer_offsets-49 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-49 (kafka.cluster.Partition)
policy-pap | [2024-04-18T23:14:54.889+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 4 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
grafana | logger=migrator t=2024-04-18T23:14:24.402334157Z level=info msg="Migration successfully executed" id="add rule_group_idx column to alert_rule_version" duration=4.951334ms
policy-db-migrator | --------------
kafka | [2024-04-18 23:14:55,265] INFO [Partition __consumer_offsets-49 broker=1] Log loaded for partition __consumer_offsets-49 with initial high watermark 0 (kafka.cluster.Partition)
policy-pap | [2024-04-18T23:14:54.976+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-deefd98f-1600-442c-a15a-d2ceba267151-3, groupId=deefd98f-1600-442c-a15a-d2ceba267151] Error while fetching metadata with correlation id 4 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
grafana | logger=migrator t=2024-04-18T23:14:24.405591897Z level=info msg="Executing migration" id="add is_paused column to alert_rule_versions table"
policy-db-migrator |
kafka | [2024-04-18 23:14:55,265] INFO [Broker id=1] Leader __consumer_offsets-49 with topic id Some(Ri5cls-BQlq9q6kFJBomtA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
policy-pap | [2024-04-18T23:14:54.996+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 6 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
grafana | logger=migrator t=2024-04-18T23:14:24.411341295Z level=info msg="Migration successfully executed" id="add is_paused column to alert_rule_versions table" duration=5.747768ms
policy-db-migrator |
kafka | [2024-04-18 23:14:55,274] INFO [LogLoader partition=__consumer_offsets-39, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
policy-pap | [2024-04-18T23:14:55.082+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-deefd98f-1600-442c-a15a-d2ceba267151-3, groupId=deefd98f-1600-442c-a15a-d2ceba267151] Error while fetching metadata with correlation id 6 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
grafana | logger=migrator t=2024-04-18T23:14:24.422441079Z level=info msg="Executing migration" id="fix is_paused column for alert_rule_version table"
policy-db-migrator | > upgrade 0170-jpapdpstatistics_enginestats.sql
kafka | [2024-04-18 23:14:55,275] INFO Created log for partition __consumer_offsets-39 in /var/lib/kafka/data/__consumer_offsets-39 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
policy-pap | [2024-04-18T23:14:55.102+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 8 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
grafana | logger=migrator t=2024-04-18T23:14:24.422577076Z level=info msg="Migration successfully executed" id="fix is_paused column for alert_rule_version table" duration=147.718µs
policy-db-migrator | --------------
kafka | [2024-04-18 23:14:55,275] INFO [Partition __consumer_offsets-39 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-39 (kafka.cluster.Partition)
policy-pap | [2024-04-18T23:14:55.195+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-deefd98f-1600-442c-a15a-d2ceba267151-3, groupId=deefd98f-1600-442c-a15a-d2ceba267151] Error while fetching metadata with correlation id 8 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
grafana | logger=migrator t=2024-04-18T23:14:24.427596784Z level=info msg="Executing migration" id=create_alert_configuration_table
policy-db-migrator | UPDATE jpapdpstatistics_enginestats a
kafka | [2024-04-18 23:14:55,276] INFO [Partition __consumer_offsets-39 broker=1] Log loaded for partition __consumer_offsets-39 with initial high watermark 0 (kafka.cluster.Partition)
policy-pap | [2024-04-18T23:14:55.210+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 10 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
grafana | logger=migrator t=2024-04-18T23:14:24.428478433Z level=info msg="Migration successfully executed" id=create_alert_configuration_table duration=882.259µs
policy-db-migrator | JOIN pdpstatistics b
kafka | [2024-04-18 23:14:55,276] INFO [Broker id=1] Leader __consumer_offsets-39 with topic id Some(Ri5cls-BQlq9q6kFJBomtA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
policy-pap | [2024-04-18T23:14:55.301+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-deefd98f-1600-442c-a15a-d2ceba267151-3, groupId=deefd98f-1600-442c-a15a-d2ceba267151] Error while fetching metadata with correlation id 10 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
grafana | logger=migrator t=2024-04-18T23:14:24.433603676Z level=info msg="Executing migration" id="Add column default in alert_configuration"
policy-db-migrator | ON a.name = b.name AND a.version = b.version AND a.timeStamp = b.timeStamp
kafka | [2024-04-18 23:14:55,284] INFO [LogLoader partition=__consumer_offsets-9, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
policy-pap | [2024-04-18T23:14:55.325+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 12 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
grafana | logger=migrator t=2024-04-18T23:14:24.439081929Z level=info msg="Migration successfully executed" id="Add column default in alert_configuration" duration=5.478533ms
policy-db-migrator | SET a.id = b.id
kafka | [2024-04-18 23:14:55,284] INFO Created log for partition __consumer_offsets-9 in /var/lib/kafka/data/__consumer_offsets-9 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
policy-pap | [2024-04-18T23:14:55.419+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-deefd98f-1600-442c-a15a-d2ceba267151-3, groupId=deefd98f-1600-442c-a15a-d2ceba267151] Error while fetching metadata with correlation id 12 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
grafana | logger=migrator t=2024-04-18T23:14:24.443412309Z level=info msg="Executing migration" id="alert alert_configuration alertmanager_configuration column from TEXT to MEDIUMTEXT if mysql"
policy-db-migrator | --------------
kafka | [2024-04-18 23:14:55,284] INFO [Partition __consumer_offsets-9 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-9 (kafka.cluster.Partition)
policy-pap | [2024-04-18T23:14:55.433+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 14 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
grafana | logger=migrator t=2024-04-18T23:14:24.443463032Z level=info msg="Migration successfully executed" id="alert alert_configuration alertmanager_configuration column from TEXT to MEDIUMTEXT if mysql" duration=51.283µs
policy-db-migrator |
kafka | [2024-04-18 23:14:55,285] INFO [Partition __consumer_offsets-9 broker=1] Log loaded for partition __consumer_offsets-9 with initial high watermark 0 (kafka.cluster.Partition)
policy-pap | [2024-04-18T23:14:55.526+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-deefd98f-1600-442c-a15a-d2ceba267151-3, groupId=deefd98f-1600-442c-a15a-d2ceba267151] Error while fetching metadata with correlation id 14 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
grafana | logger=migrator t=2024-04-18T23:14:24.449931229Z level=info msg="Executing migration" id="add column org_id in alert_configuration"
policy-db-migrator |
kafka | [2024-04-18 23:14:55,285] INFO [Broker id=1] Leader __consumer_offsets-9 with topic id Some(Ri5cls-BQlq9q6kFJBomtA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
policy-pap | [2024-04-18T23:14:55.538+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 16 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
grafana | logger=migrator t=2024-04-18T23:14:24.46061323Z level=info msg="Migration successfully executed" id="add column org_id in alert_configuration" duration=10.684411ms
policy-db-migrator | > upgrade 0180-jpapdpstatistics_enginestats.sql
kafka | [2024-04-18 23:14:55,291] INFO [LogLoader partition=__consumer_offsets-24, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
policy-pap | [2024-04-18T23:14:55.633+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-deefd98f-1600-442c-a15a-d2ceba267151-3, groupId=deefd98f-1600-442c-a15a-d2ceba267151] Error while fetching metadata with correlation id 16 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
grafana | logger=migrator t=2024-04-18T23:14:24.466344067Z level=info msg="Executing migration" id="add index in alert_configuration table on org_id column"
policy-db-migrator | --------------
policy-pap | [2024-04-18T23:14:55.652+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 18 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
kafka | [2024-04-18 23:14:55,292] INFO Created log for partition __consumer_offsets-24 in /var/lib/kafka/data/__consumer_offsets-24 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
grafana | logger=migrator t=2024-04-18T23:14:24.467081998Z level=info msg="Migration successfully executed" id="add index in alert_configuration table on org_id column" duration=738.151µs
policy-db-migrator | ALTER TABLE jpapdpstatistics_enginestats DROP COLUMN timeStamp
policy-pap | [2024-04-18T23:14:55.746+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-deefd98f-1600-442c-a15a-d2ceba267151-3, groupId=deefd98f-1600-442c-a15a-d2ceba267151] Error while fetching metadata with correlation id 18 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
kafka | [2024-04-18 23:14:55,292] INFO [Partition __consumer_offsets-24 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-24 (kafka.cluster.Partition)
grafana | logger=migrator t=2024-04-18T23:14:24.47037044Z level=info msg="Executing migration" id="add configuration_hash column to alert_configuration"
policy-db-migrator | --------------
policy-pap | [2024-04-18T23:14:55.756+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 20 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
kafka | [2024-04-18 23:14:55,292] INFO [Partition __consumer_offsets-24 broker=1] Log loaded for partition __consumer_offsets-24 with initial high watermark 0 (kafka.cluster.Partition)
grafana | logger=migrator t=2024-04-18T23:14:24.476583413Z level=info msg="Migration successfully executed" id="add configuration_hash column to alert_configuration" duration=6.212213ms
policy-db-migrator |
policy-pap | [2024-04-18T23:14:55.764+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Discovered group coordinator kafka:9092 (id: 2147483646 rack: null)
kafka | [2024-04-18 23:14:55,293] INFO [Broker id=1] Leader __consumer_offsets-24 with topic id Some(Ri5cls-BQlq9q6kFJBomtA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
grafana | logger=migrator t=2024-04-18T23:14:24.479683135Z level=info msg="Executing migration" id=create_ngalert_configuration_table
policy-db-migrator |
policy-pap | [2024-04-18T23:14:55.771+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] (Re-)joining group
kafka | [2024-04-18 23:14:55,302] INFO [LogLoader partition=__consumer_offsets-31, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
grafana | logger=migrator t=2024-04-18T23:14:24.480448817Z level=info msg="Migration successfully executed" id=create_ngalert_configuration_table duration=763.192µs
policy-db-migrator | > upgrade 0190-jpapolicyaudit.sql
policy-pap | [2024-04-18T23:14:55.800+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Request joining group due to: need to re-join with the given member-id: consumer-policy-pap-4-ce004af5-7fb1-465b-9dca-4f86ddcfcd1b
kafka | [2024-04-18 23:14:55,303] INFO Created log for partition __consumer_offsets-31 in /var/lib/kafka/data/__consumer_offsets-31 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
grafana | logger=migrator t=2024-04-18T23:14:24.486593217Z level=info msg="Executing migration" id="add index in ngalert_configuration on org_id column"
policy-db-migrator | --------------
policy-pap | [2024-04-18T23:14:55.800+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Request joining group due to: rebalance failed due to 'The group member needs to have a valid member id before actually entering a consumer group.' (MemberIdRequiredException)
kafka | [2024-04-18 23:14:55,303] INFO [Partition __consumer_offsets-31 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-31 (kafka.cluster.Partition)
grafana | logger=migrator t=2024-04-18T23:14:24.488021296Z level=info msg="Migration successfully executed" id="add index in ngalert_configuration on org_id column" duration=1.433669ms
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapolicyaudit (ACTION INT DEFAULT NULL, PDPGROUP VARCHAR(255) NULL, PDPTYPE VARCHAR(255) NULL, TIMESTAMP datetime DEFAULT NULL, USER VARCHAR(255) NULL, ID BIGINT NOT NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_JPAPOLICYAUDIT (ID, name, version))
policy-pap | [2024-04-18T23:14:55.800+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] (Re-)joining group
kafka | [2024-04-18 23:14:55,303] INFO [Partition __consumer_offsets-31 broker=1] Log loaded for partition __consumer_offsets-31 with initial high watermark 0 (kafka.cluster.Partition)
grafana | logger=migrator t=2024-04-18T23:14:24.492284082Z level=info msg="Executing migration" id="add column send_alerts_to in ngalert_configuration"
policy-db-migrator | --------------
policy-pap | [2024-04-18T23:14:55.855+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-deefd98f-1600-442c-a15a-d2ceba267151-3, groupId=deefd98f-1600-442c-a15a-d2ceba267151] Discovered group coordinator kafka:9092 (id: 2147483646 rack: null)
kafka | [2024-04-18 23:14:55,303] INFO [Broker id=1] Leader __consumer_offsets-31 with topic id Some(Ri5cls-BQlq9q6kFJBomtA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
grafana | logger=migrator t=2024-04-18T23:14:24.499342292Z level=info msg="Migration successfully executed" id="add column send_alerts_to in ngalert_configuration" duration=7.05777ms
policy-db-migrator |
policy-pap | [2024-04-18T23:14:55.857+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-deefd98f-1600-442c-a15a-d2ceba267151-3, groupId=deefd98f-1600-442c-a15a-d2ceba267151] (Re-)joining group
kafka | [2024-04-18 23:14:55,310] INFO [LogLoader partition=__consumer_offsets-46, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
grafana | logger=migrator t=2024-04-18T23:14:24.506728511Z level=info msg="Executing migration" id="create provenance_type table"
policy-db-migrator |
policy-pap | [2024-04-18T23:14:55.862+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-deefd98f-1600-442c-a15a-d2ceba267151-3, groupId=deefd98f-1600-442c-a15a-d2ceba267151] Request joining group due to: need to re-join with the given member-id: consumer-deefd98f-1600-442c-a15a-d2ceba267151-3-2efa9603-1d5c-4957-829c-f970e337f7f3
kafka | [2024-04-18 23:14:55,311] INFO Created log for partition __consumer_offsets-46 in /var/lib/kafka/data/__consumer_offsets-46 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
grafana | logger=migrator t=2024-04-18T23:14:24.507498623Z level=info msg="Migration successfully executed" id="create provenance_type table" duration=770.492µs
policy-db-migrator | > upgrade 0200-JpaPolicyAuditIndex_timestamp.sql
policy-pap | [2024-04-18T23:14:55.862+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-deefd98f-1600-442c-a15a-d2ceba267151-3, groupId=deefd98f-1600-442c-a15a-d2ceba267151] Request joining group due to: rebalance failed due to 'The group member needs to have a valid member id before actually entering a consumer group.' (MemberIdRequiredException)
kafka | [2024-04-18 23:14:55,311] INFO [Partition __consumer_offsets-46 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-46 (kafka.cluster.Partition)
policy-db-migrator | --------------
grafana | logger=migrator t=2024-04-18T23:14:24.519529959Z level=info msg="Executing migration" id="add index to uniquify (record_key, record_type, org_id) columns"
policy-pap | [2024-04-18T23:14:55.862+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-deefd98f-1600-442c-a15a-d2ceba267151-3, groupId=deefd98f-1600-442c-a15a-d2ceba267151] (Re-)joining group
kafka | [2024-04-18 23:14:55,311] INFO [Partition __consumer_offsets-46 broker=1] Log loaded for partition __consumer_offsets-46 with initial high watermark 0 (kafka.cluster.Partition)
policy-db-migrator | CREATE INDEX JpaPolicyAuditIndex_timestamp ON jpapolicyaudit(TIMESTAMP)
grafana | logger=migrator t=2024-04-18T23:14:24.520965748Z level=info msg="Migration successfully executed" id="add index to uniquify (record_key, record_type, org_id) columns" duration=1.436679ms
policy-pap | [2024-04-18T23:14:58.827+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Successfully joined group with generation Generation{generationId=1, memberId='consumer-policy-pap-4-ce004af5-7fb1-465b-9dca-4f86ddcfcd1b', protocol='range'}
policy-db-migrator | --------------
grafana | logger=migrator t=2024-04-18T23:14:24.52769149Z level=info msg="Executing migration" id="create alert_image table"
kafka | [2024-04-18 23:14:55,311] INFO [Broker id=1] Leader __consumer_offsets-46 with topic id Some(Ri5cls-BQlq9q6kFJBomtA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
policy-pap | [2024-04-18T23:14:58.833+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Finished assignment for group at generation 1: {consumer-policy-pap-4-ce004af5-7fb1-465b-9dca-4f86ddcfcd1b=Assignment(partitions=[policy-pdp-pap-0])}
policy-db-migrator |
grafana | logger=migrator t=2024-04-18T23:14:24.528920068Z level=info msg="Migration successfully executed" id="create alert_image table" duration=1.234808ms
kafka | [2024-04-18 23:14:55,319] INFO [LogLoader partition=__consumer_offsets-1, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
policy-pap | [2024-04-18T23:14:58.872+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-deefd98f-1600-442c-a15a-d2ceba267151-3, groupId=deefd98f-1600-442c-a15a-d2ceba267151] Successfully joined group with generation Generation{generationId=1, memberId='consumer-deefd98f-1600-442c-a15a-d2ceba267151-3-2efa9603-1d5c-4957-829c-f970e337f7f3', protocol='range'}
policy-db-migrator |
grafana | logger=migrator t=2024-04-18T23:14:24.532595702Z level=info msg="Executing migration" id="add unique index on token to alert_image table"
kafka | [2024-04-18 23:14:55,320] INFO Created log for partition __consumer_offsets-1 in /var/lib/kafka/data/__consumer_offsets-1 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
policy-pap | [2024-04-18T23:14:58.872+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-deefd98f-1600-442c-a15a-d2ceba267151-3, groupId=deefd98f-1600-442c-a15a-d2ceba267151] Finished assignment for group at generation 1: {consumer-deefd98f-1600-442c-a15a-d2ceba267151-3-2efa9603-1d5c-4957-829c-f970e337f7f3=Assignment(partitions=[policy-pdp-pap-0])}
policy-db-migrator | > upgrade 0210-sequence.sql
grafana | logger=migrator t=2024-04-18T23:14:24.533907104Z level=info msg="Migration successfully executed" id="add unique index on token to alert_image table" duration=1.312543ms
kafka | [2024-04-18 23:14:55,320] INFO [Partition __consumer_offsets-1 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-1 (kafka.cluster.Partition)
policy-pap | [2024-04-18T23:14:58.881+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Successfully synced group in generation Generation{generationId=1, memberId='consumer-policy-pap-4-ce004af5-7fb1-465b-9dca-4f86ddcfcd1b', protocol='range'}
policy-db-migrator | --------------
grafana | logger=migrator t=2024-04-18T23:14:24.539022957Z level=info msg="Executing migration" id="support longer URLs in alert_image table"
kafka | [2024-04-18 23:14:55,320] INFO [Partition __consumer_offsets-1 broker=1] Log loaded for partition __consumer_offsets-1 with initial high watermark 0 (kafka.cluster.Partition)
policy-pap | [2024-04-18T23:14:58.881+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Notifying assignor about the new Assignment(partitions=[policy-pdp-pap-0])
policy-db-migrator | CREATE TABLE IF NOT EXISTS sequence (SEQ_NAME VARCHAR(50) NOT NULL, SEQ_COUNT DECIMAL(38) DEFAULT NULL, PRIMARY KEY PK_SEQUENCE (SEQ_NAME))
grafana | logger=migrator t=2024-04-18T23:14:24.539153054Z level=info msg="Migration successfully executed" id="support longer URLs in alert_image table" duration=135.507µs
kafka | [2024-04-18 23:14:55,320] INFO [Broker id=1] Leader __consumer_offsets-1 with topic id Some(Ri5cls-BQlq9q6kFJBomtA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1.
(state.change.logger) policy-db-migrator | -------------- policy-pap | [2024-04-18T23:14:58.883+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-deefd98f-1600-442c-a15a-d2ceba267151-3, groupId=deefd98f-1600-442c-a15a-d2ceba267151] Successfully synced group in generation Generation{generationId=1, memberId='consumer-deefd98f-1600-442c-a15a-d2ceba267151-3-2efa9603-1d5c-4957-829c-f970e337f7f3', protocol='range'} grafana | logger=migrator t=2024-04-18T23:14:24.54251389Z level=info msg="Executing migration" id=create_alert_configuration_history_table kafka | [2024-04-18 23:14:55,332] INFO [LogLoader partition=__consumer_offsets-16, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) policy-db-migrator | policy-pap | [2024-04-18T23:14:58.883+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-deefd98f-1600-442c-a15a-d2ceba267151-3, groupId=deefd98f-1600-442c-a15a-d2ceba267151] Notifying assignor about the new Assignment(partitions=[policy-pdp-pap-0]) grafana | logger=migrator t=2024-04-18T23:14:24.543648483Z level=info msg="Migration successfully executed" id=create_alert_configuration_history_table duration=1.134503ms kafka | [2024-04-18 23:14:55,333] INFO Created log for partition __consumer_offsets-16 in /var/lib/kafka/data/__consumer_offsets-16 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) policy-db-migrator | policy-pap | [2024-04-18T23:14:58.886+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Adding newly assigned partitions: policy-pdp-pap-0 grafana | logger=migrator t=2024-04-18T23:14:24.547542138Z level=info msg="Executing migration" id="drop non-unique orgID index on alert_configuration" kafka | [2024-04-18 23:14:55,333] INFO [Partition __consumer_offsets-16 broker=1] No 
checkpointed highwatermark is found for partition __consumer_offsets-16 (kafka.cluster.Partition) policy-db-migrator | > upgrade 0220-sequence.sql policy-pap | [2024-04-18T23:14:58.887+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-deefd98f-1600-442c-a15a-d2ceba267151-3, groupId=deefd98f-1600-442c-a15a-d2ceba267151] Adding newly assigned partitions: policy-pdp-pap-0 grafana | logger=migrator t=2024-04-18T23:14:24.54847636Z level=info msg="Migration successfully executed" id="drop non-unique orgID index on alert_configuration" duration=935.542µs kafka | [2024-04-18 23:14:55,333] INFO [Partition __consumer_offsets-16 broker=1] Log loaded for partition __consumer_offsets-16 with initial high watermark 0 (kafka.cluster.Partition) policy-db-migrator | -------------- policy-pap | [2024-04-18T23:14:58.912+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-deefd98f-1600-442c-a15a-d2ceba267151-3, groupId=deefd98f-1600-442c-a15a-d2ceba267151] Found no committed offset for partition policy-pdp-pap-0 grafana | logger=migrator t=2024-04-18T23:14:24.556498584Z level=info msg="Executing migration" id="drop unique orgID index on alert_configuration if exists" kafka | [2024-04-18 23:14:55,333] INFO [Broker id=1] Leader __consumer_offsets-16 with topic id Some(Ri5cls-BQlq9q6kFJBomtA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) policy-db-migrator | INSERT INTO sequence(SEQ_NAME, SEQ_COUNT) VALUES('SEQ_GEN', (SELECT IFNULL(max(id),0) FROM pdpstatistics)) policy-pap | [2024-04-18T23:14:58.912+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Found no committed offset for partition policy-pdp-pap-0 grafana | logger=migrator t=2024-04-18T23:14:24.556782299Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop unique orgID index on alert_configuration if exists" kafka | [2024-04-18 23:14:55,339] INFO [LogLoader partition=__consumer_offsets-2, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) policy-db-migrator | -------------- policy-pap | [2024-04-18T23:14:58.936+00:00|INFO|SubscriptionState|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-deefd98f-1600-442c-a15a-d2ceba267151-3, groupId=deefd98f-1600-442c-a15a-d2ceba267151] Resetting offset for partition policy-pdp-pap-0 to position FetchPosition{offset=1, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[kafka:9092 (id: 1 rack: null)], epoch=0}}. 
grafana | logger=migrator t=2024-04-18T23:14:24.563803458Z level=info msg="Executing migration" id="extract alertmanager configuration history to separate table"
kafka | [2024-04-18 23:14:55,340] INFO Created log for partition __consumer_offsets-2 in /var/lib/kafka/data/__consumer_offsets-2 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
policy-db-migrator |
policy-pap | [2024-04-18T23:14:58.937+00:00|INFO|SubscriptionState|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Resetting offset for partition policy-pdp-pap-0 to position FetchPosition{offset=1, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[kafka:9092 (id: 1 rack: null)], epoch=0}}.
grafana | logger=migrator t=2024-04-18T23:14:24.564128956Z level=info msg="Migration successfully executed" id="extract alertmanager configuration history to separate table" duration=323.117µs
kafka | [2024-04-18 23:14:55,340] INFO [Partition __consumer_offsets-2 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-2 (kafka.cluster.Partition)
policy-db-migrator |
policy-pap | [2024-04-18T23:15:00.553+00:00|INFO|[/policy/pap/v1]|http-nio-6969-exec-2] Initializing Spring DispatcherServlet 'dispatcherServlet'
grafana | logger=migrator t=2024-04-18T23:14:24.568236753Z level=info msg="Executing migration" id="add unique index on orgID to alert_configuration"
kafka | [2024-04-18 23:14:55,340] INFO [Partition __consumer_offsets-2 broker=1] Log loaded for partition __consumer_offsets-2 with initial high watermark 0 (kafka.cluster.Partition)
policy-db-migrator | > upgrade 0100-jpatoscapolicy_targets.sql
policy-pap | [2024-04-18T23:15:00.553+00:00|INFO|DispatcherServlet|http-nio-6969-exec-2] Initializing Servlet 'dispatcherServlet'
grafana | logger=migrator t=2024-04-18T23:14:24.568967493Z level=info msg="Migration successfully executed" id="add unique index on orgID to alert_configuration" duration=730.74µs
kafka | [2024-04-18 23:14:55,340] INFO [Broker id=1] Leader __consumer_offsets-2 with topic id Some(Ri5cls-BQlq9q6kFJBomtA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
policy-db-migrator | --------------
policy-pap | [2024-04-18T23:15:00.554+00:00|INFO|DispatcherServlet|http-nio-6969-exec-2] Completed initialization in 1 ms
grafana | logger=migrator t=2024-04-18T23:14:24.57144348Z level=info msg="Executing migration" id="add last_applied column to alert_configuration_history"
kafka | [2024-04-18 23:14:55,348] INFO [LogLoader partition=__consumer_offsets-25, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
policy-db-migrator | ALTER TABLE jpatoscapolicy_targets ADD COLUMN toscaPolicyName VARCHAR(120) NOT NULL, ADD COLUMN toscaPolicyVersion VARCHAR(20) NOT NULL, ADD CONSTRAINT PK_JPATOSCAPOLICY_TARGETS PRIMARY KEY (toscaPolicyName, toscaPolicyVersion)
policy-pap | [2024-04-18T23:15:16.330+00:00|INFO|OrderedServiceImpl|KAFKA-source-policy-heartbeat] ***** OrderedServiceImpl implementers:
grafana | logger=migrator t=2024-04-18T23:14:24.576900882Z level=info msg="Migration successfully executed" id="add last_applied column to alert_configuration_history" duration=5.456512ms
kafka | [2024-04-18 23:14:55,349] INFO Created log for partition __consumer_offsets-25 in /var/lib/kafka/data/__consumer_offsets-25 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
policy-db-migrator | --------------
policy-pap | []
grafana | logger=migrator t=2024-04-18T23:14:24.581085584Z level=info msg="Executing migration" id="create library_element table v1"
kafka | [2024-04-18 23:14:55,349] INFO [Partition __consumer_offsets-25 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-25 (kafka.cluster.Partition)
policy-db-migrator |
policy-pap | [2024-04-18T23:15:16.330+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat]
grafana | logger=migrator t=2024-04-18T23:14:24.582158553Z level=info msg="Migration successfully executed" id="create library_element table v1" duration=1.07306ms
kafka | [2024-04-18 23:14:55,349] INFO [Partition __consumer_offsets-25 broker=1] Log loaded for partition __consumer_offsets-25 with initial high watermark 0 (kafka.cluster.Partition)
policy-db-migrator |
policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"a80741ec-ec1d-4c24-9792-e262aa00f81d","timestampMs":1713482116293,"name":"apex-d14c33eb-7783-4d4b-98e1-4d65e14a94e4","pdpGroup":"defaultGroup"}
grafana | logger=migrator t=2024-04-18T23:14:24.585135047Z level=info msg="Executing migration" id="add index library_element org_id-folder_id-name-kind"
kafka | [2024-04-18 23:14:55,349] INFO [Broker id=1] Leader __consumer_offsets-25 with topic id Some(Ri5cls-BQlq9q6kFJBomtA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
policy-db-migrator | > upgrade 0110-jpatoscapolicytype_targets.sql
policy-pap | [2024-04-18T23:15:16.331+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
grafana | logger=migrator t=2024-04-18T23:14:24.586183985Z level=info msg="Migration successfully executed" id="add index library_element org_id-folder_id-name-kind" duration=1.051018ms
policy-db-migrator | --------------
kafka | [2024-04-18 23:14:55,357] INFO [LogLoader partition=__consumer_offsets-40, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"a80741ec-ec1d-4c24-9792-e262aa00f81d","timestampMs":1713482116293,"name":"apex-d14c33eb-7783-4d4b-98e1-4d65e14a94e4","pdpGroup":"defaultGroup"}
grafana | logger=migrator t=2024-04-18T23:14:24.589072345Z level=info msg="Executing migration" id="create library_element_connection table v1"
policy-db-migrator | ALTER TABLE jpatoscapolicytype_targets ADD COLUMN toscaPolicyTypeName VARCHAR(120) NOT NULL, ADD COLUMN toscaPolicyTypeVersion VARCHAR(20) NOT NULL, ADD CONSTRAINT PK_JPATOSCAPOLICYTYPE_TARGETS PRIMARY KEY (toscaPolicyTypeName, toscaPolicyTypeVersion)
kafka | [2024-04-18 23:14:55,358] INFO Created log for partition __consumer_offsets-40 in /var/lib/kafka/data/__consumer_offsets-40 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
policy-pap | [2024-04-18T23:15:16.358+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-pdp-pap] no listeners for autonomous message of type PdpStatus
grafana | logger=migrator t=2024-04-18T23:14:24.589913262Z level=info msg="Migration successfully executed" id="create library_element_connection table v1" duration=840.917µs
policy-db-migrator | --------------
kafka | [2024-04-18 23:14:55,358] INFO [Partition __consumer_offsets-40 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-40 (kafka.cluster.Partition)
policy-pap | [2024-04-18T23:15:16.467+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-d14c33eb-7783-4d4b-98e1-4d65e14a94e4 PdpUpdate starting
grafana | logger=migrator t=2024-04-18T23:14:24.593885361Z level=info msg="Executing migration" id="add index library_element_connection element_id-kind-connection_id"
policy-db-migrator |
kafka | [2024-04-18 23:14:55,358] INFO [Partition __consumer_offsets-40 broker=1] Log loaded for partition __consumer_offsets-40 with initial high watermark 0 (kafka.cluster.Partition)
policy-pap | [2024-04-18T23:15:16.467+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-d14c33eb-7783-4d4b-98e1-4d65e14a94e4 PdpUpdate starting listener
grafana | logger=migrator t=2024-04-18T23:14:24.595009344Z level=info msg="Migration successfully executed" id="add index library_element_connection element_id-kind-connection_id" duration=1.122702ms
policy-db-migrator |
kafka | [2024-04-18 23:14:55,358] INFO [Broker id=1] Leader __consumer_offsets-40 with topic id Some(Ri5cls-BQlq9q6kFJBomtA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
policy-pap | [2024-04-18T23:15:16.467+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-d14c33eb-7783-4d4b-98e1-4d65e14a94e4 PdpUpdate starting timer
grafana | logger=migrator t=2024-04-18T23:14:24.600631495Z level=info msg="Executing migration" id="add unique index library_element org_id_uid"
policy-db-migrator | > upgrade 0120-toscatrigger.sql
kafka | [2024-04-18 23:14:55,370] INFO [LogLoader partition=__consumer_offsets-47, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
policy-pap | [2024-04-18T23:15:16.468+00:00|INFO|TimerManager|KAFKA-source-policy-heartbeat] update timer registered Timer [name=c25a74a6-c62e-4253-9df7-3b45bb0657f1, expireMs=1713482146468]
grafana | logger=migrator t=2024-04-18T23:14:24.602280296Z level=info msg="Migration successfully executed" id="add unique index library_element org_id_uid" duration=1.648632ms
policy-db-migrator | --------------
kafka | [2024-04-18 23:14:55,371] INFO Created log for partition __consumer_offsets-47 in /var/lib/kafka/data/__consumer_offsets-47 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
policy-pap | [2024-04-18T23:15:16.470+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-d14c33eb-7783-4d4b-98e1-4d65e14a94e4 PdpUpdate starting enqueue
grafana | logger=migrator t=2024-04-18T23:14:24.606323649Z level=info msg="Executing migration" id="increase max description length to 2048"
policy-db-migrator | DROP TABLE IF EXISTS toscatrigger
kafka | [2024-04-18 23:14:55,371] INFO [Partition __consumer_offsets-47 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-47 (kafka.cluster.Partition)
policy-pap | [2024-04-18T23:15:16.470+00:00|INFO|TimerManager|Thread-9] update timer waiting 29998ms Timer [name=c25a74a6-c62e-4253-9df7-3b45bb0657f1, expireMs=1713482146468]
grafana | logger=migrator t=2024-04-18T23:14:24.606386283Z level=info msg="Migration successfully executed" id="increase max description length to 2048" duration=64.304µs
policy-db-migrator | --------------
kafka | [2024-04-18 23:14:55,371] INFO [Partition __consumer_offsets-47 broker=1] Log loaded for partition __consumer_offsets-47 with initial high watermark 0 (kafka.cluster.Partition)
policy-pap | [2024-04-18T23:15:16.470+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-d14c33eb-7783-4d4b-98e1-4d65e14a94e4 PdpUpdate started
grafana | logger=migrator t=2024-04-18T23:14:24.609175837Z level=info msg="Executing migration" id="alter library_element model to mediumtext"
policy-db-migrator |
kafka | [2024-04-18 23:14:55,371] INFO [Broker id=1] Leader __consumer_offsets-47 with topic id Some(Ri5cls-BQlq9q6kFJBomtA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
policy-pap | [2024-04-18T23:15:16.473+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap]
grafana | logger=migrator t=2024-04-18T23:14:24.609239331Z level=info msg="Migration successfully executed" id="alter library_element model to mediumtext" duration=63.534µs
policy-db-migrator |
kafka | [2024-04-18 23:14:55,379] INFO [LogLoader partition=__consumer_offsets-17, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
policy-pap | {"source":"pap-fa93d91d-c9fa-4126-a299-649d686bbaea","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"messageName":"PDP_UPDATE","requestId":"c25a74a6-c62e-4253-9df7-3b45bb0657f1","timestampMs":1713482116438,"name":"apex-d14c33eb-7783-4d4b-98e1-4d65e14a94e4","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
grafana | logger=migrator t=2024-04-18T23:14:24.61193452Z level=info msg="Executing migration" id="clone move dashboard alerts to unified alerting"
policy-db-migrator | > upgrade 0130-jpatoscapolicytype_triggers.sql
kafka | [2024-04-18 23:14:55,380] INFO Created log for partition __consumer_offsets-17 in /var/lib/kafka/data/__consumer_offsets-17 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
policy-pap | [2024-04-18T23:15:16.518+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat]
grafana | logger=migrator t=2024-04-18T23:14:24.612221276Z level=info msg="Migration successfully executed" id="clone move dashboard alerts to unified alerting" duration=286.575µs
policy-db-migrator | --------------
kafka | [2024-04-18 23:14:55,380] INFO [Partition __consumer_offsets-17 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-17 (kafka.cluster.Partition)
policy-pap | {"source":"pap-fa93d91d-c9fa-4126-a299-649d686bbaea","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"messageName":"PDP_UPDATE","requestId":"c25a74a6-c62e-4253-9df7-3b45bb0657f1","timestampMs":1713482116438,"name":"apex-d14c33eb-7783-4d4b-98e1-4d65e14a94e4","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
grafana | logger=migrator t=2024-04-18T23:14:24.615058403Z level=info msg="Executing migration" id="create data_keys table"
policy-db-migrator | ALTER TABLE jpatoscapolicytype_triggers MODIFY COLUMN triggers LONGBLOB
kafka | [2024-04-18 23:14:55,380] INFO [Partition __consumer_offsets-17 broker=1] Log loaded for partition __consumer_offsets-17 with initial high watermark 0 (kafka.cluster.Partition)
policy-pap | [2024-04-18T23:15:16.518+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
grafana | logger=migrator t=2024-04-18T23:14:24.616265259Z level=info msg="Migration successfully executed" id="create data_keys table" duration=1.205667ms
policy-db-migrator | --------------
kafka | [2024-04-18 23:14:55,380] INFO [Broker id=1] Leader __consumer_offsets-17 with topic id Some(Ri5cls-BQlq9q6kFJBomtA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
policy-pap | {"source":"pap-fa93d91d-c9fa-4126-a299-649d686bbaea","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"messageName":"PDP_UPDATE","requestId":"c25a74a6-c62e-4253-9df7-3b45bb0657f1","timestampMs":1713482116438,"name":"apex-d14c33eb-7783-4d4b-98e1-4d65e14a94e4","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
grafana | logger=migrator t=2024-04-18T23:14:24.619964594Z level=info msg="Executing migration" id="create secrets table"
policy-db-migrator |
kafka | [2024-04-18 23:14:55,386] INFO [LogLoader partition=__consumer_offsets-32, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
policy-pap | [2024-04-18T23:15:16.519+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_UPDATE
grafana | logger=migrator t=2024-04-18T23:14:24.621956554Z level=info msg="Migration successfully executed" id="create secrets table" duration=1.99093ms
policy-db-migrator |
kafka | [2024-04-18 23:14:55,387] INFO Created log for partition __consumer_offsets-32 in /var/lib/kafka/data/__consumer_offsets-32 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
policy-pap | [2024-04-18T23:15:16.519+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_UPDATE
grafana | logger=migrator t=2024-04-18T23:14:24.625069096Z level=info msg="Executing migration" id="rename data_keys name column to id"
policy-db-migrator | > upgrade 0140-toscaparameter.sql
kafka | [2024-04-18 23:14:55,387] INFO [Partition __consumer_offsets-32 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-32 (kafka.cluster.Partition)
policy-pap | [2024-04-18T23:15:16.545+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
grafana | logger=migrator t=2024-04-18T23:14:24.654048269Z level=info msg="Migration successfully executed" id="rename data_keys name column to id" duration=28.978233ms
policy-db-migrator | --------------
kafka | [2024-04-18 23:14:55,387] INFO [Partition __consumer_offsets-32 broker=1] Log loaded for partition __consumer_offsets-32 with initial high watermark 0 (kafka.cluster.Partition)
policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"7d0c4e03-b564-45d4-9123-02a767667124","timestampMs":1713482116533,"name":"apex-d14c33eb-7783-4d4b-98e1-4d65e14a94e4","pdpGroup":"defaultGroup"}
grafana | logger=migrator t=2024-04-18T23:14:24.659730503Z level=info msg="Executing migration" id="add name column into data_keys"
policy-db-migrator | DROP TABLE IF EXISTS toscaparameter
kafka | [2024-04-18 23:14:55,387] INFO [Broker id=1] Leader __consumer_offsets-32 with topic id Some(Ri5cls-BQlq9q6kFJBomtA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
policy-pap | [2024-04-18T23:15:16.545+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-pdp-pap] no listeners for autonomous message of type PdpStatus
grafana | logger=migrator t=2024-04-18T23:14:24.664753251Z level=info msg="Migration successfully executed" id="add name column into data_keys" duration=5.022768ms
policy-db-migrator | --------------
kafka | [2024-04-18 23:14:55,393] INFO [LogLoader partition=__consumer_offsets-37, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
policy-pap | [2024-04-18T23:15:16.548+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat]
grafana | logger=migrator t=2024-04-18T23:14:24.668734031Z level=info msg="Executing migration" id="copy data_keys id column values into name"
policy-db-migrator |
kafka | [2024-04-18 23:14:55,394] INFO Created log for partition __consumer_offsets-37 in /var/lib/kafka/data/__consumer_offsets-37 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"7d0c4e03-b564-45d4-9123-02a767667124","timestampMs":1713482116533,"name":"apex-d14c33eb-7783-4d4b-98e1-4d65e14a94e4","pdpGroup":"defaultGroup"}
grafana | logger=migrator t=2024-04-18T23:14:24.668900331Z level=info msg="Migration successfully executed" id="copy data_keys id column values into name" duration=166.809µs
policy-db-migrator |
kafka | [2024-04-18 23:14:55,394] INFO [Partition __consumer_offsets-37 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-37 (kafka.cluster.Partition)
policy-pap | [2024-04-18T23:15:16.553+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
grafana | logger=migrator t=2024-04-18T23:14:24.671854354Z level=info msg="Executing migration" id="rename data_keys name column to label"
policy-db-migrator | > upgrade 0150-toscaproperty.sql
policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"c25a74a6-c62e-4253-9df7-3b45bb0657f1","responseStatus":"SUCCESS","responseMessage":"Pdp update successful."},"messageName":"PDP_STATUS","requestId":"88e33828-f496-4f68-bdce-d4632c78eedd","timestampMs":1713482116535,"name":"apex-d14c33eb-7783-4d4b-98e1-4d65e14a94e4","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
grafana | logger=migrator t=2024-04-18T23:14:24.702478678Z level=info msg="Migration successfully executed" id="rename data_keys name column to label" duration=30.624104ms
kafka | [2024-04-18 23:14:55,394] INFO [Partition __consumer_offsets-37 broker=1] Log loaded for partition __consumer_offsets-37 with initial high watermark 0 (kafka.cluster.Partition)
policy-db-migrator | --------------
policy-pap | [2024-04-18T23:15:16.565+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-d14c33eb-7783-4d4b-98e1-4d65e14a94e4 PdpUpdate stopping
grafana | logger=migrator t=2024-04-18T23:14:24.705562758Z level=info msg="Executing migration" id="rename data_keys id column back to name"
kafka | [2024-04-18 23:14:55,394] INFO [Broker id=1] Leader __consumer_offsets-37 with topic id Some(Ri5cls-BQlq9q6kFJBomtA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
policy-db-migrator | DROP TABLE IF EXISTS jpatoscaproperty_constraints
kafka | [2024-04-18 23:14:55,402] INFO [LogLoader partition=__consumer_offsets-7, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
policy-pap | [2024-04-18T23:15:16.565+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-d14c33eb-7783-4d4b-98e1-4d65e14a94e4 PdpUpdate stopping enqueue
grafana | logger=migrator t=2024-04-18T23:14:24.735506655Z level=info msg="Migration successfully executed" id="rename data_keys id column back to name" duration=29.942396ms
policy-db-migrator | --------------
kafka | [2024-04-18 23:14:55,403] INFO Created log for partition __consumer_offsets-7 in /var/lib/kafka/data/__consumer_offsets-7 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
policy-pap | [2024-04-18T23:15:16.565+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-d14c33eb-7783-4d4b-98e1-4d65e14a94e4 PdpUpdate stopping timer
grafana | logger=migrator t=2024-04-18T23:14:24.739822553Z level=info msg="Executing migration" id="create kv_store table v1"
policy-db-migrator |
kafka | [2024-04-18 23:14:55,403] INFO [Partition __consumer_offsets-7 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-7 (kafka.cluster.Partition)
grafana | logger=migrator t=2024-04-18T23:14:24.740443968Z level=info msg="Migration successfully executed" id="create kv_store table v1" duration=620.875µs
policy-db-migrator | --------------
policy-pap | [2024-04-18T23:15:16.565+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer cancelled Timer [name=c25a74a6-c62e-4253-9df7-3b45bb0657f1, expireMs=1713482146468]
kafka | [2024-04-18 23:14:55,403] INFO [Partition __consumer_offsets-7 broker=1] Log loaded for partition __consumer_offsets-7 with initial high watermark 0 (kafka.cluster.Partition)
grafana | logger=migrator t=2024-04-18T23:14:24.745476216Z level=info msg="Executing migration" id="add index kv_store.org_id-namespace-key"
policy-db-migrator | DROP TABLE IF EXISTS jpatoscaproperty_metadata
policy-pap | [2024-04-18T23:15:16.566+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-d14c33eb-7783-4d4b-98e1-4d65e14a94e4 PdpUpdate stopping listener
kafka | [2024-04-18 23:14:55,403] INFO [Broker id=1] Leader __consumer_offsets-7 with topic id Some(Ri5cls-BQlq9q6kFJBomtA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
grafana | logger=migrator t=2024-04-18T23:14:24.747097826Z level=info msg="Migration successfully executed" id="add index kv_store.org_id-namespace-key" duration=1.621239ms
policy-pap | [2024-04-18T23:15:16.566+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-d14c33eb-7783-4d4b-98e1-4d65e14a94e4 PdpUpdate stopped
kafka | [2024-04-18 23:14:55,416] INFO [LogLoader partition=__consumer_offsets-22, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
policy-db-migrator | --------------
grafana | logger=migrator t=2024-04-18T23:14:24.754175007Z level=info msg="Executing migration" id="update dashboard_uid and panel_id from existing annotations"
policy-pap | [2024-04-18T23:15:16.568+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] apex-d14c33eb-7783-4d4b-98e1-4d65e14a94e4 PdpUpdate successful
kafka | [2024-04-18 23:14:55,425] INFO Created log for partition __consumer_offsets-22 in /var/lib/kafka/data/__consumer_offsets-22 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
policy-db-migrator |
grafana | logger=migrator t=2024-04-18T23:14:24.754480214Z level=info msg="Migration successfully executed" id="update dashboard_uid and panel_id from existing annotations" duration=305.207µs
kafka | [2024-04-18 23:14:55,426] INFO [Partition __consumer_offsets-22 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-22 (kafka.cluster.Partition)
policy-db-migrator | --------------
policy-pap | [2024-04-18T23:15:16.568+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] apex-d14c33eb-7783-4d4b-98e1-4d65e14a94e4 start publishing next request
grafana | logger=migrator t=2024-04-18T23:14:24.758274784Z level=info msg="Executing migration" id="create permission table"
kafka | [2024-04-18 23:14:55,426] INFO [Partition __consumer_offsets-22 broker=1] Log loaded for partition __consumer_offsets-22 with initial high watermark 0 (kafka.cluster.Partition)
policy-db-migrator | DROP TABLE IF EXISTS toscaproperty
policy-pap | [2024-04-18T23:15:16.568+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-d14c33eb-7783-4d4b-98e1-4d65e14a94e4 PdpStateChange starting
grafana | logger=migrator t=2024-04-18T23:14:24.759740475Z level=info msg="Migration successfully executed" id="create permission table" duration=1.465011ms
policy-db-migrator | --------------
policy-pap | [2024-04-18T23:15:16.568+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-d14c33eb-7783-4d4b-98e1-4d65e14a94e4 PdpStateChange starting listener
kafka | [2024-04-18 23:14:55,427] INFO [Broker id=1] Leader __consumer_offsets-22 with topic id Some(Ri5cls-BQlq9q6kFJBomtA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1.
(state.change.logger) policy-db-migrator | policy-pap | [2024-04-18T23:15:16.568+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-d14c33eb-7783-4d4b-98e1-4d65e14a94e4 PdpStateChange starting timer grafana | logger=migrator t=2024-04-18T23:14:24.765718806Z level=info msg="Executing migration" id="add unique index permission.role_id" policy-db-migrator | kafka | [2024-04-18 23:14:55,442] INFO [LogLoader partition=__consumer_offsets-29, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) policy-pap | [2024-04-18T23:15:16.568+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] state-change timer registered Timer [name=35f314d5-4e39-45ee-b5dd-8c7ab9415862, expireMs=1713482146568] grafana | logger=migrator t=2024-04-18T23:14:24.766688319Z level=info msg="Migration successfully executed" id="add unique index permission.role_id" duration=969.464µs policy-db-migrator | > upgrade 0160-jpapolicyaudit_pk.sql kafka | [2024-04-18 23:14:55,442] INFO Created log for partition __consumer_offsets-29 in /var/lib/kafka/data/__consumer_offsets-29 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) policy-pap | [2024-04-18T23:15:16.568+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-d14c33eb-7783-4d4b-98e1-4d65e14a94e4 PdpStateChange starting enqueue grafana | logger=migrator t=2024-04-18T23:14:24.774693562Z level=info msg="Executing migration" id="add unique index role_id_action_scope" policy-db-migrator | -------------- kafka | [2024-04-18 23:14:55,443] INFO [Partition __consumer_offsets-29 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-29 (kafka.cluster.Partition) policy-pap | [2024-04-18T23:15:16.568+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-d14c33eb-7783-4d4b-98e1-4d65e14a94e4 PdpStateChange started grafana | logger=migrator t=2024-04-18T23:14:24.77628309Z level=info 
msg="Migration successfully executed" id="add unique index role_id_action_scope" duration=1.589568ms policy-db-migrator | ALTER TABLE jpapolicyaudit DROP PRIMARY KEY policy-pap | [2024-04-18T23:15:16.569+00:00|INFO|TimerManager|Thread-10] state-change timer waiting 29999ms Timer [name=35f314d5-4e39-45ee-b5dd-8c7ab9415862, expireMs=1713482146568] policy-db-migrator | -------------- kafka | [2024-04-18 23:14:55,443] INFO [Partition __consumer_offsets-29 broker=1] Log loaded for partition __consumer_offsets-29 with initial high watermark 0 (kafka.cluster.Partition) grafana | logger=migrator t=2024-04-18T23:14:24.779642976Z level=info msg="Executing migration" id="create role table" policy-pap | [2024-04-18T23:15:16.569+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap] kafka | [2024-04-18 23:14:55,443] INFO [Broker id=1] Leader __consumer_offsets-29 with topic id Some(Ri5cls-BQlq9q6kFJBomtA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) grafana | logger=migrator t=2024-04-18T23:14:24.780467241Z level=info msg="Migration successfully executed" id="create role table" duration=824.255µs policy-db-migrator | policy-pap | {"source":"pap-fa93d91d-c9fa-4126-a299-649d686bbaea","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"35f314d5-4e39-45ee-b5dd-8c7ab9415862","timestampMs":1713482116438,"name":"apex-d14c33eb-7783-4d4b-98e1-4d65e14a94e4","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} kafka | [2024-04-18 23:14:55,456] INFO [LogLoader partition=__consumer_offsets-44, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) grafana | logger=migrator t=2024-04-18T23:14:24.785829788Z level=info msg="Executing migration" id="add column display_name" policy-db-migrator | -------------- policy-pap | [2024-04-18T23:15:16.575+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] kafka | [2024-04-18 23:14:55,457] INFO Created log for partition __consumer_offsets-44 in /var/lib/kafka/data/__consumer_offsets-44 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) grafana | logger=migrator t=2024-04-18T23:14:24.795771538Z level=info msg="Migration successfully executed" id="add column display_name" duration=9.94168ms policy-db-migrator | ALTER TABLE jpapolicyaudit ADD CONSTRAINT PK_JPAPOLICYAUDIT PRIMARY KEY (ID) policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"c25a74a6-c62e-4253-9df7-3b45bb0657f1","responseStatus":"SUCCESS","responseMessage":"Pdp update successful."},"messageName":"PDP_STATUS","requestId":"88e33828-f496-4f68-bdce-d4632c78eedd","timestampMs":1713482116535,"name":"apex-d14c33eb-7783-4d4b-98e1-4d65e14a94e4","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} kafka | [2024-04-18 23:14:55,457] INFO [Partition 
__consumer_offsets-44 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-44 (kafka.cluster.Partition) grafana | logger=migrator t=2024-04-18T23:14:24.798905921Z level=info msg="Executing migration" id="add column group_name" policy-db-migrator | -------------- policy-pap | [2024-04-18T23:15:16.579+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id c25a74a6-c62e-4253-9df7-3b45bb0657f1 kafka | [2024-04-18 23:14:55,457] INFO [Partition __consumer_offsets-44 broker=1] Log loaded for partition __consumer_offsets-44 with initial high watermark 0 (kafka.cluster.Partition) grafana | logger=migrator t=2024-04-18T23:14:24.804685671Z level=info msg="Migration successfully executed" id="add column group_name" duration=5.77834ms policy-db-migrator | policy-pap | [2024-04-18T23:15:16.584+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] kafka | [2024-04-18 23:14:55,457] INFO [Broker id=1] Leader __consumer_offsets-44 with topic id Some(Ri5cls-BQlq9q6kFJBomtA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) grafana | logger=migrator t=2024-04-18T23:14:24.810211856Z level=info msg="Executing migration" id="add index role.org_id" policy-db-migrator | policy-pap | {"source":"pap-fa93d91d-c9fa-4126-a299-649d686bbaea","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"35f314d5-4e39-45ee-b5dd-8c7ab9415862","timestampMs":1713482116438,"name":"apex-d14c33eb-7783-4d4b-98e1-4d65e14a94e4","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} kafka | [2024-04-18 23:14:55,471] INFO [LogLoader partition=__consumer_offsets-14, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) grafana | logger=migrator t=2024-04-18T23:14:24.811168919Z level=info msg="Migration successfully executed" id="add index role.org_id" duration=956.843µs policy-db-migrator | > upgrade 0170-pdpstatistics_pk.sql policy-pap | [2024-04-18T23:15:16.584+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_STATE_CHANGE kafka | [2024-04-18 23:14:55,472] INFO Created log for partition __consumer_offsets-14 in /var/lib/kafka/data/__consumer_offsets-14 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) grafana | logger=migrator t=2024-04-18T23:14:24.816628941Z level=info msg="Executing migration" id="add unique index role_org_id_name" policy-db-migrator | -------------- policy-pap | [2024-04-18T23:15:16.592+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] kafka | [2024-04-18 23:14:55,472] INFO [Partition __consumer_offsets-14 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-14 (kafka.cluster.Partition) grafana | logger=migrator t=2024-04-18T23:14:24.817817427Z level=info msg="Migration successfully executed" id="add unique index role_org_id_name" duration=1.190186ms policy-db-migrator | ALTER TABLE pdpstatistics DROP PRIMARY KEY policy-pap | 
{"source":"pap-fa93d91d-c9fa-4126-a299-649d686bbaea","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"35f314d5-4e39-45ee-b5dd-8c7ab9415862","timestampMs":1713482116438,"name":"apex-d14c33eb-7783-4d4b-98e1-4d65e14a94e4","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} kafka | [2024-04-18 23:14:55,472] INFO [Partition __consumer_offsets-14 broker=1] Log loaded for partition __consumer_offsets-14 with initial high watermark 0 (kafka.cluster.Partition) grafana | logger=migrator t=2024-04-18T23:14:24.821048056Z level=info msg="Executing migration" id="add index role_org_id_uid" policy-db-migrator | -------------- policy-pap | [2024-04-18T23:15:16.592+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATE_CHANGE kafka | [2024-04-18 23:14:55,472] INFO [Broker id=1] Leader __consumer_offsets-14 with topic id Some(Ri5cls-BQlq9q6kFJBomtA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) policy-db-migrator | policy-pap | [2024-04-18T23:15:16.597+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] kafka | [2024-04-18 23:14:55,483] INFO [LogLoader partition=__consumer_offsets-23, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) grafana | logger=migrator t=2024-04-18T23:14:24.822706478Z level=info msg="Migration successfully executed" id="add index role_org_id_uid" duration=1.654601ms policy-db-migrator | -------------- policy-pap | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpStateChange","policies":[],"response":{"responseTo":"35f314d5-4e39-45ee-b5dd-8c7ab9415862","responseStatus":"SUCCESS","responseMessage":"State changed to active. 
No policies found."},"messageName":"PDP_STATUS","requestId":"aff58f06-2f5f-434f-a796-4eda85435870","timestampMs":1713482116589,"name":"apex-d14c33eb-7783-4d4b-98e1-4d65e14a94e4","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} kafka | [2024-04-18 23:14:55,484] INFO Created log for partition __consumer_offsets-23 in /var/lib/kafka/data/__consumer_offsets-23 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) grafana | logger=migrator t=2024-04-18T23:14:24.827534145Z level=info msg="Executing migration" id="create team role table" policy-db-migrator | ALTER TABLE pdpstatistics ADD CONSTRAINT PK_PDPSTATISTICS PRIMARY KEY (ID) policy-pap | [2024-04-18T23:15:16.598+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id 35f314d5-4e39-45ee-b5dd-8c7ab9415862 kafka | [2024-04-18 23:14:55,484] INFO [Partition __consumer_offsets-23 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-23 (kafka.cluster.Partition) grafana | logger=migrator t=2024-04-18T23:14:24.828780404Z level=info msg="Migration successfully executed" id="create team role table" duration=1.246429ms policy-db-migrator | -------------- policy-pap | [2024-04-18T23:15:16.600+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] kafka | [2024-04-18 23:14:55,484] INFO [Partition __consumer_offsets-23 broker=1] Log loaded for partition __consumer_offsets-23 with initial high watermark 0 (kafka.cluster.Partition) grafana | logger=migrator t=2024-04-18T23:14:24.836343282Z level=info msg="Executing migration" id="add index team_role.org_id" policy-db-migrator | policy-pap | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpStateChange","policies":[],"response":{"responseTo":"35f314d5-4e39-45ee-b5dd-8c7ab9415862","responseStatus":"SUCCESS","responseMessage":"State changed to active. 
No policies found."},"messageName":"PDP_STATUS","requestId":"aff58f06-2f5f-434f-a796-4eda85435870","timestampMs":1713482116589,"name":"apex-d14c33eb-7783-4d4b-98e1-4d65e14a94e4","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} kafka | [2024-04-18 23:14:55,485] INFO [Broker id=1] Leader __consumer_offsets-23 with topic id Some(Ri5cls-BQlq9q6kFJBomtA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) grafana | logger=migrator t=2024-04-18T23:14:24.837507696Z level=info msg="Migration successfully executed" id="add index team_role.org_id" duration=1.169055ms policy-db-migrator | policy-pap | [2024-04-18T23:15:16.601+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-d14c33eb-7783-4d4b-98e1-4d65e14a94e4 PdpStateChange stopping kafka | [2024-04-18 23:14:55,495] INFO [LogLoader partition=__consumer_offsets-38, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) grafana | logger=migrator t=2024-04-18T23:14:24.841832175Z level=info msg="Executing migration" id="add unique index team_role_org_id_team_id_role_id" policy-db-migrator | > upgrade 0180-jpatoscanodetemplate_metadata.sql grafana | logger=migrator t=2024-04-18T23:14:24.842712284Z level=info msg="Migration successfully executed" id="add unique index team_role_org_id_team_id_role_id" duration=880.469µs policy-db-migrator | -------------- policy-pap | [2024-04-18T23:15:16.601+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-d14c33eb-7783-4d4b-98e1-4d65e14a94e4 PdpStateChange stopping enqueue kafka | [2024-04-18 23:14:55,496] INFO Created log for partition __consumer_offsets-38 in /var/lib/kafka/data/__consumer_offsets-38 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) grafana | logger=migrator 
t=2024-04-18T23:14:24.849392784Z level=info msg="Executing migration" id="add index team_role.team_id" policy-db-migrator | ALTER TABLE jpatoscanodetemplate_metadata MODIFY COLUMN METADATA LONGTEXT policy-pap | [2024-04-18T23:15:16.601+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-d14c33eb-7783-4d4b-98e1-4d65e14a94e4 PdpStateChange stopping timer kafka | [2024-04-18 23:14:55,496] INFO [Partition __consumer_offsets-38 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-38 (kafka.cluster.Partition) grafana | logger=migrator t=2024-04-18T23:14:24.850211959Z level=info msg="Migration successfully executed" id="add index team_role.team_id" duration=819.696µs policy-db-migrator | -------------- policy-pap | [2024-04-18T23:15:16.601+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] state-change timer cancelled Timer [name=35f314d5-4e39-45ee-b5dd-8c7ab9415862, expireMs=1713482146568] kafka | [2024-04-18 23:14:55,497] INFO [Partition __consumer_offsets-38 broker=1] Log loaded for partition __consumer_offsets-38 with initial high watermark 0 (kafka.cluster.Partition) grafana | logger=migrator t=2024-04-18T23:14:24.857406607Z level=info msg="Executing migration" id="create user role table" policy-db-migrator | kafka | [2024-04-18 23:14:55,497] INFO [Broker id=1] Leader __consumer_offsets-38 with topic id Some(Ri5cls-BQlq9q6kFJBomtA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) grafana | logger=migrator t=2024-04-18T23:14:24.858777613Z level=info msg="Migration successfully executed" id="create user role table" duration=1.371705ms policy-pap | [2024-04-18T23:15:16.601+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-d14c33eb-7783-4d4b-98e1-4d65e14a94e4 PdpStateChange stopping listener kafka | [2024-04-18 23:14:55,506] INFO [LogLoader partition=__consumer_offsets-8, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) policy-pap | [2024-04-18T23:15:16.601+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-d14c33eb-7783-4d4b-98e1-4d65e14a94e4 PdpStateChange stopped policy-db-migrator | grafana | logger=migrator t=2024-04-18T23:14:24.862439255Z level=info msg="Executing migration" id="add index user_role.org_id" kafka | [2024-04-18 23:14:55,507] INFO Created log for partition __consumer_offsets-8 in /var/lib/kafka/data/__consumer_offsets-8 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) policy-db-migrator | > upgrade 0100-upgrade.sql grafana | logger=migrator t=2024-04-18T23:14:24.864028253Z level=info msg="Migration successfully executed" id="add index user_role.org_id" duration=1.588738ms policy-pap | [2024-04-18T23:15:16.601+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] apex-d14c33eb-7783-4d4b-98e1-4d65e14a94e4 PdpStateChange successful kafka | [2024-04-18 23:14:55,507] INFO [Partition __consumer_offsets-8 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-8 (kafka.cluster.Partition) policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-18T23:14:24.867687505Z level=info msg="Executing migration" id="add unique index user_role_org_id_user_id_role_id" policy-pap | [2024-04-18T23:15:16.601+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] apex-d14c33eb-7783-4d4b-98e1-4d65e14a94e4 start publishing next request 
kafka | [2024-04-18 23:14:55,507] INFO [Partition __consumer_offsets-8 broker=1] Log loaded for partition __consumer_offsets-8 with initial high watermark 0 (kafka.cluster.Partition) policy-db-migrator | select 'upgrade to 1100 completed' as msg grafana | logger=migrator t=2024-04-18T23:14:24.869306705Z level=info msg="Migration successfully executed" id="add unique index user_role_org_id_user_id_role_id" duration=1.61794ms policy-pap | [2024-04-18T23:15:16.601+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-d14c33eb-7783-4d4b-98e1-4d65e14a94e4 PdpUpdate starting kafka | [2024-04-18 23:14:55,507] INFO [Broker id=1] Leader __consumer_offsets-8 with topic id Some(Ri5cls-BQlq9q6kFJBomtA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-18T23:14:24.876296082Z level=info msg="Executing migration" id="add index user_role.user_id" policy-pap | [2024-04-18T23:15:16.601+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-d14c33eb-7783-4d4b-98e1-4d65e14a94e4 PdpUpdate starting listener kafka | [2024-04-18 23:14:55,515] INFO [LogLoader partition=policy-pdp-pap-0, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) policy-db-migrator | grafana | logger=migrator t=2024-04-18T23:14:24.877379882Z level=info msg="Migration successfully executed" id="add index user_role.user_id" duration=1.08465ms policy-pap | [2024-04-18T23:15:16.601+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-d14c33eb-7783-4d4b-98e1-4d65e14a94e4 PdpUpdate starting timer kafka | [2024-04-18 23:14:55,516] INFO Created log for partition policy-pdp-pap-0 in /var/lib/kafka/data/policy-pdp-pap-0 with properties {} (kafka.log.LogManager) policy-db-migrator | msg grafana | logger=migrator 
t=2024-04-18T23:14:24.880721906Z level=info msg="Executing migration" id="create builtin role table" policy-pap | [2024-04-18T23:15:16.601+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer registered Timer [name=46108e0e-87ab-4235-a93e-b62b8e791b82, expireMs=1713482146601] kafka | [2024-04-18 23:14:55,516] INFO [Partition policy-pdp-pap-0 broker=1] No checkpointed highwatermark is found for partition policy-pdp-pap-0 (kafka.cluster.Partition) policy-db-migrator | upgrade to 1100 completed grafana | logger=migrator t=2024-04-18T23:14:24.882077191Z level=info msg="Migration successfully executed" id="create builtin role table" duration=1.355205ms policy-pap | [2024-04-18T23:15:16.601+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-d14c33eb-7783-4d4b-98e1-4d65e14a94e4 PdpUpdate starting enqueue kafka | [2024-04-18 23:14:55,516] INFO [Partition policy-pdp-pap-0 broker=1] Log loaded for partition policy-pdp-pap-0 with initial high watermark 0 (kafka.cluster.Partition) policy-db-migrator | grafana | logger=migrator t=2024-04-18T23:14:24.885822939Z level=info msg="Executing migration" id="add index builtin_role.role_id" kafka | [2024-04-18 23:14:55,517] INFO [Broker id=1] Leader policy-pdp-pap-0 with topic id Some(idJrMUf2Q6auoCOWuYUphA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) policy-pap | [2024-04-18T23:15:16.601+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-d14c33eb-7783-4d4b-98e1-4d65e14a94e4 PdpUpdate started policy-db-migrator | > upgrade 0100-jpapolicyaudit_renameuser.sql grafana | logger=migrator t=2024-04-18T23:14:24.887599917Z level=info msg="Migration successfully executed" id="add index builtin_role.role_id" duration=1.777399ms policy-pap | [2024-04-18T23:15:16.602+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap] kafka | [2024-04-18 23:14:55,527] INFO [LogLoader partition=__consumer_offsets-45, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-18T23:14:24.895101302Z level=info msg="Executing migration" id="add index builtin_role.name" policy-pap | {"source":"pap-fa93d91d-c9fa-4126-a299-649d686bbaea","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"46108e0e-87ab-4235-a93e-b62b8e791b82","timestampMs":1713482116585,"name":"apex-d14c33eb-7783-4d4b-98e1-4d65e14a94e4","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} kafka | [2024-04-18 23:14:55,528] INFO Created log for partition __consumer_offsets-45 in /var/lib/kafka/data/__consumer_offsets-45 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) policy-db-migrator | ALTER TABLE jpapolicyaudit RENAME COLUMN USER TO USERNAME grafana | logger=migrator t=2024-04-18T23:14:24.897131934Z level=info msg="Migration successfully executed" id="add index builtin_role.name" duration=2.034963ms policy-pap | [2024-04-18T23:15:16.610+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] kafka | [2024-04-18 23:14:55,529] INFO [Partition __consumer_offsets-45 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-45 
(kafka.cluster.Partition) policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-18T23:14:24.901709597Z level=info msg="Executing migration" id="Add column org_id to builtin_role table" policy-pap | {"source":"pap-fa93d91d-c9fa-4126-a299-649d686bbaea","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"46108e0e-87ab-4235-a93e-b62b8e791b82","timestampMs":1713482116585,"name":"apex-d14c33eb-7783-4d4b-98e1-4d65e14a94e4","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} kafka | [2024-04-18 23:14:55,529] INFO [Partition __consumer_offsets-45 broker=1] Log loaded for partition __consumer_offsets-45 with initial high watermark 0 (kafka.cluster.Partition) policy-db-migrator | grafana | logger=migrator t=2024-04-18T23:14:24.910708455Z level=info msg="Migration successfully executed" id="Add column org_id to builtin_role table" duration=8.997808ms policy-pap | [2024-04-18T23:15:16.610+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_UPDATE kafka | [2024-04-18 23:14:55,530] INFO [Broker id=1] Leader __consumer_offsets-45 with topic id Some(Ri5cls-BQlq9q6kFJBomtA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) policy-db-migrator | grafana | logger=migrator t=2024-04-18T23:14:24.9154958Z level=info msg="Executing migration" id="add index builtin_role.org_id" policy-pap | [2024-04-18T23:15:16.613+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] kafka | [2024-04-18 23:14:55,540] INFO [LogLoader partition=__consumer_offsets-15, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) policy-db-migrator | > upgrade 0110-idx_tsidx1.sql grafana | logger=migrator t=2024-04-18T23:14:24.91802477Z level=info msg="Migration successfully executed" id="add index builtin_role.org_id" duration=2.528709ms policy-pap | {"source":"pap-fa93d91d-c9fa-4126-a299-649d686bbaea","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"46108e0e-87ab-4235-a93e-b62b8e791b82","timestampMs":1713482116585,"name":"apex-d14c33eb-7783-4d4b-98e1-4d65e14a94e4","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} kafka | [2024-04-18 23:14:55,542] INFO Created log for partition __consumer_offsets-15 in /var/lib/kafka/data/__consumer_offsets-15 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-18T23:14:24.922667326Z level=info msg="Executing migration" id="add unique index builtin_role_org_id_role_id_role" policy-pap | [2024-04-18T23:15:16.613+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_UPDATE kafka | [2024-04-18 23:14:55,542] INFO [Partition __consumer_offsets-15 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-15 (kafka.cluster.Partition) policy-db-migrator | DROP INDEX IDX_TSIDX1 ON pdpstatistics grafana | logger=migrator t=2024-04-18T23:14:24.924693178Z level=info msg="Migration successfully executed" id="add unique index 
builtin_role_org_id_role_id_role" duration=2.025532ms
policy-pap | [2024-04-18T23:15:16.623+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
kafka | [2024-04-18 23:14:55,542] INFO [Partition __consumer_offsets-15 broker=1] Log loaded for partition __consumer_offsets-15 with initial high watermark 0 (kafka.cluster.Partition)
policy-db-migrator | --------------
grafana | logger=migrator t=2024-04-18T23:14:24.928173121Z level=info msg="Executing migration" id="Remove unique index role_org_id_uid"
policy-pap | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"46108e0e-87ab-4235-a93e-b62b8e791b82","responseStatus":"SUCCESS","responseMessage":"Pdp already updated"},"messageName":"PDP_STATUS","requestId":"27a5c9bf-e2d6-4122-aa3f-c57320e3422f","timestampMs":1713482116613,"name":"apex-d14c33eb-7783-4d4b-98e1-4d65e14a94e4","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
kafka | [2024-04-18 23:14:55,542] INFO [Broker id=1] Leader __consumer_offsets-15 with topic id Some(Ri5cls-BQlq9q6kFJBomtA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
policy-db-migrator |
policy-pap | [2024-04-18T23:15:16.623+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-d14c33eb-7783-4d4b-98e1-4d65e14a94e4 PdpUpdate stopping
grafana | logger=migrator t=2024-04-18T23:14:24.929301353Z level=info msg="Migration successfully executed" id="Remove unique index role_org_id_uid" duration=1.130882ms
kafka | [2024-04-18 23:14:55,551] INFO [LogLoader partition=__consumer_offsets-30, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
policy-db-migrator | --------------
policy-pap | [2024-04-18T23:15:16.624+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-d14c33eb-7783-4d4b-98e1-4d65e14a94e4 PdpUpdate stopping enqueue
grafana | logger=migrator t=2024-04-18T23:14:24.93484351Z level=info msg="Executing migration" id="add unique index role.uid"
kafka | [2024-04-18 23:14:55,552] INFO Created log for partition __consumer_offsets-30 in /var/lib/kafka/data/__consumer_offsets-30 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
policy-db-migrator | CREATE INDEX IDXTSIDX1 ON pdpstatistics(timeStamp, name, version)
policy-pap | [2024-04-18T23:15:16.624+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-d14c33eb-7783-4d4b-98e1-4d65e14a94e4 PdpUpdate stopping timer
grafana | logger=migrator t=2024-04-18T23:14:24.936240777Z level=info msg="Migration successfully executed" id="add unique index role.uid" duration=1.395957ms
kafka | [2024-04-18 23:14:55,552] INFO [Partition __consumer_offsets-30 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-30 (kafka.cluster.Partition)
policy-db-migrator | --------------
policy-pap | [2024-04-18T23:15:16.624+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer cancelled Timer [name=46108e0e-87ab-4235-a93e-b62b8e791b82, expireMs=1713482146601]
grafana | logger=migrator t=2024-04-18T23:14:24.942246229Z level=info msg="Executing migration" id="create seed assignment table"
kafka | [2024-04-18 23:14:55,552] INFO [Partition __consumer_offsets-30 broker=1] Log loaded for partition __consumer_offsets-30 with initial high watermark 0 (kafka.cluster.Partition)
policy-db-migrator |
policy-pap | [2024-04-18T23:15:16.624+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-d14c33eb-7783-4d4b-98e1-4d65e14a94e4 PdpUpdate stopping listener
grafana | logger=migrator t=2024-04-18T23:14:24.943559392Z level=info msg="Migration successfully executed" id="create seed assignment table" duration=1.309933ms
kafka | [2024-04-18 23:14:55,553] INFO [Broker id=1] Leader __consumer_offsets-30 with topic id Some(Ri5cls-BQlq9q6kFJBomtA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
policy-db-migrator |
policy-pap | [2024-04-18T23:15:16.624+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-d14c33eb-7783-4d4b-98e1-4d65e14a94e4 PdpUpdate stopped
grafana | logger=migrator t=2024-04-18T23:14:24.947156511Z level=info msg="Executing migration" id="add unique index builtin_role_role_name"
kafka | [2024-04-18 23:14:55,570] INFO [LogLoader partition=__consumer_offsets-0, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
policy-db-migrator | > upgrade 0120-audit_sequence.sql
policy-pap | [2024-04-18T23:15:16.628+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat]
grafana | logger=migrator t=2024-04-18T23:14:24.948292674Z level=info msg="Migration successfully executed" id="add unique index builtin_role_role_name" duration=1.136093ms
kafka | [2024-04-18 23:14:55,571] INFO Created log for partition __consumer_offsets-0 in /var/lib/kafka/data/__consumer_offsets-0 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
policy-db-migrator | --------------
policy-pap | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"46108e0e-87ab-4235-a93e-b62b8e791b82","responseStatus":"SUCCESS","responseMessage":"Pdp already updated"},"messageName":"PDP_STATUS","requestId":"27a5c9bf-e2d6-4122-aa3f-c57320e3422f","timestampMs":1713482116613,"name":"apex-d14c33eb-7783-4d4b-98e1-4d65e14a94e4","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
grafana | logger=migrator t=2024-04-18T23:14:24.952427092Z level=info msg="Executing migration" id="add column hidden to role table"
kafka | [2024-04-18 23:14:55,571] INFO [Partition __consumer_offsets-0 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-0 (kafka.cluster.Partition)
policy-db-migrator | CREATE TABLE IF NOT EXISTS audit_sequence (SEQ_NAME VARCHAR(50) NOT NULL, SEQ_COUNT DECIMAL(38) DEFAULT NULL, PRIMARY KEY PK_SEQUENCE (SEQ_NAME))
policy-pap | [2024-04-18T23:15:16.629+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id 46108e0e-87ab-4235-a93e-b62b8e791b82
grafana | logger=migrator t=2024-04-18T23:14:24.96069712Z level=info msg="Migration successfully executed" id="add column hidden to role table" duration=8.269928ms
kafka | [2024-04-18 23:14:55,571] INFO [Partition __consumer_offsets-0 broker=1] Log loaded for partition __consumer_offsets-0 with initial high watermark 0 (kafka.cluster.Partition)
policy-db-migrator | --------------
policy-pap | [2024-04-18T23:15:16.629+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] apex-d14c33eb-7783-4d4b-98e1-4d65e14a94e4 PdpUpdate successful
grafana | logger=migrator t=2024-04-18T23:14:24.964691761Z level=info msg="Executing migration" id="permission kind migration"
kafka | [2024-04-18 23:14:55,571] INFO [Broker id=1] Leader __consumer_offsets-0 with topic id Some(Ri5cls-BQlq9q6kFJBomtA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
policy-db-migrator |
policy-pap | [2024-04-18T23:15:16.629+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] apex-d14c33eb-7783-4d4b-98e1-4d65e14a94e4 has no more requests
grafana | logger=migrator t=2024-04-18T23:14:24.97117804Z level=info msg="Migration successfully executed" id="permission kind migration" duration=6.484968ms
kafka | [2024-04-18 23:14:55,580] INFO [LogLoader partition=__consumer_offsets-35, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
policy-db-migrator | --------------
policy-pap | [2024-04-18T23:15:21.003+00:00|WARN|NonInjectionManager|pool-2-thread-1] Falling back to injection-less client.
grafana | logger=migrator t=2024-04-18T23:14:24.978107763Z level=info msg="Executing migration" id="permission attribute migration"
kafka | [2024-04-18 23:14:55,581] INFO Created log for partition __consumer_offsets-35 in /var/lib/kafka/data/__consumer_offsets-35 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
policy-db-migrator | INSERT INTO audit_sequence(SEQ_NAME, SEQ_COUNT) VALUES('SEQ_GEN', (SELECT IFNULL(max(id),0) FROM jpapolicyaudit))
policy-pap | [2024-04-18T23:15:21.055+00:00|INFO|GsonMessageBodyHandler|pool-2-thread-1] Using GSON for REST calls
grafana | logger=migrator t=2024-04-18T23:14:24.990850398Z level=info msg="Migration successfully executed" id="permission attribute migration" duration=12.743335ms
kafka | [2024-04-18 23:14:55,581] INFO [Partition __consumer_offsets-35 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-35 (kafka.cluster.Partition)
policy-db-migrator | --------------
policy-pap | [2024-04-18T23:15:21.066+00:00|INFO|GsonMessageBodyHandler|pool-2-thread-1] Using GSON for REST calls
grafana | logger=migrator t=2024-04-18T23:14:24.994135869Z level=info msg="Executing migration" id="permission identifier migration"
kafka | [2024-04-18 23:14:55,581] INFO [Partition __consumer_offsets-35 broker=1] Log loaded for partition __consumer_offsets-35 with initial high watermark 0 (kafka.cluster.Partition)
policy-db-migrator |
policy-pap | [2024-04-18T23:15:21.068+00:00|INFO|GsonMessageBodyHandler|pool-2-thread-1] Using GSON for REST calls
grafana | logger=migrator t=2024-04-18T23:14:24.999675476Z level=info msg="Migration successfully executed" id="permission identifier migration" duration=5.539307ms
kafka | [2024-04-18 23:14:55,581] INFO [Broker id=1] Leader __consumer_offsets-35 with topic id Some(Ri5cls-BQlq9q6kFJBomtA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
policy-db-migrator |
policy-pap | [2024-04-18T23:15:21.512+00:00|INFO|SessionData|http-nio-6969-exec-6] unknown group testGroup
grafana | logger=migrator t=2024-04-18T23:14:25.002755026Z level=info msg="Executing migration" id="add permission identifier index"
kafka | [2024-04-18 23:14:55,591] INFO [LogLoader partition=__consumer_offsets-5, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
policy-db-migrator | > upgrade 0130-statistics_sequence.sql
policy-pap | [2024-04-18T23:15:22.000+00:00|INFO|SessionData|http-nio-6969-exec-6] create cached group testGroup
grafana | logger=migrator t=2024-04-18T23:14:25.003880108Z level=info msg="Migration successfully executed" id="add permission identifier index" duration=1.124372ms
kafka | [2024-04-18 23:14:55,591] INFO Created log for partition __consumer_offsets-5 in /var/lib/kafka/data/__consumer_offsets-5 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
policy-db-migrator | --------------
policy-pap | [2024-04-18T23:15:22.001+00:00|INFO|SessionData|http-nio-6969-exec-6] creating DB group testGroup
grafana | logger=migrator t=2024-04-18T23:14:25.007986634Z level=info msg="Executing migration" id="add permission action scope role_id index"
kafka | [2024-04-18 23:14:55,591] INFO [Partition __consumer_offsets-5 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-5 (kafka.cluster.Partition)
policy-db-migrator | CREATE TABLE IF NOT EXISTS statistics_sequence (SEQ_NAME VARCHAR(50) NOT NULL, SEQ_COUNT DECIMAL(38) DEFAULT NULL, PRIMARY KEY PK_SEQUENCE (SEQ_NAME))
policy-pap | [2024-04-18T23:15:22.529+00:00|INFO|SessionData|http-nio-6969-exec-10] cache group testGroup
grafana | logger=migrator t=2024-04-18T23:14:25.00918747Z level=info msg="Migration successfully executed" id="add permission action scope role_id index" duration=1.200206ms
kafka | [2024-04-18 23:14:55,591] INFO [Partition __consumer_offsets-5 broker=1] Log loaded for partition __consumer_offsets-5 with initial high watermark 0 (kafka.cluster.Partition)
policy-db-migrator | --------------
policy-pap | [2024-04-18T23:15:22.765+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-10] Registering a deploy for policy onap.restart.tca 1.0.0
grafana | logger=migrator t=2024-04-18T23:14:25.012523843Z level=info msg="Executing migration" id="remove permission role_id action scope index"
kafka | [2024-04-18 23:14:55,591] INFO [Broker id=1] Leader __consumer_offsets-5 with topic id Some(Ri5cls-BQlq9q6kFJBomtA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
policy-db-migrator |
policy-pap | [2024-04-18T23:15:22.865+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-10] Registering a deploy for policy operational.apex.decisionMaker 1.0.0
grafana | logger=migrator t=2024-04-18T23:14:25.013678936Z level=info msg="Migration successfully executed" id="remove permission role_id action scope index" duration=1.152883ms
kafka | [2024-04-18 23:14:55,599] INFO [LogLoader partition=__consumer_offsets-20, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
policy-db-migrator | --------------
policy-pap | [2024-04-18T23:15:22.866+00:00|INFO|SessionData|http-nio-6969-exec-10] update cached group testGroup
grafana | logger=migrator t=2024-04-18T23:14:25.022198293Z level=info msg="Executing migration" id="create query_history table v1"
kafka | [2024-04-18 23:14:55,600] INFO Created log for partition __consumer_offsets-20 in /var/lib/kafka/data/__consumer_offsets-20 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
policy-db-migrator | INSERT INTO statistics_sequence(SEQ_NAME, SEQ_COUNT) VALUES('SEQ_GEN', (SELECT IFNULL(max(id),0) FROM pdpstatistics))
policy-pap | [2024-04-18T23:15:22.866+00:00|INFO|SessionData|http-nio-6969-exec-10] updating DB group testGroup
grafana | logger=migrator t=2024-04-18T23:14:25.02322895Z level=info msg="Migration successfully executed" id="create query_history table v1" duration=1.030527ms
kafka | [2024-04-18 23:14:55,600] INFO [Partition __consumer_offsets-20 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-20 (kafka.cluster.Partition)
policy-db-migrator | --------------
policy-pap | [2024-04-18T23:15:22.880+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-10] sending audit records to database: [PolicyAudit(auditId=null, pdpGroup=testGroup, pdpType=pdpTypeA, policy=onap.restart.tca 1.0.0, action=DEPLOYMENT, timestamp=2024-04-18T23:15:22Z, user=policyadmin), PolicyAudit(auditId=null, pdpGroup=testGroup, pdpType=pdpTypeC, policy=operational.apex.decisionMaker 1.0.0, action=DEPLOYMENT, timestamp=2024-04-18T23:15:22Z, user=policyadmin)]
grafana | logger=migrator t=2024-04-18T23:14:25.031090431Z level=info msg="Executing migration" id="add index query_history.org_id-created_by-datasource_uid"
kafka | [2024-04-18 23:14:55,600] INFO [Partition __consumer_offsets-20 broker=1] Log loaded for partition __consumer_offsets-20 with initial high watermark 0 (kafka.cluster.Partition)
policy-db-migrator |
policy-pap | [2024-04-18T23:15:23.553+00:00|INFO|SessionData|http-nio-6969-exec-4] cache group testGroup
grafana | logger=migrator t=2024-04-18T23:14:25.033051608Z level=info msg="Migration successfully executed" id="add index query_history.org_id-created_by-datasource_uid" duration=1.960517ms
kafka | [2024-04-18 23:14:55,600] INFO [Broker id=1] Leader __consumer_offsets-20 with topic id Some(Ri5cls-BQlq9q6kFJBomtA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
policy-db-migrator | --------------
policy-db-migrator | TRUNCATE TABLE sequence
grafana | logger=migrator t=2024-04-18T23:14:25.036458655Z level=info msg="Executing migration" id="alter table query_history alter column created_by type to bigint"
kafka | [2024-04-18 23:14:55,608] INFO [LogLoader partition=__consumer_offsets-27, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
policy-pap | [2024-04-18T23:15:23.554+00:00|INFO|PdpGroupDeleteProvider|http-nio-6969-exec-4] remove policy onap.restart.tca 1.0.0 from subgroup testGroup pdpTypeA count=0
policy-db-migrator | --------------
grafana | logger=migrator t=2024-04-18T23:14:25.036643375Z level=info msg="Migration successfully executed" id="alter table query_history alter column created_by type to bigint" duration=187.34µs
kafka | [2024-04-18 23:14:55,609] INFO Created log for partition __consumer_offsets-27 in /var/lib/kafka/data/__consumer_offsets-27 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
policy-pap | [2024-04-18T23:15:23.554+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-4] Registering an undeploy for policy onap.restart.tca 1.0.0
policy-db-migrator |
grafana | logger=migrator t=2024-04-18T23:14:25.041885973Z level=info msg="Executing migration" id="rbac disabled migrator"
kafka | [2024-04-18 23:14:55,609] INFO [Partition __consumer_offsets-27 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-27 (kafka.cluster.Partition)
policy-pap | [2024-04-18T23:15:23.554+00:00|INFO|SessionData|http-nio-6969-exec-4] update cached group testGroup
policy-db-migrator |
grafana | logger=migrator t=2024-04-18T23:14:25.041959567Z level=info msg="Migration successfully executed" id="rbac disabled migrator" duration=73.564µs
kafka | [2024-04-18 23:14:55,609] INFO [Partition __consumer_offsets-27 broker=1] Log loaded for partition __consumer_offsets-27 with initial high watermark 0 (kafka.cluster.Partition)
policy-pap | [2024-04-18T23:15:23.555+00:00|INFO|SessionData|http-nio-6969-exec-4] updating DB group testGroup
policy-db-migrator | > upgrade 0100-pdpstatistics.sql
grafana | logger=migrator t=2024-04-18T23:14:25.046100744Z level=info msg="Executing migration" id="teams permissions migration"
kafka | [2024-04-18 23:14:55,610] INFO [Broker id=1] Leader __consumer_offsets-27 with topic id Some(Ri5cls-BQlq9q6kFJBomtA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
policy-pap | [2024-04-18T23:15:23.565+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-4] sending audit records to database: [PolicyAudit(auditId=null, pdpGroup=testGroup, pdpType=pdpTypeA, policy=onap.restart.tca 1.0.0, action=UNDEPLOYMENT, timestamp=2024-04-18T23:15:23Z, user=policyadmin)]
policy-db-migrator | --------------
grafana | logger=migrator t=2024-04-18T23:14:25.046686326Z level=info msg="Migration successfully executed" id="teams permissions migration" duration=584.762µs
kafka | [2024-04-18 23:14:55,616] INFO [LogLoader partition=__consumer_offsets-42, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
policy-pap | [2024-04-18T23:15:23.880+00:00|INFO|SessionData|http-nio-6969-exec-5] cache group defaultGroup
policy-db-migrator | DROP INDEX IDXTSIDX1 ON pdpstatistics
grafana | logger=migrator t=2024-04-18T23:14:25.050745299Z level=info msg="Executing migration" id="dashboard permissions"
kafka | [2024-04-18 23:14:55,616] INFO Created log for partition __consumer_offsets-42 in /var/lib/kafka/data/__consumer_offsets-42 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
policy-pap | [2024-04-18T23:15:23.880+00:00|INFO|SessionData|http-nio-6969-exec-5] cache group testGroup
policy-db-migrator | --------------
grafana | logger=migrator t=2024-04-18T23:14:25.051465658Z level=info msg="Migration successfully executed" id="dashboard permissions" duration=721.59µs
kafka | [2024-04-18 23:14:55,616] INFO [Partition __consumer_offsets-42 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-42 (kafka.cluster.Partition)
policy-pap | [2024-04-18T23:15:23.880+00:00|INFO|PdpGroupDeleteProvider|http-nio-6969-exec-5] remove policy operational.apex.decisionMaker 1.0.0 from subgroup testGroup pdpTypeC count=0
policy-db-migrator |
grafana | logger=migrator t=2024-04-18T23:14:25.054711396Z level=info msg="Executing migration" id="dashboard permissions uid scopes"
kafka | [2024-04-18 23:14:55,616] INFO [Partition __consumer_offsets-42 broker=1] Log loaded for partition __consumer_offsets-42 with initial high watermark 0 (kafka.cluster.Partition)
policy-pap | [2024-04-18T23:15:23.880+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-5] Registering an undeploy for policy operational.apex.decisionMaker 1.0.0
policy-db-migrator | --------------
grafana | logger=migrator t=2024-04-18T23:14:25.055410434Z level=info msg="Migration successfully executed" id="dashboard permissions uid scopes" duration=698.938µs
kafka | [2024-04-18 23:14:55,616] INFO [Broker id=1] Leader __consumer_offsets-42 with topic id Some(Ri5cls-BQlq9q6kFJBomtA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
policy-pap | [2024-04-18T23:15:23.880+00:00|INFO|SessionData|http-nio-6969-exec-5] update cached group testGroup
policy-db-migrator | DROP TABLE pdpstatistics
grafana | logger=migrator t=2024-04-18T23:14:25.060011157Z level=info msg="Executing migration" id="drop managed folder create actions"
kafka | [2024-04-18 23:14:55,625] INFO [LogLoader partition=__consumer_offsets-12, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
policy-pap | [2024-04-18T23:15:23.880+00:00|INFO|SessionData|http-nio-6969-exec-5] updating DB group testGroup
policy-db-migrator | --------------
grafana | logger=migrator t=2024-04-18T23:14:25.060296112Z level=info msg="Migration successfully executed" id="drop managed folder create actions" duration=285.036µs
kafka | [2024-04-18 23:14:55,626] INFO Created log for partition __consumer_offsets-12 in /var/lib/kafka/data/__consumer_offsets-12 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
policy-db-migrator |
policy-pap | [2024-04-18T23:15:23.895+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-5] sending audit records to database: [PolicyAudit(auditId=null, pdpGroup=testGroup, pdpType=pdpTypeC, policy=operational.apex.decisionMaker 1.0.0, action=UNDEPLOYMENT, timestamp=2024-04-18T23:15:23Z, user=policyadmin)]
grafana | logger=migrator t=2024-04-18T23:14:25.063613624Z level=info msg="Executing migration" id="alerting notification permissions"
kafka | [2024-04-18 23:14:55,626] INFO [Partition __consumer_offsets-12 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-12 (kafka.cluster.Partition)
policy-db-migrator |
policy-pap | [2024-04-18T23:15:44.463+00:00|INFO|SessionData|http-nio-6969-exec-3] cache group testGroup
grafana | logger=migrator t=2024-04-18T23:14:25.064199166Z level=info msg="Migration successfully executed" id="alerting notification permissions" duration=585.692µs
kafka | [2024-04-18 23:14:55,627] INFO [Partition __consumer_offsets-12 broker=1] Log loaded for partition __consumer_offsets-12 with initial high watermark 0 (kafka.cluster.Partition)
policy-db-migrator | > upgrade 0110-jpapdpstatistics_enginestats.sql
policy-pap | [2024-04-18T23:15:44.465+00:00|INFO|SessionData|http-nio-6969-exec-3] deleting DB group testGroup
grafana | logger=migrator t=2024-04-18T23:14:25.073090184Z level=info msg="Executing migration" id="create query_history_star table v1"
kafka | [2024-04-18 23:14:55,627] INFO [Broker id=1] Leader __consumer_offsets-12 with topic id Some(Ri5cls-BQlq9q6kFJBomtA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
policy-db-migrator | --------------
policy-pap | [2024-04-18T23:15:46.468+00:00|INFO|TimerManager|Thread-9] update timer discarded (expired) Timer [name=c25a74a6-c62e-4253-9df7-3b45bb0657f1, expireMs=1713482146468]
grafana | logger=migrator t=2024-04-18T23:14:25.073944801Z level=info msg="Migration successfully executed" id="create query_history_star table v1" duration=855.477µs
kafka | [2024-04-18 23:14:55,636] INFO [LogLoader partition=__consumer_offsets-21, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
policy-db-migrator | DROP TABLE jpapdpstatistics_enginestats
policy-pap | [2024-04-18T23:15:46.568+00:00|INFO|TimerManager|Thread-10] state-change timer discarded (expired) Timer [name=35f314d5-4e39-45ee-b5dd-8c7ab9415862, expireMs=1713482146568]
grafana | logger=migrator t=2024-04-18T23:14:25.082918023Z level=info msg="Executing migration" id="add index query_history.user_id-query_uid"
kafka | [2024-04-18 23:14:55,641] INFO Created log for partition __consumer_offsets-21 in /var/lib/kafka/data/__consumer_offsets-21 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
policy-db-migrator | --------------
grafana | logger=migrator t=2024-04-18T23:14:25.084135439Z level=info msg="Migration successfully executed" id="add index query_history.user_id-query_uid" duration=1.217036ms
kafka | [2024-04-18 23:14:55,641] INFO [Partition __consumer_offsets-21 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-21 (kafka.cluster.Partition)
policy-db-migrator |
grafana | logger=migrator t=2024-04-18T23:14:25.088731211Z level=info msg="Executing migration" id="add column org_id in query_history_star"
kafka | [2024-04-18 23:14:55,641] INFO [Partition __consumer_offsets-21 broker=1] Log loaded for partition __consumer_offsets-21 with initial high watermark 0 (kafka.cluster.Partition)
policy-db-migrator |
grafana | logger=migrator t=2024-04-18T23:14:25.094639785Z level=info msg="Migration successfully executed" id="add column org_id in query_history_star" duration=5.908954ms
kafka | [2024-04-18 23:14:55,641] INFO [Broker id=1] Leader __consumer_offsets-21 with topic id Some(Ri5cls-BQlq9q6kFJBomtA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
policy-db-migrator | > upgrade 0120-statistics_sequence.sql
grafana | logger=migrator t=2024-04-18T23:14:25.097985159Z level=info msg="Executing migration" id="alter table query_history_star_mig column user_id type to bigint"
kafka | [2024-04-18 23:14:55,654] INFO [LogLoader partition=__consumer_offsets-36, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
policy-db-migrator | --------------
grafana | logger=migrator t=2024-04-18T23:14:25.098088044Z level=info msg="Migration successfully executed" id="alter table query_history_star_mig column user_id type to bigint" duration=103.095µs
kafka | [2024-04-18 23:14:55,655] INFO Created log for partition __consumer_offsets-36 in /var/lib/kafka/data/__consumer_offsets-36 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
policy-db-migrator | DROP TABLE statistics_sequence
grafana | logger=migrator t=2024-04-18T23:14:25.101447209Z level=info msg="Executing migration" id="create correlation table v1"
kafka | [2024-04-18 23:14:55,655] INFO [Partition __consumer_offsets-36 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-36 (kafka.cluster.Partition)
policy-db-migrator | --------------
grafana | logger=migrator t=2024-04-18T23:14:25.102552389Z level=info msg="Migration successfully executed" id="create correlation table v1" duration=1.10441ms
kafka | [2024-04-18 23:14:55,655] INFO [Partition __consumer_offsets-36 broker=1] Log loaded for partition __consumer_offsets-36 with initial high watermark 0 (kafka.cluster.Partition)
policy-db-migrator |
grafana | logger=migrator t=2024-04-18T23:14:25.109295349Z level=info msg="Executing migration" id="add index correlations.uid"
kafka | [2024-04-18 23:14:55,656] INFO [Broker id=1] Leader __consumer_offsets-36 with topic id Some(Ri5cls-BQlq9q6kFJBomtA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
policy-db-migrator | policyadmin: OK: upgrade (1300)
grafana | logger=migrator t=2024-04-18T23:14:25.110628572Z level=info msg="Migration successfully executed" id="add index correlations.uid" duration=1.332963ms
kafka | [2024-04-18 23:14:55,666] INFO [LogLoader partition=__consumer_offsets-6, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
policy-db-migrator | name version
grafana | logger=migrator t=2024-04-18T23:14:25.113772445Z level=info msg="Executing migration" id="add index correlations.source_uid"
kafka | [2024-04-18 23:14:55,667] INFO Created log for partition __consumer_offsets-6 in /var/lib/kafka/data/__consumer_offsets-6 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
policy-db-migrator | policyadmin 1300
kafka | [2024-04-18 23:14:55,667] INFO [Partition __consumer_offsets-6 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-6 (kafka.cluster.Partition)
grafana | logger=migrator t=2024-04-18T23:14:25.115048925Z level=info msg="Migration successfully executed" id="add index correlations.source_uid" duration=1.27612ms
policy-db-migrator | ID script operation from_version to_version tag success atTime
kafka | [2024-04-18 23:14:55,667] INFO [Partition __consumer_offsets-6 broker=1] Log loaded for partition __consumer_offsets-6 with initial high watermark 0 (kafka.cluster.Partition)
grafana | logger=migrator t=2024-04-18T23:14:25.11861712Z level=info msg="Executing migration" id="add correlation config column"
policy-db-migrator | 1 0100-jpapdpgroup_properties.sql upgrade 0 0800 1804242314230800u 1 2024-04-18 23:14:23
kafka | [2024-04-18 23:14:55,667] INFO [Broker id=1] Leader __consumer_offsets-6 with topic id Some(Ri5cls-BQlq9q6kFJBomtA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
grafana | logger=migrator t=2024-04-18T23:14:25.127789233Z level=info msg="Migration successfully executed" id="add correlation config column" duration=9.171613ms
policy-db-migrator | 2 0110-jpapdpstatistics_enginestats.sql upgrade 0 0800 1804242314230800u 1 2024-04-18 23:14:23
kafka | [2024-04-18 23:14:55,681] INFO [LogLoader partition=__consumer_offsets-43, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
grafana | logger=migrator t=2024-04-18T23:14:25.132217726Z level=info msg="Executing migration" id="drop index IDX_correlation_uid - v1"
policy-db-migrator | 3 0120-jpapdpsubgroup_policies.sql upgrade 0 0800 1804242314230800u 1 2024-04-18 23:14:23
kafka | [2024-04-18 23:14:55,681] INFO Created log for partition __consumer_offsets-43 in /var/lib/kafka/data/__consumer_offsets-43 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
grafana | logger=migrator t=2024-04-18T23:14:25.133445003Z level=info msg="Migration successfully executed" id="drop index IDX_correlation_uid - v1" duration=1.226927ms
policy-db-migrator | 4 0130-jpapdpsubgroup_properties.sql upgrade 0 0800 1804242314230800u 1 2024-04-18 23:14:23
kafka | [2024-04-18 23:14:55,681] INFO [Partition __consumer_offsets-43 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-43 (kafka.cluster.Partition)
grafana | logger=migrator t=2024-04-18T23:14:25.143629202Z level=info msg="Executing migration" id="drop index IDX_correlation_source_uid - v1"
policy-db-migrator | 5 0140-jpapdpsubgroup_supportedpolicytypes.sql upgrade 0 0800 1804242314230800u 1 2024-04-18 23:14:24
kafka | [2024-04-18 23:14:55,681] INFO [Partition __consumer_offsets-43 broker=1] Log loaded for partition __consumer_offsets-43 with initial high watermark 0 (kafka.cluster.Partition)
grafana | logger=migrator t=2024-04-18T23:14:25.145753858Z level=info msg="Migration successfully executed" id="drop index IDX_correlation_source_uid - v1" duration=2.129867ms
policy-db-migrator | 6 0150-jpatoscacapabilityassignment_attributes.sql upgrade 0 0800 1804242314230800u 1 2024-04-18 23:14:24
kafka | [2024-04-18 23:14:55,681] INFO [Broker id=1] Leader __consumer_offsets-43 with topic id Some(Ri5cls-BQlq9q6kFJBomtA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
grafana | logger=migrator t=2024-04-18T23:14:25.151768508Z level=info msg="Executing migration" id="Rename table correlation to correlation_tmp_qwerty - v1"
policy-db-migrator | 7 0160-jpatoscacapabilityassignment_metadata.sql upgrade 0 0800 1804242314230800u 1 2024-04-18 23:14:24
kafka | [2024-04-18 23:14:55,693] INFO [LogLoader partition=__consumer_offsets-13, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
grafana | logger=migrator t=2024-04-18T23:14:25.179302198Z level=info msg="Migration successfully executed" id="Rename table correlation to correlation_tmp_qwerty - v1" duration=27.532039ms
policy-db-migrator | 8 0170-jpatoscacapabilityassignment_occurrences.sql upgrade 0 0800 1804242314230800u 1 2024-04-18 23:14:24
kafka | [2024-04-18 23:14:55,694] INFO Created log for partition __consumer_offsets-13 in /var/lib/kafka/data/__consumer_offsets-13 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
grafana | logger=migrator t=2024-04-18T23:14:25.183800084Z level=info msg="Executing migration" id="create correlation v2"
policy-db-migrator | 9 0180-jpatoscacapabilityassignment_properties.sql upgrade 0 0800 1804242314230800u 1 2024-04-18 23:14:24
kafka | [2024-04-18 23:14:55,694] INFO [Partition __consumer_offsets-13 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-13 (kafka.cluster.Partition)
grafana | logger=migrator t=2024-04-18T23:14:25.184740826Z level=info msg="Migration successfully executed" id="create correlation v2" duration=939.592µs
policy-db-migrator | 10 0190-jpatoscacapabilitytype_metadata.sql upgrade 0 0800 1804242314230800u 1 2024-04-18 23:14:24
kafka | [2024-04-18 23:14:55,694] INFO [Partition __consumer_offsets-13 broker=1] Log loaded for partition __consumer_offsets-13 with initial high watermark 0 (kafka.cluster.Partition)
grafana | logger=migrator t=2024-04-18T23:14:25.193160218Z level=info msg="Executing migration" id="create index IDX_correlation_uid - v2"
policy-db-migrator | 11 0200-jpatoscacapabilitytype_properties.sql upgrade 0 0800 1804242314230800u 1 2024-04-18 23:14:24
kafka | [2024-04-18 23:14:55,694] INFO [Broker id=1] Leader __consumer_offsets-13 with topic id Some(Ri5cls-BQlq9q6kFJBomtA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
grafana | logger=migrator t=2024-04-18T23:14:25.194437718Z level=info msg="Migration successfully executed" id="create index IDX_correlation_uid - v2" duration=1.277281ms
policy-db-migrator | 12 0210-jpatoscadatatype_constraints.sql upgrade 0 0800 1804242314230800u 1 2024-04-18 23:14:24
kafka | [2024-04-18 23:14:55,703] INFO [LogLoader partition=__consumer_offsets-28, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
grafana | logger=migrator t=2024-04-18T23:14:25.197791652Z level=info msg="Executing migration" id="create index IDX_correlation_source_uid - v2"
policy-db-migrator | 13 0220-jpatoscadatatype_metadata.sql upgrade 0 0800 1804242314230800u 1 2024-04-18 23:14:24
kafka | [2024-04-18 23:14:55,704] INFO Created log for partition __consumer_offsets-28 in /var/lib/kafka/data/__consumer_offsets-28 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
grafana | logger=migrator t=2024-04-18T23:14:25.199825663Z level=info msg="Migration successfully executed" id="create index IDX_correlation_source_uid - v2" duration=2.033632ms
policy-db-migrator | 14 0230-jpatoscadatatype_properties.sql upgrade 0 0800 1804242314230800u 1 2024-04-18 23:14:24
kafka | [2024-04-18 23:14:55,704] INFO [Partition __consumer_offsets-28 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-28 (kafka.cluster.Partition)
grafana | logger=migrator t=2024-04-18T23:14:25.20470113Z level=info msg="Executing migration" id="create index IDX_correlation_org_id - v2"
policy-db-migrator | 15 0240-jpatoscanodetemplate_metadata.sql upgrade 0 0800 1804242314230800u 1 2024-04-18 23:14:24
kafka | [2024-04-18 23:14:55,704] INFO [Partition __consumer_offsets-28 broker=1] Log loaded for partition __consumer_offsets-28 with initial high watermark 0 (kafka.cluster.Partition)
grafana | logger=migrator t=2024-04-18T23:14:25.205943339Z
level=info msg="Migration successfully executed" id="create index IDX_correlation_org_id - v2" duration=1.240578ms policy-db-migrator | 16 0250-jpatoscanodetemplate_properties.sql upgrade 0 0800 1804242314230800u 1 2024-04-18 23:14:24 kafka | [2024-04-18 23:14:55,704] INFO [Broker id=1] Leader __consumer_offsets-28 with topic id Some(Ri5cls-BQlq9q6kFJBomtA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) grafana | logger=migrator t=2024-04-18T23:14:25.209471482Z level=info msg="Executing migration" id="copy correlation v1 to v2" policy-db-migrator | 17 0260-jpatoscanodetype_metadata.sql upgrade 0 0800 1804242314230800u 1 2024-04-18 23:14:24 kafka | [2024-04-18 23:14:55,712] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-3 (state.change.logger) grafana | logger=migrator t=2024-04-18T23:14:25.209862193Z level=info msg="Migration successfully executed" id="copy correlation v1 to v2" duration=390.131µs policy-db-migrator | 18 0270-jpatoscanodetype_properties.sql upgrade 0 0800 1804242314230800u 1 2024-04-18 23:14:24 kafka | [2024-04-18 23:14:55,712] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-18 (state.change.logger) grafana | logger=migrator t=2024-04-18T23:14:25.213318503Z level=info msg="Executing migration" id="drop correlation_tmp_qwerty" policy-db-migrator | 19 0280-jpatoscapolicy_metadata.sql upgrade 0 0800 1804242314230800u 1 2024-04-18 23:14:24 kafka | [2024-04-18 23:14:55,712] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-41 (state.change.logger) grafana | 
logger=migrator t=2024-04-18T23:14:25.214227193Z level=info msg="Migration successfully executed" id="drop correlation_tmp_qwerty" duration=908.39µs policy-db-migrator | 20 0290-jpatoscapolicy_properties.sql upgrade 0 0800 1804242314230800u 1 2024-04-18 23:14:24 kafka | [2024-04-18 23:14:55,712] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-10 (state.change.logger) grafana | logger=migrator t=2024-04-18T23:14:25.218516168Z level=info msg="Executing migration" id="add provisioning column" policy-db-migrator | 21 0300-jpatoscapolicy_targets.sql upgrade 0 0800 1804242314230800u 1 2024-04-18 23:14:24 kafka | [2024-04-18 23:14:55,712] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-33 (state.change.logger) grafana | logger=migrator t=2024-04-18T23:14:25.226988403Z level=info msg="Migration successfully executed" id="add provisioning column" duration=8.469244ms policy-db-migrator | 22 0310-jpatoscapolicytype_metadata.sql upgrade 0 0800 1804242314230800u 1 2024-04-18 23:14:24 kafka | [2024-04-18 23:14:55,712] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-48 (state.change.logger) grafana | logger=migrator t=2024-04-18T23:14:25.234466823Z level=info msg="Executing migration" id="create entity_events table" policy-db-migrator | 23 0320-jpatoscapolicytype_properties.sql upgrade 0 0800 1804242314230800u 1 2024-04-18 23:14:24 kafka | [2024-04-18 23:14:55,712] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-19 (state.change.logger) grafana | logger=migrator t=2024-04-18T23:14:25.23570018Z level=info msg="Migration successfully executed" 
id="create entity_events table" duration=1.228438ms policy-db-migrator | 24 0330-jpatoscapolicytype_targets.sql upgrade 0 0800 1804242314230800u 1 2024-04-18 23:14:24 kafka | [2024-04-18 23:14:55,712] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-34 (state.change.logger) grafana | logger=migrator t=2024-04-18T23:14:25.239375242Z level=info msg="Executing migration" id="create dashboard public config v1" policy-db-migrator | 25 0340-jpatoscapolicytype_triggers.sql upgrade 0 0800 1804242314230800u 1 2024-04-18 23:14:24 kafka | [2024-04-18 23:14:55,712] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-4 (state.change.logger) grafana | logger=migrator t=2024-04-18T23:14:25.240537445Z level=info msg="Migration successfully executed" id="create dashboard public config v1" duration=1.161343ms policy-db-migrator | 26 0350-jpatoscaproperty_constraints.sql upgrade 0 0800 1804242314230800u 1 2024-04-18 23:14:24 kafka | [2024-04-18 23:14:55,712] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-11 (state.change.logger) grafana | logger=migrator t=2024-04-18T23:14:25.24499915Z level=info msg="Executing migration" id="drop index UQE_dashboard_public_config_uid - v1" policy-db-migrator | 27 0360-jpatoscaproperty_metadata.sql upgrade 0 0800 1804242314230800u 1 2024-04-18 23:14:24 kafka | [2024-04-18 23:14:55,712] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-26 (state.change.logger) grafana | logger=migrator t=2024-04-18T23:14:25.2455373Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop index 
UQE_dashboard_public_config_uid - v1" policy-db-migrator | 28 0370-jpatoscarelationshiptype_metadata.sql upgrade 0 0800 1804242314230800u 1 2024-04-18 23:14:25 kafka | [2024-04-18 23:14:55,712] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-49 (state.change.logger) grafana | logger=migrator t=2024-04-18T23:14:25.248808819Z level=info msg="Executing migration" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v1" policy-db-migrator | 29 0380-jpatoscarelationshiptype_properties.sql upgrade 0 0800 1804242314230800u 1 2024-04-18 23:14:25 kafka | [2024-04-18 23:14:55,712] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-39 (state.change.logger) grafana | logger=migrator t=2024-04-18T23:14:25.249397081Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v1" policy-db-migrator | 30 0390-jpatoscarequirement_metadata.sql upgrade 0 0800 1804242314230800u 1 2024-04-18 23:14:25 kafka | [2024-04-18 23:14:55,712] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-9 (state.change.logger) grafana | logger=migrator t=2024-04-18T23:14:25.253733249Z level=info msg="Executing migration" id="Drop old dashboard public config table" policy-db-migrator | 31 0400-jpatoscarequirement_occurrences.sql upgrade 0 0800 1804242314230800u 1 2024-04-18 23:14:25 kafka | [2024-04-18 23:14:55,712] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-24 (state.change.logger) grafana | logger=migrator t=2024-04-18T23:14:25.255220551Z level=info 
msg="Migration successfully executed" id="Drop old dashboard public config table" duration=1.488581ms policy-db-migrator | 32 0410-jpatoscarequirement_properties.sql upgrade 0 0800 1804242314230800u 1 2024-04-18 23:14:25 kafka | [2024-04-18 23:14:55,712] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-31 (state.change.logger) grafana | logger=migrator t=2024-04-18T23:14:25.259916758Z level=info msg="Executing migration" id="recreate dashboard public config v1" policy-db-migrator | 33 0420-jpatoscaservicetemplate_metadata.sql upgrade 0 0800 1804242314230800u 1 2024-04-18 23:14:25 kafka | [2024-04-18 23:14:55,712] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-46 (state.change.logger) grafana | logger=migrator t=2024-04-18T23:14:25.261293124Z level=info msg="Migration successfully executed" id="recreate dashboard public config v1" duration=1.375825ms policy-db-migrator | 34 0430-jpatoscatopologytemplate_inputs.sql upgrade 0 0800 1804242314230800u 1 2024-04-18 23:14:25 kafka | [2024-04-18 23:14:55,712] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-1 (state.change.logger) grafana | logger=migrator t=2024-04-18T23:14:25.264922143Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_uid - v1" policy-db-migrator | 35 0440-pdpgroup_pdpsubgroup.sql upgrade 0 0800 1804242314230800u 1 2024-04-18 23:14:25 kafka | [2024-04-18 23:14:55,712] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-16 (state.change.logger) grafana | logger=migrator t=2024-04-18T23:14:25.266322849Z level=info msg="Migration successfully 
executed" id="create index UQE_dashboard_public_config_uid - v1" duration=1.400367ms policy-db-migrator | 36 0450-pdpgroup.sql upgrade 0 0800 1804242314230800u 1 2024-04-18 23:14:25 kafka | [2024-04-18 23:14:55,712] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-2 (state.change.logger) grafana | logger=migrator t=2024-04-18T23:14:25.274080815Z level=info msg="Executing migration" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v1" policy-db-migrator | 37 0460-pdppolicystatus.sql upgrade 0 0800 1804242314230800u 1 2024-04-18 23:14:25 kafka | [2024-04-18 23:14:55,712] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-25 (state.change.logger) grafana | logger=migrator t=2024-04-18T23:14:25.275358925Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v1" duration=1.27787ms policy-db-migrator | 38 0470-pdp.sql upgrade 0 0800 1804242314230800u 1 2024-04-18 23:14:25 kafka | [2024-04-18 23:14:55,712] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-40 (state.change.logger) grafana | logger=migrator t=2024-04-18T23:14:25.282460924Z level=info msg="Executing migration" id="drop index UQE_dashboard_public_config_uid - v2" policy-db-migrator | 39 0480-pdpstatistics.sql upgrade 0 0800 1804242314230800u 1 2024-04-18 23:14:25 kafka | [2024-04-18 23:14:55,712] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-47 (state.change.logger) grafana | logger=migrator t=2024-04-18T23:14:25.284378199Z level=info msg="Migration successfully executed" id="drop index 
UQE_dashboard_public_config_uid - v2" duration=1.913935ms kafka | [2024-04-18 23:14:55,712] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-17 (state.change.logger) policy-db-migrator | 40 0490-pdpsubgroup_pdp.sql upgrade 0 0800 1804242314230800u 1 2024-04-18 23:14:25 grafana | logger=migrator t=2024-04-18T23:14:25.288238731Z level=info msg="Executing migration" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v2" kafka | [2024-04-18 23:14:55,712] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-32 (state.change.logger) policy-db-migrator | 41 0500-pdpsubgroup.sql upgrade 0 0800 1804242314230800u 1 2024-04-18 23:14:25 grafana | logger=migrator t=2024-04-18T23:14:25.289387844Z level=info msg="Migration successfully executed" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v2" duration=1.149033ms policy-db-migrator | 42 0510-toscacapabilityassignment.sql upgrade 0 0800 1804242314230800u 1 2024-04-18 23:14:25 kafka | [2024-04-18 23:14:55,712] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-37 (state.change.logger) grafana | logger=migrator t=2024-04-18T23:14:25.294097332Z level=info msg="Executing migration" id="Drop public config table" policy-db-migrator | 43 0520-toscacapabilityassignments.sql upgrade 0 0800 1804242314230800u 1 2024-04-18 23:14:25 kafka | [2024-04-18 23:14:55,712] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-7 (state.change.logger) grafana | logger=migrator t=2024-04-18T23:14:25.295381533Z level=info msg="Migration successfully executed" id="Drop public config table" 
duration=1.28181ms policy-db-migrator | 44 0530-toscacapabilityassignments_toscacapabilityassignment.sql upgrade 0 0800 1804242314230800u 1 2024-04-18 23:14:25 kafka | [2024-04-18 23:14:55,712] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-22 (state.change.logger) grafana | logger=migrator t=2024-04-18T23:14:25.299515879Z level=info msg="Executing migration" id="Recreate dashboard public config v2" policy-db-migrator | 45 0540-toscacapabilitytype.sql upgrade 0 0800 1804242314230800u 1 2024-04-18 23:14:25 kafka | [2024-04-18 23:14:55,712] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-29 (state.change.logger) grafana | logger=migrator t=2024-04-18T23:14:25.301620925Z level=info msg="Migration successfully executed" id="Recreate dashboard public config v2" duration=2.106226ms policy-db-migrator | 46 0550-toscacapabilitytypes.sql upgrade 0 0800 1804242314230800u 1 2024-04-18 23:14:25 kafka | [2024-04-18 23:14:55,712] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-44 (state.change.logger) grafana | logger=migrator t=2024-04-18T23:14:25.305116577Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_uid - v2" policy-db-migrator | 47 0560-toscacapabilitytypes_toscacapabilitytype.sql upgrade 0 0800 1804242314230800u 1 2024-04-18 23:14:25 kafka | [2024-04-18 23:14:55,712] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-14 (state.change.logger) grafana | logger=migrator t=2024-04-18T23:14:25.306441419Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_uid - v2" 
duration=1.322873ms policy-db-migrator | 48 0570-toscadatatype.sql upgrade 0 0800 1804242314230800u 1 2024-04-18 23:14:25 kafka | [2024-04-18 23:14:55,712] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-23 (state.change.logger) grafana | logger=migrator t=2024-04-18T23:14:25.310785727Z level=info msg="Executing migration" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v2" policy-db-migrator | 49 0580-toscadatatypes.sql upgrade 0 0800 1804242314230800u 1 2024-04-18 23:14:25 kafka | [2024-04-18 23:14:55,712] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-38 (state.change.logger) policy-db-migrator | 50 0590-toscadatatypes_toscadatatype.sql upgrade 0 0800 1804242314230800u 1 2024-04-18 23:14:26 kafka | [2024-04-18 23:14:55,712] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-8 (state.change.logger) grafana | logger=migrator t=2024-04-18T23:14:25.312503822Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v2" duration=1.715474ms policy-db-migrator | 51 0600-toscanodetemplate.sql upgrade 0 0800 1804242314230800u 1 2024-04-18 23:14:26 kafka | [2024-04-18 23:14:55,712] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition policy-pdp-pap-0 (state.change.logger) grafana | logger=migrator t=2024-04-18T23:14:25.317099314Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_access_token - v2" policy-db-migrator | 52 0610-toscanodetemplates.sql upgrade 0 0800 1804242314230800u 1 2024-04-18 23:14:26 kafka | [2024-04-18 23:14:55,712] TRACE [Broker 
id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-45 (state.change.logger) kafka | [2024-04-18 23:14:55,712] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-15 (state.change.logger) policy-db-migrator | 53 0620-toscanodetemplates_toscanodetemplate.sql upgrade 0 0800 1804242314230800u 1 2024-04-18 23:14:26 kafka | [2024-04-18 23:14:55,712] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-30 (state.change.logger) grafana | logger=migrator t=2024-04-18T23:14:25.319100683Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_access_token - v2" duration=2.00116ms policy-db-migrator | 54 0630-toscanodetype.sql upgrade 0 0800 1804242314230800u 1 2024-04-18 23:14:26 kafka | [2024-04-18 23:14:55,712] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-0 (state.change.logger) grafana | logger=migrator t=2024-04-18T23:14:25.325010977Z level=info msg="Executing migration" id="Rename table dashboard_public_config to dashboard_public - v2" policy-db-migrator | 55 0640-toscanodetypes.sql upgrade 0 0800 1804242314230800u 1 2024-04-18 23:14:26 kafka | [2024-04-18 23:14:55,712] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-35 (state.change.logger) grafana | logger=migrator t=2024-04-18T23:14:25.350270292Z level=info msg="Migration successfully executed" id="Rename table dashboard_public_config to dashboard_public - v2" duration=25.260675ms policy-db-migrator | 56 0650-toscanodetypes_toscanodetype.sql upgrade 0 0800 
1804242314230800u 1 2024-04-18 23:14:26 kafka | [2024-04-18 23:14:55,712] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-5 (state.change.logger) grafana | logger=migrator t=2024-04-18T23:14:25.353725952Z level=info msg="Executing migration" id="add annotations_enabled column" policy-db-migrator | 57 0660-toscaparameter.sql upgrade 0 0800 1804242314230800u 1 2024-04-18 23:14:26 kafka | [2024-04-18 23:14:55,712] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-20 (state.change.logger) grafana | logger=migrator t=2024-04-18T23:14:25.362125343Z level=info msg="Migration successfully executed" id="add annotations_enabled column" duration=8.39744ms policy-db-migrator | 58 0670-toscapolicies.sql upgrade 0 0800 1804242314230800u 1 2024-04-18 23:14:26 kafka | [2024-04-18 23:14:55,712] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-27 (state.change.logger) grafana | logger=migrator t=2024-04-18T23:14:25.367603013Z level=info msg="Executing migration" id="add time_selection_enabled column" policy-db-migrator | 59 0680-toscapolicies_toscapolicy.sql upgrade 0 0800 1804242314230800u 1 2024-04-18 23:14:26 kafka | [2024-04-18 23:14:55,712] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-42 (state.change.logger) grafana | logger=migrator t=2024-04-18T23:14:25.374218486Z level=info msg="Migration successfully executed" id="add time_selection_enabled column" duration=6.613452ms policy-db-migrator | 60 0690-toscapolicy.sql upgrade 0 0800 1804242314230800u 1 2024-04-18 23:14:26 kafka | [2024-04-18 23:14:55,712] TRACE [Broker id=1] Completed 
LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-12 (state.change.logger) grafana | logger=migrator t=2024-04-18T23:14:25.377706657Z level=info msg="Executing migration" id="delete orphaned public dashboards" policy-db-migrator | 61 0700-toscapolicytype.sql upgrade 0 0800 1804242314230800u 1 2024-04-18 23:14:26 kafka | [2024-04-18 23:14:55,712] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-21 (state.change.logger) grafana | logger=migrator t=2024-04-18T23:14:25.378032225Z level=info msg="Migration successfully executed" id="delete orphaned public dashboards" duration=324.798µs policy-db-migrator | 62 0710-toscapolicytypes.sql upgrade 0 0800 1804242314230800u 1 2024-04-18 23:14:26 kafka | [2024-04-18 23:14:55,712] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-36 (state.change.logger) grafana | logger=migrator t=2024-04-18T23:14:25.383529216Z level=info msg="Executing migration" id="add share column" policy-db-migrator | 63 0720-toscapolicytypes_toscapolicytype.sql upgrade 0 0800 1804242314230800u 1 2024-04-18 23:14:26 kafka | [2024-04-18 23:14:55,712] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-6 (state.change.logger) grafana | logger=migrator t=2024-04-18T23:14:25.395368944Z level=info msg="Migration successfully executed" id="add share column" duration=11.839218ms policy-db-migrator | 64 0730-toscaproperty.sql upgrade 0 0800 1804242314230800u 1 2024-04-18 23:14:26 kafka | [2024-04-18 23:14:55,712] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition 
__consumer_offsets-43 (state.change.logger) grafana | logger=migrator t=2024-04-18T23:14:25.404707376Z level=info msg="Executing migration" id="backfill empty share column fields with default of public" policy-db-migrator | 65 0740-toscarelationshiptype.sql upgrade 0 0800 1804242314230800u 1 2024-04-18 23:14:26 kafka | [2024-04-18 23:14:55,712] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-13 (state.change.logger) grafana | logger=migrator t=2024-04-18T23:14:25.405009763Z level=info msg="Migration successfully executed" id="backfill empty share column fields with default of public" duration=301.987µs policy-db-migrator | 66 0750-toscarelationshiptypes.sql upgrade 0 0800 1804242314230800u 1 2024-04-18 23:14:26 kafka | [2024-04-18 23:14:55,712] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-28 (state.change.logger) kafka | [2024-04-18 23:14:55,717] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 3 in epoch 0 (kafka.coordinator.group.GroupCoordinator) grafana | logger=migrator t=2024-04-18T23:14:25.411398143Z level=info msg="Executing migration" id="create file table" policy-db-migrator | 67 0760-toscarelationshiptypes_toscarelationshiptype.sql upgrade 0 0800 1804242314230800u 1 2024-04-18 23:14:26 kafka | [2024-04-18 23:14:55,723] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-3 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) grafana | logger=migrator t=2024-04-18T23:14:25.41260994Z level=info msg="Migration successfully executed" id="create file table" duration=1.209126ms policy-db-migrator | 68 0770-toscarequirement.sql upgrade 0 0800 1804242314230800u 1 2024-04-18 23:14:26 kafka | [2024-04-18 23:14:55,724] INFO [GroupCoordinator 1]: Elected as 
the group coordinator for partition 18 in epoch 0 (kafka.coordinator.group.GroupCoordinator) grafana | logger=migrator t=2024-04-18T23:14:25.417233493Z level=info msg="Executing migration" id="file table idx: path natural pk" policy-db-migrator | 69 0780-toscarequirements.sql upgrade 0 0800 1804242314230800u 1 2024-04-18 23:14:26 kafka | [2024-04-18 23:14:55,725] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-18 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) grafana | logger=migrator t=2024-04-18T23:14:25.41881302Z level=info msg="Migration successfully executed" id="file table idx: path natural pk" duration=1.576217ms policy-db-migrator | 70 0790-toscarequirements_toscarequirement.sql upgrade 0 0800 1804242314230800u 1 2024-04-18 23:14:26 kafka | [2024-04-18 23:14:55,725] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 41 in epoch 0 (kafka.coordinator.group.GroupCoordinator) grafana | logger=migrator t=2024-04-18T23:14:25.423678517Z level=info msg="Executing migration" id="file table idx: parent_folder_path_hash fast folder retrieval" policy-db-migrator | 71 0800-toscaservicetemplate.sql upgrade 0 0800 1804242314230800u 1 2024-04-18 23:14:26 kafka | [2024-04-18 23:14:55,725] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-41 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) grafana | logger=migrator t=2024-04-18T23:14:25.426135111Z level=info msg="Migration successfully executed" id="file table idx: parent_folder_path_hash fast folder retrieval" duration=2.456155ms policy-db-migrator | 72 0810-toscatopologytemplate.sql upgrade 0 0800 1804242314230800u 1 2024-04-18 23:14:26 kafka | [2024-04-18 23:14:55,725] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 10 in epoch 0 (kafka.coordinator.group.GroupCoordinator) grafana | logger=migrator 
t=2024-04-18T23:14:25.429921719Z level=info msg="Executing migration" id="create file_meta table"
policy-db-migrator | 73 0820-toscatrigger.sql upgrade 0 0800 1804242314230800u 1 2024-04-18 23:14:26
kafka | [2024-04-18 23:14:55,725] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-10 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
grafana | logger=migrator t=2024-04-18T23:14:25.431163987Z level=info msg="Migration successfully executed" id="create file_meta table" duration=1.240968ms
policy-db-migrator | 74 0830-FK_ToscaNodeTemplate_capabilitiesName.sql upgrade 0 0800 1804242314230800u 1 2024-04-18 23:14:26
kafka | [2024-04-18 23:14:55,725] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 33 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
grafana | logger=migrator t=2024-04-18T23:14:25.435444152Z level=info msg="Executing migration" id="file table idx: path key"
policy-db-migrator | 75 0840-FK_ToscaNodeTemplate_requirementsName.sql upgrade 0 0800 1804242314230800u 1 2024-04-18 23:14:26
kafka | [2024-04-18 23:14:55,725] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-33 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
grafana | logger=migrator t=2024-04-18T23:14:25.436975746Z level=info msg="Migration successfully executed" id="file table idx: path key" duration=1.531444ms
policy-db-migrator | 76 0850-FK_ToscaNodeType_requirementsName.sql upgrade 0 0800 1804242314230800u 1 2024-04-18 23:14:27
kafka | [2024-04-18 23:14:55,725] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 48 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
grafana | logger=migrator t=2024-04-18T23:14:25.441956589Z level=info msg="Executing migration" id="set path collation in file table"
policy-db-migrator | 77 0860-FK_ToscaServiceTemplate_capabilityTypesName.sql upgrade 0 0800 1804242314230800u 1 2024-04-18 23:14:27
kafka | [2024-04-18 23:14:55,725] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-48 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
grafana | logger=migrator t=2024-04-18T23:14:25.442005322Z level=info msg="Migration successfully executed" id="set path collation in file table" duration=49.352µs
policy-db-migrator | 78 0870-FK_ToscaServiceTemplate_dataTypesName.sql upgrade 0 0800 1804242314230800u 1 2024-04-18 23:14:27
kafka | [2024-04-18 23:14:55,725] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 19 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
grafana | logger=migrator t=2024-04-18T23:14:25.445035858Z level=info msg="Executing migration" id="migrate contents column to mediumblob for MySQL"
policy-db-migrator | 79 0880-FK_ToscaServiceTemplate_nodeTypesName.sql upgrade 0 0800 1804242314230800u 1 2024-04-18 23:14:27
grafana | logger=migrator t=2024-04-18T23:14:25.445175485Z level=info msg="Migration successfully executed" id="migrate contents column to mediumblob for MySQL" duration=141.287µs
kafka | [2024-04-18 23:14:55,725] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-19 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
policy-db-migrator | 80 0890-FK_ToscaServiceTemplate_policyTypesName.sql upgrade 0 0800 1804242314230800u 1 2024-04-18 23:14:27
grafana | logger=migrator t=2024-04-18T23:14:25.450268695Z level=info msg="Executing migration" id="managed permissions migration"
kafka | [2024-04-18 23:14:55,725] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 34 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
policy-db-migrator | 81 0900-FK_ToscaServiceTemplate_relationshipTypesName.sql upgrade 0 0800 1804242314230800u 1 2024-04-18 23:14:27
grafana | logger=migrator t=2024-04-18T23:14:25.450773062Z level=info msg="Migration successfully executed" id="managed permissions migration" duration=504.247µs
kafka | [2024-04-18 23:14:55,725] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-34 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
policy-db-migrator | 82 0910-FK_ToscaTopologyTemplate_nodeTemplatesName.sql upgrade 0 0800 1804242314230800u 1 2024-04-18 23:14:27
grafana | logger=migrator t=2024-04-18T23:14:25.458708848Z level=info msg="Executing migration" id="managed folder permissions alert actions migration"
kafka | [2024-04-18 23:14:55,725] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 4 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
policy-db-migrator | 83 0920-FK_ToscaTopologyTemplate_policyName.sql upgrade 0 0800 1804242314230800u 1 2024-04-18 23:14:27
grafana | logger=migrator t=2024-04-18T23:14:25.458958431Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions migration" duration=249.784µs
kafka | [2024-04-18 23:14:55,725] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-4 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
policy-db-migrator | 84 0940-PdpPolicyStatus_PdpGroup.sql upgrade 0 0800 1804242314230800u 1 2024-04-18 23:14:27
grafana | logger=migrator t=2024-04-18T23:14:25.461346202Z level=info msg="Executing migration" id="RBAC action name migrator"
kafka | [2024-04-18 23:14:55,725] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 11 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
policy-db-migrator | 85 0950-TscaServiceTemplatetopologyTemplateParentLocalName.sql upgrade 0 0800 1804242314230800u 1 2024-04-18 23:14:27
grafana | logger=migrator t=2024-04-18T23:14:25.46240883Z level=info msg="Migration successfully executed" id="RBAC action name migrator" duration=1.061828ms
kafka | [2024-04-18 23:14:55,725] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-11 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
policy-db-migrator | 86 0960-FK_ToscaNodeTemplate_capabilitiesName.sql upgrade 0 0800 1804242314230800u 1 2024-04-18 23:14:27
grafana | logger=migrator t=2024-04-18T23:14:25.465837848Z level=info msg="Executing migration" id="Add UID column to playlist"
kafka | [2024-04-18 23:14:55,725] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 26 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
policy-db-migrator | 87 0970-FK_ToscaNodeTemplate_requirementsName.sql upgrade 0 0800 1804242314230800u 1 2024-04-18 23:14:27
grafana | logger=migrator t=2024-04-18T23:14:25.472134084Z level=info msg="Migration successfully executed" id="Add UID column to playlist" duration=6.295746ms
kafka | [2024-04-18 23:14:55,725] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-26 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
policy-db-migrator | 88 0980-FK_ToscaNodeType_requirementsName.sql upgrade 0 0800 1804242314230800u 1 2024-04-18 23:14:27
grafana | logger=migrator t=2024-04-18T23:14:25.475384742Z level=info msg="Executing migration" id="Update uid column values in playlist"
kafka | [2024-04-18 23:14:55,725] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 49 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
policy-db-migrator | 89 0990-FK_ToscaServiceTemplate_capabilityTypesName.sql upgrade 0 0800 1804242314230800u 1 2024-04-18 23:14:27
grafana | logger=migrator t=2024-04-18T23:14:25.47552593Z level=info msg="Migration successfully executed" id="Update uid column values in playlist" duration=140.628µs
kafka | [2024-04-18 23:14:55,727] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-3 in 4 milliseconds for epoch 0, of which 1 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
policy-db-migrator | 90 1000-FK_ToscaServiceTemplate_dataTypesName.sql upgrade 0 0800 1804242314230800u 1 2024-04-18 23:14:27
grafana | logger=migrator t=2024-04-18T23:14:25.480392867Z level=info msg="Executing migration" id="Add index for uid in playlist"
kafka | [2024-04-18 23:14:55,729] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-49 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
policy-db-migrator | 91 1010-FK_ToscaServiceTemplate_nodeTypesName.sql upgrade 0 0800 1804242314230800u 1 2024-04-18 23:14:27
grafana | logger=migrator t=2024-04-18T23:14:25.481422403Z level=info msg="Migration successfully executed" id="Add index for uid in playlist" duration=1.028947ms
kafka | [2024-04-18 23:14:55,729] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-18 in 4 milliseconds for epoch 0, of which 4 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
policy-db-migrator | 92 1020-FK_ToscaServiceTemplate_policyTypesName.sql upgrade 0 0800 1804242314230800u 1 2024-04-18 23:14:27
grafana | logger=migrator t=2024-04-18T23:14:25.484614208Z level=info msg="Executing migration" id="update group index for alert rules"
kafka | [2024-04-18 23:14:55,729] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 39 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
policy-db-migrator | 93 1030-FK_ToscaServiceTemplate_relationshipTypesName.sql upgrade 0 0800 1804242314230800u 1 2024-04-18 23:14:27
grafana | logger=migrator t=2024-04-18T23:14:25.485035191Z level=info msg="Migration successfully executed" id="update group index for alert rules" duration=421.013µs
kafka | [2024-04-18 23:14:55,729] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-39 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
policy-db-migrator | 94 1040-FK_ToscaTopologyTemplate_nodeTemplatesName.sql upgrade 0 0800 1804242314230800u 1 2024-04-18 23:14:27
grafana | logger=migrator t=2024-04-18T23:14:25.490922434Z level=info msg="Executing migration" id="managed folder permissions alert actions repeated migration"
kafka | [2024-04-18 23:14:55,729] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-41 in 4 milliseconds for epoch 0, of which 4 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
policy-db-migrator | 95 1050-FK_ToscaTopologyTemplate_policyName.sql upgrade 0 0800 1804242314230800u 1 2024-04-18 23:14:27
grafana | logger=migrator t=2024-04-18T23:14:25.491174648Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions repeated migration" duration=251.904µs
kafka | [2024-04-18 23:14:55,729] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 9 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
policy-db-migrator | 96 1060-TscaServiceTemplatetopologyTemplateParentLocalName.sql upgrade 0 0800 1804242314230800u 1 2024-04-18 23:14:28
grafana | logger=migrator t=2024-04-18T23:14:25.496953615Z level=info msg="Executing migration" id="admin only folder/dashboard permission"
kafka | [2024-04-18 23:14:55,729] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-9 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
policy-db-migrator | 97 0100-pdp.sql upgrade 0800 0900 1804242314230900u 1 2024-04-18 23:14:28
grafana | logger=migrator t=2024-04-18T23:14:25.497404529Z level=info msg="Migration successfully executed" id="admin only folder/dashboard permission" duration=450.915µs
kafka | [2024-04-18 23:14:55,729] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 24 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
policy-db-migrator | 98 0110-idx_tsidx1.sql upgrade 0800 0900 1804242314230900u 1 2024-04-18 23:14:28
grafana | logger=migrator t=2024-04-18T23:14:25.501342045Z level=info msg="Executing migration" id="add action column to seed_assignment"
kafka | [2024-04-18 23:14:55,729] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-10 in 4 milliseconds for epoch 0, of which 4 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
policy-db-migrator | 99 0120-pk_pdpstatistics.sql upgrade 0800 0900 1804242314230900u 1 2024-04-18 23:14:28
grafana | logger=migrator t=2024-04-18T23:14:25.508288076Z level=info msg="Migration successfully executed" id="add action column to seed_assignment" duration=6.944421ms
kafka | [2024-04-18 23:14:55,729] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-24 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
policy-db-migrator | 100 0130-pdpstatistics.sql upgrade 0800 0900 1804242314230900u 1 2024-04-18 23:14:28
grafana | logger=migrator t=2024-04-18T23:14:25.511700663Z level=info msg="Executing migration" id="add scope column to seed_assignment"
kafka | [2024-04-18 23:14:55,730] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 31 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
policy-db-migrator | 101 0140-pk_pdpstatistics.sql upgrade 0800 0900 1804242314230900u 1 2024-04-18 23:14:28
grafana | logger=migrator t=2024-04-18T23:14:25.52111704Z level=info msg="Migration successfully executed" id="add scope column to seed_assignment" duration=9.415686ms
kafka | [2024-04-18 23:14:55,730] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-33 in 5 milliseconds for epoch 0, of which 4 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
policy-db-migrator | 102 0150-pdpstatistics.sql upgrade 0800 0900 1804242314230900u 1 2024-04-18 23:14:28
grafana | logger=migrator t=2024-04-18T23:14:25.524922868Z level=info msg="Executing migration" id="remove unique index builtin_role_role_name before nullable update"
kafka | [2024-04-18 23:14:55,730] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-31 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
policy-db-migrator | 103 0160-jpapdpstatistics_enginestats.sql upgrade 0800 0900 1804242314230900u 1 2024-04-18 23:14:28
kafka | [2024-04-18 23:14:55,730] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 46 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
grafana | logger=migrator t=2024-04-18T23:14:25.526123454Z level=info msg="Migration successfully executed" id="remove unique index builtin_role_role_name before nullable update" duration=1.200516ms
policy-db-migrator | 104 0170-jpapdpstatistics_enginestats.sql upgrade 0800 0900 1804242314230900u 1 2024-04-18 23:14:28
kafka | [2024-04-18 23:14:55,730] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-48 in 5 milliseconds for epoch 0, of which 5 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
grafana | logger=migrator t=2024-04-18T23:14:25.532917507Z level=info msg="Executing migration" id="update seed_assignment role_name column to nullable"
policy-db-migrator | 105 0180-jpapdpstatistics_enginestats.sql upgrade 0800 0900 1804242314230900u 1 2024-04-18 23:14:28
kafka | [2024-04-18 23:14:55,730] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-46 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
grafana | logger=migrator t=2024-04-18T23:14:25.60756819Z level=info msg="Migration successfully executed" id="update seed_assignment role_name column to nullable" duration=74.649474ms
policy-db-migrator | 106 0190-jpapolicyaudit.sql upgrade 0800 0900 1804242314230900u 1 2024-04-18 23:14:28
kafka | [2024-04-18 23:14:55,730] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-19 in 5 milliseconds for epoch 0, of which 5 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
grafana | logger=migrator t=2024-04-18T23:14:25.611385729Z level=info msg="Executing migration" id="add unique index builtin_role_name back"
policy-db-migrator | 107 0200-JpaPolicyAuditIndex_timestamp.sql upgrade 0800 0900 1804242314230900u 1 2024-04-18 23:14:28
kafka | [2024-04-18 23:14:55,730] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-34 in 5 milliseconds for epoch 0, of which 5 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
grafana | logger=migrator t=2024-04-18T23:14:25.612253537Z level=info msg="Migration successfully executed" id="add unique index builtin_role_name back" duration=867.648µs
policy-db-migrator | 108 0210-sequence.sql upgrade 0800 0900 1804242314230900u 1 2024-04-18 23:14:28
kafka | [2024-04-18 23:14:55,730] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-4 in 5 milliseconds for epoch 0, of which 5 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
grafana | logger=migrator t=2024-04-18T23:14:25.618831578Z level=info msg="Executing migration" id="add unique index builtin_role_action_scope"
policy-db-migrator | 109 0220-sequence.sql upgrade 0800 0900 1804242314230900u 1 2024-04-18 23:14:28
kafka | [2024-04-18 23:14:55,730] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-11 in 5 milliseconds for epoch 0, of which 5 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
grafana | logger=migrator t=2024-04-18T23:14:25.620673169Z level=info msg="Migration successfully executed" id="add unique index builtin_role_action_scope" duration=1.841381ms
policy-db-migrator | 110 0100-jpatoscapolicy_targets.sql upgrade 0900 1000 1804242314231000u 1 2024-04-18 23:14:28
kafka | [2024-04-18 23:14:55,730] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-26 in 5 milliseconds for epoch 0, of which 5 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
grafana | logger=migrator t=2024-04-18T23:14:25.629069799Z level=info msg="Executing migration" id="add primary key to seed_assigment"
policy-db-migrator | 111 0110-jpatoscapolicytype_targets.sql upgrade 0900 1000 1804242314231000u 1 2024-04-18 23:14:28
kafka | [2024-04-18 23:14:55,730] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 1 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
grafana | logger=migrator t=2024-04-18T23:14:25.657234303Z level=info msg="Migration successfully executed" id="add primary key to seed_assigment" duration=28.165524ms
policy-db-migrator | 112 0120-toscatrigger.sql upgrade 0900 1000 1804242314231000u 1 2024-04-18 23:14:28
kafka | [2024-04-18 23:14:55,731] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-1 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
grafana | logger=migrator t=2024-04-18T23:14:25.661825745Z level=info msg="Executing migration" id="add origin column to seed_assignment"
policy-db-migrator | 113 0130-jpatoscapolicytype_triggers.sql upgrade 0900 1000 1804242314231000u 1 2024-04-18 23:14:28
kafka | [2024-04-18 23:14:55,731] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 16 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
grafana | logger=migrator t=2024-04-18T23:14:25.670756655Z level=info msg="Migration successfully executed" id="add origin column to seed_assignment" duration=8.93026ms
policy-db-migrator | 114 0140-toscaparameter.sql upgrade 0900 1000 1804242314231000u 1 2024-04-18 23:14:28
kafka | [2024-04-18 23:14:55,731] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-16 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
grafana | logger=migrator t=2024-04-18T23:14:25.675645023Z level=info msg="Executing migration" id="add origin to plugin seed_assignment"
policy-db-migrator | 115 0150-toscaproperty.sql upgrade 0900 1000 1804242314231000u 1 2024-04-18 23:14:28
kafka | [2024-04-18 23:14:55,731] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 2 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
grafana | logger=migrator t=2024-04-18T23:14:25.67595934Z level=info msg="Migration successfully executed" id="add origin to plugin seed_assignment" duration=314.317µs
policy-db-migrator | 116 0160-jpapolicyaudit_pk.sql upgrade 0900 1000 1804242314231000u 1 2024-04-18 23:14:28
grafana | logger=migrator t=2024-04-18T23:14:25.679293393Z level=info msg="Executing migration" id="prevent seeding OnCall access"
policy-db-migrator | 117 0170-pdpstatistics_pk.sql upgrade 0900 1000 1804242314231000u 1 2024-04-18 23:14:28
kafka | [2024-04-18 23:14:55,731] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-2 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
grafana | logger=migrator t=2024-04-18T23:14:25.679504474Z level=info msg="Migration successfully executed" id="prevent seeding OnCall access" duration=210.791µs
policy-db-migrator | 118 0180-jpatoscanodetemplate_metadata.sql upgrade 0900 1000 1804242314231000u 1 2024-04-18 23:14:29
kafka | [2024-04-18 23:14:55,731] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 25 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
grafana | logger=migrator t=2024-04-18T23:14:25.683196047Z level=info msg="Executing migration" id="managed folder permissions alert actions repeated fixed migration"
policy-db-migrator | 119 0100-upgrade.sql upgrade 1000 1100 1804242314231100u 1 2024-04-18 23:14:29
kafka | [2024-04-18 23:14:55,731] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-25 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
grafana | logger=migrator t=2024-04-18T23:14:25.683741827Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions repeated fixed migration" duration=545.169µs
policy-db-migrator | 120 0100-jpapolicyaudit_renameuser.sql upgrade 1100 1200 1804242314231200u 1 2024-04-18 23:14:29
kafka | [2024-04-18 23:14:55,731] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 40 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
grafana | logger=migrator t=2024-04-18T23:14:25.688998195Z level=info msg="Executing migration" id="managed folder permissions library panel actions migration"
policy-db-migrator | 121 0110-idx_tsidx1.sql upgrade 1100 1200 1804242314231200u 1 2024-04-18 23:14:29
kafka | [2024-04-18 23:14:55,731] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-40 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
grafana | logger=migrator t=2024-04-18T23:14:25.689345604Z level=info msg="Migration successfully executed" id="managed folder permissions library panel actions migration" duration=347.339µs
policy-db-migrator | 122 0120-audit_sequence.sql upgrade 1100 1200 1804242314231200u 1 2024-04-18 23:14:29
kafka | [2024-04-18 23:14:55,731] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 47 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
grafana | logger=migrator t=2024-04-18T23:14:25.695251708Z level=info msg="Executing migration" id="migrate external alertmanagers to datsourcse"
policy-db-migrator | 123 0130-statistics_sequence.sql upgrade 1100 1200 1804242314231200u 1 2024-04-18 23:14:29
kafka | [2024-04-18 23:14:55,731] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-47 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
grafana | logger=migrator t=2024-04-18T23:14:25.695679871Z level=info msg="Migration successfully executed" id="migrate external alertmanagers to datsourcse" duration=428.683µs
policy-db-migrator | 124 0100-pdpstatistics.sql upgrade 1200 1300 1804242314231300u 1 2024-04-18 23:14:29
kafka | [2024-04-18 23:14:55,731] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-49 in 2 milliseconds for epoch 0, of which 1 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
grafana | logger=migrator t=2024-04-18T23:14:25.699320251Z level=info msg="Executing migration" id="create folder table"
policy-db-migrator | 125 0110-jpapdpstatistics_enginestats.sql upgrade 1200 1300 1804242314231300u 1 2024-04-18 23:14:29
kafka | [2024-04-18 23:14:55,731] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 17 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
grafana | logger=migrator t=2024-04-18T23:14:25.700131565Z level=info msg="Migration successfully executed" id="create folder table" duration=811.614µs
policy-db-migrator | 126 0120-statistics_sequence.sql upgrade 1200 1300 1804242314231300u 1 2024-04-18 23:14:29
kafka | [2024-04-18 23:14:55,731] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-17 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
grafana | logger=migrator t=2024-04-18T23:14:25.705373713Z level=info msg="Executing migration" id="Add index for parent_uid"
policy-db-migrator | policyadmin: OK @ 1300
kafka | [2024-04-18 23:14:55,731] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 32 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
grafana | logger=migrator t=2024-04-18T23:14:25.70695757Z level=info msg="Migration successfully executed" id="Add index for parent_uid" duration=1.583367ms
kafka | [2024-04-18 23:14:55,731] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-32 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
grafana | logger=migrator t=2024-04-18T23:14:25.715316218Z level=info msg="Executing migration" id="Add unique index for folder.uid and folder.org_id"
kafka | [2024-04-18 23:14:55,731] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 37 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
grafana | logger=migrator t=2024-04-18T23:14:25.717956793Z level=info msg="Migration successfully executed" id="Add unique index for folder.uid and folder.org_id" duration=2.640455ms
kafka | [2024-04-18 23:14:55,731] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-37 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
grafana | logger=migrator t=2024-04-18T23:14:25.721909Z level=info msg="Executing migration" id="Update folder title length"
kafka | [2024-04-18 23:14:55,731] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 7 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
grafana | logger=migrator t=2024-04-18T23:14:25.721944852Z level=info msg="Migration successfully executed" id="Update folder title length" duration=36.672µs
kafka | [2024-04-18 23:14:55,731] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-7 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
grafana | logger=migrator t=2024-04-18T23:14:25.725776702Z level=info msg="Executing migration" id="Add unique index for folder.title and folder.parent_uid"
kafka | [2024-04-18 23:14:55,731] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-39 in 2 milliseconds for epoch 0, of which 2 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
grafana | logger=migrator t=2024-04-18T23:14:25.727170848Z level=info msg="Migration successfully executed" id="Add unique index for folder.title and folder.parent_uid" duration=1.394216ms
kafka | [2024-04-18 23:14:55,731] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 22 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
grafana | logger=migrator t=2024-04-18T23:14:25.732659559Z level=info msg="Executing migration" id="Remove unique index for folder.title and folder.parent_uid"
kafka | [2024-04-18 23:14:55,731] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-22 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
grafana | logger=migrator t=2024-04-18T23:14:25.734066366Z level=info msg="Migration successfully executed" id="Remove unique index for folder.title and folder.parent_uid" duration=1.409367ms
kafka | [2024-04-18 23:14:55,731] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-9 in 2 milliseconds for epoch 0, of which 2 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
grafana | logger=migrator t=2024-04-18T23:14:25.737506475Z level=info msg="Executing migration" id="Add unique index for title, parent_uid, and org_id"
kafka | [2024-04-18 23:14:55,731] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 29 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
grafana | logger=migrator t=2024-04-18T23:14:25.738965935Z level=info msg="Migration successfully executed" id="Add unique index for title, parent_uid, and org_id" duration=1.46274ms
kafka | [2024-04-18 23:14:55,732] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-24 in 3 milliseconds for epoch 0, of which 2 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
grafana | logger=migrator t=2024-04-18T23:14:25.744686809Z level=info msg="Executing migration" id="Sync dashboard and folder table"
kafka | [2024-04-18 23:14:55,732] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-29 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
grafana | logger=migrator t=2024-04-18T23:14:25.745152724Z level=info msg="Migration successfully executed" id="Sync dashboard and folder table" duration=465.186µs
kafka | [2024-04-18 23:14:55,732] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 44 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
grafana | logger=migrator t=2024-04-18T23:14:25.753141312Z level=info msg="Executing migration" id="Remove ghost folders from the folder table"
kafka | [2024-04-18 23:14:55,732] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-44 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
grafana | logger=migrator t=2024-04-18T23:14:25.753501012Z level=info msg="Migration successfully executed" id="Remove ghost folders from the folder table" duration=359.71µs
kafka | [2024-04-18 23:14:55,732] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 14 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
grafana | logger=migrator t=2024-04-18T23:14:25.758035611Z level=info msg="Executing migration" id="Remove unique index UQE_folder_uid_org_id"
kafka | [2024-04-18 23:14:55,732] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-14 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
grafana | logger=migrator t=2024-04-18T23:14:25.759562634Z level=info msg="Migration successfully executed" id="Remove unique index UQE_folder_uid_org_id" duration=1.526344ms
kafka | [2024-04-18 23:14:55,732] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 23 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
grafana | logger=migrator t=2024-04-18T23:14:25.763391184Z level=info msg="Executing migration" id="Add unique index UQE_folder_org_id_uid"
kafka | [2024-04-18 23:14:55,732] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-23 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
grafana | logger=migrator t=2024-04-18T23:14:25.764759259Z level=info msg="Migration successfully executed" id="Add unique index UQE_folder_org_id_uid" duration=1.369555ms
kafka | [2024-04-18 23:14:55,732] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 38 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
grafana | logger=migrator t=2024-04-18T23:14:25.770226159Z level=info msg="Executing migration" id="Remove unique index UQE_folder_title_parent_uid_org_id"
kafka | [2024-04-18 23:14:55,732] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-38 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
grafana | logger=migrator t=2024-04-18T23:14:25.771623746Z level=info msg="Migration successfully executed" id="Remove unique index UQE_folder_title_parent_uid_org_id" duration=1.397766ms
kafka | [2024-04-18 23:14:55,732] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-31 in 2 milliseconds for epoch 0, of which 2 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
grafana | logger=migrator t=2024-04-18T23:14:25.775237034Z level=info msg="Executing migration" id="Add unique index UQE_folder_org_id_parent_uid_title"
kafka | [2024-04-18 23:14:55,732] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 8 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
grafana | logger=migrator t=2024-04-18T23:14:25.776608059Z level=info msg="Migration successfully executed" id="Add unique index UQE_folder_org_id_parent_uid_title" duration=1.367315ms
kafka | [2024-04-18 23:14:55,732] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-8 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
grafana | logger=migrator t=2024-04-18T23:14:25.780194836Z level=info msg="Executing migration" id="Remove index IDX_folder_parent_uid_org_id"
kafka | [2024-04-18 23:14:55,732] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-46 in 2 milliseconds for epoch 0, of which 2 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
grafana | logger=migrator t=2024-04-18T23:14:25.781404972Z level=info msg="Migration successfully executed" id="Remove index IDX_folder_parent_uid_org_id" duration=1.211077ms
kafka | [2024-04-18 23:14:55,732] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 45 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
grafana | logger=migrator t=2024-04-18T23:14:25.785774032Z level=info msg="Executing migration" id="create anon_device table"
kafka | [2024-04-18 23:14:55,733] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-1 in 1 milliseconds for epoch 0, of which 1 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
grafana | logger=migrator t=2024-04-18T23:14:25.786786187Z level=info msg="Migration successfully executed" id="create anon_device table" duration=1.012676ms
kafka | [2024-04-18 23:14:55,733] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-45 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
grafana | logger=migrator t=2024-04-18T23:14:25.79504665Z level=info msg="Executing migration" id="add unique index anon_device.device_id"
kafka | [2024-04-18 23:14:55,733] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-16 in 2 milliseconds for epoch 0, of which 2 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
grafana | logger=migrator t=2024-04-18T23:14:25.797353967Z level=info msg="Migration successfully executed" id="add unique index anon_device.device_id" duration=2.307516ms
grafana | logger=migrator t=2024-04-18T23:14:25.803058969Z level=info msg="Executing migration" id="add index anon_device.updated_at"
kafka | [2024-04-18 23:14:55,733] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 15 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
grafana | logger=migrator t=2024-04-18T23:14:25.80435535Z level=info msg="Migration successfully executed" id="add index anon_device.updated_at" duration=1.296681ms
kafka | [2024-04-18 23:14:55,733] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-15 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
grafana | logger=migrator t=2024-04-18T23:14:25.809414078Z level=info msg="Executing migration" id="create signing_key table"
kafka | [2024-04-18 23:14:55,733] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 30 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
grafana | logger=migrator t=2024-04-18T23:14:25.810989724Z level=info msg="Migration successfully executed" id="create signing_key table" duration=1.574766ms
kafka | [2024-04-18 23:14:55,733] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-30 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
grafana | logger=migrator t=2024-04-18T23:14:25.814471905Z level=info msg="Executing migration" id="add unique index signing_key.key_id"
kafka | [2024-04-18 23:14:55,733] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-2 in 2 milliseconds for epoch 0, of which 2 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
grafana | logger=migrator t=2024-04-18T23:14:25.81584201Z level=info msg="Migration successfully executed" id="add unique index signing_key.key_id" duration=1.372175ms
kafka | [2024-04-18 23:14:55,733] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 0 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
grafana | logger=migrator t=2024-04-18T23:14:25.819837069Z level=info msg="Executing migration" id="set legacy alert migration status in kvstore"
kafka | [2024-04-18 23:14:55,733] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-0 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
grafana | logger=migrator t=2024-04-18T23:14:25.821268368Z level=info msg="Migration successfully executed" id="set legacy alert migration status in kvstore" duration=1.432169ms
kafka | [2024-04-18 23:14:55,733] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 35 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
grafana | logger=migrator t=2024-04-18T23:14:25.825800996Z level=info msg="Executing migration" id="migrate record of created folders during legacy migration to kvstore"
kafka | [2024-04-18 23:14:55,733] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-35 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
grafana | logger=migrator t=2024-04-18T23:14:25.826276332Z level=info msg="Migration successfully executed" id="migrate record of created folders during legacy migration to kvstore" duration=476.136µs
kafka | [2024-04-18 23:14:55,733] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 5 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
grafana | logger=migrator t=2024-04-18T23:14:25.829633817Z level=info msg="Executing migration" id="Add folder_uid for dashboard"
kafka | [2024-04-18 23:14:55,733] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-5 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
grafana | logger=migrator t=2024-04-18T23:14:25.839161719Z level=info msg="Migration successfully executed" id="Add folder_uid for dashboard" duration=9.528833ms
kafka | [2024-04-18 23:14:55,733] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 20 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
grafana | logger=migrator t=2024-04-18T23:14:25.84301025Z level=info msg="Executing migration" id="Populate dashboard folder_uid column"
kafka | [2024-04-18 23:14:55,733] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-25 in 2 milliseconds for epoch 0, of which 2 milliseconds was spent in the scheduler.
(kafka.coordinator.group.GroupMetadataManager) grafana | logger=migrator t=2024-04-18T23:14:25.843997754Z level=info msg="Migration successfully executed" id="Populate dashboard folder_uid column" duration=988.534µs kafka | [2024-04-18 23:14:55,733] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-20 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) grafana | logger=migrator t=2024-04-18T23:14:25.848424517Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_folder_uid_title" kafka | [2024-04-18 23:14:55,733] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 27 in epoch 0 (kafka.coordinator.group.GroupCoordinator) grafana | logger=migrator t=2024-04-18T23:14:25.850285469Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_folder_uid_title" duration=1.860302ms kafka | [2024-04-18 23:14:55,733] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-27 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) grafana | logger=migrator t=2024-04-18T23:14:25.853809722Z level=info msg="Executing migration" id="Delete unique index for dashboard_org_id_folder_id_title" kafka | [2024-04-18 23:14:55,733] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 42 in epoch 0 (kafka.coordinator.group.GroupCoordinator) grafana | logger=migrator t=2024-04-18T23:14:25.855431971Z level=info msg="Migration successfully executed" id="Delete unique index for dashboard_org_id_folder_id_title" duration=1.622319ms kafka | [2024-04-18 23:14:55,733] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-42 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) grafana | logger=migrator t=2024-04-18T23:14:25.859017848Z level=info msg="Executing migration" id="Delete unique index for 
dashboard_org_id_folder_uid_title" kafka | [2024-04-18 23:14:55,733] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-40 in 2 milliseconds for epoch 0, of which 2 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) grafana | logger=migrator t=2024-04-18T23:14:25.860621426Z level=info msg="Migration successfully executed" id="Delete unique index for dashboard_org_id_folder_uid_title" duration=1.619739ms kafka | [2024-04-18 23:14:55,733] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 12 in epoch 0 (kafka.coordinator.group.GroupCoordinator) grafana | logger=migrator t=2024-04-18T23:14:25.865560206Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_folder_uid_title_is_folder" kafka | [2024-04-18 23:14:55,733] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-12 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) grafana | logger=migrator t=2024-04-18T23:14:25.86727281Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_folder_uid_title_is_folder" duration=1.711894ms kafka | [2024-04-18 23:14:55,733] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 21 in epoch 0 (kafka.coordinator.group.GroupCoordinator) grafana | logger=migrator t=2024-04-18T23:14:25.870992064Z level=info msg="Executing migration" id="Restore index for dashboard_org_id_folder_id_title" kafka | [2024-04-18 23:14:55,733] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-47 in 2 milliseconds for epoch 0, of which 2 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) grafana | logger=migrator t=2024-04-18T23:14:25.872336658Z level=info msg="Migration successfully executed" id="Restore index for dashboard_org_id_folder_id_title" duration=1.345324ms kafka | [2024-04-18 23:14:55,733] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-21 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) grafana | logger=migrator t=2024-04-18T23:14:25.879146421Z level=info msg="Executing migration" id="create sso_setting table" kafka | [2024-04-18 23:14:55,733] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 36 in epoch 0 (kafka.coordinator.group.GroupCoordinator) grafana | logger=migrator t=2024-04-18T23:14:25.880374989Z level=info msg="Migration successfully executed" id="create sso_setting table" duration=1.228258ms kafka | [2024-04-18 23:14:55,733] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-36 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) grafana | logger=migrator t=2024-04-18T23:14:25.886661184Z level=info msg="Executing migration" id="copy kvstore migration status to each org" kafka | [2024-04-18 23:14:55,733] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 6 in epoch 0 (kafka.coordinator.group.GroupCoordinator) grafana | logger=migrator t=2024-04-18T23:14:25.888511945Z level=info msg="Migration successfully executed" id="copy kvstore migration status to each org" duration=1.852042ms kafka | [2024-04-18 23:14:55,733] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-6 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) grafana | logger=migrator t=2024-04-18T23:14:25.892294432Z level=info msg="Executing migration" id="add back entry for orgid=0 migrated status" kafka | [2024-04-18 23:14:55,733] INFO [GroupCoordinator 1]: Elected 
as the group coordinator for partition 43 in epoch 0 (kafka.coordinator.group.GroupCoordinator) grafana | logger=migrator t=2024-04-18T23:14:25.89279372Z level=info msg="Migration successfully executed" id="add back entry for orgid=0 migrated status" duration=499.468µs kafka | [2024-04-18 23:14:55,733] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-43 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) grafana | logger=migrator t=2024-04-18T23:14:25.896825851Z level=info msg="Executing migration" id="alter kv_store.value to longtext" kafka | [2024-04-18 23:14:55,733] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 13 in epoch 0 (kafka.coordinator.group.GroupCoordinator) grafana | logger=migrator t=2024-04-18T23:14:25.897083775Z level=info msg="Migration successfully executed" id="alter kv_store.value to longtext" duration=254.464µs kafka | [2024-04-18 23:14:55,733] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-13 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) grafana | logger=migrator t=2024-04-18T23:14:25.90283548Z level=info msg="Executing migration" id="add notification_settings column to alert_rule table" kafka | [2024-04-18 23:14:55,733] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 28 in epoch 0 (kafka.coordinator.group.GroupCoordinator) grafana | logger=migrator t=2024-04-18T23:14:25.912522472Z level=info msg="Migration successfully executed" id="add notification_settings column to alert_rule table" duration=9.684131ms kafka | [2024-04-18 23:14:55,733] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-28 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) grafana | logger=migrator t=2024-04-18T23:14:25.91669558Z level=info msg="Executing migration" id="add notification_settings column to 
alert_rule_version table" kafka | [2024-04-18 23:14:55,733] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-17 in 2 milliseconds for epoch 0, of which 2 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) grafana | logger=migrator t=2024-04-18T23:14:25.926622655Z level=info msg="Migration successfully executed" id="add notification_settings column to alert_rule_version table" duration=9.927025ms kafka | [2024-04-18 23:14:55,733] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-32 in 2 milliseconds for epoch 0, of which 2 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) grafana | logger=migrator t=2024-04-18T23:14:25.935545224Z level=info msg="Executing migration" id="removing scope from alert.instances:read action migration" kafka | [2024-04-18 23:14:55,734] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-37 in 3 milliseconds for epoch 0, of which 2 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) grafana | logger=migrator t=2024-04-18T23:14:25.935973177Z level=info msg="Migration successfully executed" id="removing scope from alert.instances:read action migration" duration=428.803µs kafka | [2024-04-18 23:14:55,734] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-7 in 3 milliseconds for epoch 0, of which 3 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) grafana | logger=migrator t=2024-04-18T23:14:25.941340212Z level=info msg="migrations completed" performed=548 skipped=0 duration=3.990618926s kafka | [2024-04-18 23:14:55,734] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-22 in 3 milliseconds for epoch 0, of which 3 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) grafana | logger=sqlstore t=2024-04-18T23:14:25.952230399Z level=info msg="Created default admin" user=admin kafka | [2024-04-18 23:14:55,734] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-29 in 2 milliseconds for epoch 0, of which 2 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) grafana | logger=sqlstore t=2024-04-18T23:14:25.952685934Z level=info msg="Created default organization" kafka | [2024-04-18 23:14:55,734] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-44 in 2 milliseconds for epoch 0, of which 2 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) grafana | logger=secrets t=2024-04-18T23:14:25.957238644Z level=info msg="Envelope encryption state" enabled=true currentprovider=secretKey.v1 kafka | [2024-04-18 23:14:55,734] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-14 in 2 milliseconds for epoch 0, of which 2 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) grafana | logger=plugin.store t=2024-04-18T23:14:25.978285378Z level=info msg="Loading plugins..." kafka | [2024-04-18 23:14:55,734] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-23 in 2 milliseconds for epoch 0, of which 2 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) grafana | logger=local.finder t=2024-04-18T23:14:26.02190983Z level=warn msg="Skipping finding plugins as directory does not exist" path=/usr/share/grafana/plugins-bundled kafka | [2024-04-18 23:14:55,734] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-38 in 2 milliseconds for epoch 0, of which 2 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) grafana | logger=plugin.store t=2024-04-18T23:14:26.021949882Z level=info msg="Plugins loaded" count=55 duration=43.665305ms kafka | [2024-04-18 23:14:55,734] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-8 in 2 milliseconds for epoch 0, of which 2 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) grafana | logger=query_data t=2024-04-18T23:14:26.030346346Z level=info msg="Query Service initialization" kafka | [2024-04-18 23:14:55,735] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-45 in 2 milliseconds for epoch 0, of which 1 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) grafana | logger=live.push_http t=2024-04-18T23:14:26.03673946Z level=info msg="Live Push Gateway initialization" kafka | [2024-04-18 23:14:55,735] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-15 in 2 milliseconds for epoch 0, of which 2 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) grafana | logger=ngalert.migration t=2024-04-18T23:14:26.04180481Z level=info msg=Starting kafka | [2024-04-18 23:14:55,735] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-30 in 2 milliseconds for epoch 0, of which 2 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) grafana | logger=ngalert.migration t=2024-04-18T23:14:26.042329549Z level=info msg="Applying transition" currentType=Legacy desiredType=UnifiedAlerting cleanOnDowngrade=false cleanOnUpgrade=false kafka | [2024-04-18 23:14:55,735] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-0 in 2 milliseconds for epoch 0, of which 2 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) grafana | logger=ngalert.migration orgID=1 t=2024-04-18T23:14:26.043035158Z level=info msg="Migrating alerts for organisation" kafka | [2024-04-18 23:14:55,735] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-35 in 2 milliseconds for epoch 0, of which 2 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) grafana | logger=ngalert.migration orgID=1 t=2024-04-18T23:14:26.04414885Z level=info msg="Alerts found to migrate" alerts=0 kafka | [2024-04-18 23:14:55,735] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-5 in 2 milliseconds for epoch 0, of which 2 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) grafana | logger=ngalert.migration t=2024-04-18T23:14:26.046184532Z level=info msg="Completed alerting migration" kafka | [2024-04-18 23:14:55,735] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-20 in 2 milliseconds for epoch 0, of which 2 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) grafana | logger=ngalert.state.manager t=2024-04-18T23:14:26.081512146Z level=info msg="Running in alternative execution of Error/NoData mode" kafka | [2024-04-18 23:14:55,735] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-27 in 2 milliseconds for epoch 0, of which 2 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) grafana | logger=infra.usagestats.collector t=2024-04-18T23:14:26.083661365Z level=info msg="registering usage stat providers" usageStatsProvidersLen=2 kafka | [2024-04-18 23:14:55,735] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-42 in 2 milliseconds for epoch 0, of which 2 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) grafana | logger=provisioning.datasources t=2024-04-18T23:14:26.08574102Z level=info msg="inserting datasource from configuration" name=PolicyPrometheus uid=dkSf71fnz kafka | [2024-04-18 23:14:55,735] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-12 in 2 milliseconds for epoch 0, of which 2 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) grafana | logger=provisioning.alerting t=2024-04-18T23:14:26.098809943Z level=info msg="starting to provision alerting" kafka | [2024-04-18 23:14:55,735] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-21 in 2 milliseconds for epoch 0, of which 2 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) grafana | logger=provisioning.alerting t=2024-04-18T23:14:26.098832764Z level=info msg="finished to provision alerting" kafka | [2024-04-18 23:14:55,736] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-36 in 2 milliseconds for epoch 0, of which 2 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) grafana | logger=ngalert.state.manager t=2024-04-18T23:14:26.099004034Z level=info msg="Warming state cache for startup" kafka | [2024-04-18 23:14:55,736] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-6 in 3 milliseconds for epoch 0, of which 3 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) grafana | logger=ngalert.multiorg.alertmanager t=2024-04-18T23:14:26.099060967Z level=info msg="Starting MultiOrg Alertmanager" kafka | [2024-04-18 23:14:55,736] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-43 in 3 milliseconds for epoch 0, of which 3 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) grafana | logger=ngalert.state.manager t=2024-04-18T23:14:26.099456509Z level=info msg="State cache has been initialized" states=0 duration=451.305µs kafka | [2024-04-18 23:14:55,736] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-13 in 3 milliseconds for epoch 0, of which 3 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) grafana | logger=ngalert.scheduler t=2024-04-18T23:14:26.099492841Z level=info msg="Starting scheduler" tickInterval=10s maxAttempts=1 kafka | [2024-04-18 23:14:55,736] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-28 in 3 milliseconds for epoch 0, of which 3 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) grafana | logger=ticker t=2024-04-18T23:14:26.099542244Z level=info msg=starting first_tick=2024-04-18T23:14:30Z kafka | [2024-04-18 23:14:55,737] INFO [Broker id=1] Finished LeaderAndIsr request in 716ms correlationId 1 from controller 1 for 51 partitions (state.change.logger) grafana | logger=grafanaStorageLogger t=2024-04-18T23:14:26.10039072Z level=info msg="Storage starting" grafana | logger=http.server t=2024-04-18T23:14:26.104061213Z level=info msg="HTTP Server Listen" address=[::]:3000 protocol=http subUrl= socket= kafka | [2024-04-18 23:14:55,742] TRACE [Controller id=1 epoch=1] Received response LeaderAndIsrResponseData(errorCode=0, partitionErrors=[], topics=[LeaderAndIsrTopicError(topicId=Ri5cls-BQlq9q6kFJBomtA, partitionErrors=[LeaderAndIsrPartitionError(topicName='', partitionIndex=13, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=46, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=9, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=42, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=21, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=17, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=30, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=26, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=5, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=38, errorCode=0), LeaderAndIsrPartitionError(topicName='', 
partitionIndex=1, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=34, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=16, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=45, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=12, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=41, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=24, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=20, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=49, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=0, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=29, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=25, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=8, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=37, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=4, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=33, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=15, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=48, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=11, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=44, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=23, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=19, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=32, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=28, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=7, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=40, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=3, errorCode=0), LeaderAndIsrPartitionError(topicName='', 
partitionIndex=36, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=47, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=14, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=43, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=10, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=22, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=18, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=31, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=27, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=39, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=6, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=35, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=2, errorCode=0)]), LeaderAndIsrTopicError(topicId=idJrMUf2Q6auoCOWuYUphA, partitionErrors=[LeaderAndIsrPartitionError(topicName='', partitionIndex=0, errorCode=0)])]) for request LEADER_AND_ISR with correlation id 1 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger) grafana | logger=provisioning.dashboard t=2024-04-18T23:14:26.139603269Z level=info msg="starting to provision dashboards" kafka | [2024-04-18 23:14:55,752] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='policy-pdp-pap', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition policy-pdp-pap-0 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) grafana | logger=plugins.update.checker t=2024-04-18T23:14:26.199727525Z level=info msg="Update check succeeded" duration=99.179745ms kafka | [2024-04-18 23:14:55,752] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=13, 
controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-13 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) grafana | logger=grafana.update.checker t=2024-04-18T23:14:26.200835026Z level=info msg="Update check succeeded" duration=101.215058ms kafka | [2024-04-18 23:14:55,752] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=46, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-46 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) grafana | logger=sqlstore.transactions t=2024-04-18T23:14:26.220101322Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=0 code="database is locked" kafka | [2024-04-18 23:14:55,752] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=9, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-9 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) grafana | logger=sqlstore.transactions t=2024-04-18T23:14:26.23092728Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=1 code="database is locked" kafka | [2024-04-18 23:14:55,752] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=42, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-42 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) grafana | logger=grafana-apiserver 
t=2024-04-18T23:14:26.367095582Z level=info msg="Adding GroupVersion playlist.grafana.app v0alpha1 to ResourceManager"
kafka | [2024-04-18 23:14:55,752] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=21, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-21 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
grafana | logger=grafana-apiserver t=2024-04-18T23:14:26.367652163Z level=info msg="Adding GroupVersion featuretoggle.grafana.app v0alpha1 to ResourceManager"
kafka | [2024-04-18 23:14:55,752] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=17, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-17 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
grafana | logger=provisioning.dashboard t=2024-04-18T23:14:26.414354276Z level=info msg="finished to provision dashboards"
kafka | [2024-04-18 23:14:55,752] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=30, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-30 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
grafana | logger=infra.usagestats t=2024-04-18T23:15:59.111138484Z level=info msg="Usage stats are ready to report"
kafka | [2024-04-18 23:14:55,753] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=26, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-26 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2024-04-18 23:14:55,753] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=5, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-5 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2024-04-18 23:14:55,753] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=38, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-38 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2024-04-18 23:14:55,753] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=1, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-1 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2024-04-18 23:14:55,753] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=34, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-34 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2024-04-18 23:14:55,753] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=16, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-16 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2024-04-18 23:14:55,753] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=45, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-45 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2024-04-18 23:14:55,753] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=12, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-12 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2024-04-18 23:14:55,753] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=41, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-41 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2024-04-18 23:14:55,753] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=24, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-24 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2024-04-18 23:14:55,753] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=20, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-20 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2024-04-18 23:14:55,754] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=49, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-49 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2024-04-18 23:14:55,754] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-0 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2024-04-18 23:14:55,754] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=29, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-29 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2024-04-18 23:14:55,754] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=25, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-25 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2024-04-18 23:14:55,754] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=8, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-8 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2024-04-18 23:14:55,754] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=37, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-37 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2024-04-18 23:14:55,754] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=4, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-4 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2024-04-18 23:14:55,754] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=33, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-33 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2024-04-18 23:14:55,754] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=15, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-15 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2024-04-18 23:14:55,754] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=48, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-48 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2024-04-18 23:14:55,754] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=11, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-11 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2024-04-18 23:14:55,754] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=44, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-44 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2024-04-18 23:14:55,754] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=23, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-23 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2024-04-18 23:14:55,755] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=19, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-19 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2024-04-18 23:14:55,755] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=32, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-32 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2024-04-18 23:14:55,755] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=28, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-28 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2024-04-18 23:14:55,755] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=7, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-7 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2024-04-18 23:14:55,755] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=40, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-40 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2024-04-18 23:14:55,755] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=3, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-3 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2024-04-18 23:14:55,755] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=36, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-36 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2024-04-18 23:14:55,755] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=47, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-47 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2024-04-18 23:14:55,755] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=14, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-14 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2024-04-18 23:14:55,755] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=43, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-43 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2024-04-18 23:14:55,755] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=10, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-10 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2024-04-18 23:14:55,755] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=22, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-22 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2024-04-18 23:14:55,756] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=18, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-18 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2024-04-18 23:14:55,756] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=31, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-31 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2024-04-18 23:14:55,756] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=27, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-27 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2024-04-18 23:14:55,756] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=39, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-39 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2024-04-18 23:14:55,756] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=6, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-6 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2024-04-18 23:14:55,756] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=35, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-35 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2024-04-18 23:14:55,756] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=2, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-2 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2024-04-18 23:14:55,757] INFO [Broker id=1] Add 51 partitions and deleted 0 partitions from metadata cache in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2024-04-18 23:14:55,758] TRACE [Controller id=1 epoch=1] Received response UpdateMetadataResponseData(errorCode=0) for request UPDATE_METADATA with correlation id 2 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger)
kafka | [2024-04-18 23:14:55,795] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group policy-pap in Empty state. Created a new member id consumer-policy-pap-4-ce004af5-7fb1-465b-9dca-4f86ddcfcd1b and request the member to rejoin with this id. (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-04-18 23:14:55,811] INFO [GroupCoordinator 1]: Preparing to rebalance group policy-pap in state PreparingRebalance with old generation 0 (__consumer_offsets-24) (reason: Adding new member consumer-policy-pap-4-ce004af5-7fb1-465b-9dca-4f86ddcfcd1b with group instance id None; client reason: rebalance failed due to MemberIdRequiredException) (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-04-18 23:14:55,861] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group deefd98f-1600-442c-a15a-d2ceba267151 in Empty state. Created a new member id consumer-deefd98f-1600-442c-a15a-d2ceba267151-3-2efa9603-1d5c-4957-829c-f970e337f7f3 and request the member to rejoin with this id. (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-04-18 23:14:55,866] INFO [GroupCoordinator 1]: Preparing to rebalance group deefd98f-1600-442c-a15a-d2ceba267151 in state PreparingRebalance with old generation 0 (__consumer_offsets-22) (reason: Adding new member consumer-deefd98f-1600-442c-a15a-d2ceba267151-3-2efa9603-1d5c-4957-829c-f970e337f7f3 with group instance id None; client reason: rebalance failed due to MemberIdRequiredException) (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-04-18 23:14:56,642] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group dbe3acf0-ba50-4571-9b48-e58d24ad2dc5 in Empty state. Created a new member id consumer-dbe3acf0-ba50-4571-9b48-e58d24ad2dc5-2-b078aa9d-088f-42bc-9dfa-8245e5a83776 and request the member to rejoin with this id. (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-04-18 23:14:56,645] INFO [GroupCoordinator 1]: Preparing to rebalance group dbe3acf0-ba50-4571-9b48-e58d24ad2dc5 in state PreparingRebalance with old generation 0 (__consumer_offsets-49) (reason: Adding new member consumer-dbe3acf0-ba50-4571-9b48-e58d24ad2dc5-2-b078aa9d-088f-42bc-9dfa-8245e5a83776 with group instance id None; client reason: rebalance failed due to MemberIdRequiredException) (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-04-18 23:14:58,824] INFO [GroupCoordinator 1]: Stabilized group policy-pap generation 1 (__consumer_offsets-24) with 1 members (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-04-18 23:14:58,847] INFO [GroupCoordinator 1]: Assignment received from leader consumer-policy-pap-4-ce004af5-7fb1-465b-9dca-4f86ddcfcd1b for group policy-pap for generation 1. The group has 1 members, 0 of which are static. (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-04-18 23:14:58,870] INFO [GroupCoordinator 1]: Stabilized group deefd98f-1600-442c-a15a-d2ceba267151 generation 1 (__consumer_offsets-22) with 1 members (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-04-18 23:14:58,879] INFO [GroupCoordinator 1]: Assignment received from leader consumer-deefd98f-1600-442c-a15a-d2ceba267151-3-2efa9603-1d5c-4957-829c-f970e337f7f3 for group deefd98f-1600-442c-a15a-d2ceba267151 for generation 1. The group has 1 members, 0 of which are static. (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-04-18 23:14:59,647] INFO [GroupCoordinator 1]: Stabilized group dbe3acf0-ba50-4571-9b48-e58d24ad2dc5 generation 1 (__consumer_offsets-49) with 1 members (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-04-18 23:14:59,664] INFO [GroupCoordinator 1]: Assignment received from leader consumer-dbe3acf0-ba50-4571-9b48-e58d24ad2dc5-2-b078aa9d-088f-42bc-9dfa-8245e5a83776 for group dbe3acf0-ba50-4571-9b48-e58d24ad2dc5 for generation 1.
The group has 1 members, 0 of which are static. (kafka.coordinator.group.GroupCoordinator)
++ echo 'Tearing down containers...'
Tearing down containers...
++ docker-compose down -v --remove-orphans
Stopping policy-apex-pdp ...
Stopping policy-pap ...
Stopping kafka ...
Stopping grafana ...
Stopping policy-api ...
Stopping zookeeper ...
Stopping simulator ...
Stopping mariadb ...
Stopping prometheus ...
Stopping grafana ... done
Stopping prometheus ... done
Stopping policy-apex-pdp ... done
Stopping policy-pap ... done
Stopping simulator ... done
Stopping mariadb ... done
Stopping kafka ... done
Stopping zookeeper ... done
Stopping policy-api ... done
Removing policy-apex-pdp ...
Removing policy-pap ...
Removing kafka ...
Removing grafana ...
Removing policy-api ...
Removing policy-db-migrator ...
Removing zookeeper ...
Removing simulator ...
Removing mariadb ...
Removing prometheus ...
Removing policy-db-migrator ... done
Removing zookeeper ... done
Removing policy-api ... done
Removing prometheus ... done
Removing grafana ... done
Removing policy-pap ... done
Removing policy-apex-pdp ... done
Removing simulator ... done
Removing mariadb ... done
Removing kafka ... done
Removing network compose_default
++ cd /w/workspace/policy-pap-master-project-csit-pap
+ load_set
+ _setopts=hxB
++ echo braceexpand:hashall:interactive-comments:xtrace
++ tr : ' '
+ for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
+ set +o braceexpand
+ for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
+ set +o hashall
+ for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
+ set +o interactive-comments
+ for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
+ set +o xtrace
++ echo hxB
++ sed 's/./& /g'
+ for i in $(echo "$_setopts" | sed 's/./& /g')
+ set +h
+ for i in $(echo "$_setopts" | sed 's/./& /g')
+ set +x
+ rsync /w/workspace/policy-pap-master-project-csit-pap/compose/docker_compose.log /w/workspace/policy-pap-master-project-csit-pap/csit/archives/pap
+ [[ -n /tmp/tmp.K0sYyH3Udx ]]
+ rsync -av /tmp/tmp.K0sYyH3Udx/ /w/workspace/policy-pap-master-project-csit-pap/csit/archives/pap
sending incremental file list
./
log.html
output.xml
report.html
testplan.txt
sent 918,822 bytes  received 95 bytes  1,837,834.00 bytes/sec
total size is 918,276  speedup is 1.00
+ rm -rf /w/workspace/policy-pap-master-project-csit-pap/models
+ exit 0
$ ssh-agent -k
unset SSH_AUTH_SOCK;
unset SSH_AGENT_PID;
echo Agent pid 2086 killed;
[ssh-agent] Stopped.
Robot results publisher started...
INFO: Checking test criticality is deprecated and will be dropped in a future release!
-Parsing output xml:
Done!
WARNING! Could not find file: **/log.html
WARNING! Could not find file: **/report.html
-Copying log files to build dir:
Done!
-Assigning results to build:
Done!
-Checking thresholds:
Done!
Done publishing Robot results.
[PostBuildScript] - [INFO] Executing post build scripts.
[policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins7213542619536222831.sh
---> sysstat.sh
[policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins17685881978232348851.sh
---> package-listing.sh
++ facter osfamily
++ tr '[:upper:]' '[:lower:]'
+ OS_FAMILY=debian
+ workspace=/w/workspace/policy-pap-master-project-csit-pap
+ START_PACKAGES=/tmp/packages_start.txt
+ END_PACKAGES=/tmp/packages_end.txt
+ DIFF_PACKAGES=/tmp/packages_diff.txt
+ PACKAGES=/tmp/packages_start.txt
+ '[' /w/workspace/policy-pap-master-project-csit-pap ']'
+ PACKAGES=/tmp/packages_end.txt
+ case "${OS_FAMILY}" in
+ dpkg -l
+ grep '^ii'
+ '[' -f /tmp/packages_start.txt ']'
+ '[' -f /tmp/packages_end.txt ']'
+ diff /tmp/packages_start.txt /tmp/packages_end.txt
+ '[' /w/workspace/policy-pap-master-project-csit-pap ']'
+ mkdir -p /w/workspace/policy-pap-master-project-csit-pap/archives/
+ cp -f /tmp/packages_diff.txt /tmp/packages_end.txt /tmp/packages_start.txt /w/workspace/policy-pap-master-project-csit-pap/archives/
[policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins7965212518294138523.sh
---> capture-instance-metadata.sh
Setup pyenv:
  system
  3.8.13
  3.9.13
* 3.10.6 (set by /w/workspace/policy-pap-master-project-csit-pap/.python-version)
lf-activate-venv(): INFO: Reuse venv:/tmp/venv-H8Lq from file:/tmp/.os_lf_venv
lf-activate-venv(): INFO: Installing: lftools
lf-activate-venv(): INFO: Adding /tmp/venv-H8Lq/bin to PATH
INFO: Running in OpenStack, capturing instance metadata
[policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins1225904667301443132.sh
provisioning config files...
copy managed file [jenkins-log-archives-settings] to file:/w/workspace/policy-pap-master-project-csit-pap@tmp/config9930047205394287472tmp
Regular expression run condition: Expression=[^.*logs-s3.*], Label=[]
Run condition [Regular expression match] preventing perform for step [Provide Configuration files]
[EnvInject] - Injecting environment variables from a build step.
[EnvInject] - Injecting as environment variables the properties content
SERVER_ID=logs
[EnvInject] - Variables injected successfully.
[policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins9696737471029010238.sh
---> create-netrc.sh
[policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins10304794993552994526.sh
---> python-tools-install.sh
Setup pyenv:
  system
  3.8.13
  3.9.13
* 3.10.6 (set by /w/workspace/policy-pap-master-project-csit-pap/.python-version)
lf-activate-venv(): INFO: Reuse venv:/tmp/venv-H8Lq from file:/tmp/.os_lf_venv
lf-activate-venv(): INFO: Installing: lftools
lf-activate-venv(): INFO: Adding /tmp/venv-H8Lq/bin to PATH
[policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins18375602438344878837.sh
---> sudo-logs.sh
Archiving 'sudo' log..
[policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins5312943589767163300.sh
---> job-cost.sh
Setup pyenv:
  system
  3.8.13
  3.9.13
* 3.10.6 (set by /w/workspace/policy-pap-master-project-csit-pap/.python-version)
lf-activate-venv(): INFO: Reuse venv:/tmp/venv-H8Lq from file:/tmp/.os_lf_venv
lf-activate-venv(): INFO: Installing: zipp==1.1.0 python-openstackclient urllib3~=1.26.15
lf-activate-venv(): INFO: Adding /tmp/venv-H8Lq/bin to PATH
INFO: No Stack...
INFO: Retrieving Pricing Info for: v3-standard-8
INFO: Archiving Costs
[policy-pap-master-project-csit-pap] $ /bin/bash -l /tmp/jenkins4579478435418763978.sh
---> logs-deploy.sh
Setup pyenv:
  system
  3.8.13
  3.9.13
* 3.10.6 (set by /w/workspace/policy-pap-master-project-csit-pap/.python-version)
lf-activate-venv(): INFO: Reuse venv:/tmp/venv-H8Lq from file:/tmp/.os_lf_venv
lf-activate-venv(): INFO: Installing: lftools
lf-activate-venv(): INFO: Adding /tmp/venv-H8Lq/bin to PATH
INFO: Nexus URL https://nexus.onap.org path production/vex-yul-ecomp-jenkins-1/policy-pap-master-project-csit-pap/1650
INFO: archiving workspace using pattern(s): -p **/target/surefire-reports/*-output.txt
Archives upload complete.
INFO: archiving logs to Nexus
---> uname -a:
Linux prd-ubuntu1804-docker-8c-8g-24270 4.15.0-192-generic #203-Ubuntu SMP Wed Aug 10 17:40:03 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux

---> lscpu:
Architecture:        x86_64
CPU op-mode(s):      32-bit, 64-bit
Byte Order:          Little Endian
CPU(s):              8
On-line CPU(s) list: 0-7
Thread(s) per core:  1
Core(s) per socket:  1
Socket(s):           8
NUMA node(s):        1
Vendor ID:           AuthenticAMD
CPU family:          23
Model:               49
Model name:          AMD EPYC-Rome Processor
Stepping:            0
CPU MHz:             2800.000
BogoMIPS:            5600.00
Virtualization:      AMD-V
Hypervisor vendor:   KVM
Virtualization type: full
L1d cache:           32K
L1i cache:           32K
L2 cache:            512K
L3 cache:            16384K
NUMA node0 CPU(s):   0-7
Flags:               fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm rep_good nopl xtopology cpuid extd_apicid tsc_known_freq pni pclmulqdq ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm cmp_legacy svm cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw topoext perfctr_core ssbd ibrs ibpb stibp vmmcall fsgsbase tsc_adjust bmi1 avx2 smep bmi2 rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves clzero xsaveerptr arat npt nrip_save umip rdpid arch_capabilities

---> nproc:
8

---> df -h:
Filesystem      Size  Used Avail Use% Mounted on
udev             16G     0   16G   0% /dev
tmpfs           3.2G  708K  3.2G   1% /run
/dev/vda1       155G   14G  142G   9% /
tmpfs            16G     0   16G   0% /dev/shm
tmpfs           5.0M     0  5.0M   0% /run/lock
tmpfs            16G     0   16G   0% /sys/fs/cgroup
/dev/vda15      105M  4.4M  100M   5% /boot/efi
tmpfs           3.2G     0  3.2G   0% /run/user/1001

---> free -m:
              total        used        free      shared  buff/cache   available
Mem:          32167         851       25162           0        6152       30859
Swap:          1023           0        1023

---> ip addr:
1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: ens3: mtu 1458 qdisc mq state UP group default qlen 1000
    link/ether fa:16:3e:c2:5e:c9 brd ff:ff:ff:ff:ff:ff
    inet 10.30.106.211/23 brd 10.30.107.255 scope global dynamic ens3
       valid_lft 85932sec preferred_lft 85932sec
    inet6 fe80::f816:3eff:fec2:5ec9/64 scope link
       valid_lft forever preferred_lft forever
3: docker0: mtu 1500 qdisc noqueue state DOWN group default
    link/ether 02:42:31:29:96:20 brd ff:ff:ff:ff:ff:ff
    inet 10.250.0.254/24 brd 10.250.0.255 scope global docker0
       valid_lft forever preferred_lft forever

---> sar -b -r -n DEV:
Linux 4.15.0-192-generic (prd-ubuntu1804-docker-8c-8g-24270)  04/18/24  _x86_64_  (8 CPU)

23:10:22     LINUX RESTART      (8 CPU)

23:11:01        tps      rtps      wtps   bread/s   bwrtn/s
23:12:02     131.49     36.34     95.15   1697.03  58766.28
23:13:01     146.14     23.39    122.76   2799.66  64597.46
23:14:01     214.41      0.38    214.03     47.73 105871.42
23:15:01     363.31     12.86    350.44    792.47  54556.99
23:16:01       6.47      0.00      6.47      0.00    156.84
23:17:01      11.33      0.08     11.25      9.60   1096.88
23:18:01      67.69      1.95     65.74    112.11   2697.75
Average:     134.38     10.69    123.69    775.00  41050.60

23:11:01    kbmemfree   kbavail kbmemused  %memused kbbuffers  kbcached  kbcommit   %commit  kbactive   kbinact   kbdirty
23:12:02     30150376  31722576   2788836      8.47     68784   1814488   1391952      4.10    845628   1651488    145184
23:13:01     29503744  31683932   3435468     10.43     90252   2379860   1546896      4.55    972120   2127584    371488
23:14:01     25871476  31642836   7067736     21.46    137064   5775780   1582264      4.66   1043064   5512024   1287296
23:15:01     23618360  29605984   9320852     28.30    157188   5939728   8761928     25.78   3258996   5455892      1716
23:16:01     23654660  29643056   9284552     28.19    157304   5940012   8678888     25.54   3224492   5454124       228
23:17:01     23685004  29699708   9254208     28.09    157712   5968224   8010568     23.57   3184292   5468460       248
23:18:01     25762312  31595044   7176900     21.79    159804   5800436   1547716      4.55   1321508   5312620      2416
Average:     26035133  30799019   6904079     20.96    132587   4802647   4502887     13.25   1978586   4426027    258368

23:11:01    IFACE           rxpck/s   txpck/s    rxkB/s    txkB/s   rxcmp/s   txcmp/s  rxmcst/s   %ifutil
23:12:02    docker0            0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
23:12:02    ens3              81.86     59.16    883.06     11.30      0.00      0.00      0.00      0.00
23:12:02    lo                 1.60      1.60      0.18      0.18      0.00      0.00      0.00      0.00
23:13:01    br-629c0108f165    0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
23:13:01    docker0            0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
23:13:01    ens3             117.39     84.71   2580.05     11.57      0.00      0.00      0.00      0.00
23:13:01    lo                 5.22      5.22      0.50      0.50      0.00      0.00      0.00      0.00
23:14:01    br-629c0108f165    0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
23:14:01    docker0            0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
23:14:01    ens3            1138.38    499.87  28858.40     35.76      0.00      0.00      0.00      0.00
23:14:01    lo                 8.27      8.27      0.81      0.81      0.00      0.00      0.00      0.00
23:15:01    veth4cdc924       69.14     84.49     41.43     20.59      0.00      0.00      0.00      0.00
23:15:01    veth74f5f70        0.13      0.47      0.01      0.03      0.00      0.00      0.00      0.00
23:15:01    veth5bda329        8.77      9.33      1.28      1.25      0.00      0.00      0.00      0.00
23:15:01    veth2dcc38d        0.55      0.88      0.06      0.31      0.00      0.00      0.00      0.00
23:16:01    veth4cdc924       31.01     37.86     37.83     12.25      0.00      0.00      0.00      0.00
23:16:01    veth74f5f70        0.50      0.47      0.05      1.48      0.00      0.00      0.00      0.00
23:16:01    veth5bda329       15.71     10.93      1.41      1.63      0.00      0.00      0.00      0.00
23:16:01    veth2dcc38d        0.23      0.18      0.02      0.01      0.00      0.00      0.00      0.00
23:17:01    veth4cdc924        0.22      0.30      0.11      0.08      0.00      0.00      0.00      0.00
23:17:01    veth5bda329       13.83      9.35      1.05      1.34      0.00      0.00      0.00      0.00
23:17:01    veth8701818        8.33     11.40      1.40      0.98      0.00      0.00      0.00      0.00
23:17:01    br-629c0108f165    4.27      4.68      1.98      2.19      0.00      0.00      0.00      0.00
23:18:01    docker0            0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
23:18:01    ens3            1722.96    903.70  33075.45    146.62      0.00      0.00      0.00      0.00
23:18:01    lo                34.88     34.88      6.22      6.22      0.00      0.00      0.00      0.00
Average:    docker0            0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
Average:    ens3             201.91    101.15   4633.65     13.80      0.00      0.00      0.00      0.00
Average:    lo                 4.44      4.44      0.84      0.84      0.00      0.00      0.00      0.00

---> sar -P ALL:
Linux 4.15.0-192-generic (prd-ubuntu1804-docker-8c-8g-24270)  04/18/24  _x86_64_  (8 CPU)

23:10:22     LINUX RESTART      (8 CPU)

23:11:01    CPU     %user     %nice   %system   %iowait    %steal     %idle
23:12:02    all      9.40      0.00      0.80      4.66      0.13     85.01
23:12:02      0      2.62      0.00      0.25      0.17      0.00     96.97
23:12:02      1     27.04      0.00      1.62      3.29      0.03     68.02
23:12:02      2     21.29      0.00      1.20      0.83      0.05     76.63
23:12:02      3      0.42      0.00      0.35     19.75      0.02     79.46
23:12:02      4      9.91      0.00      0.92      1.49      0.03     87.65
23:12:02      5      6.41      0.00      0.77      0.72      0.02     92.09
23:12:02      6      4.82      0.00      0.89      0.47      0.02     93.80
23:12:02      7      2.79      0.00      0.38     10.62      0.81     85.39
23:13:01    all      9.80      0.00      1.00      4.76      0.04     84.41
23:13:01      0      2.49      0.00      0.53      0.17      0.03     96.78
23:13:01      1      3.76      0.00      0.77      0.17      0.03     95.27
23:13:01      2     17.39      0.00      1.09      2.66      0.03     78.83
23:13:01      3      0.27      0.00      0.41     28.01      0.03     71.27
23:13:01      4     27.78      0.00      1.80      2.93      0.07     67.42
23:13:01      5      8.15      0.00      0.97      0.39      0.03     90.46
23:13:01      6     13.68      0.00      1.41      1.35      0.05     83.51
23:13:01      7      4.95      0.00      1.03      2.42      0.03     91.56
23:14:01    all     12.09      0.00      5.66      8.02      0.07     74.16
23:14:01      0     11.18      0.00      6.00      0.85      0.07     81.90
23:14:01      1     10.68      0.00      6.67     33.65      0.07     48.94
23:14:01      2     12.95      0.00      5.36      0.09      0.09     81.52
23:14:01      3     11.85      0.00      5.79     22.82      0.07     59.48
23:14:01      4     10.18      0.00      3.66      0.41      0.07     85.69
23:14:01      5     14.34      0.00      5.58      1.55      0.07     78.47
23:14:01      6     12.58      0.00      5.60      4.67      0.07     77.08
23:14:01      7     12.96      0.00      6.64      0.30      0.07     80.03
23:15:01    all     29.55      0.00      4.33      4.09      0.08     61.96
23:15:01      0     23.63      0.00      4.04      1.55      0.07     70.71
23:15:01      1     34.87      0.00      4.46      2.85      0.08     57.73
23:15:01      2     32.15      0.00      4.23      1.22      0.08     62.32
23:15:01      3     28.56      0.00      4.84     17.77      0.08     48.75
23:15:01      4     26.51      0.00      3.97      1.14      0.07     68.31
23:15:01      5     31.38      0.00      4.16      1.84      0.08     62.54
23:15:01      6     33.22      0.00      4.53      2.62      0.08     59.54
23:15:01      7     26.09      0.00      4.39      3.75      0.07     65.70
23:16:01    all      4.78      0.00      0.42      0.02      0.04     94.73
23:16:01      0      4.14      0.00      0.30      0.00      0.03     95.53
23:16:01      1      5.11      0.00      0.45      0.00      0.03     94.41
23:16:01      2      3.86      0.00      0.42      0.02      0.07     95.64
23:16:01      3      4.68      0.00      0.52      0.02      0.03     94.76
23:16:01      4      4.11      0.00      0.42      0.10      0.05     95.32
23:16:01      5      5.08      0.00      0.40      0.03      0.02     94.47
23:16:01      6      4.95      0.00      0.31      0.00      0.05     94.69
23:16:01      7      6.34      0.00      0.53      0.02      0.05     93.06
23:17:01    all      1.42      0.00      0.32      0.12      0.04     98.10
23:17:01      0      1.07      0.00      0.30      0.02      0.05     98.57
23:17:01      1      1.25      0.00      0.27      0.00      0.03     98.45
23:17:01      2      2.39      0.00      0.47      0.05      0.07     97.03
23:17:01      3      1.92      0.00      0.30      0.07      0.03     97.68
23:17:01      4      1.08      0.00      0.30      0.38      0.05     98.18
23:17:01      5      1.07      0.00      0.33      0.02      0.03     98.55
23:17:01      6      1.16      0.00      0.29      0.23      0.03     98.29
23:17:01      7      1.39      0.00      0.33      0.23      0.05     97.99
23:18:01    all      6.81      0.00      0.63      0.39      0.03     92.14
23:18:01      0      2.49      0.00      0.62      0.03      0.02     96.84
23:18:01      1     10.96      0.00      0.68      0.20      0.03     88.12
23:18:01      2     15.12      0.00      0.75      0.22      0.03     83.88
23:18:01      3      1.52      0.00      0.48      0.08      0.03     97.88
23:18:01      4      2.42      0.00      0.62      2.12      0.05     94.79
23:18:01      5      5.14      0.00      0.58      0.12      0.02     94.15
23:18:01      6     15.78      0.00      0.85      0.07      0.05     83.25
23:18:01      7      1.10      0.00      0.48      0.27      0.02     98.13
Average:    all     10.53      0.00      1.87      3.13      0.06     84.40
Average:      0      6.80      0.00      1.71      0.40      0.04     91.06
Average:      1     13.40      0.00      2.12      5.67      0.05     78.76
Average:      2     15.01      0.00      1.92      0.72      0.06     82.29
Average:      3      7.00      0.00      1.80     12.56      0.04     78.59
Average:      4     11.66      0.00      1.66      1.22      0.06     85.40
Average:      5     10.19      0.00      1.82      0.66      0.04     87.30
Average:      6     12.28      0.00      1.97      1.33      0.05     84.36
Average:      7      7.92      0.00      1.96      2.53      0.16     87.43