Started by upstream project "policy-docker-master-merge-java" build number 353 originally caused by: Triggered by Gerrit: https://gerrit.onap.org/r/c/policy/docker/+/137744 Running as SYSTEM [EnvInject] - Loading node environment variables. Building remotely on prd-ubuntu1804-docker-8c-8g-25963 (ubuntu1804-docker-8c-8g) in workspace /w/workspace/policy-pap-master-project-csit-pap [ssh-agent] Looking for ssh-agent implementation... [ssh-agent] Exec ssh-agent (binary ssh-agent on a remote machine) $ ssh-agent SSH_AUTH_SOCK=/tmp/ssh-iff53dAjxR3o/agent.2018 SSH_AGENT_PID=2020 [ssh-agent] Started. Running ssh-add (command line suppressed) Identity added: /w/workspace/policy-pap-master-project-csit-pap@tmp/private_key_16031382337261226370.key (/w/workspace/policy-pap-master-project-csit-pap@tmp/private_key_16031382337261226370.key) [ssh-agent] Using credentials onap-jobbuiler (Gerrit user) The recommended git tool is: NONE using credential onap-jenkins-ssh Wiping out workspace first. Cloning the remote Git repository Cloning repository git://cloud.onap.org/mirror/policy/docker.git > git init /w/workspace/policy-pap-master-project-csit-pap # timeout=10 Fetching upstream changes from git://cloud.onap.org/mirror/policy/docker.git > git --version # timeout=10 > git --version # 'git version 2.17.1' using GIT_SSH to set credentials Gerrit user Verifying host key using manually-configured host key entries > git fetch --tags --progress -- git://cloud.onap.org/mirror/policy/docker.git +refs/heads/*:refs/remotes/origin/* # timeout=30 > git config remote.origin.url git://cloud.onap.org/mirror/policy/docker.git # timeout=10 > git config --add remote.origin.fetch +refs/heads/*:refs/remotes/origin/* # timeout=10 Avoid second fetch > git rev-parse refs/remotes/origin/master^{commit} # timeout=10 Checking out Revision b5981c8a48d21908d0ead6dc8d35b982c1917eb7 (refs/remotes/origin/master) > git config core.sparsecheckout # timeout=10 > git checkout -f 
b5981c8a48d21908d0ead6dc8d35b982c1917eb7 # timeout=30 Commit message: "Release docker images for policy/docker: 3.1.2" > git rev-list --no-walk b6d1d479556b6798d8dfb70aa87db4e247327113 # timeout=10 provisioning config files... copy managed file [npmrc] to file:/home/jenkins/.npmrc copy managed file [pipconf] to file:/home/jenkins/.config/pip/pip.conf [policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins10542353045183142744.sh ---> python-tools-install.sh Setup pyenv: * system (set by /opt/pyenv/version) * 3.8.13 (set by /opt/pyenv/version) * 3.9.13 (set by /opt/pyenv/version) * 3.10.6 (set by /opt/pyenv/version) lf-activate-venv(): INFO: Creating python3 venv at /tmp/venv-kcYA lf-activate-venv(): INFO: Save venv in file: /tmp/.os_lf_venv lf-activate-venv(): INFO: Installing: lftools lf-activate-venv(): INFO: Adding /tmp/venv-kcYA/bin to PATH Generating Requirements File Python 3.10.6 pip 24.0 from /tmp/venv-kcYA/lib/python3.10/site-packages/pip (python 3.10) appdirs==1.4.4 argcomplete==3.3.0 aspy.yaml==1.3.0 attrs==23.2.0 autopage==0.5.2 beautifulsoup4==4.12.3 boto3==1.34.91 botocore==1.34.91 bs4==0.0.2 cachetools==5.3.3 certifi==2024.2.2 cffi==1.16.0 cfgv==3.4.0 chardet==5.2.0 charset-normalizer==3.3.2 click==8.1.7 cliff==4.6.0 cmd2==2.4.3 cryptography==3.3.2 debtcollector==3.0.0 decorator==5.1.1 defusedxml==0.7.1 Deprecated==1.2.14 distlib==0.3.8 dnspython==2.6.1 docker==4.2.2 dogpile.cache==1.3.2 email_validator==2.1.1 filelock==3.13.4 future==1.0.0 gitdb==4.0.11 GitPython==3.1.43 google-auth==2.29.0 httplib2==0.22.0 identify==2.5.36 idna==3.7 importlib-resources==1.5.0 iso8601==2.1.0 Jinja2==3.1.3 jmespath==1.0.1 jsonpatch==1.33 jsonpointer==2.4 jsonschema==4.21.1 jsonschema-specifications==2023.12.1 keystoneauth1==5.6.0 kubernetes==29.0.0 lftools==0.37.10 lxml==5.2.1 MarkupSafe==2.1.5 msgpack==1.0.8 multi_key_dict==2.0.3 munch==4.0.0 netaddr==1.2.1 netifaces==0.11.0 niet==1.4.2 nodeenv==1.8.0 oauth2client==4.1.3 oauthlib==3.2.2 openstacksdk==3.1.0 
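The checkout above pins the workspace to an exact revision rather than a branch tip: fetch all branch heads into refs/remotes/origin/*, then force-checkout the SHA. A minimal sketch of that pattern, demonstrated against a throwaway local repository (paths and commit content here are made up, not the job's policy/docker mirror):

```shell
# Hypothetical demo repo standing in for the real remote.
set -e
work=$(mktemp -d)
git init -q "$work/origin-repo"
cd "$work/origin-repo"
git config user.email ci@example.org
git config user.name ci
echo v1 > file.txt
git add file.txt
git commit -q -m 'first commit'
sha=$(git rev-parse HEAD)

# Mirror the job's pattern: init an empty dir, fetch every branch from the
# remote into refs/remotes/origin/*, then force-checkout the pinned SHA,
# as in: git fetch ... '+refs/heads/*:refs/remotes/origin/*' && git checkout -f <sha>
git init -q "$work/clone-dir"
cd "$work/clone-dir"
git fetch -q "$work/origin-repo" '+refs/heads/*:refs/remotes/origin/*'
git checkout -qf "$sha"
```

Pinning to a SHA (rather than `master`) makes the build reproducible even if the branch moves between the fetch and the checkout.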
os-client-config==2.1.0 os-service-types==1.7.0 osc-lib==3.0.1 oslo.config==9.4.0 oslo.context==5.5.0 oslo.i18n==6.3.0 oslo.log==5.5.1 oslo.serialization==5.4.0 oslo.utils==7.1.0 packaging==24.0 pbr==6.0.0 platformdirs==4.2.1 prettytable==3.10.0 pyasn1==0.6.0 pyasn1_modules==0.4.0 pycparser==2.22 pygerrit2==2.0.15 PyGithub==2.3.0 pyinotify==0.9.6 PyJWT==2.8.0 PyNaCl==1.5.0 pyparsing==2.4.7 pyperclip==1.8.2 pyrsistent==0.20.0 python-cinderclient==9.5.0 python-dateutil==2.9.0.post0 python-heatclient==3.5.0 python-jenkins==1.8.2 python-keystoneclient==5.4.0 python-magnumclient==4.4.0 python-novaclient==18.6.0 python-openstackclient==6.6.0 python-swiftclient==4.5.0 PyYAML==6.0.1 referencing==0.35.0 requests==2.31.0 requests-oauthlib==2.0.0 requestsexceptions==1.4.0 rfc3986==2.0.0 rpds-py==0.18.0 rsa==4.9 ruamel.yaml==0.18.6 ruamel.yaml.clib==0.2.8 s3transfer==0.10.1 simplejson==3.19.2 six==1.16.0 smmap==5.0.1 soupsieve==2.5 stevedore==5.2.0 tabulate==0.9.0 toml==0.10.2 tomlkit==0.12.4 tqdm==4.66.2 typing_extensions==4.11.0 tzdata==2024.1 urllib3==1.26.18 virtualenv==20.26.0 wcwidth==0.2.13 websocket-client==1.8.0 wrapt==1.16.0 xdg==6.0.0 xmltodict==0.13.0 yq==3.4.1 [EnvInject] - Injecting environment variables from a build step. [EnvInject] - Injecting as environment variables the properties content SET_JDK_VERSION=openjdk17 GIT_URL="git://cloud.onap.org/mirror" [EnvInject] - Variables injected successfully. 
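The Python bootstrap above (`lf-activate-venv()`) creates a throwaway venv, installs tooling into it, and then records the resolved package set ("Generating Requirements File"). A condensed sketch of that create-activate-freeze pattern; directory names are illustrative, not the job's /tmp/venv-kcYA, and the tool-install step is skipped so the snippet runs offline:

```shell
set -e
venv_dir=$(mktemp -d)/venv
python3 -m venv "$venv_dir"

# "Activate" by sourcing, as lf-activate-venv() does before installing lftools
. "$venv_dir/bin/activate"

# Freeze the resolved environment into a requirements file for the build record
python3 -m pip freeze > "$venv_dir/requirements.txt"
```

Freezing the environment into the log is what makes the long `appdirs==1.4.4 ...` listing above possible to diff between builds when a dependency bump breaks something.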
[policy-pap-master-project-csit-pap] $ /bin/sh /tmp/jenkins10256896111309753631.sh ---> update-java-alternatives.sh ---> Updating Java version ---> Ubuntu/Debian system detected update-alternatives: using /usr/lib/jvm/java-17-openjdk-amd64/bin/java to provide /usr/bin/java (java) in manual mode update-alternatives: using /usr/lib/jvm/java-17-openjdk-amd64/bin/javac to provide /usr/bin/javac (javac) in manual mode update-alternatives: using /usr/lib/jvm/java-17-openjdk-amd64 to provide /usr/lib/jvm/java-openjdk (java_sdk_openjdk) in manual mode openjdk version "17.0.4" 2022-07-19 OpenJDK Runtime Environment (build 17.0.4+8-Ubuntu-118.04) OpenJDK 64-Bit Server VM (build 17.0.4+8-Ubuntu-118.04, mixed mode, sharing) JAVA_HOME=/usr/lib/jvm/java-17-openjdk-amd64 [EnvInject] - Injecting environment variables from a build step. [EnvInject] - Injecting as environment variables the properties file path '/tmp/java.env' [EnvInject] - Variables injected successfully. [policy-pap-master-project-csit-pap] $ /bin/sh -xe /tmp/jenkins13936068077934339228.sh + /w/workspace/policy-pap-master-project-csit-pap/csit/run-project-csit.sh pap + set +u + save_set + RUN_CSIT_SAVE_SET=ehxB + RUN_CSIT_SHELLOPTS=braceexpand:errexit:hashall:interactive-comments:pipefail:xtrace + '[' 1 -eq 0 ']' + '[' -z /w/workspace/policy-pap-master-project-csit-pap ']' + export PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin + export SCRIPTS=/w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts + SCRIPTS=/w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts + export ROBOT_VARIABLES= + 
ROBOT_VARIABLES= + export PROJECT=pap + PROJECT=pap + cd /w/workspace/policy-pap-master-project-csit-pap + rm -rf /w/workspace/policy-pap-master-project-csit-pap/csit/archives/pap + mkdir -p /w/workspace/policy-pap-master-project-csit-pap/csit/archives/pap + source_safely /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/prepare-robot-env.sh + '[' -z /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/prepare-robot-env.sh ']' + relax_set + set +e + set +o pipefail + . /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/prepare-robot-env.sh ++ '[' -z /w/workspace/policy-pap-master-project-csit-pap ']' +++ mktemp -d ++ ROBOT_VENV=/tmp/tmp.N0v6W0DKY8 ++ echo ROBOT_VENV=/tmp/tmp.N0v6W0DKY8 +++ python3 --version ++ echo 'Python version is: Python 3.6.9' Python version is: Python 3.6.9 ++ python3 -m venv --clear /tmp/tmp.N0v6W0DKY8 ++ source /tmp/tmp.N0v6W0DKY8/bin/activate +++ deactivate nondestructive +++ '[' -n '' ']' +++ '[' -n '' ']' +++ '[' -n /bin/bash -o -n '' ']' +++ hash -r +++ '[' -n '' ']' +++ unset VIRTUAL_ENV +++ '[' '!' 
nondestructive = nondestructive ']' +++ VIRTUAL_ENV=/tmp/tmp.N0v6W0DKY8 +++ export VIRTUAL_ENV +++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin +++ PATH=/tmp/tmp.N0v6W0DKY8/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin +++ export PATH +++ '[' -n '' ']' +++ '[' -z '' ']' +++ _OLD_VIRTUAL_PS1= +++ '[' 'x(tmp.N0v6W0DKY8) ' '!=' x ']' +++ PS1='(tmp.N0v6W0DKY8) ' +++ export PS1 +++ '[' -n /bin/bash -o -n '' ']' +++ hash -r ++ set -exu ++ python3 -m pip install -qq --upgrade 'pip<=23.0' 'setuptools<=66.1.1' ++ echo 'Installing Python Requirements' Installing Python Requirements ++ python3 -m pip install -qq -r /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/pylibs.txt ++ python3 -m pip -qq freeze bcrypt==4.0.1 beautifulsoup4==4.12.3 bitarray==2.9.2 certifi==2024.2.2 cffi==1.15.1 charset-normalizer==2.0.12 cryptography==40.0.2 decorator==5.1.1 elasticsearch==7.17.9 elasticsearch-dsl==7.4.1 enum34==1.1.10 idna==3.7 importlib-resources==5.4.0 ipaddr==2.2.0 isodate==0.6.1 jmespath==0.10.0 jsonpatch==1.32 jsonpath-rw==1.4.0 jsonpointer==2.3 lxml==5.2.1 netaddr==0.8.0 netifaces==0.11.0 odltools==0.1.28 paramiko==3.4.0 pkg_resources==0.0.0 ply==3.11 pyang==2.6.0 pyangbind==0.8.1 pycparser==2.21 pyhocon==0.3.60 PyNaCl==1.5.0 pyparsing==3.1.2 python-dateutil==2.9.0.post0 regex==2023.8.8 requests==2.27.1 robotframework==6.1.1 robotframework-httplibrary==0.4.2 robotframework-pythonlibcore==3.0.0 robotframework-requests==0.9.4 robotframework-selenium2library==3.0.0 robotframework-seleniumlibrary==5.1.3 robotframework-sshlibrary==3.8.0 scapy==2.5.0 scp==0.14.5 selenium==3.141.0 six==1.16.0 
soupsieve==2.3.2.post1 urllib3==1.26.18 waitress==2.0.0 WebOb==1.8.7 WebTest==3.0.0 zipp==3.6.0 ++ mkdir -p /tmp/tmp.N0v6W0DKY8/src/onap ++ rm -rf /tmp/tmp.N0v6W0DKY8/src/onap/testsuite ++ python3 -m pip install -qq --upgrade --extra-index-url=https://nexus3.onap.org/repository/PyPi.staging/simple 'robotframework-onap==0.6.0.*' --pre ++ echo 'Installing python confluent-kafka library' Installing python confluent-kafka library ++ python3 -m pip install -qq confluent-kafka ++ echo 'Uninstall docker-py and reinstall docker.' Uninstall docker-py and reinstall docker. ++ python3 -m pip uninstall -y -qq docker ++ python3 -m pip install -U -qq docker ++ python3 -m pip -qq freeze bcrypt==4.0.1 beautifulsoup4==4.12.3 bitarray==2.9.2 certifi==2024.2.2 cffi==1.15.1 charset-normalizer==2.0.12 confluent-kafka==2.3.0 cryptography==40.0.2 decorator==5.1.1 deepdiff==5.7.0 dnspython==2.2.1 docker==5.0.3 elasticsearch==7.17.9 elasticsearch-dsl==7.4.1 enum34==1.1.10 future==1.0.0 idna==3.7 importlib-resources==5.4.0 ipaddr==2.2.0 isodate==0.6.1 Jinja2==3.0.3 jmespath==0.10.0 jsonpatch==1.32 jsonpath-rw==1.4.0 jsonpointer==2.3 kafka-python==2.0.2 lxml==5.2.1 MarkupSafe==2.0.1 more-itertools==5.0.0 netaddr==0.8.0 netifaces==0.11.0 odltools==0.1.28 ordered-set==4.0.2 paramiko==3.4.0 pbr==6.0.0 pkg_resources==0.0.0 ply==3.11 protobuf==3.19.6 pyang==2.6.0 pyangbind==0.8.1 pycparser==2.21 pyhocon==0.3.60 PyNaCl==1.5.0 pyparsing==3.1.2 python-dateutil==2.9.0.post0 PyYAML==6.0.1 regex==2023.8.8 requests==2.27.1 robotframework==6.1.1 robotframework-httplibrary==0.4.2 robotframework-onap==0.6.0.dev105 robotframework-pythonlibcore==3.0.0 robotframework-requests==0.9.4 robotframework-selenium2library==3.0.0 robotframework-seleniumlibrary==5.1.3 robotframework-sshlibrary==3.8.0 robotlibcore-temp==1.0.2 scapy==2.5.0 scp==0.14.5 selenium==3.141.0 six==1.16.0 soupsieve==2.3.2.post1 urllib3==1.26.18 waitress==2.0.0 WebOb==1.8.7 websocket-client==1.3.1 WebTest==3.0.0 zipp==3.6.0 ++ uname ++ grep -q 
Linux ++ sudo apt-get -y -qq install libxml2-utils + load_set + _setopts=ehuxB ++ echo braceexpand:hashall:interactive-comments:nounset:xtrace ++ tr : ' ' + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') + set +o braceexpand + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') + set +o hashall + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') + set +o interactive-comments + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') + set +o nounset + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') + set +o xtrace ++ echo ehuxB ++ sed 's/./& /g' + for i in $(echo "$_setopts" | sed 's/./& /g') + set +e + for i in $(echo "$_setopts" | sed 's/./& /g') + set +h + for i in $(echo "$_setopts" | sed 's/./& /g') + set +u + for i in $(echo "$_setopts" | sed 's/./& /g') + set +x + source_safely /tmp/tmp.N0v6W0DKY8/bin/activate + '[' -z /tmp/tmp.N0v6W0DKY8/bin/activate ']' + relax_set + set +e + set +o pipefail + . /tmp/tmp.N0v6W0DKY8/bin/activate ++ deactivate nondestructive ++ '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin ']' ++ PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin ++ export PATH ++ unset _OLD_VIRTUAL_PATH ++ '[' -n '' ']' ++ '[' -n /bin/bash -o -n '' ']' ++ hash -r ++ '[' -n '' ']' ++ unset VIRTUAL_ENV ++ '[' '!' 
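The save_set / relax_set / load_set trio traced above saves the caller's shell flags, loosens them while sourcing third-party scripts (so a sourced script's failure doesn't kill the job), and restores them afterwards. A condensed re-creation of the idiom, simplified to single-letter flags (the real helpers also round-trip SHELLOPTS, as the `set +o ...` loop shows):

```shell
save_set() {
    SAVED_FLAGS=$-                        # e.g. "ehxB" in the log
}
relax_set() {
    set +e
    set +o pipefail 2>/dev/null || true   # pipefail is a bash-ism; guard for sh
}
load_set() {
    # Re-enable each single-letter flag captured by save_set, mirroring the
    # 'for i in $(echo "$_setopts" | sed 's/./& /g')' loop in the trace.
    for f in $(echo "$SAVED_FLAGS" | sed 's/./& /g'); do
        case $f in e|h|u|x) set "-$f" ;; esac
    done
}

set -e
save_set
relax_set
false || true      # with errexit relaxed, sourced-script failures are survivable
load_set           # errexit (and friends) are back on from here
```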
nondestructive = nondestructive ']' ++ VIRTUAL_ENV=/tmp/tmp.N0v6W0DKY8 ++ export VIRTUAL_ENV ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin ++ PATH=/tmp/tmp.N0v6W0DKY8/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin ++ export PATH ++ '[' -n '' ']' ++ '[' -z '' ']' ++ _OLD_VIRTUAL_PS1='(tmp.N0v6W0DKY8) ' ++ '[' 'x(tmp.N0v6W0DKY8) ' '!=' x ']' ++ PS1='(tmp.N0v6W0DKY8) (tmp.N0v6W0DKY8) ' ++ export PS1 ++ '[' -n /bin/bash -o -n '' ']' ++ hash -r + load_set + _setopts=hxB ++ echo braceexpand:hashall:interactive-comments:xtrace ++ tr : ' ' + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') + set +o braceexpand + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') + set +o hashall + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') + set +o interactive-comments + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') + set +o xtrace ++ echo hxB ++ sed 's/./& /g' + for i in $(echo "$_setopts" | sed 's/./& /g') + set +h + for i in $(echo "$_setopts" | sed 's/./& /g') + set +x + export TEST_PLAN_DIR=/w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests + TEST_PLAN_DIR=/w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests + export TEST_OPTIONS= + TEST_OPTIONS= ++ mktemp -d + WORKDIR=/tmp/tmp.WRjITlmLsr + cd /tmp/tmp.WRjITlmLsr + docker login -u docker -p docker nexus3.onap.org:10001 WARNING! Using --password via the CLI is insecure. Use --password-stdin. WARNING! Your password will be stored unencrypted in /home/jenkins/.docker/config.json. Configure a credential helper to remove this warning. 
See https://docs.docker.com/engine/reference/commandline/login/#credentials-store Login Succeeded + SETUP=/w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/setup-pap.sh + '[' -f /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/setup-pap.sh ']' + echo 'Running setup script /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/setup-pap.sh' Running setup script /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/setup-pap.sh + source_safely /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/setup-pap.sh + '[' -z /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/setup-pap.sh ']' + relax_set + set +e + set +o pipefail + . /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/setup-pap.sh ++ source /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/node-templates.sh +++ '[' -z /w/workspace/policy-pap-master-project-csit-pap ']' ++++ awk -F= '$1 == "defaultbranch" { print $2 }' /w/workspace/policy-pap-master-project-csit-pap/.gitreview +++ GERRIT_BRANCH=master +++ echo GERRIT_BRANCH=master GERRIT_BRANCH=master +++ rm -rf /w/workspace/policy-pap-master-project-csit-pap/models +++ mkdir /w/workspace/policy-pap-master-project-csit-pap/models +++ git clone -b master --single-branch https://github.com/onap/policy-models.git /w/workspace/policy-pap-master-project-csit-pap/models Cloning into '/w/workspace/policy-pap-master-project-csit-pap/models'... 
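The `docker login` step above triggers the CLI's insecurity warning because the password is passed with `-p` on the command line. The warning's own suggested fix is the `--password-stdin` form; a sketch using the same dummy credentials the job uses (guarded so it is a no-op where no Docker CLI is present):

```shell
# Read the registry password from stdin instead of the argument list, so it
# does not appear in `ps` output or shell history.
if command -v docker >/dev/null 2>&1; then
    echo "docker" | docker login -u docker --password-stdin nexus3.onap.org:10001
else
    echo "docker CLI not available; skipping login sketch"
fi
```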
+++ export DATA=/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies +++ DATA=/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies +++ export NODETEMPLATES=/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/nodetemplates +++ NODETEMPLATES=/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/nodetemplates +++ sed -e 's!Measurement_vGMUX!ADifferentValue!' /w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies/vCPE.policy.monitoring.input.tosca.json +++ sed -e 's!"version": "1.0.0"!"version": "2.0.0"!' -e 's!"policy-version": 1!"policy-version": 2!' /w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies/vCPE.policy.monitoring.input.tosca.json ++ source /w/workspace/policy-pap-master-project-csit-pap/compose/start-compose.sh apex-pdp --grafana +++ '[' -z /w/workspace/policy-pap-master-project-csit-pap ']' +++ COMPOSE_FOLDER=/w/workspace/policy-pap-master-project-csit-pap/compose +++ grafana=false +++ gui=false +++ [[ 2 -gt 0 ]] +++ key=apex-pdp +++ case $key in +++ echo apex-pdp apex-pdp +++ component=apex-pdp +++ shift +++ [[ 1 -gt 0 ]] +++ key=--grafana +++ case $key in +++ grafana=true +++ shift +++ [[ 0 -gt 0 ]] +++ cd /w/workspace/policy-pap-master-project-csit-pap/compose +++ echo 'Configuring docker compose...' Configuring docker compose... +++ source export-ports.sh +++ source get-versions.sh +++ '[' -z pap ']' +++ '[' -n apex-pdp ']' +++ '[' apex-pdp == logs ']' +++ '[' true = true ']' +++ echo 'Starting apex-pdp application with Grafana' Starting apex-pdp application with Grafana +++ docker-compose up -d apex-pdp grafana Creating network "compose_default" with the default driver Pulling prometheus (nexus3.onap.org:10001/prom/prometheus:latest)... 
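The `+++` trace above shows start-compose.sh's argument loop: a positional component name (apex-pdp) plus a `--grafana` flag flipping a boolean. A self-contained re-creation of that parsing pattern (the function wrapper is ours; the real script inlines the `while`/`case`/`shift` loop):

```shell
# Parse "<component> [--grafana] [--gui]" as the trace suggests:
# flags flip booleans, anything else becomes the component.
parse_args() {
    grafana=false
    gui=false
    component=""
    while [ "$#" -gt 0 ]; do
        key=$1
        case $key in
            --grafana) grafana=true ;;
            --gui)     gui=true ;;
            *)         component=$key ;;
        esac
        shift
    done
}

parse_args apex-pdp --grafana
echo "component=$component grafana=$grafana gui=$gui"
```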
latest: Pulling from prom/prometheus Digest: sha256:4f6c47e39a9064028766e8c95890ed15690c30f00c4ba14e7ce6ae1ded0295b1 Status: Downloaded newer image for nexus3.onap.org:10001/prom/prometheus:latest Pulling grafana (nexus3.onap.org:10001/grafana/grafana:latest)... latest: Pulling from grafana/grafana Digest: sha256:7d5faae481a4c6f436c99e98af11534f7fd5e8d3e35213552dd1dd02bc393d2e Status: Downloaded newer image for nexus3.onap.org:10001/grafana/grafana:latest Pulling mariadb (nexus3.onap.org:10001/mariadb:10.10.2)... 10.10.2: Pulling from mariadb Digest: sha256:bfc25a68e113de43d0d112f5a7126df8e278579c3224e3923359e1c1d8d5ce6e Status: Downloaded newer image for nexus3.onap.org:10001/mariadb:10.10.2 Pulling simulator (nexus3.onap.org:10001/onap/policy-models-simulator:3.1.2-SNAPSHOT)... 3.1.2-SNAPSHOT: Pulling from onap/policy-models-simulator Digest: sha256:c9ada35f340eeba61a8080b879d13c9b352efc0ce18da57ad5994ffee132c60f Status: Downloaded newer image for nexus3.onap.org:10001/onap/policy-models-simulator:3.1.2-SNAPSHOT Pulling zookeeper (confluentinc/cp-zookeeper:latest)... latest: Pulling from confluentinc/cp-zookeeper Digest: sha256:4dc780642bfc5ec3a2d4901e2ff1f9ddef7f7c5c0b793e1e2911cbfb4e3a3214 Status: Downloaded newer image for confluentinc/cp-zookeeper:latest Pulling kafka (confluentinc/cp-kafka:latest)... latest: Pulling from confluentinc/cp-kafka Digest: sha256:620734d9fc0bb1f9886932e5baf33806074469f40e3fe246a3fdbb59309535fa Status: Downloaded newer image for confluentinc/cp-kafka:latest Pulling policy-db-migrator (nexus3.onap.org:10001/onap/policy-db-migrator:3.1.2-SNAPSHOT)... 3.1.2-SNAPSHOT: Pulling from onap/policy-db-migrator Digest: sha256:6c43c624b12507ad4db9e9629273366fa843a4406dbb129d263c111145911791 Status: Downloaded newer image for nexus3.onap.org:10001/onap/policy-db-migrator:3.1.2-SNAPSHOT Pulling api (nexus3.onap.org:10001/onap/policy-api:3.1.2-SNAPSHOT)... 
3.1.2-SNAPSHOT: Pulling from onap/policy-api Digest: sha256:0e8cbccfee833c5b2be68d71dd51902b884e77df24bbbac2751693f58bdc20ce Status: Downloaded newer image for nexus3.onap.org:10001/onap/policy-api:3.1.2-SNAPSHOT Pulling pap (nexus3.onap.org:10001/onap/policy-pap:3.1.2-SNAPSHOT)... 3.1.2-SNAPSHOT: Pulling from onap/policy-pap Digest: sha256:4424490684da433df5069c1f1dbbafe83fffd4c8b6a174807fb10d6443ecef06 Status: Downloaded newer image for nexus3.onap.org:10001/onap/policy-pap:3.1.2-SNAPSHOT Pulling apex-pdp (nexus3.onap.org:10001/onap/policy-apex-pdp:3.1.2-SNAPSHOT)... 3.1.2-SNAPSHOT: Pulling from onap/policy-apex-pdp Digest: sha256:75a74a87b7345e553563fbe2ececcd2285ed9500fd91489d9968ae81123b9982 Status: Downloaded newer image for nexus3.onap.org:10001/onap/policy-apex-pdp:3.1.2-SNAPSHOT Creating mariadb ... Creating zookeeper ... Creating prometheus ... Creating simulator ... Creating zookeeper ... done Creating kafka ... Creating simulator ... done Creating prometheus ... done Creating grafana ... Creating grafana ... done Creating mariadb ... done Creating policy-db-migrator ... Creating kafka ... done Creating policy-db-migrator ... done Creating policy-api ... Creating policy-api ... done Creating policy-pap ... Creating policy-pap ... done Creating policy-apex-pdp ... Creating policy-apex-pdp ... done +++ echo 'Prometheus server: http://localhost:30259' Prometheus server: http://localhost:30259 +++ echo 'Grafana server: http://localhost:30269' Grafana server: http://localhost:30269 +++ cd /w/workspace/policy-pap-master-project-csit-pap ++ sleep 10 ++ unset http_proxy https_proxy ++ bash /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/wait_for_rest.sh localhost 30003 Waiting for REST to come up on localhost port 30003... 
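wait_for_rest.sh's internals are not shown in the log, so the following is only a plausible reconstruction of a bounded poll-for-TCP-port loop like the one it presumably runs against localhost:30003 (function name and retry policy are assumptions):

```shell
# Poll host:port until it accepts a TCP connection or retries run out.
wait_for_port() {
    host=$1 ; port=$2 ; tries=${3:-30}
    i=0
    while [ "$i" -lt "$tries" ]; do
        # bash's /dev/tcp pseudo-device; `nc -z host port` is the portable alternative
        if (exec 3<>"/dev/tcp/$host/$port") 2>/dev/null; then
            return 0
        fi
        i=$((i+1))
        sleep 1
    done
    echo "timed out waiting for $host:$port" >&2
    return 1
}

# Demo against a port that is almost certainly closed, so the loop times out.
wait_for_port localhost 1 1 2>/dev/null || echo "port closed (expected for this demo)"
```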
NAMES             STATUS
policy-apex-pdp   Up 10 seconds
policy-pap        Up 11 seconds
policy-api        Up 12 seconds
grafana           Up 16 seconds
kafka             Up 14 seconds
simulator         Up 18 seconds
prometheus        Up 17 seconds
zookeeper         Up 19 seconds
mariadb           Up 15 seconds
NAMES             STATUS
policy-apex-pdp   Up 15 seconds
policy-pap        Up 16 seconds
policy-api        Up 17 seconds
grafana           Up 21 seconds
kafka             Up 19 seconds
simulator         Up 23 seconds
prometheus        Up 22 seconds
zookeeper         Up 24 seconds
mariadb           Up 20 seconds
NAMES             STATUS
policy-apex-pdp   Up 20 seconds
policy-pap        Up 21 seconds
policy-api        Up 22 seconds
grafana           Up 26 seconds
kafka             Up 24 seconds
simulator         Up 28 seconds
prometheus        Up 27 seconds
zookeeper         Up 29 seconds
mariadb           Up 25 seconds
NAMES             STATUS
policy-apex-pdp   Up 25 seconds
policy-pap        Up 26 seconds
policy-api        Up 27 seconds
grafana           Up 31 seconds
kafka             Up 29 seconds
simulator         Up 33 seconds
prometheus        Up 32 seconds
zookeeper         Up 34 seconds
mariadb           Up 30 seconds
NAMES             STATUS
policy-apex-pdp   Up 30 seconds
policy-pap        Up 31 seconds
policy-api        Up 32 seconds
grafana           Up 36 seconds
kafka             Up 34 seconds
simulator         Up 38 seconds
prometheus        Up 37 seconds
zookeeper         Up 39 seconds
mariadb           Up 35 seconds
NAMES             STATUS
policy-apex-pdp   Up 35 seconds
policy-pap        Up 36 seconds
policy-api        Up 37 seconds
grafana           Up 41 seconds
kafka             Up 39 seconds
simulator         Up 43 seconds
prometheus        Up 42 seconds
zookeeper         Up 44 seconds
mariadb           Up 40 seconds
++ export 'SUITES=pap-test.robot pap-slas.robot'
++ SUITES='pap-test.robot pap-slas.robot'
++ ROBOT_VARIABLES='-v POLICY_PAP_IP:localhost:30003 -v POLICY_API_IP:localhost:30002 -v PROMETHEUS_IP:localhost:30259 -v DATA:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies -v NODETEMPLATES:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/nodetemplates'
+ load_set
+ _setopts=hxB
++ echo braceexpand:hashall:interactive-comments:xtrace
++ tr : ' '
+ for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
+ set +o braceexpand
+ for i in $(echo 
"${SHELLOPTS}" | tr ':' ' ')
+ set +o hashall
+ for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
+ set +o interactive-comments
+ for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
+ set +o xtrace
++ echo hxB
++ sed 's/./& /g'
+ for i in $(echo "$_setopts" | sed 's/./& /g')
+ set +h
+ for i in $(echo "$_setopts" | sed 's/./& /g')
+ set +x
+ docker_stats
+ tee /w/workspace/policy-pap-master-project-csit-pap/csit/archives/pap/_sysinfo-1-after-setup.txt
++ uname -s
+ '[' Linux == Darwin ']'
+ sh -c 'top -bn1 | head -3'
top - 10:42:14 up 4 min, 0 users, load average: 2.80, 1.22, 0.48
Tasks: 212 total, 1 running, 131 sleeping, 0 stopped, 0 zombie
%Cpu(s): 14.2 us, 2.9 sy, 0.0 ni, 79.7 id, 3.1 wa, 0.0 hi, 0.1 si, 0.1 st
+ echo
+ sh -c 'free -h'
              total        used        free      shared  buff/cache   available
Mem:           31G         2.7G        22G        1.3M        6.4G        28G
Swap:          1.0G          0B        1.0G
+ echo
+ docker ps --format 'table {{ .Names }}\t{{ .Status }}'
NAMES             STATUS
policy-apex-pdp   Up 35 seconds
policy-pap        Up 36 seconds
policy-api        Up 37 seconds
grafana           Up 41 seconds
kafka             Up 39 seconds
simulator         Up 43 seconds
prometheus        Up 42 seconds
zookeeper         Up 44 seconds
mariadb           Up 40 seconds
+ echo
+ docker stats --no-stream
CONTAINER ID   NAME              CPU %    MEM USAGE / LIMIT     MEM %   NET I/O           BLOCK I/O         PIDS
0cd24a47e995   policy-apex-pdp   6.90%    169.1MiB / 31.41GiB   0.53%   9.12kB / 8.53kB   0B / 0B           48
01840dcf12ac   policy-pap        11.15%   518.5MiB / 31.41GiB   1.61%   34.2kB / 35.6kB   0B / 149MB        62
2e4e0e945ab7   policy-api        0.12%    481.6MiB / 31.41GiB   1.50%   988kB / 647kB     0B / 0B           52
4fb0cd9c0d59   grafana           0.04%    54.38MiB / 31.41GiB   0.17%   19.1kB / 3.46kB   0B / 24.9MB       17
292d789ad7a2   kafka             6.52%    382.7MiB / 31.41GiB   1.19%   73.6kB / 77.5kB   45.1kB / 508kB    85
36e4808d58e1   simulator         0.09%    120.9MiB / 31.41GiB   0.38%   1.45kB / 0B       225kB / 0B        76
040d0b332f90   prometheus        0.23%    18.55MiB / 31.41GiB   0.06%   1.64kB / 474B     0B / 0B           11
344146e5e708   zookeeper         0.08%    99MiB / 31.41GiB      0.31%   56.9kB / 49.5kB   4.1kB / 385kB     60
155039a0244e   mariadb           0.02%    102.2MiB / 31.41GiB   0.32%   935kB / 1.18MB    10.8MB / 53.8MB   40
+ echo
+ 
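docker_stats above tees a host/container snapshot into the archive directory for post-mortem use. A trimmed sketch of its host-metrics half (the docker ps / docker stats calls are omitted so the snippet runs without a daemon; the archive path is illustrative, not the job's _sysinfo-1-after-setup.txt):

```shell
# Capture a small system snapshot and tee it to an archive file, mirroring
# 'docker_stats | tee .../_sysinfo-1-after-setup.txt'.
snapshot() {
    uname -s
    top -bn1 2>/dev/null | head -3 || true   # tolerate minimal images without top
    free -h 2>/dev/null || true              # ...or without procps' free
}

archive=$(mktemp -d)/_sysinfo-sketch.txt
snapshot | tee "$archive" >/dev/null
```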
cd /tmp/tmp.WRjITlmLsr + echo 'Reading the testplan:' Reading the testplan: + echo 'pap-test.robot pap-slas.robot' + egrep -v '(^[[:space:]]*#|^[[:space:]]*$)' + sed 's|^|/w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/|' + cat testplan.txt /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-test.robot /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-slas.robot ++ xargs + SUITES='/w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-test.robot /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-slas.robot' + echo 'ROBOT_VARIABLES=-v POLICY_PAP_IP:localhost:30003 -v POLICY_API_IP:localhost:30002 -v PROMETHEUS_IP:localhost:30259 -v DATA:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies -v NODETEMPLATES:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/nodetemplates' ROBOT_VARIABLES=-v POLICY_PAP_IP:localhost:30003 -v POLICY_API_IP:localhost:30002 -v PROMETHEUS_IP:localhost:30259 -v DATA:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies -v NODETEMPLATES:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/nodetemplates + echo 'Starting Robot test suites /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-test.robot /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-slas.robot ...' Starting Robot test suites /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-test.robot /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-slas.robot ... 
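The testplan expansion above (strip comment and blank lines, prefix each suite with the tests directory, then xargs-join into one SUITES string) can be reproduced in isolation. The directory and testplan contents below are stand-ins, not the job's workspace:

```shell
set -e
cd "$(mktemp -d)"
TEST_PLAN_DIR=/illustrative/tests

# A testplan with a comment and a blank line, like the real one may contain
printf '%s\n' '# suites to run' '' 'pap-test.robot' 'pap-slas.robot' > testplan.txt

# egrep -v drops comment/blank lines; sed prefixes the directory; xargs joins
SUITES=$(grep -Ev '(^[[:space:]]*#|^[[:space:]]*$)' testplan.txt \
    | sed "s|^|${TEST_PLAN_DIR}/|" \
    | xargs)
echo "$SUITES"
```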
+ relax_set
+ set +e
+ set +o pipefail
+ python3 -m robot.run -N pap -v WORKSPACE:/tmp -v POLICY_PAP_IP:localhost:30003 -v POLICY_API_IP:localhost:30002 -v PROMETHEUS_IP:localhost:30259 -v DATA:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies -v NODETEMPLATES:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/nodetemplates /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-test.robot /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-slas.robot
==============================================================================
pap
==============================================================================
pap.Pap-Test
==============================================================================
LoadPolicy :: Create a policy named 'onap.restart.tca' and version... | PASS |
------------------------------------------------------------------------------
LoadPolicyWithMetadataSet :: Create a policy named 'operational.ap... | PASS |
------------------------------------------------------------------------------
LoadNodeTemplates :: Create node templates in database using speci... | PASS |
------------------------------------------------------------------------------
Healthcheck :: Verify policy pap health check | PASS |
------------------------------------------------------------------------------
Consolidated Healthcheck :: Verify policy consolidated health check | PASS |
------------------------------------------------------------------------------
Metrics :: Verify policy pap is exporting prometheus metrics | PASS |
------------------------------------------------------------------------------
AddPdpGroup :: Add a new PdpGroup named 'testGroup' in the policy ...
| PASS |
------------------------------------------------------------------------------
QueryPdpGroupsBeforeActivation :: Verify PdpGroups before activation | PASS |
------------------------------------------------------------------------------
ActivatePdpGroup :: Change the state of PdpGroup named 'testGroup'... | PASS |
------------------------------------------------------------------------------
QueryPdpGroupsAfterActivation :: Verify PdpGroups after activation | PASS |
------------------------------------------------------------------------------
DeployPdpGroups :: Deploy policies in PdpGroups | PASS |
------------------------------------------------------------------------------
QueryPdpGroupsAfterDeploy :: Verify PdpGroups after deploy | PASS |
------------------------------------------------------------------------------
QueryPolicyAuditAfterDeploy :: Verify policy audit record after de... | PASS |
------------------------------------------------------------------------------
QueryPolicyAuditWithMetadataSetAfterDeploy :: Verify policy audit ... | PASS |
------------------------------------------------------------------------------
UndeployPolicy :: Undeploy a policy named 'onap.restart.tca' from ... | PASS |
------------------------------------------------------------------------------
UndeployPolicyWithMetadataSet :: Undeploy a policy named 'operatio... | PASS |
------------------------------------------------------------------------------
QueryPdpGroupsAfterUndeploy :: Verify PdpGroups after undeploy | PASS |
------------------------------------------------------------------------------
QueryPolicyAuditAfterUnDeploy :: Verify policy audit record after ... | PASS |
------------------------------------------------------------------------------
QueryPolicyAuditWithMetadataSetAfterUnDeploy :: Verify policy audi...
| PASS | ------------------------------------------------------------------------------ DeactivatePdpGroup :: Change the state of PdpGroup named 'testGrou... | PASS | ------------------------------------------------------------------------------ DeletePdpGroups :: Delete the PdpGroup named 'testGroup' from poli... | PASS | ------------------------------------------------------------------------------ QueryPdpGroupsAfterDelete :: Verify PdpGroups after delete | PASS | ------------------------------------------------------------------------------ pap.Pap-Test | PASS | 22 tests, 22 passed, 0 failed ============================================================================== pap.Pap-Slas ============================================================================== WaitForPrometheusServer :: Wait for Prometheus server to gather al... | PASS | ------------------------------------------------------------------------------ ValidateResponseTimeForHealthcheck :: Validate component healthche... | PASS | ------------------------------------------------------------------------------ ValidateResponseTimeForSystemHealthcheck :: Validate if system hea... | PASS | ------------------------------------------------------------------------------ ValidateResponseTimeQueryPolicyAudit :: Validate query audits resp... | PASS | ------------------------------------------------------------------------------ ValidateResponseTimeUpdateGroup :: Validate pdps/group response time | PASS | ------------------------------------------------------------------------------ ValidatePolicyDeploymentTime :: Check if deployment of policy is u... | PASS | ------------------------------------------------------------------------------ ValidateResponseTimeDeletePolicy :: Check if undeployment of polic... | PASS | ------------------------------------------------------------------------------ ValidateResponseTimeDeleteGroup :: Validate delete group response ... 
| PASS | ------------------------------------------------------------------------------ pap.Pap-Slas | PASS | 8 tests, 8 passed, 0 failed ============================================================================== pap | PASS | 30 tests, 30 passed, 0 failed ============================================================================== Output: /tmp/tmp.WRjITlmLsr/output.xml Log: /tmp/tmp.WRjITlmLsr/log.html Report: /tmp/tmp.WRjITlmLsr/report.html + RESULT=0 + load_set + _setopts=hxB ++ echo braceexpand:hashall:interactive-comments:xtrace ++ tr : ' ' + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') + set +o braceexpand + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') + set +o hashall + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') + set +o interactive-comments + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') + set +o xtrace ++ echo hxB ++ sed 's/./& /g' + for i in $(echo "$_setopts" | sed 's/./& /g') + set +h + for i in $(echo "$_setopts" | sed 's/./& /g') + set +x + echo 'RESULT: 0' RESULT: 0 + exit 0 + on_exit + rc=0 + [[ -n /w/workspace/policy-pap-master-project-csit-pap ]] + docker ps --format 'table {{ .Names }}\t{{ .Status }}' NAMES STATUS policy-apex-pdp Up 2 minutes policy-pap Up 2 minutes policy-api Up 2 minutes grafana Up 2 minutes kafka Up 2 minutes simulator Up 2 minutes prometheus Up 2 minutes zookeeper Up 2 minutes mariadb Up 2 minutes + docker_stats ++ uname -s + '[' Linux == Darwin ']' + sh -c 'top -bn1 | head -3' top - 10:44:04 up 6 min, 0 users, load average: 0.74, 1.01, 0.49 Tasks: 203 total, 1 running, 129 sleeping, 0 stopped, 0 zombie %Cpu(s): 11.1 us, 2.2 sy, 0.0 ni, 84.1 id, 2.4 wa, 0.0 hi, 0.0 si, 0.1 st + echo + sh -c 'free -h' total used free shared buff/cache available Mem: 31G 2.7G 22G 1.3M 6.4G 28G Swap: 1.0G 0B 1.0G + echo + docker ps --format 'table {{ .Names }}\t{{ .Status }}' NAMES STATUS policy-apex-pdp Up 2 minutes policy-pap Up 2 minutes policy-api Up 2 minutes grafana Up 2 minutes kafka Up 2 minutes simulator Up 2 minutes prometheus Up 
2 minutes zookeeper Up 2 minutes mariadb Up 2 minutes + echo + docker stats --no-stream CONTAINER ID NAME CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O PIDS 0cd24a47e995 policy-apex-pdp 0.54% 180.9MiB / 31.41GiB 0.56% 56.7kB / 91.1kB 0B / 0B 52 01840dcf12ac policy-pap 0.88% 472.8MiB / 31.41GiB 1.47% 2.47MB / 1.04MB 0B / 149MB 66 2e4e0e945ab7 policy-api 0.10% 528.1MiB / 31.41GiB 1.64% 2.45MB / 1.1MB 0B / 0B 55 4fb0cd9c0d59 grafana 0.03% 56.33MiB / 31.41GiB 0.18% 19.8kB / 4.41kB 0B / 24.9MB 17 292d789ad7a2 kafka 9.17% 391.5MiB / 31.41GiB 1.22% 241kB / 216kB 45.1kB / 606kB 85 36e4808d58e1 simulator 0.08% 121.1MiB / 31.41GiB 0.38% 1.67kB / 0B 225kB / 0B 78 040d0b332f90 prometheus 0.32% 24.66MiB / 31.41GiB 0.08% 188kB / 11.1kB 0B / 0B 13 344146e5e708 zookeeper 0.22% 99.02MiB / 31.41GiB 0.31% 59.8kB / 51.1kB 4.1kB / 385kB 60 155039a0244e mariadb 0.02% 103.3MiB / 31.41GiB 0.32% 2.02MB / 4.87MB 10.8MB / 54.1MB 28 + echo + source_safely /w/workspace/policy-pap-master-project-csit-pap/compose/stop-compose.sh + '[' -z /w/workspace/policy-pap-master-project-csit-pap/compose/stop-compose.sh ']' + relax_set + set +e + set +o pipefail + . /w/workspace/policy-pap-master-project-csit-pap/compose/stop-compose.sh ++ echo 'Shut down started!' Shut down started! ++ '[' -z /w/workspace/policy-pap-master-project-csit-pap ']' ++ COMPOSE_FOLDER=/w/workspace/policy-pap-master-project-csit-pap/compose ++ cd /w/workspace/policy-pap-master-project-csit-pap/compose ++ source export-ports.sh ++ source get-versions.sh ++ echo 'Collecting logs from docker compose containers...' Collecting logs from docker compose containers... 
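The `docker stats --no-stream` snapshot above is the place to spot a resource-hungry container in this job. As a minimal sketch (not part of the CI scripts themselves), the CPU column of such a snapshot can be totalled with awk; the `stats` sample below is a hypothetical two-field reduction (name, CPU %) of a few rows from the table above:

```shell
# Sum the CPU % column of a captured `docker stats` snapshot.
# Sample name/CPU pairs transcribed from the table above; the awk field
# positions assume this simplified two-column layout, not the full format.
stats='policy-apex-pdp 0.54
policy-pap 0.88
policy-api 0.10
kafka 9.17'
echo "$stats" | awk '{cpu += $2} END {printf "total cpu %.2f%%\n", cpu}'
# → total cpu 10.69%
```

A per-container memory total could be produced the same way by pointing awk at a different column of the real output.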
++ docker-compose logs ++ cat docker_compose.log Attaching to policy-apex-pdp, policy-pap, policy-api, policy-db-migrator, grafana, kafka, simulator, prometheus, zookeeper, mariadb grafana | logger=settings t=2024-04-25T10:41:33.508718444Z level=info msg="Starting Grafana" version=10.4.2 commit=701c851be7a930e04fbc6ebb1cd4254da80edd4c branch=v10.4.x compiled=2024-04-25T10:41:33Z grafana | logger=settings t=2024-04-25T10:41:33.509058252Z level=info msg="Config loaded from" file=/usr/share/grafana/conf/defaults.ini grafana | logger=settings t=2024-04-25T10:41:33.509095723Z level=info msg="Config loaded from" file=/etc/grafana/grafana.ini grafana | logger=settings t=2024-04-25T10:41:33.509123043Z level=info msg="Config overridden from command line" arg="default.paths.data=/var/lib/grafana" grafana | logger=settings t=2024-04-25T10:41:33.509149374Z level=info msg="Config overridden from command line" arg="default.paths.logs=/var/log/grafana" grafana | logger=settings t=2024-04-25T10:41:33.509177965Z level=info msg="Config overridden from command line" arg="default.paths.plugins=/var/lib/grafana/plugins" grafana | logger=settings t=2024-04-25T10:41:33.509227196Z level=info msg="Config overridden from command line" arg="default.paths.provisioning=/etc/grafana/provisioning" grafana | logger=settings t=2024-04-25T10:41:33.509267547Z level=info msg="Config overridden from command line" arg="default.log.mode=console" grafana | logger=settings t=2024-04-25T10:41:33.509313618Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_DATA=/var/lib/grafana" grafana | logger=settings t=2024-04-25T10:41:33.509365339Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_LOGS=/var/log/grafana" grafana | logger=settings t=2024-04-25T10:41:33.50941114Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_PLUGINS=/var/lib/grafana/plugins" grafana | logger=settings t=2024-04-25T10:41:33.509445591Z level=info msg="Config 
overridden from Environment variable" var="GF_PATHS_PROVISIONING=/etc/grafana/provisioning" grafana | logger=settings t=2024-04-25T10:41:33.509507302Z level=info msg=Target target=[all] grafana | logger=settings t=2024-04-25T10:41:33.509544593Z level=info msg="Path Home" path=/usr/share/grafana grafana | logger=settings t=2024-04-25T10:41:33.509573853Z level=info msg="Path Data" path=/var/lib/grafana grafana | logger=settings t=2024-04-25T10:41:33.509614834Z level=info msg="Path Logs" path=/var/log/grafana grafana | logger=settings t=2024-04-25T10:41:33.509639825Z level=info msg="Path Plugins" path=/var/lib/grafana/plugins grafana | logger=settings t=2024-04-25T10:41:33.509690446Z level=info msg="Path Provisioning" path=/etc/grafana/provisioning grafana | logger=settings t=2024-04-25T10:41:33.509722007Z level=info msg="App mode production" grafana | logger=sqlstore t=2024-04-25T10:41:33.510051134Z level=info msg="Connecting to DB" dbtype=sqlite3 grafana | logger=sqlstore t=2024-04-25T10:41:33.510102245Z level=info msg="Creating SQLite database file" path=/var/lib/grafana/grafana.db grafana | logger=migrator t=2024-04-25T10:41:33.51074536Z level=info msg="Starting DB migrations" grafana | logger=migrator t=2024-04-25T10:41:33.511729783Z level=info msg="Executing migration" id="create migration_log table" grafana | logger=migrator t=2024-04-25T10:41:33.512638683Z level=info msg="Migration successfully executed" id="create migration_log table" duration=909.65µs grafana | logger=migrator t=2024-04-25T10:41:33.521858203Z level=info msg="Executing migration" id="create user table" grafana | logger=migrator t=2024-04-25T10:41:33.522760664Z level=info msg="Migration successfully executed" id="create user table" duration=904.511µs grafana | logger=migrator t=2024-04-25T10:41:33.526775575Z level=info msg="Executing migration" id="add unique index user.login" grafana | logger=migrator t=2024-04-25T10:41:33.527617424Z level=info msg="Migration successfully executed" id="add 
unique index user.login" duration=846.009µs grafana | logger=migrator t=2024-04-25T10:41:33.532144367Z level=info msg="Executing migration" id="add unique index user.email" grafana | logger=migrator t=2024-04-25T10:41:33.533495479Z level=info msg="Migration successfully executed" id="add unique index user.email" duration=1.351531ms grafana | logger=migrator t=2024-04-25T10:41:33.539882324Z level=info msg="Executing migration" id="drop index UQE_user_login - v1" grafana | logger=migrator t=2024-04-25T10:41:33.54106449Z level=info msg="Migration successfully executed" id="drop index UQE_user_login - v1" duration=1.182666ms grafana | logger=migrator t=2024-04-25T10:41:33.545367369Z level=info msg="Executing migration" id="drop index UQE_user_email - v1" grafana | logger=migrator t=2024-04-25T10:41:33.546506934Z level=info msg="Migration successfully executed" id="drop index UQE_user_email - v1" duration=1.140335ms grafana | logger=migrator t=2024-04-25T10:41:33.550467495Z level=info msg="Executing migration" id="Rename table user to user_v1 - v1" grafana | logger=migrator t=2024-04-25T10:41:33.552851298Z level=info msg="Migration successfully executed" id="Rename table user to user_v1 - v1" duration=2.380933ms grafana | logger=migrator t=2024-04-25T10:41:33.559003539Z level=info msg="Executing migration" id="create user table v2" grafana | logger=migrator t=2024-04-25T10:41:33.56039623Z level=info msg="Migration successfully executed" id="create user table v2" duration=1.392331ms grafana | logger=migrator t=2024-04-25T10:41:33.564348531Z level=info msg="Executing migration" id="create index UQE_user_login - v2" grafana | logger=migrator t=2024-04-25T10:41:33.565546138Z level=info msg="Migration successfully executed" id="create index UQE_user_login - v2" duration=1.198127ms grafana | logger=migrator t=2024-04-25T10:41:33.568955605Z level=info msg="Executing migration" id="create index UQE_user_email - v2" grafana | logger=migrator t=2024-04-25T10:41:33.569684812Z 
level=info msg="Migration successfully executed" id="create index UQE_user_email - v2" duration=729.007µs grafana | logger=migrator t=2024-04-25T10:41:33.573146131Z level=info msg="Executing migration" id="copy data_source v1 to v2" grafana | logger=migrator t=2024-04-25T10:41:33.573601741Z level=info msg="Migration successfully executed" id="copy data_source v1 to v2" duration=454.93µs grafana | logger=migrator t=2024-04-25T10:41:33.579518186Z level=info msg="Executing migration" id="Drop old table user_v1" grafana | logger=migrator t=2024-04-25T10:41:33.580430666Z level=info msg="Migration successfully executed" id="Drop old table user_v1" duration=911.77µs grafana | logger=migrator t=2024-04-25T10:41:33.584212093Z level=info msg="Executing migration" id="Add column help_flags1 to user table" grafana | logger=migrator t=2024-04-25T10:41:33.586062255Z level=info msg="Migration successfully executed" id="Add column help_flags1 to user table" duration=1.854582ms grafana | logger=migrator t=2024-04-25T10:41:33.589852622Z level=info msg="Executing migration" id="Update user table charset" grafana | logger=migrator t=2024-04-25T10:41:33.589879882Z level=info msg="Migration successfully executed" id="Update user table charset" duration=27.88µs grafana | logger=migrator t=2024-04-25T10:41:33.595618333Z level=info msg="Executing migration" id="Add last_seen_at column to user" grafana | logger=migrator t=2024-04-25T10:41:33.596758098Z level=info msg="Migration successfully executed" id="Add last_seen_at column to user" duration=1.135755ms grafana | logger=migrator t=2024-04-25T10:41:33.600177176Z level=info msg="Executing migration" id="Add missing user data" grafana | logger=migrator t=2024-04-25T10:41:33.600668608Z level=info msg="Migration successfully executed" id="Add missing user data" duration=490.812µs grafana | logger=migrator t=2024-04-25T10:41:33.604410513Z level=info msg="Executing migration" id="Add is_disabled column to user" grafana | logger=migrator 
t=2024-04-25T10:41:33.60646386Z level=info msg="Migration successfully executed" id="Add is_disabled column to user" duration=1.817932ms grafana | logger=migrator t=2024-04-25T10:41:33.610121393Z level=info msg="Executing migration" id="Add index user.login/user.email" grafana | logger=migrator t=2024-04-25T10:41:33.61132046Z level=info msg="Migration successfully executed" id="Add index user.login/user.email" duration=1.207697ms grafana | logger=migrator t=2024-04-25T10:41:33.617194493Z level=info msg="Executing migration" id="Add is_service_account column to user" grafana | logger=migrator t=2024-04-25T10:41:33.61834556Z level=info msg="Migration successfully executed" id="Add is_service_account column to user" duration=1.150417ms grafana | logger=migrator t=2024-04-25T10:41:33.621481501Z level=info msg="Executing migration" id="Update is_service_account column to nullable" grafana | logger=migrator t=2024-04-25T10:41:33.629231208Z level=info msg="Migration successfully executed" id="Update is_service_account column to nullable" duration=7.749067ms grafana | logger=migrator t=2024-04-25T10:41:33.632794209Z level=info msg="Executing migration" id="Add uid column to user" grafana | logger=migrator t=2024-04-25T10:41:33.633962135Z level=info msg="Migration successfully executed" id="Add uid column to user" duration=1.167336ms grafana | logger=migrator t=2024-04-25T10:41:33.637333163Z level=info msg="Executing migration" id="Update uid column values for users" grafana | logger=migrator t=2024-04-25T10:41:33.63765289Z level=info msg="Migration successfully executed" id="Update uid column values for users" duration=319.837µs grafana | logger=migrator t=2024-04-25T10:41:33.643453782Z level=info msg="Executing migration" id="Add unique index user_uid" grafana | logger=migrator t=2024-04-25T10:41:33.644649829Z level=info msg="Migration successfully executed" id="Add unique index user_uid" duration=1.180066ms grafana | logger=migrator t=2024-04-25T10:41:33.648372854Z 
level=info msg="Executing migration" id="update login field with orgid to allow for multiple service accounts with same name across orgs" grafana | logger=migrator t=2024-04-25T10:41:33.648941647Z level=info msg="Migration successfully executed" id="update login field with orgid to allow for multiple service accounts with same name across orgs" duration=567.793µs grafana | logger=migrator t=2024-04-25T10:41:33.652773214Z level=info msg="Executing migration" id="create temp user table v1-7" grafana | logger=migrator t=2024-04-25T10:41:33.653641503Z level=info msg="Migration successfully executed" id="create temp user table v1-7" duration=864.759µs grafana | logger=migrator t=2024-04-25T10:41:33.659348424Z level=info msg="Executing migration" id="create index IDX_temp_user_email - v1-7" grafana | logger=migrator t=2024-04-25T10:41:33.66052724Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_email - v1-7" duration=1.177756ms grafana | logger=migrator t=2024-04-25T10:41:33.664353767Z level=info msg="Executing migration" id="create index IDX_temp_user_org_id - v1-7" grafana | logger=migrator t=2024-04-25T10:41:33.665565865Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_org_id - v1-7" duration=1.211948ms grafana | logger=migrator t=2024-04-25T10:41:33.669705889Z level=info msg="Executing migration" id="create index IDX_temp_user_code - v1-7" grafana | logger=migrator t=2024-04-25T10:41:33.670489527Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_code - v1-7" duration=786.268µs grafana | logger=migrator t=2024-04-25T10:41:33.676947584Z level=info msg="Executing migration" id="create index IDX_temp_user_status - v1-7" grafana | logger=migrator t=2024-04-25T10:41:33.677913456Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_status - v1-7" duration=964.802µs grafana | logger=migrator t=2024-04-25T10:41:33.681651481Z level=info msg="Executing 
migration" id="Update temp_user table charset" grafana | logger=migrator t=2024-04-25T10:41:33.681689582Z level=info msg="Migration successfully executed" id="Update temp_user table charset" duration=39.431µs grafana | logger=migrator t=2024-04-25T10:41:33.685368636Z level=info msg="Executing migration" id="drop index IDX_temp_user_email - v1" grafana | logger=migrator t=2024-04-25T10:41:33.686118093Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_email - v1" duration=750.657µs grafana | logger=migrator t=2024-04-25T10:41:33.689423438Z level=info msg="Executing migration" id="drop index IDX_temp_user_org_id - v1" grafana | logger=migrator t=2024-04-25T10:41:33.690187076Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_org_id - v1" duration=764.108µs grafana | logger=migrator t=2024-04-25T10:41:33.696226363Z level=info msg="Executing migration" id="drop index IDX_temp_user_code - v1" grafana | logger=migrator t=2024-04-25T10:41:33.697355769Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_code - v1" duration=1.129516ms grafana | logger=migrator t=2024-04-25T10:41:33.701088373Z level=info msg="Executing migration" id="drop index IDX_temp_user_status - v1" grafana | logger=migrator t=2024-04-25T10:41:33.702153918Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_status - v1" duration=1.065585ms grafana | logger=migrator t=2024-04-25T10:41:33.708204726Z level=info msg="Executing migration" id="Rename table temp_user to temp_user_tmp_qwerty - v1" grafana | logger=migrator t=2024-04-25T10:41:33.711256655Z level=info msg="Migration successfully executed" id="Rename table temp_user to temp_user_tmp_qwerty - v1" duration=3.052869ms grafana | logger=migrator t=2024-04-25T10:41:33.714892538Z level=info msg="Executing migration" id="create temp_user v2" grafana | logger=migrator t=2024-04-25T10:41:33.715799229Z level=info msg="Migration successfully executed" 
id="create temp_user v2" duration=906.201µs grafana | logger=migrator t=2024-04-25T10:41:33.719190826Z level=info msg="Executing migration" id="create index IDX_temp_user_email - v2" grafana | logger=migrator t=2024-04-25T10:41:33.719974254Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_email - v2" duration=783.428µs grafana | logger=migrator t=2024-04-25T10:41:33.723281159Z level=info msg="Executing migration" id="create index IDX_temp_user_org_id - v2" grafana | logger=migrator t=2024-04-25T10:41:33.725157742Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_org_id - v2" duration=1.874963ms grafana | logger=migrator t=2024-04-25T10:41:33.730625136Z level=info msg="Executing migration" id="create index IDX_temp_user_code - v2" grafana | logger=migrator t=2024-04-25T10:41:33.731438305Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_code - v2" duration=813.349µs grafana | logger=migrator t=2024-04-25T10:41:33.734489854Z level=info msg="Executing migration" id="create index IDX_temp_user_status - v2" grafana | logger=migrator t=2024-04-25T10:41:33.735047598Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_status - v2" duration=557.363µs grafana | logger=migrator t=2024-04-25T10:41:33.737969994Z level=info msg="Executing migration" id="copy temp_user v1 to v2" grafana | logger=migrator t=2024-04-25T10:41:33.73825117Z level=info msg="Migration successfully executed" id="copy temp_user v1 to v2" duration=282.186µs grafana | logger=migrator t=2024-04-25T10:41:33.741275429Z level=info msg="Executing migration" id="drop temp_user_tmp_qwerty" grafana | logger=migrator t=2024-04-25T10:41:33.741704329Z level=info msg="Migration successfully executed" id="drop temp_user_tmp_qwerty" duration=429.25µs grafana | logger=migrator t=2024-04-25T10:41:33.746899567Z level=info msg="Executing migration" id="Set created for temp users that will otherwise 
prematurely expire" grafana | logger=migrator t=2024-04-25T10:41:33.747284136Z level=info msg="Migration successfully executed" id="Set created for temp users that will otherwise prematurely expire" duration=384.589µs grafana | logger=migrator t=2024-04-25T10:41:33.750139551Z level=info msg="Executing migration" id="create star table" grafana | logger=migrator t=2024-04-25T10:41:33.750837066Z level=info msg="Migration successfully executed" id="create star table" duration=697.915µs grafana | logger=migrator t=2024-04-25T10:41:33.753842425Z level=info msg="Executing migration" id="add unique index star.user_id_dashboard_id" grafana | logger=migrator t=2024-04-25T10:41:33.754609453Z level=info msg="Migration successfully executed" id="add unique index star.user_id_dashboard_id" duration=766.798µs grafana | logger=migrator t=2024-04-25T10:41:33.782955838Z level=info msg="Executing migration" id="create org table v1" grafana | logger=migrator t=2024-04-25T10:41:33.784627866Z level=info msg="Migration successfully executed" id="create org table v1" duration=1.671398ms grafana | logger=migrator t=2024-04-25T10:41:33.78828056Z level=info msg="Executing migration" id="create index UQE_org_name - v1" grafana | logger=migrator t=2024-04-25T10:41:33.789448626Z level=info msg="Migration successfully executed" id="create index UQE_org_name - v1" duration=1.169247ms grafana | logger=migrator t=2024-04-25T10:41:33.793562679Z level=info msg="Executing migration" id="create org_user table v1" grafana | logger=migrator t=2024-04-25T10:41:33.794261176Z level=info msg="Migration successfully executed" id="create org_user table v1" duration=698.026µs grafana | logger=migrator t=2024-04-25T10:41:33.798260376Z level=info msg="Executing migration" id="create index IDX_org_user_org_id - v1" grafana | logger=migrator t=2024-04-25T10:41:33.799020683Z level=info msg="Migration successfully executed" id="create index IDX_org_user_org_id - v1" duration=759.937µs grafana | logger=migrator 
t=2024-04-25T10:41:33.804288843Z level=info msg="Executing migration" id="create index UQE_org_user_org_id_user_id - v1" grafana | logger=migrator t=2024-04-25T10:41:33.805075022Z level=info msg="Migration successfully executed" id="create index UQE_org_user_org_id_user_id - v1" duration=785.579µs grafana | logger=migrator t=2024-04-25T10:41:33.808301035Z level=info msg="Executing migration" id="create index IDX_org_user_user_id - v1" grafana | logger=migrator t=2024-04-25T10:41:33.809300168Z level=info msg="Migration successfully executed" id="create index IDX_org_user_user_id - v1" duration=998.702µs grafana | logger=migrator t=2024-04-25T10:41:33.81250693Z level=info msg="Executing migration" id="Update org table charset" grafana | logger=migrator t=2024-04-25T10:41:33.812531901Z level=info msg="Migration successfully executed" id="Update org table charset" duration=25.671µs grafana | logger=migrator t=2024-04-25T10:41:33.814973407Z level=info msg="Executing migration" id="Update org_user table charset" grafana | logger=migrator t=2024-04-25T10:41:33.814998367Z level=info msg="Migration successfully executed" id="Update org_user table charset" duration=25.5µs grafana | logger=migrator t=2024-04-25T10:41:33.821538066Z level=info msg="Executing migration" id="Migrate all Read Only Viewers to Viewers" grafana | logger=migrator t=2024-04-25T10:41:33.82170882Z level=info msg="Migration successfully executed" id="Migrate all Read Only Viewers to Viewers" duration=169.904µs grafana | logger=migrator t=2024-04-25T10:41:33.82474359Z level=info msg="Executing migration" id="create dashboard table" grafana | logger=migrator t=2024-04-25T10:41:33.825519557Z level=info msg="Migration successfully executed" id="create dashboard table" duration=773.017µs grafana | logger=migrator t=2024-04-25T10:41:33.829120609Z level=info msg="Executing migration" id="add index dashboard.account_id" grafana | logger=migrator t=2024-04-25T10:41:33.830280536Z level=info msg="Migration 
successfully executed" id="add index dashboard.account_id" duration=1.159267ms grafana | logger=migrator t=2024-04-25T10:41:33.833480548Z level=info msg="Executing migration" id="add unique index dashboard_account_id_slug" grafana | logger=migrator t=2024-04-25T10:41:33.834556143Z level=info msg="Migration successfully executed" id="add unique index dashboard_account_id_slug" duration=1.073645ms grafana | logger=migrator t=2024-04-25T10:41:33.840220682Z level=info msg="Executing migration" id="create dashboard_tag table" grafana | logger=migrator t=2024-04-25T10:41:33.840936668Z level=info msg="Migration successfully executed" id="create dashboard_tag table" duration=715.947µs grafana | logger=migrator t=2024-04-25T10:41:33.844094489Z level=info msg="Executing migration" id="add unique index dashboard_tag.dasboard_id_term" grafana | logger=migrator t=2024-04-25T10:41:33.844664133Z level=info msg="Migration successfully executed" id="add unique index dashboard_tag.dasboard_id_term" duration=569.684µs grafana | logger=migrator t=2024-04-25T10:41:33.847653511Z level=info msg="Executing migration" id="drop index UQE_dashboard_tag_dashboard_id_term - v1" grafana | logger=migrator t=2024-04-25T10:41:33.848155342Z level=info msg="Migration successfully executed" id="drop index UQE_dashboard_tag_dashboard_id_term - v1" duration=501.921µs grafana | logger=migrator t=2024-04-25T10:41:33.853727379Z level=info msg="Executing migration" id="Rename table dashboard to dashboard_v1 - v1" grafana | logger=migrator t=2024-04-25T10:41:33.859338457Z level=info msg="Migration successfully executed" id="Rename table dashboard to dashboard_v1 - v1" duration=5.612208ms grafana | logger=migrator t=2024-04-25T10:41:33.863106592Z level=info msg="Executing migration" id="create dashboard v2" grafana | logger=migrator t=2024-04-25T10:41:33.864057005Z level=info msg="Migration successfully executed" id="create dashboard v2" duration=951.523µs grafana | logger=migrator 
t=2024-04-25T10:41:33.867232677Z level=info msg="Executing migration" id="create index IDX_dashboard_org_id - v2" grafana | logger=migrator t=2024-04-25T10:41:33.868997816Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_org_id - v2" duration=1.763989ms grafana | logger=migrator t=2024-04-25T10:41:33.874795498Z level=info msg="Executing migration" id="create index UQE_dashboard_org_id_slug - v2" grafana | logger=migrator t=2024-04-25T10:41:33.875622848Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_org_id_slug - v2" duration=847.93µs grafana | logger=migrator t=2024-04-25T10:41:33.878936263Z level=info msg="Executing migration" id="copy dashboard v1 to v2" grafana | logger=migrator t=2024-04-25T10:41:33.879380663Z level=info msg="Migration successfully executed" id="copy dashboard v1 to v2" duration=444.01µs grafana | logger=migrator t=2024-04-25T10:41:33.882354061Z level=info msg="Executing migration" id="drop table dashboard_v1" grafana | logger=migrator t=2024-04-25T10:41:33.883316022Z level=info msg="Migration successfully executed" id="drop table dashboard_v1" duration=961.721µs grafana | logger=migrator t=2024-04-25T10:41:33.889164086Z level=info msg="Executing migration" id="alter dashboard.data to mediumtext v1" grafana | logger=migrator t=2024-04-25T10:41:33.889226858Z level=info msg="Migration successfully executed" id="alter dashboard.data to mediumtext v1" duration=62.992µs grafana | logger=migrator t=2024-04-25T10:41:33.893482914Z level=info msg="Executing migration" id="Add column updated_by in dashboard - v2" grafana | logger=migrator t=2024-04-25T10:41:33.895315866Z level=info msg="Migration successfully executed" id="Add column updated_by in dashboard - v2" duration=1.832652ms grafana | logger=migrator t=2024-04-25T10:41:33.898607451Z level=info msg="Executing migration" id="Add column created_by in dashboard - v2" grafana | logger=migrator t=2024-04-25T10:41:33.900351801Z level=info 
msg="Migration successfully executed" id="Add column created_by in dashboard - v2" duration=1.7441ms
grafana | logger=migrator t=2024-04-25T10:41:33.90600697Z level=info msg="Executing migration" id="Add column gnetId in dashboard"
grafana | logger=migrator t=2024-04-25T10:41:33.90998139Z level=info msg="Migration successfully executed" id="Add column gnetId in dashboard" duration=3.97752ms
grafana | logger=migrator t=2024-04-25T10:41:33.913758416Z level=info msg="Executing migration" id="Add index for gnetId in dashboard"
grafana | logger=migrator t=2024-04-25T10:41:33.914527014Z level=info msg="Migration successfully executed" id="Add index for gnetId in dashboard" duration=767.828µs
grafana | logger=migrator t=2024-04-25T10:41:33.917583623Z level=info msg="Executing migration" id="Add column plugin_id in dashboard"
grafana | logger=migrator t=2024-04-25T10:41:33.919401734Z level=info msg="Migration successfully executed" id="Add column plugin_id in dashboard" duration=1.817381ms
grafana | logger=migrator t=2024-04-25T10:41:33.924998451Z level=info msg="Executing migration" id="Add index for plugin_id in dashboard"
grafana | logger=migrator t=2024-04-25T10:41:33.92578638Z level=info msg="Migration successfully executed" id="Add index for plugin_id in dashboard" duration=787.449µs
grafana | logger=migrator t=2024-04-25T10:41:33.928957172Z level=info msg="Executing migration" id="Add index for dashboard_id in dashboard_tag"
grafana | logger=migrator t=2024-04-25T10:41:33.929654948Z level=info msg="Migration successfully executed" id="Add index for dashboard_id in dashboard_tag" duration=699.176µs
grafana | logger=migrator t=2024-04-25T10:41:33.932577204Z level=info msg="Executing migration" id="Update dashboard table charset"
grafana | logger=migrator t=2024-04-25T10:41:33.932636986Z level=info msg="Migration successfully executed" id="Update dashboard table charset" duration=60.292µs
grafana | logger=migrator t=2024-04-25T10:41:33.935117002Z level=info msg="Executing migration" id="Update dashboard_tag table charset"
grafana | logger=migrator t=2024-04-25T10:41:33.935141433Z level=info msg="Migration successfully executed" id="Update dashboard_tag table charset" duration=24.981µs
grafana | logger=migrator t=2024-04-25T10:41:33.940489305Z level=info msg="Executing migration" id="Add column folder_id in dashboard"
grafana | logger=migrator t=2024-04-25T10:41:33.942442069Z level=info msg="Migration successfully executed" id="Add column folder_id in dashboard" duration=1.952104ms
grafana | logger=migrator t=2024-04-25T10:41:33.945591691Z level=info msg="Executing migration" id="Add column isFolder in dashboard"
grafana | logger=migrator t=2024-04-25T10:41:33.947539885Z level=info msg="Migration successfully executed" id="Add column isFolder in dashboard" duration=1.947794ms
grafana | logger=migrator t=2024-04-25T10:41:33.950517532Z level=info msg="Executing migration" id="Add column has_acl in dashboard"
grafana | logger=migrator t=2024-04-25T10:41:33.952405436Z level=info msg="Migration successfully executed" id="Add column has_acl in dashboard" duration=1.887394ms
grafana | logger=migrator t=2024-04-25T10:41:33.957694816Z level=info msg="Executing migration" id="Add column uid in dashboard"
grafana | logger=migrator t=2024-04-25T10:41:33.960190563Z level=info msg="Migration successfully executed" id="Add column uid in dashboard" duration=2.480637ms
grafana | logger=migrator t=2024-04-25T10:41:33.964682325Z level=info msg="Executing migration" id="Update uid column values in dashboard"
grafana | logger=migrator t=2024-04-25T10:41:33.96489025Z level=info msg="Migration successfully executed" id="Update uid column values in dashboard" duration=207.135µs
grafana | logger=migrator t=2024-04-25T10:41:33.968185505Z level=info msg="Executing migration" id="Add unique index dashboard_org_id_uid"
grafana | logger=migrator t=2024-04-25T10:41:33.968944252Z level=info msg="Migration successfully executed" id="Add unique index dashboard_org_id_uid" duration=758.127µs
grafana | logger=migrator t=2024-04-25T10:41:33.973526637Z level=info msg="Executing migration" id="Remove unique index org_id_slug"
grafana | logger=migrator t=2024-04-25T10:41:33.974214872Z level=info msg="Migration successfully executed" id="Remove unique index org_id_slug" duration=687.235µs
grafana | logger=migrator t=2024-04-25T10:41:33.97852771Z level=info msg="Executing migration" id="Update dashboard title length"
grafana | logger=migrator t=2024-04-25T10:41:33.978553791Z level=info msg="Migration successfully executed" id="Update dashboard title length" duration=26.661µs
grafana | logger=migrator t=2024-04-25T10:41:33.983169266Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_title_folder_id"
grafana | logger=migrator t=2024-04-25T10:41:33.984693431Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_title_folder_id" duration=1.521815ms
grafana | logger=migrator t=2024-04-25T10:41:33.989822487Z level=info msg="Executing migration" id="create dashboard_provisioning"
grafana | logger=migrator t=2024-04-25T10:41:33.990939393Z level=info msg="Migration successfully executed" id="create dashboard_provisioning" duration=1.117696ms
grafana | logger=migrator t=2024-04-25T10:41:33.995420165Z level=info msg="Executing migration" id="Rename table dashboard_provisioning to dashboard_provisioning_tmp_qwerty - v1"
grafana | logger=migrator t=2024-04-25T10:41:34.0009038Z level=info msg="Migration successfully executed" id="Rename table dashboard_provisioning to dashboard_provisioning_tmp_qwerty - v1" duration=5.480374ms
grafana | logger=migrator t=2024-04-25T10:41:34.00572802Z level=info msg="Executing migration" id="create dashboard_provisioning v2"
grafana | logger=migrator t=2024-04-25T10:41:34.006435226Z level=info msg="Migration successfully executed" id="create dashboard_provisioning v2" duration=706.855µs
grafana | logger=migrator t=2024-04-25T10:41:34.011778824Z level=info msg="Executing migration" id="create index IDX_dashboard_provisioning_dashboard_id - v2"
grafana | logger=migrator t=2024-04-25T10:41:34.013456936Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_provisioning_dashboard_id - v2" duration=1.677432ms
grafana | logger=migrator t=2024-04-25T10:41:34.017603311Z level=info msg="Executing migration" id="create index IDX_dashboard_provisioning_dashboard_id_name - v2"
grafana | logger=migrator t=2024-04-25T10:41:34.018358271Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_provisioning_dashboard_id_name - v2" duration=753.73µs
grafana | logger=migrator t=2024-04-25T10:41:34.02270766Z level=info msg="Executing migration" id="copy dashboard_provisioning v1 to v2"
grafana | logger=migrator t=2024-04-25T10:41:34.023011867Z level=info msg="Migration successfully executed" id="copy dashboard_provisioning v1 to v2" duration=304.077µs
grafana | logger=migrator t=2024-04-25T10:41:34.026396664Z level=info msg="Executing migration" id="drop dashboard_provisioning_tmp_qwerty"
grafana | logger=migrator t=2024-04-25T10:41:34.026938938Z level=info msg="Migration successfully executed" id="drop dashboard_provisioning_tmp_qwerty" duration=541.954µs
grafana | logger=migrator t=2024-04-25T10:41:34.032308164Z level=info msg="Executing migration" id="Add check_sum column"
grafana | logger=migrator t=2024-04-25T10:41:34.034435177Z level=info msg="Migration successfully executed" id="Add check_sum column" duration=2.126242ms
grafana | logger=migrator t=2024-04-25T10:41:34.03852464Z level=info msg="Executing migration" id="Add index for dashboard_title"
grafana | logger=migrator t=2024-04-25T10:41:34.040293244Z level=info msg="Migration successfully executed" id="Add index for dashboard_title" duration=1.748764ms
grafana | logger=migrator t=2024-04-25T10:41:34.044969983Z level=info msg="Executing migration" id="delete tags for deleted
dashboards"
grafana | logger=migrator t=2024-04-25T10:41:34.045130777Z level=info msg="Migration successfully executed" id="delete tags for deleted dashboards" duration=160.984µs
grafana | logger=migrator t=2024-04-25T10:41:34.04841352Z level=info msg="Executing migration" id="delete stars for deleted dashboards"
grafana | logger=migrator t=2024-04-25T10:41:34.048598614Z level=info msg="Migration successfully executed" id="delete stars for deleted dashboards" duration=185.044µs
grafana | logger=migrator t=2024-04-25T10:41:34.051267382Z level=info msg="Executing migration" id="Add index for dashboard_is_folder"
grafana | logger=migrator t=2024-04-25T10:41:34.051998041Z level=info msg="Migration successfully executed" id="Add index for dashboard_is_folder" duration=729.969µs
grafana | logger=migrator t=2024-04-25T10:41:34.055275594Z level=info msg="Executing migration" id="Add isPublic for dashboard"
grafana | logger=migrator t=2024-04-25T10:41:34.057374147Z level=info msg="Migration successfully executed" id="Add isPublic for dashboard" duration=2.098063ms
grafana | logger=migrator t=2024-04-25T10:41:34.061724276Z level=info msg="Executing migration" id="create data_source table"
grafana | logger=migrator t=2024-04-25T10:41:34.062591318Z level=info msg="Migration successfully executed" id="create data_source table" duration=866.822µs
grafana | logger=migrator t=2024-04-25T10:41:34.066342393Z level=info msg="Executing migration" id="add index data_source.account_id"
grafana | logger=migrator t=2024-04-25T10:41:34.067441641Z level=info msg="Migration successfully executed" id="add index data_source.account_id" duration=1.097568ms
grafana | logger=migrator t=2024-04-25T10:41:34.071765611Z level=info msg="Executing migration" id="add unique index data_source.account_id_name"
grafana | logger=migrator t=2024-04-25T10:41:34.072610812Z level=info msg="Migration successfully executed" id="add unique index data_source.account_id_name" duration=844.971µs
grafana | logger=migrator t=2024-04-25T10:41:34.076746366Z level=info msg="Executing migration" id="drop index IDX_data_source_account_id - v1"
grafana | logger=migrator t=2024-04-25T10:41:34.077443944Z level=info msg="Migration successfully executed" id="drop index IDX_data_source_account_id - v1" duration=695.928µs
grafana | logger=migrator t=2024-04-25T10:41:34.108296654Z level=info msg="Executing migration" id="drop index UQE_data_source_account_id_name - v1"
grafana | logger=migrator t=2024-04-25T10:41:34.10974177Z level=info msg="Migration successfully executed" id="drop index UQE_data_source_account_id_name - v1" duration=1.445656ms
grafana | logger=migrator t=2024-04-25T10:41:34.113634089Z level=info msg="Executing migration" id="Rename table data_source to data_source_v1 - v1"
grafana | logger=migrator t=2024-04-25T10:41:34.119836045Z level=info msg="Migration successfully executed" id="Rename table data_source to data_source_v1 - v1" duration=6.200656ms
grafana | logger=migrator t=2024-04-25T10:41:34.124150634Z level=info msg="Executing migration" id="create data_source table v2"
grafana | logger=migrator t=2024-04-25T10:41:34.125064648Z level=info msg="Migration successfully executed" id="create data_source table v2" duration=913.623µs
grafana | logger=migrator t=2024-04-25T10:41:34.128400592Z level=info msg="Executing migration" id="create index IDX_data_source_org_id - v2"
grafana | logger=migrator t=2024-04-25T10:41:34.129198401Z level=info msg="Migration successfully executed" id="create index IDX_data_source_org_id - v2" duration=795.019µs
grafana | logger=migrator t=2024-04-25T10:41:34.133453189Z level=info msg="Executing migration" id="create index UQE_data_source_org_id_name - v2"
grafana | logger=migrator t=2024-04-25T10:41:34.13427223Z level=info msg="Migration successfully executed" id="create index UQE_data_source_org_id_name - v2" duration=818.601µs
grafana | logger=migrator t=2024-04-25T10:41:34.138102186Z level=info msg="Executing migration" id="Drop old table data_source_v1 #2"
grafana | logger=migrator t=2024-04-25T10:41:34.138632541Z level=info msg="Migration successfully executed" id="Drop old table data_source_v1 #2" duration=530.164µs
grafana | logger=migrator t=2024-04-25T10:41:34.141920773Z level=info msg="Executing migration" id="Add column with_credentials"
grafana | logger=migrator t=2024-04-25T10:41:34.144226972Z level=info msg="Migration successfully executed" id="Add column with_credentials" duration=2.304949ms
grafana | logger=migrator t=2024-04-25T10:41:34.14850706Z level=info msg="Executing migration" id="Add secure json data column"
grafana | logger=migrator t=2024-04-25T10:41:34.150766667Z level=info msg="Migration successfully executed" id="Add secure json data column" duration=2.261138ms
grafana | logger=migrator t=2024-04-25T10:41:34.154207584Z level=info msg="Executing migration" id="Update data_source table charset"
grafana | logger=migrator t=2024-04-25T10:41:34.154237634Z level=info msg="Migration successfully executed" id="Update data_source table charset" duration=30.73µs
grafana | logger=migrator t=2024-04-25T10:41:34.157553858Z level=info msg="Executing migration" id="Update initial version to 1"
grafana | logger=migrator t=2024-04-25T10:41:34.157814464Z level=info msg="Migration successfully executed" id="Update initial version to 1" duration=260.306µs
grafana | logger=migrator t=2024-04-25T10:41:34.161402486Z level=info msg="Executing migration" id="Add read_only data column"
grafana | logger=migrator t=2024-04-25T10:41:34.163748475Z level=info msg="Migration successfully executed" id="Add read_only data column" duration=2.343749ms
grafana | logger=migrator t=2024-04-25T10:41:34.167829988Z level=info msg="Executing migration" id="Migrate logging ds to loki ds"
grafana | logger=migrator t=2024-04-25T10:41:34.168035133Z level=info msg="Migration successfully executed" id="Migrate logging ds to loki ds" duration=205.165µs
grafana | logger=migrator t=2024-04-25T10:41:34.171330166Z level=info msg="Executing migration" id="Update json_data with nulls"
grafana | logger=migrator t=2024-04-25T10:41:34.17147763Z level=info msg="Migration successfully executed" id="Update json_data with nulls" duration=147.694µs
grafana | logger=migrator t=2024-04-25T10:41:34.174987039Z level=info msg="Executing migration" id="Add uid column"
grafana | logger=migrator t=2024-04-25T10:41:34.17860469Z level=info msg="Migration successfully executed" id="Add uid column" duration=3.651012ms
grafana | logger=migrator t=2024-04-25T10:41:34.182464448Z level=info msg="Executing migration" id="Update uid value"
grafana | logger=migrator t=2024-04-25T10:41:34.182721264Z level=info msg="Migration successfully executed" id="Update uid value" duration=256.636µs
grafana | logger=migrator t=2024-04-25T10:41:34.186988492Z level=info msg="Executing migration" id="Add unique index datasource_org_id_uid"
grafana | logger=migrator t=2024-04-25T10:41:34.187780612Z level=info msg="Migration successfully executed" id="Add unique index datasource_org_id_uid" duration=791.73µs
grafana | logger=migrator t=2024-04-25T10:41:34.19126811Z level=info msg="Executing migration" id="add unique index datasource_org_id_is_default"
grafana | logger=migrator t=2024-04-25T10:41:34.19204506Z level=info msg="Migration successfully executed" id="add unique index datasource_org_id_is_default" duration=778.03µs
grafana | logger=migrator t=2024-04-25T10:41:34.1956245Z level=info msg="Executing migration" id="create api_key table"
grafana | logger=migrator t=2024-04-25T10:41:34.19639343Z level=info msg="Migration successfully executed" id="create api_key table" duration=767.14µs
grafana | logger=migrator t=2024-04-25T10:41:34.200685079Z level=info msg="Executing migration" id="add index api_key.account_id"
grafana | logger=migrator t=2024-04-25T10:41:34.201469148Z level=info msg="Migration successfully executed" id="add index api_key.account_id" duration=783.22µs
grafana |
logger=migrator t=2024-04-25T10:41:34.204922665Z level=info msg="Executing migration" id="add index api_key.key"
grafana | logger=migrator t=2024-04-25T10:41:34.205661955Z level=info msg="Migration successfully executed" id="add index api_key.key" duration=738.309µs
grafana | logger=migrator t=2024-04-25T10:41:34.209372128Z level=info msg="Executing migration" id="add index api_key.account_id_name"
grafana | logger=migrator t=2024-04-25T10:41:34.210404384Z level=info msg="Migration successfully executed" id="add index api_key.account_id_name" duration=1.030896ms
grafana | logger=migrator t=2024-04-25T10:41:34.214743014Z level=info msg="Executing migration" id="drop index IDX_api_key_account_id - v1"
grafana | logger=migrator t=2024-04-25T10:41:34.215509803Z level=info msg="Migration successfully executed" id="drop index IDX_api_key_account_id - v1" duration=766.389µs
grafana | logger=migrator t=2024-04-25T10:41:34.219005101Z level=info msg="Executing migration" id="drop index UQE_api_key_key - v1"
grafana | logger=migrator t=2024-04-25T10:41:34.21971517Z level=info msg="Migration successfully executed" id="drop index UQE_api_key_key - v1" duration=709.159µs
grafana | logger=migrator t=2024-04-25T10:41:34.22411951Z level=info msg="Executing migration" id="drop index UQE_api_key_account_id_name - v1"
grafana | logger=migrator t=2024-04-25T10:41:34.224863519Z level=info msg="Migration successfully executed" id="drop index UQE_api_key_account_id_name - v1" duration=743.499µs
grafana | logger=migrator t=2024-04-25T10:41:34.228633805Z level=info msg="Executing migration" id="Rename table api_key to api_key_v1 - v1"
grafana | logger=migrator t=2024-04-25T10:41:34.237236212Z level=info msg="Migration successfully executed" id="Rename table api_key to api_key_v1 - v1" duration=8.580057ms
grafana | logger=migrator t=2024-04-25T10:41:34.240611227Z level=info msg="Executing migration" id="create api_key table v2"
grafana | logger=migrator t=2024-04-25T10:41:34.241450688Z level=info msg="Migration successfully executed" id="create api_key table v2" duration=840.201µs
grafana | logger=migrator t=2024-04-25T10:41:34.245613324Z level=info msg="Executing migration" id="create index IDX_api_key_org_id - v2"
grafana | logger=migrator t=2024-04-25T10:41:34.246139317Z level=info msg="Migration successfully executed" id="create index IDX_api_key_org_id - v2" duration=525.253µs
grafana | logger=migrator t=2024-04-25T10:41:34.24941846Z level=info msg="Executing migration" id="create index UQE_api_key_key - v2"
grafana | logger=migrator t=2024-04-25T10:41:34.249946573Z level=info msg="Migration successfully executed" id="create index UQE_api_key_key - v2" duration=527.333µs
grafana | logger=migrator t=2024-04-25T10:41:34.252912258Z level=info msg="Executing migration" id="create index UQE_api_key_org_id_name - v2"
grafana | logger=migrator t=2024-04-25T10:41:34.253651547Z level=info msg="Migration successfully executed" id="create index UQE_api_key_org_id_name - v2" duration=736.379µs
grafana | logger=migrator t=2024-04-25T10:41:34.258086469Z level=info msg="Executing migration" id="copy api_key v1 to v2"
grafana | logger=migrator t=2024-04-25T10:41:34.258404727Z level=info msg="Migration successfully executed" id="copy api_key v1 to v2" duration=319.057µs
grafana | logger=migrator t=2024-04-25T10:41:34.263420344Z level=info msg="Executing migration" id="Drop old table api_key_v1"
grafana | logger=migrator t=2024-04-25T10:41:34.264538672Z level=info msg="Migration successfully executed" id="Drop old table api_key_v1" duration=1.118369ms
grafana | logger=migrator t=2024-04-25T10:41:34.269060536Z level=info msg="Executing migration" id="Update api_key table charset"
grafana | logger=migrator t=2024-04-25T10:41:34.269087887Z level=info msg="Migration successfully executed" id="Update api_key table charset" duration=28.641µs
grafana | logger=migrator t=2024-04-25T10:41:34.273602531Z level=info msg="Executing migration" id="Add expires to api_key table"
grafana | logger=migrator t=2024-04-25T10:41:34.276029033Z level=info msg="Migration successfully executed" id="Add expires to api_key table" duration=2.426082ms
grafana | logger=migrator t=2024-04-25T10:41:34.279333006Z level=info msg="Executing migration" id="Add service account foreign key"
grafana | logger=migrator t=2024-04-25T10:41:34.281732177Z level=info msg="Migration successfully executed" id="Add service account foreign key" duration=2.398821ms
grafana | logger=migrator t=2024-04-25T10:41:34.284973358Z level=info msg="Executing migration" id="set service account foreign key to nil if 0"
grafana | logger=migrator t=2024-04-25T10:41:34.285127062Z level=info msg="Migration successfully executed" id="set service account foreign key to nil if 0" duration=153.414µs
grafana | logger=migrator t=2024-04-25T10:41:34.289682358Z level=info msg="Executing migration" id="Add last_used_at to api_key table"
grafana | logger=migrator t=2024-04-25T10:41:34.292886649Z level=info msg="Migration successfully executed" id="Add last_used_at to api_key table" duration=3.203841ms
grafana | logger=migrator t=2024-04-25T10:41:34.296393077Z level=info msg="Executing migration" id="Add is_revoked column to api_key table"
grafana | logger=migrator t=2024-04-25T10:41:34.298932231Z level=info msg="Migration successfully executed" id="Add is_revoked column to api_key table" duration=2.538484ms
grafana | logger=migrator t=2024-04-25T10:41:34.30244163Z level=info msg="Executing migration" id="create dashboard_snapshot table v4"
grafana | logger=migrator t=2024-04-25T10:41:34.303221469Z level=info msg="Migration successfully executed" id="create dashboard_snapshot table v4" duration=779.589µs
grafana | logger=migrator t=2024-04-25T10:41:34.308148524Z level=info msg="Executing migration" id="drop table dashboard_snapshot_v4 #1"
grafana | logger=migrator t=2024-04-25T10:41:34.308683307Z level=info msg="Migration successfully executed" id="drop table dashboard_snapshot_v4 #1" duration=535.153µs
grafana | logger=migrator t=2024-04-25T10:41:34.312281409Z level=info msg="Executing migration" id="create dashboard_snapshot table v5 #2"
grafana | logger=migrator t=2024-04-25T10:41:34.313095519Z level=info msg="Migration successfully executed" id="create dashboard_snapshot table v5 #2" duration=814.4µs
grafana | logger=migrator t=2024-04-25T10:41:34.316469124Z level=info msg="Executing migration" id="create index UQE_dashboard_snapshot_key - v5"
grafana | logger=migrator t=2024-04-25T10:41:34.317425028Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_snapshot_key - v5" duration=955.954µs
grafana | logger=migrator t=2024-04-25T10:41:34.32184495Z level=info msg="Executing migration" id="create index UQE_dashboard_snapshot_delete_key - v5"
grafana | logger=migrator t=2024-04-25T10:41:34.32264239Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_snapshot_delete_key - v5" duration=799.05µs
grafana | logger=migrator t=2024-04-25T10:41:34.326214291Z level=info msg="Executing migration" id="create index IDX_dashboard_snapshot_user_id - v5"
grafana | logger=migrator t=2024-04-25T10:41:34.32698767Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_snapshot_user_id - v5" duration=773.229µs
grafana | logger=migrator t=2024-04-25T10:41:34.330103419Z level=info msg="Executing migration" id="alter dashboard_snapshot to mediumtext v2"
grafana | logger=migrator t=2024-04-25T10:41:34.33016631Z level=info msg="Migration successfully executed" id="alter dashboard_snapshot to mediumtext v2" duration=62.901µs
grafana | logger=migrator t=2024-04-25T10:41:34.333550226Z level=info msg="Executing migration" id="Update dashboard_snapshot table charset"
grafana | logger=migrator t=2024-04-25T10:41:34.333575696Z level=info msg="Migration successfully executed" id="Update dashboard_snapshot table charset" duration=25.78µs
grafana | logger=migrator t=2024-04-25T10:41:34.337877376Z
level=info msg="Executing migration" id="Add column external_delete_url to dashboard_snapshots table"
grafana | logger=migrator t=2024-04-25T10:41:34.340495902Z level=info msg="Migration successfully executed" id="Add column external_delete_url to dashboard_snapshots table" duration=2.618207ms
grafana | logger=migrator t=2024-04-25T10:41:34.344019971Z level=info msg="Executing migration" id="Add encrypted dashboard json column"
grafana | logger=migrator t=2024-04-25T10:41:34.347380815Z level=info msg="Migration successfully executed" id="Add encrypted dashboard json column" duration=3.360444ms
grafana | logger=migrator t=2024-04-25T10:41:34.351010627Z level=info msg="Executing migration" id="Change dashboard_encrypted column to MEDIUMBLOB"
grafana | logger=migrator t=2024-04-25T10:41:34.351075328Z level=info msg="Migration successfully executed" id="Change dashboard_encrypted column to MEDIUMBLOB" duration=65.171µs
grafana | logger=migrator t=2024-04-25T10:41:34.35547472Z level=info msg="Executing migration" id="create quota table v1"
grafana | logger=migrator t=2024-04-25T10:41:34.356158707Z level=info msg="Migration successfully executed" id="create quota table v1" duration=683.807µs
grafana | logger=migrator t=2024-04-25T10:41:34.359615734Z level=info msg="Executing migration" id="create index UQE_quota_org_id_user_id_target - v1"
grafana | logger=migrator t=2024-04-25T10:41:34.360370454Z level=info msg="Migration successfully executed" id="create index UQE_quota_org_id_user_id_target - v1" duration=754.25µs
grafana | logger=migrator t=2024-04-25T10:41:34.363989715Z level=info msg="Executing migration" id="Update quota table charset"
grafana | logger=migrator t=2024-04-25T10:41:34.364015156Z level=info msg="Migration successfully executed" id="Update quota table charset" duration=27.111µs
grafana | logger=migrator t=2024-04-25T10:41:34.367381721Z level=info msg="Executing migration" id="create plugin_setting table"
grafana | logger=migrator t=2024-04-25T10:41:34.36811364Z level=info msg="Migration successfully executed" id="create plugin_setting table" duration=732.559µs
grafana | logger=migrator t=2024-04-25T10:41:34.372422918Z level=info msg="Executing migration" id="create index UQE_plugin_setting_org_id_plugin_id - v1"
grafana | logger=migrator t=2024-04-25T10:41:34.373468874Z level=info msg="Migration successfully executed" id="create index UQE_plugin_setting_org_id_plugin_id - v1" duration=1.045536ms
grafana | logger=migrator t=2024-04-25T10:41:34.37684506Z level=info msg="Executing migration" id="Add column plugin_version to plugin_settings"
grafana | logger=migrator t=2024-04-25T10:41:34.379728663Z level=info msg="Migration successfully executed" id="Add column plugin_version to plugin_settings" duration=2.882823ms
grafana | logger=migrator t=2024-04-25T10:41:34.383219791Z level=info msg="Executing migration" id="Update plugin_setting table charset"
grafana | logger=migrator t=2024-04-25T10:41:34.383244262Z level=info msg="Migration successfully executed" id="Update plugin_setting table charset" duration=24.721µs
grafana | logger=migrator t=2024-04-25T10:41:34.387671493Z level=info msg="Executing migration" id="create session table"
grafana | logger=migrator t=2024-04-25T10:41:34.388455754Z level=info msg="Migration successfully executed" id="create session table" duration=783.361µs
grafana | logger=migrator t=2024-04-25T10:41:34.392012393Z level=info msg="Executing migration" id="Drop old table playlist table"
grafana | logger=migrator t=2024-04-25T10:41:34.392151067Z level=info msg="Migration successfully executed" id="Drop old table playlist table" duration=137.404µs
grafana | logger=migrator t=2024-04-25T10:41:34.396057215Z level=info msg="Executing migration" id="Drop old table playlist_item table"
grafana | logger=migrator t=2024-04-25T10:41:34.396188749Z level=info msg="Migration successfully executed" id="Drop old table playlist_item table" duration=131.594µs
grafana | logger=migrator t=2024-04-25T10:41:34.400817376Z level=info msg="Executing migration" id="create playlist table v2"
grafana | logger=migrator t=2024-04-25T10:41:34.40177347Z level=info msg="Migration successfully executed" id="create playlist table v2" duration=955.284µs
grafana | logger=migrator t=2024-04-25T10:41:34.405228938Z level=info msg="Executing migration" id="create playlist item table v2"
grafana | logger=migrator t=2024-04-25T10:41:34.405973176Z level=info msg="Migration successfully executed" id="create playlist item table v2" duration=743.519µs
grafana | logger=migrator t=2024-04-25T10:41:34.419821806Z level=info msg="Executing migration" id="Update playlist table charset"
grafana | logger=migrator t=2024-04-25T10:41:34.419843457Z level=info msg="Migration successfully executed" id="Update playlist table charset" duration=21.981µs
grafana | logger=migrator t=2024-04-25T10:41:34.427221673Z level=info msg="Executing migration" id="Update playlist_item table charset"
grafana | logger=migrator t=2024-04-25T10:41:34.427289295Z level=info msg="Migration successfully executed" id="Update playlist_item table charset" duration=74.162µs
grafana | logger=migrator t=2024-04-25T10:41:34.479541195Z level=info msg="Executing migration" id="Add playlist column created_at"
grafana | logger=migrator t=2024-04-25T10:41:34.484429469Z level=info msg="Migration successfully executed" id="Add playlist column created_at" duration=4.892844ms
grafana | logger=migrator t=2024-04-25T10:41:34.4880329Z level=info msg="Executing migration" id="Add playlist column updated_at"
grafana | logger=migrator t=2024-04-25T10:41:34.490989314Z level=info msg="Migration successfully executed" id="Add playlist column updated_at" duration=2.958355ms
grafana | logger=migrator t=2024-04-25T10:41:34.496311369Z level=info msg="Executing migration" id="drop preferences table v2"
grafana | logger=migrator t=2024-04-25T10:41:34.496388001Z level=info msg="Migration successfully executed" id="drop preferences table v2" duration=76.502µs
grafana | logger=migrator t=2024-04-25T10:41:34.499026157Z level=info msg="Executing migration" id="drop preferences table v3"
grafana | logger=migrator t=2024-04-25T10:41:34.499102249Z level=info msg="Migration successfully executed" id="drop preferences table v3" duration=76.972µs
grafana | logger=migrator t=2024-04-25T10:41:34.502484745Z level=info msg="Executing migration" id="create preferences table v3"
grafana | logger=migrator t=2024-04-25T10:41:34.503350777Z level=info msg="Migration successfully executed" id="create preferences table v3" duration=866.142µs
grafana | logger=migrator t=2024-04-25T10:41:34.506526657Z level=info msg="Executing migration" id="Update preferences table charset"
grafana | logger=migrator t=2024-04-25T10:41:34.506546228Z level=info msg="Migration successfully executed" id="Update preferences table charset" duration=19.891µs
grafana | logger=migrator t=2024-04-25T10:41:34.511840722Z level=info msg="Executing migration" id="Add column team_id in preferences"
grafana | logger=migrator t=2024-04-25T10:41:34.516332735Z level=info msg="Migration successfully executed" id="Add column team_id in preferences" duration=4.488012ms
grafana | logger=migrator t=2024-04-25T10:41:34.519962347Z level=info msg="Executing migration" id="Update team_id column values in preferences"
grafana | logger=migrator t=2024-04-25T10:41:34.520193363Z level=info msg="Migration successfully executed" id="Update team_id column values in preferences" duration=228.196µs
grafana | logger=migrator t=2024-04-25T10:41:34.523279831Z level=info msg="Executing migration" id="Add column week_start in preferences"
grafana | logger=migrator t=2024-04-25T10:41:34.526385719Z level=info msg="Migration successfully executed" id="Add column week_start in preferences" duration=3.104978ms
grafana | logger=migrator t=2024-04-25T10:41:34.53198017Z level=info msg="Executing migration" id="Add column preferences.json_data"
grafana | logger=migrator
t=2024-04-25T10:41:34.534967636Z level=info msg="Migration successfully executed" id="Add column preferences.json_data" duration=2.986906ms grafana | logger=migrator t=2024-04-25T10:41:34.538453814Z level=info msg="Executing migration" id="alter preferences.json_data to mediumtext v1" grafana | logger=migrator t=2024-04-25T10:41:34.538514815Z level=info msg="Migration successfully executed" id="alter preferences.json_data to mediumtext v1" duration=61.211µs grafana | logger=migrator t=2024-04-25T10:41:34.549473522Z level=info msg="Executing migration" id="Add preferences index org_id" grafana | logger=migrator t=2024-04-25T10:41:34.551271278Z level=info msg="Migration successfully executed" id="Add preferences index org_id" duration=1.798096ms grafana | logger=migrator t=2024-04-25T10:41:34.558692296Z level=info msg="Executing migration" id="Add preferences index user_id" grafana | logger=migrator t=2024-04-25T10:41:34.559885826Z level=info msg="Migration successfully executed" id="Add preferences index user_id" duration=1.19737ms grafana | logger=migrator t=2024-04-25T10:41:34.563549808Z level=info msg="Executing migration" id="create alert table v1" grafana | logger=migrator t=2024-04-25T10:41:34.564739539Z level=info msg="Migration successfully executed" id="create alert table v1" duration=1.189031ms grafana | logger=migrator t=2024-04-25T10:41:34.569284493Z level=info msg="Executing migration" id="add index alert org_id & id " grafana | logger=migrator t=2024-04-25T10:41:34.570214957Z level=info msg="Migration successfully executed" id="add index alert org_id & id " duration=930.544µs grafana | logger=migrator t=2024-04-25T10:41:34.574348151Z level=info msg="Executing migration" id="add index alert state" grafana | logger=migrator t=2024-04-25T10:41:34.575414709Z level=info msg="Migration successfully executed" id="add index alert state" duration=1.065537ms grafana | logger=migrator t=2024-04-25T10:41:34.580364233Z level=info msg="Executing migration" id="add 
index alert dashboard_id" grafana | logger=migrator t=2024-04-25T10:41:34.581692547Z level=info msg="Migration successfully executed" id="add index alert dashboard_id" duration=1.328064ms grafana | logger=migrator t=2024-04-25T10:41:34.58970909Z level=info msg="Executing migration" id="Create alert_rule_tag table v1" grafana | logger=migrator t=2024-04-25T10:41:34.590978511Z level=info msg="Migration successfully executed" id="Create alert_rule_tag table v1" duration=1.268561ms grafana | logger=migrator t=2024-04-25T10:41:34.596547562Z level=info msg="Executing migration" id="Add unique index alert_rule_tag.alert_id_tag_id" grafana | logger=migrator t=2024-04-25T10:41:34.59801477Z level=info msg="Migration successfully executed" id="Add unique index alert_rule_tag.alert_id_tag_id" duration=1.466638ms grafana | logger=migrator t=2024-04-25T10:41:34.602555344Z level=info msg="Executing migration" id="drop index UQE_alert_rule_tag_alert_id_tag_id - v1" grafana | logger=migrator t=2024-04-25T10:41:34.604082203Z level=info msg="Migration successfully executed" id="drop index UQE_alert_rule_tag_alert_id_tag_id - v1" duration=1.527869ms grafana | logger=migrator t=2024-04-25T10:41:34.611586452Z level=info msg="Executing migration" id="Rename table alert_rule_tag to alert_rule_tag_v1 - v1" grafana | logger=migrator t=2024-04-25T10:41:34.625115444Z level=info msg="Migration successfully executed" id="Rename table alert_rule_tag to alert_rule_tag_v1 - v1" duration=13.529882ms grafana | logger=migrator t=2024-04-25T10:41:34.631722121Z level=info msg="Executing migration" id="Create alert_rule_tag table v2" grafana | logger=migrator t=2024-04-25T10:41:34.63245237Z level=info msg="Migration successfully executed" id="Create alert_rule_tag table v2" duration=730.909µs grafana | logger=migrator t=2024-04-25T10:41:34.640016421Z level=info msg="Executing migration" id="create index UQE_alert_rule_tag_alert_id_tag_id - Add unique index alert_rule_tag.alert_id_tag_id V2" grafana | 
logger=migrator t=2024-04-25T10:41:34.641081518Z level=info msg="Migration successfully executed" id="create index UQE_alert_rule_tag_alert_id_tag_id - Add unique index alert_rule_tag.alert_id_tag_id V2" duration=1.068587ms grafana | logger=migrator t=2024-04-25T10:41:34.647153041Z level=info msg="Executing migration" id="copy alert_rule_tag v1 to v2" grafana | logger=migrator t=2024-04-25T10:41:34.647417027Z level=info msg="Migration successfully executed" id="copy alert_rule_tag v1 to v2" duration=264.646µs grafana | logger=migrator t=2024-04-25T10:41:34.652586098Z level=info msg="Executing migration" id="drop table alert_rule_tag_v1" grafana | logger=migrator t=2024-04-25T10:41:34.653108331Z level=info msg="Migration successfully executed" id="drop table alert_rule_tag_v1" duration=522.583µs grafana | logger=migrator t=2024-04-25T10:41:34.659996286Z level=info msg="Executing migration" id="create alert_notification table v1" grafana | logger=migrator t=2024-04-25T10:41:34.660769265Z level=info msg="Migration successfully executed" id="create alert_notification table v1" duration=772.779µs grafana | logger=migrator t=2024-04-25T10:41:34.667261419Z level=info msg="Executing migration" id="Add column is_default" grafana | logger=migrator t=2024-04-25T10:41:34.671550168Z level=info msg="Migration successfully executed" id="Add column is_default" duration=4.290129ms grafana | logger=migrator t=2024-04-25T10:41:34.674695267Z level=info msg="Executing migration" id="Add column frequency" grafana | logger=migrator t=2024-04-25T10:41:34.678158054Z level=info msg="Migration successfully executed" id="Add column frequency" duration=3.462357ms grafana | logger=migrator t=2024-04-25T10:41:34.68393553Z level=info msg="Executing migration" id="Add column send_reminder" grafana | logger=migrator t=2024-04-25T10:41:34.687357897Z level=info msg="Migration successfully executed" id="Add column send_reminder" duration=3.422676ms grafana | logger=migrator 
t=2024-04-25T10:41:34.692290291Z level=info msg="Executing migration" id="Add column disable_resolve_message" grafana | logger=migrator t=2024-04-25T10:41:34.696111469Z level=info msg="Migration successfully executed" id="Add column disable_resolve_message" duration=3.823968ms grafana | logger=migrator t=2024-04-25T10:41:34.699362781Z level=info msg="Executing migration" id="add index alert_notification org_id & name" grafana | logger=migrator t=2024-04-25T10:41:34.699954025Z level=info msg="Migration successfully executed" id="add index alert_notification org_id & name" duration=591.074µs grafana | logger=migrator t=2024-04-25T10:41:34.702864859Z level=info msg="Executing migration" id="Update alert table charset" grafana | logger=migrator t=2024-04-25T10:41:34.702886429Z level=info msg="Migration successfully executed" id="Update alert table charset" duration=24.78µs grafana | logger=migrator t=2024-04-25T10:41:34.708508561Z level=info msg="Executing migration" id="Update alert_notification table charset" grafana | logger=migrator t=2024-04-25T10:41:34.708549093Z level=info msg="Migration successfully executed" id="Update alert_notification table charset" duration=41.731µs grafana | logger=migrator t=2024-04-25T10:41:34.714826231Z level=info msg="Executing migration" id="create notification_journal table v1" grafana | logger=migrator t=2024-04-25T10:41:34.716051112Z level=info msg="Migration successfully executed" id="create notification_journal table v1" duration=1.224661ms grafana | logger=migrator t=2024-04-25T10:41:34.721560522Z level=info msg="Executing migration" id="add index notification_journal org_id & alert_id & notifier_id" grafana | logger=migrator t=2024-04-25T10:41:34.723186962Z level=info msg="Migration successfully executed" id="add index notification_journal org_id & alert_id & notifier_id" duration=1.62937ms grafana | logger=migrator t=2024-04-25T10:41:34.727002299Z level=info msg="Executing migration" id="drop alert_notification_journal" 
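The grafana migrator entries above all follow one fixed pattern: an "Executing migration" line, then a "Migration successfully executed" line carrying the migration `id` and a `duration` in either `ms` or `µs`. As a minimal sketch (not part of the build tooling; the regex and helper name are illustrative), such entries can be parsed to tabulate migration timings:

```python
import re

# Matches the "Migration successfully executed" entries seen in the log above,
# e.g.:
#   grafana | logger=migrator t=... level=info
#   msg="Migration successfully executed" id="Create alert_rule_tag table v2"
#   duration=730.909µs
ENTRY = re.compile(
    r'msg="Migration successfully executed" id="(?P<id>[^"]+)" '
    r'duration=(?P<value>[0-9.]+)(?P<unit>ms|µs)'
)

# Normalize both duration units to milliseconds.
UNIT_TO_MS = {"ms": 1.0, "µs": 0.001}


def parse_migration(line: str):
    """Return (migration id, duration in ms) for a matching log line, else None."""
    m = ENTRY.search(line)
    if not m:
        return None
    return m.group("id"), float(m.group("value")) * UNIT_TO_MS[m.group("unit")]
```

Running this over the console output would, for example, map the "Create alert_rule_tag table v2" entry above to roughly 0.73 ms, making it easy to spot slow migrations such as the 13.5 ms table rename.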
policy-apex-pdp | Waiting for mariadb port 3306... policy-apex-pdp | mariadb (172.17.0.3:3306) open policy-apex-pdp | Waiting for kafka port 9092... policy-apex-pdp | kafka (172.17.0.6:9092) open policy-apex-pdp | Waiting for pap port 6969... policy-apex-pdp | pap (172.17.0.10:6969) open policy-apex-pdp | apexApps.sh: running application 'onappf' with command 'java -Dlogback.configurationFile=/opt/app/policy/apex-pdp/etc/logback.xml -cp /opt/app/policy/apex-pdp/etc:/opt/app/policy/apex-pdp/etc/hazelcast:/opt/app/policy/apex-pdp/etc/infinispan:/opt/app/policy/apex-pdp/lib/* -Djavax.net.ssl.keyStore=/opt/app/policy/apex-pdp/etc/ssl/policy-keystore -Djavax.net.ssl.keyStorePassword=Pol1cy_0nap -Djavax.net.ssl.trustStore=/opt/app/policy/apex-pdp/etc/ssl/policy-truststore -Djavax.net.ssl.trustStorePassword=Pol1cy_0nap -Dlogback.configurationFile=/opt/app/policy/apex-pdp/etc/logback.xml -Dhazelcast.config=/opt/app/policy/apex-pdp/etc/hazelcast.xml -Dhazelcast.mancenter.enabled=false org.onap.policy.apex.services.onappf.ApexStarterMain -c /opt/app/policy/apex-pdp/etc/onappf/config/OnapPfConfig.json' policy-apex-pdp | [2024-04-25T10:42:12.227+00:00|INFO|ApexStarterMain|main] In ApexStarter with parameters [-c, /opt/app/policy/apex-pdp/etc/onappf/config/OnapPfConfig.json] policy-apex-pdp | [2024-04-25T10:42:12.406+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: policy-apex-pdp | allow.auto.create.topics = true policy-apex-pdp | auto.commit.interval.ms = 5000 policy-apex-pdp | auto.include.jmx.reporter = true policy-apex-pdp | auto.offset.reset = latest policy-apex-pdp | bootstrap.servers = [kafka:9092] policy-apex-pdp | check.crcs = true policy-apex-pdp | client.dns.lookup = use_all_dns_ips policy-apex-pdp | client.id = consumer-76090dad-2cb8-4045-86c4-b86ef46522aa-1 policy-apex-pdp | client.rack = policy-apex-pdp | connections.max.idle.ms = 540000 policy-apex-pdp | default.api.timeout.ms = 60000 policy-apex-pdp | enable.auto.commit = true policy-apex-pdp | 
exclude.internal.topics = true policy-apex-pdp | fetch.max.bytes = 52428800 policy-apex-pdp | fetch.max.wait.ms = 500 policy-apex-pdp | fetch.min.bytes = 1 policy-apex-pdp | group.id = 76090dad-2cb8-4045-86c4-b86ef46522aa policy-apex-pdp | group.instance.id = null policy-apex-pdp | heartbeat.interval.ms = 3000 policy-apex-pdp | interceptor.classes = [] policy-apex-pdp | internal.leave.group.on.close = true policy-apex-pdp | internal.throw.on.fetch.stable.offset.unsupported = false policy-apex-pdp | isolation.level = read_uncommitted policy-apex-pdp | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-apex-pdp | max.partition.fetch.bytes = 1048576 policy-apex-pdp | max.poll.interval.ms = 300000 policy-apex-pdp | max.poll.records = 500 policy-apex-pdp | metadata.max.age.ms = 300000 policy-apex-pdp | metric.reporters = [] policy-apex-pdp | metrics.num.samples = 2 policy-apex-pdp | metrics.recording.level = INFO policy-apex-pdp | metrics.sample.window.ms = 30000 policy-apex-pdp | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] policy-apex-pdp | receive.buffer.bytes = 65536 policy-apex-pdp | reconnect.backoff.max.ms = 1000 policy-apex-pdp | reconnect.backoff.ms = 50 policy-apex-pdp | request.timeout.ms = 30000 policy-apex-pdp | retry.backoff.ms = 100 policy-apex-pdp | sasl.client.callback.handler.class = null policy-apex-pdp | sasl.jaas.config = null policy-apex-pdp | sasl.kerberos.kinit.cmd = /usr/bin/kinit policy-apex-pdp | sasl.kerberos.min.time.before.relogin = 60000 policy-apex-pdp | sasl.kerberos.service.name = null policy-apex-pdp | sasl.kerberos.ticket.renew.jitter = 0.05 policy-apex-pdp | sasl.kerberos.ticket.renew.window.factor = 0.8 policy-apex-pdp | sasl.login.callback.handler.class = null policy-apex-pdp | sasl.login.class = null policy-apex-pdp | sasl.login.connect.timeout.ms = null policy-apex-pdp | 
sasl.login.read.timeout.ms = null policy-apex-pdp | sasl.login.refresh.buffer.seconds = 300 policy-apex-pdp | sasl.login.refresh.min.period.seconds = 60 policy-apex-pdp | sasl.login.refresh.window.factor = 0.8 policy-apex-pdp | sasl.login.refresh.window.jitter = 0.05 policy-apex-pdp | sasl.login.retry.backoff.max.ms = 10000 policy-apex-pdp | sasl.login.retry.backoff.ms = 100 policy-apex-pdp | sasl.mechanism = GSSAPI policy-apex-pdp | sasl.oauthbearer.clock.skew.seconds = 30 policy-apex-pdp | sasl.oauthbearer.expected.audience = null policy-apex-pdp | sasl.oauthbearer.expected.issuer = null policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.url = null policy-apex-pdp | sasl.oauthbearer.scope.claim.name = scope policy-apex-pdp | sasl.oauthbearer.sub.claim.name = sub policy-apex-pdp | sasl.oauthbearer.token.endpoint.url = null policy-apex-pdp | security.protocol = PLAINTEXT policy-apex-pdp | security.providers = null policy-apex-pdp | send.buffer.bytes = 131072 policy-apex-pdp | session.timeout.ms = 45000 policy-apex-pdp | socket.connection.setup.timeout.max.ms = 30000 policy-apex-pdp | socket.connection.setup.timeout.ms = 10000 policy-apex-pdp | ssl.cipher.suites = null policy-apex-pdp | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] policy-apex-pdp | ssl.endpoint.identification.algorithm = https policy-apex-pdp | ssl.engine.factory.class = null policy-apex-pdp | ssl.key.password = null policy-apex-pdp | ssl.keymanager.algorithm = SunX509 policy-apex-pdp | ssl.keystore.certificate.chain = null policy-apex-pdp | ssl.keystore.key = null policy-apex-pdp | ssl.keystore.location = null policy-apex-pdp | ssl.keystore.password = null policy-apex-pdp | ssl.keystore.type = JKS policy-apex-pdp | ssl.protocol = TLSv1.3 policy-apex-pdp | ssl.provider = null 
policy-apex-pdp | ssl.secure.random.implementation = null policy-apex-pdp | ssl.trustmanager.algorithm = PKIX policy-apex-pdp | ssl.truststore.certificates = null policy-apex-pdp | ssl.truststore.location = null policy-apex-pdp | ssl.truststore.password = null policy-apex-pdp | ssl.truststore.type = JKS policy-apex-pdp | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-apex-pdp | policy-apex-pdp | [2024-04-25T10:42:12.577+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 policy-apex-pdp | [2024-04-25T10:42:12.578+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 policy-apex-pdp | [2024-04-25T10:42:12.578+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1714041732576 kafka | ===> User kafka | uid=1000(appuser) gid=1000(appuser) groups=1000(appuser) kafka | ===> Configuring ... kafka | Running in Zookeeper mode... kafka | ===> Running preflight checks ... kafka | ===> Check if /var/lib/kafka/data is writable ... kafka | ===> Check if Zookeeper is healthy ... kafka | [2024-04-25 10:41:39,240] INFO Client environment:zookeeper.version=3.8.3-6ad6d364c7c0bcf0de452d54ebefa3058098ab56, built on 2023-10-05 10:34 UTC (org.apache.zookeeper.ZooKeeper) kafka | [2024-04-25 10:41:39,240] INFO Client environment:host.name=292d789ad7a2 (org.apache.zookeeper.ZooKeeper) kafka | [2024-04-25 10:41:39,240] INFO Client environment:java.version=11.0.22 (org.apache.zookeeper.ZooKeeper) kafka | [2024-04-25 10:41:39,240] INFO Client environment:java.vendor=Azul Systems, Inc. 
(org.apache.zookeeper.ZooKeeper) kafka | [2024-04-25 10:41:39,240] INFO Client environment:java.home=/usr/lib/jvm/java-11-zulu-openjdk-ca (org.apache.zookeeper.ZooKeeper) kafka | [2024-04-25 10:41:39,240] INFO Client environment:java.class.path=/usr/share/java/cp-base-new/scala-logging_2.13-3.9.4.jar:/usr/share/java/cp-base-new/jackson-datatype-jdk8-2.14.2.jar:/usr/share/java/cp-base-new/kafka-storage-api-7.6.1-ccs.jar:/usr/share/java/cp-base-new/commons-validator-1.7.jar:/usr/share/java/cp-base-new/logredactor-1.0.12.jar:/usr/share/java/cp-base-new/jolokia-core-1.7.1.jar:/usr/share/java/cp-base-new/re2j-1.6.jar:/usr/share/java/cp-base-new/scala-library-2.13.10.jar:/usr/share/java/cp-base-new/commons-io-2.11.0.jar:/usr/share/java/cp-base-new/kafka-clients-7.6.1-ccs.jar:/usr/share/java/cp-base-new/utility-belt-7.6.1.jar:/usr/share/java/cp-base-new/commons-cli-1.4.jar:/usr/share/java/cp-base-new/slf4j-reload4j-1.7.36.jar:/usr/share/java/cp-base-new/kafka-server-common-7.6.1-ccs.jar:/usr/share/java/cp-base-new/kafka-group-coordinator-7.6.1-ccs.jar:/usr/share/java/cp-base-new/common-utils-7.6.1.jar:/usr/share/java/cp-base-new/jackson-annotations-2.14.2.jar:/usr/share/java/cp-base-new/json-simple-1.1.1.jar:/usr/share/java/cp-base-new/jackson-module-scala_2.13-2.14.2.jar:/usr/share/java/cp-base-new/scala-java8-compat_2.13-1.0.2.jar:/usr/share/java/cp-base-new/minimal-json-0.9.5.jar:/usr/share/java/cp-base-new/kafka-storage-7.6.1-ccs.jar:/usr/share/java/cp-base-new/zstd-jni-1.5.5-1.jar:/usr/share/java/cp-base-new/jackson-dataformat-yaml-2.14.2.jar:/usr/share/java/cp-base-new/slf4j-api-1.7.36.jar:/usr/share/java/cp-base-new/paranamer-2.8.jar:/usr/share/java/cp-base-new/commons-beanutils-1.9.4.jar:/usr/share/java/cp-base-new/jmx_prometheus_javaagent-0.18.0.jar:/usr/share/java/cp-base-new/reload4j-1.2.25.jar:/usr/share/java/cp-base-new/jackson-core-2.14.2.jar:/usr/share/java/cp-base-new/disk-usage-agent-7.6.1.jar:/usr/share/java/cp-base-new/commons-digester-2.1.jar:/usr/share
/java/cp-base-new/argparse4j-0.7.0.jar:/usr/share/java/cp-base-new/audience-annotations-0.12.0.jar:/usr/share/java/cp-base-new/kafka-raft-7.6.1-ccs.jar:/usr/share/java/cp-base-new/gson-2.9.0.jar:/usr/share/java/cp-base-new/snakeyaml-2.0.jar:/usr/share/java/cp-base-new/jopt-simple-5.0.4.jar:/usr/share/java/cp-base-new/error_prone_annotations-2.10.0.jar:/usr/share/java/cp-base-new/lz4-java-1.8.0.jar:/usr/share/java/cp-base-new/logredactor-metrics-1.0.12.jar:/usr/share/java/cp-base-new/checker-qual-3.19.0.jar:/usr/share/java/cp-base-new/kafka-metadata-7.6.1-ccs.jar:/usr/share/java/cp-base-new/pcollections-4.0.1.jar:/usr/share/java/cp-base-new/commons-logging-1.2.jar:/usr/share/java/cp-base-new/scala-reflect-2.13.10.jar:/usr/share/java/cp-base-new/scala-collection-compat_2.13-2.10.0.jar:/usr/share/java/cp-base-new/metrics-core-2.2.0.jar:/usr/share/java/cp-base-new/zookeeper-jute-3.8.3.jar:/usr/share/java/cp-base-new/jackson-dataformat-csv-2.14.2.jar:/usr/share/java/cp-base-new/commons-collections-3.2.2.jar:/usr/share/java/cp-base-new/caffeine-2.9.3.jar:/usr/share/java/cp-base-new/jolokia-jvm-1.7.1.jar:/usr/share/java/cp-base-new/kafka-tools-api-7.6.1-ccs.jar:/usr/share/java/cp-base-new/zookeeper-3.8.3.jar:/usr/share/java/cp-base-new/metrics-core-4.1.12.1.jar:/usr/share/java/cp-base-new/kafka_2.13-7.6.1-ccs.jar:/usr/share/java/cp-base-new/jackson-databind-2.14.2.jar:/usr/share/java/cp-base-new/snappy-java-1.1.10.5.jar:/usr/share/java/cp-base-new/jose4j-0.9.5.jar (org.apache.zookeeper.ZooKeeper) kafka | [2024-04-25 10:41:39,240] INFO Client environment:java.library.path=/usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.ZooKeeper) kafka | [2024-04-25 10:41:39,240] INFO Client environment:java.io.tmpdir=/tmp (org.apache.zookeeper.ZooKeeper) kafka | [2024-04-25 10:41:39,240] INFO Client environment:java.compiler= (org.apache.zookeeper.ZooKeeper) kafka | [2024-04-25 10:41:39,240] INFO Client environment:os.name=Linux 
(org.apache.zookeeper.ZooKeeper) kafka | [2024-04-25 10:41:39,240] INFO Client environment:os.arch=amd64 (org.apache.zookeeper.ZooKeeper) kafka | [2024-04-25 10:41:39,240] INFO Client environment:os.version=4.15.0-192-generic (org.apache.zookeeper.ZooKeeper) kafka | [2024-04-25 10:41:39,241] INFO Client environment:user.name=appuser (org.apache.zookeeper.ZooKeeper) kafka | [2024-04-25 10:41:39,241] INFO Client environment:user.home=/home/appuser (org.apache.zookeeper.ZooKeeper) kafka | [2024-04-25 10:41:39,241] INFO Client environment:user.dir=/home/appuser (org.apache.zookeeper.ZooKeeper) kafka | [2024-04-25 10:41:39,241] INFO Client environment:os.memory.free=494MB (org.apache.zookeeper.ZooKeeper) kafka | [2024-04-25 10:41:39,241] INFO Client environment:os.memory.max=8042MB (org.apache.zookeeper.ZooKeeper) kafka | [2024-04-25 10:41:39,241] INFO Client environment:os.memory.total=504MB (org.apache.zookeeper.ZooKeeper) kafka | [2024-04-25 10:41:39,243] INFO Initiating client connection, connectString=zookeeper:2181 sessionTimeout=40000 watcher=io.confluent.admin.utils.ZookeeperConnectionWatcher@61d47554 (org.apache.zookeeper.ZooKeeper) kafka | [2024-04-25 10:41:39,246] INFO Setting -D jdk.tls.rejectClientInitiatedRenegotiation=true to disable client-initiated TLS renegotiation (org.apache.zookeeper.common.X509Util) kafka | [2024-04-25 10:41:39,250] INFO jute.maxbuffer value is 1048575 Bytes (org.apache.zookeeper.ClientCnxnSocket) kafka | [2024-04-25 10:41:39,258] INFO zookeeper.request.timeout value is 0. feature enabled=false (org.apache.zookeeper.ClientCnxn) kafka | [2024-04-25 10:41:39,280] INFO Opening socket connection to server zookeeper/172.17.0.4:2181. 
(org.apache.zookeeper.ClientCnxn) kafka | [2024-04-25 10:41:39,281] INFO SASL config status: Will not attempt to authenticate using SASL (unknown error) (org.apache.zookeeper.ClientCnxn) kafka | [2024-04-25 10:41:39,290] INFO Socket connection established, initiating session, client: /172.17.0.6:44948, server: zookeeper/172.17.0.4:2181 (org.apache.zookeeper.ClientCnxn) kafka | [2024-04-25 10:41:39,335] INFO Session establishment complete on server zookeeper/172.17.0.4:2181, session id = 0x1000003605a0000, negotiated timeout = 40000 (org.apache.zookeeper.ClientCnxn) kafka | [2024-04-25 10:41:39,464] INFO Session: 0x1000003605a0000 closed (org.apache.zookeeper.ZooKeeper) kafka | [2024-04-25 10:41:39,464] INFO EventThread shut down for session: 0x1000003605a0000 (org.apache.zookeeper.ClientCnxn) kafka | Using log4j config /etc/kafka/log4j.properties kafka | ===> Launching ... kafka | ===> Launching kafka ... grafana | logger=migrator t=2024-04-25T10:41:34.728100927Z level=info msg="Migration successfully executed" id="drop alert_notification_journal" duration=1.099038ms grafana | logger=migrator t=2024-04-25T10:41:34.73378269Z level=info msg="Executing migration" id="create alert_notification_state table v1" grafana | logger=migrator t=2024-04-25T10:41:34.735034751Z level=info msg="Migration successfully executed" id="create alert_notification_state table v1" duration=1.251351ms grafana | logger=migrator t=2024-04-25T10:41:34.738545951Z level=info msg="Executing migration" id="add index alert_notification_state org_id & alert_id & notifier_id" grafana | logger=migrator t=2024-04-25T10:41:34.739398662Z level=info msg="Migration successfully executed" id="add index alert_notification_state org_id & alert_id & notifier_id" duration=855.091µs grafana | logger=migrator t=2024-04-25T10:41:34.742408448Z level=info msg="Executing migration" id="Add for to alert table" grafana | logger=migrator t=2024-04-25T10:41:34.746078481Z level=info msg="Migration successfully executed" 
id="Add for to alert table" duration=3.670962ms grafana | logger=migrator t=2024-04-25T10:41:34.751098768Z level=info msg="Executing migration" id="Add column uid in alert_notification" grafana | logger=migrator t=2024-04-25T10:41:34.754797942Z level=info msg="Migration successfully executed" id="Add column uid in alert_notification" duration=3.701785ms grafana | logger=migrator t=2024-04-25T10:41:34.758098435Z level=info msg="Executing migration" id="Update uid column values in alert_notification" grafana | logger=migrator t=2024-04-25T10:41:34.758357591Z level=info msg="Migration successfully executed" id="Update uid column values in alert_notification" duration=260.146µs grafana | logger=migrator t=2024-04-25T10:41:34.764187889Z level=info msg="Executing migration" id="Add unique index alert_notification_org_id_uid" grafana | logger=migrator t=2024-04-25T10:41:34.766042466Z level=info msg="Migration successfully executed" id="Add unique index alert_notification_org_id_uid" duration=1.854567ms grafana | logger=migrator t=2024-04-25T10:41:34.769657837Z level=info msg="Executing migration" id="Remove unique index org_id_name" grafana | logger=migrator t=2024-04-25T10:41:34.770434176Z level=info msg="Migration successfully executed" id="Remove unique index org_id_name" duration=776.309µs grafana | logger=migrator t=2024-04-25T10:41:34.773497434Z level=info msg="Executing migration" id="Add column secure_settings in alert_notification" grafana | logger=migrator t=2024-04-25T10:41:34.777225309Z level=info msg="Migration successfully executed" id="Add column secure_settings in alert_notification" duration=3.727614ms grafana | logger=migrator t=2024-04-25T10:41:34.781794683Z level=info msg="Executing migration" id="alter alert.settings to mediumtext" grafana | logger=migrator t=2024-04-25T10:41:34.781860595Z level=info msg="Migration successfully executed" id="alter alert.settings to mediumtext" duration=66.512µs grafana | logger=migrator 
t=2024-04-25T10:41:34.813841424Z level=info msg="Executing migration" id="Add non-unique index alert_notification_state_alert_id" grafana | logger=migrator t=2024-04-25T10:41:34.814632833Z level=info msg="Migration successfully executed" id="Add non-unique index alert_notification_state_alert_id" duration=791.579µs grafana | logger=migrator t=2024-04-25T10:41:34.818685425Z level=info msg="Executing migration" id="Add non-unique index alert_rule_tag_alert_id" grafana | logger=migrator t=2024-04-25T10:41:34.819605059Z level=info msg="Migration successfully executed" id="Add non-unique index alert_rule_tag_alert_id" duration=919.704µs grafana | logger=migrator t=2024-04-25T10:41:34.824709038Z level=info msg="Executing migration" id="Drop old annotation table v4" grafana | logger=migrator t=2024-04-25T10:41:34.82478774Z level=info msg="Migration successfully executed" id="Drop old annotation table v4" duration=78.723µs grafana | logger=migrator t=2024-04-25T10:41:34.827352964Z level=info msg="Executing migration" id="create annotation table v5" grafana | logger=migrator t=2024-04-25T10:41:34.828231567Z level=info msg="Migration successfully executed" id="create annotation table v5" duration=878.423µs grafana | logger=migrator t=2024-04-25T10:41:34.831188362Z level=info msg="Executing migration" id="add index annotation 0 v3" grafana | logger=migrator t=2024-04-25T10:41:34.832009192Z level=info msg="Migration successfully executed" id="add index annotation 0 v3" duration=817.64µs grafana | logger=migrator t=2024-04-25T10:41:34.838105626Z level=info msg="Executing migration" id="add index annotation 1 v3" grafana | logger=migrator t=2024-04-25T10:41:34.839216065Z level=info msg="Migration successfully executed" id="add index annotation 1 v3" duration=1.113499ms grafana | logger=migrator t=2024-04-25T10:41:34.843149164Z level=info msg="Executing migration" id="add index annotation 2 v3" grafana | logger=migrator t=2024-04-25T10:41:34.84418963Z level=info msg="Migration 
successfully executed" id="add index annotation 2 v3" duration=1.039896ms grafana | logger=migrator t=2024-04-25T10:41:34.848751866Z level=info msg="Executing migration" id="add index annotation 3 v3" grafana | logger=migrator t=2024-04-25T10:41:34.850342946Z level=info msg="Migration successfully executed" id="add index annotation 3 v3" duration=1.590671ms grafana | logger=migrator t=2024-04-25T10:41:34.855862566Z level=info msg="Executing migration" id="add index annotation 4 v3" grafana | logger=migrator t=2024-04-25T10:41:34.857831895Z level=info msg="Migration successfully executed" id="add index annotation 4 v3" duration=1.967099ms grafana | logger=migrator t=2024-04-25T10:41:34.863046397Z level=info msg="Executing migration" id="Update annotation table charset" grafana | logger=migrator t=2024-04-25T10:41:34.863073407Z level=info msg="Migration successfully executed" id="Update annotation table charset" duration=28.29µs grafana | logger=migrator t=2024-04-25T10:41:34.867551261Z level=info msg="Executing migration" id="Add column region_id to annotation table" grafana | logger=migrator t=2024-04-25T10:41:34.871949591Z level=info msg="Migration successfully executed" id="Add column region_id to annotation table" duration=4.39795ms grafana | logger=migrator t=2024-04-25T10:41:34.88059306Z level=info msg="Executing migration" id="Drop category_id index" grafana | logger=migrator t=2024-04-25T10:41:34.881422091Z level=info msg="Migration successfully executed" id="Drop category_id index" duration=829.021µs grafana | logger=migrator t=2024-04-25T10:41:34.887602097Z level=info msg="Executing migration" id="Add column tags to annotation table" grafana | logger=migrator t=2024-04-25T10:41:34.894511112Z level=info msg="Migration successfully executed" id="Add column tags to annotation table" duration=6.906494ms policy-apex-pdp | [2024-04-25T10:42:12.581+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-76090dad-2cb8-4045-86c4-b86ef46522aa-1, 
groupId=76090dad-2cb8-4045-86c4-b86ef46522aa] Subscribed to topic(s): policy-pdp-pap policy-apex-pdp | [2024-04-25T10:42:12.593+00:00|INFO|ServiceManager|main] service manager starting policy-apex-pdp | [2024-04-25T10:42:12.593+00:00|INFO|ServiceManager|main] service manager starting topics policy-apex-pdp | [2024-04-25T10:42:12.595+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=76090dad-2cb8-4045-86c4-b86ef46522aa, consumerInstance=policy-apex-pdp, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: starting policy-apex-pdp | [2024-04-25T10:42:12.614+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: policy-apex-pdp | allow.auto.create.topics = true policy-apex-pdp | auto.commit.interval.ms = 5000 policy-apex-pdp | auto.include.jmx.reporter = true policy-apex-pdp | auto.offset.reset = latest policy-apex-pdp | bootstrap.servers = [kafka:9092] policy-apex-pdp | check.crcs = true policy-apex-pdp | client.dns.lookup = use_all_dns_ips policy-apex-pdp | client.id = consumer-76090dad-2cb8-4045-86c4-b86ef46522aa-2 policy-apex-pdp | client.rack = grafana | logger=migrator t=2024-04-25T10:41:34.906750522Z level=info msg="Executing migration" id="Create annotation_tag table v2" grafana | logger=migrator t=2024-04-25T10:41:34.907453629Z level=info msg="Migration successfully executed" id="Create annotation_tag table v2" duration=703.398µs grafana | logger=migrator t=2024-04-25T10:41:34.91221177Z level=info msg="Executing migration" id="Add unique index annotation_tag.annotation_id_tag_id" grafana | logger=migrator 
t=2024-04-25T10:41:34.914099067Z level=info msg="Migration successfully executed" id="Add unique index annotation_tag.annotation_id_tag_id" duration=1.886897ms grafana | logger=migrator t=2024-04-25T10:41:34.91858763Z level=info msg="Executing migration" id="drop index UQE_annotation_tag_annotation_id_tag_id - v2" grafana | logger=migrator t=2024-04-25T10:41:34.91975598Z level=info msg="Migration successfully executed" id="drop index UQE_annotation_tag_annotation_id_tag_id - v2" duration=1.168629ms kafka | [2024-04-25 10:41:40,110] INFO Registered kafka:type=kafka.Log4jController MBean (kafka.utils.Log4jControllerRegistration$) kafka | [2024-04-25 10:41:40,428] INFO Setting -D jdk.tls.rejectClientInitiatedRenegotiation=true to disable client-initiated TLS renegotiation (org.apache.zookeeper.common.X509Util) kafka | [2024-04-25 10:41:40,494] INFO Registered signal handlers for TERM, INT, HUP (org.apache.kafka.common.utils.LoggingSignalHandler) kafka | [2024-04-25 10:41:40,495] INFO starting (kafka.server.KafkaServer) kafka | [2024-04-25 10:41:40,496] INFO Connecting to zookeeper on zookeeper:2181 (kafka.server.KafkaServer) kafka | [2024-04-25 10:41:40,508] INFO [ZooKeeperClient Kafka server] Initializing a new session to zookeeper:2181. (kafka.zookeeper.ZooKeeperClient) kafka | [2024-04-25 10:41:40,511] INFO Client environment:zookeeper.version=3.8.4-9316c2a7a97e1666d8f4593f34dd6fc36ecc436c, built on 2024-02-12 22:16 UTC (org.apache.zookeeper.ZooKeeper) kafka | [2024-04-25 10:41:40,511] INFO Client environment:host.name=292d789ad7a2 (org.apache.zookeeper.ZooKeeper) kafka | [2024-04-25 10:41:40,511] INFO Client environment:java.version=11.0.22 (org.apache.zookeeper.ZooKeeper) kafka | [2024-04-25 10:41:40,511] INFO Client environment:java.vendor=Azul Systems, Inc. 
(org.apache.zookeeper.ZooKeeper) kafka | [2024-04-25 10:41:40,511] INFO Client environment:java.home=/usr/lib/jvm/java-11-zulu-openjdk-ca (org.apache.zookeeper.ZooKeeper) kafka | [2024-04-25 10:41:40,511] INFO Client environment:java.class.path=/usr/bin/../share/java/kafka/scala-logging_2.13-3.9.4.jar:/usr/bin/../share/java/kafka/jersey-common-2.39.1.jar:/usr/bin/../share/java/kafka/swagger-annotations-2.2.8.jar:/usr/bin/../share/java/kafka/connect-json-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/kafka-storage-api-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/commons-validator-1.7.jar:/usr/bin/../share/java/kafka/javax.servlet-api-3.1.0.jar:/usr/bin/../share/java/kafka/aopalliance-repackaged-2.6.1.jar:/usr/bin/../share/java/kafka/netty-transport-4.1.100.Final.jar:/usr/bin/../share/java/kafka/rocksdbjni-7.9.2.jar:/usr/bin/../share/java/kafka/jetty-http-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/kafka-log4j-appender-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jackson-annotations-2.13.5.jar:/usr/bin/../share/java/kafka/commons-io-2.11.0.jar:/usr/bin/../share/java/kafka/kafka-clients-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/javax.activation-api-1.2.0.jar:/usr/bin/../share/java/kafka/jetty-security-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/commons-cli-1.4.jar:/usr/bin/../share/java/kafka/jetty-server-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/slf4j-reload4j-1.7.36.jar:/usr/bin/../share/java/kafka/scala-reflect-2.13.11.jar:/usr/bin/../share/java/kafka/kafka-server-common-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/kafka-group-coordinator-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/netty-buffer-4.1.100.Final.jar:/usr/bin/../share/java/kafka/connect-runtime-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/trogdor-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/connect-api-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jakarta.ws.rs-api-2.1.6.jar:/usr/bin/../share/java/kafka/jakarta.annotation-api-1.3.5.jar:/usr/bin/../share/java/kafka/kafka-streams-7.6.1-ccs.jar:/usr/bin/
../share/java/kafka/scala-java8-compat_2.13-1.0.2.jar:/usr/bin/../share/java/kafka/javax.ws.rs-api-2.1.1.jar:/usr/bin/../share/java/kafka/zookeeper-jute-3.8.4.jar:/usr/bin/../share/java/kafka/netty-resolver-4.1.100.Final.jar:/usr/bin/../share/java/kafka/hk2-api-2.6.1.jar:/usr/bin/../share/java/kafka/kafka-storage-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/zstd-jni-1.5.5-1.jar:/usr/bin/../share/java/kafka/jackson-dataformat-csv-2.13.5.jar:/usr/bin/../share/java/kafka/kafka.jar:/usr/bin/../share/java/kafka/scala-library-2.13.11.jar:/usr/bin/../share/java/kafka/jakarta.inject-2.6.1.jar:/usr/bin/../share/java/kafka/jakarta.xml.bind-api-2.3.3.jar:/usr/bin/../share/java/kafka/jetty-continuation-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/connect-transforms-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jose4j-0.9.4.jar:/usr/bin/../share/java/kafka/hk2-locator-2.6.1.jar:/usr/bin/../share/java/kafka/reflections-0.10.2.jar:/usr/bin/../share/java/kafka/slf4j-api-1.7.36.jar:/usr/bin/../share/java/kafka/paranamer-2.8.jar:/usr/bin/../share/java/kafka/commons-beanutils-1.9.4.jar:/usr/bin/../share/java/kafka/jaxb-api-2.3.1.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-2.39.1.jar:/usr/bin/../share/java/kafka/hk2-utils-2.6.1.jar:/usr/bin/../share/java/kafka/jackson-module-scala_2.13-2.13.5.jar:/usr/bin/../share/java/kafka/reload4j-1.2.25.jar:/usr/bin/../share/java/kafka/connect-mirror-client-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jetty-servlet-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/jackson-core-2.13.5.jar:/usr/bin/../share/java/kafka/jersey-hk2-2.39.1.jar:/usr/bin/../share/java/kafka/jackson-databind-2.13.5.jar:/usr/bin/../share/java/kafka/jetty-servlets-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/jersey-client-2.39.1.jar:/usr/bin/../share/java/kafka/osgi-resource-locator-1.0.3.jar:/usr/bin/../share/java/kafka/commons-digester-2.1.jar:/usr/bin/../share/java/kafka/netty-transport-native-epoll-4.1.100.Final.jar:/usr/bin/../share/java/kafka/argparse4j
-0.7.0.jar:/usr/bin/../share/java/kafka/connect-mirror-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jackson-datatype-jdk8-2.13.5.jar:/usr/bin/../share/java/kafka/audience-annotations-0.12.0.jar:/usr/bin/../share/java/kafka/jackson-module-jaxb-annotations-2.13.5.jar:/usr/bin/../share/java/kafka/kafka-raft-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/javax.annotation-api-1.3.2.jar:/usr/bin/../share/java/kafka/maven-artifact-3.8.8.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-json-provider-2.13.5.jar:/usr/bin/../share/java/kafka/jakarta.validation-api-2.0.2.jar:/usr/bin/../share/java/kafka/zookeeper-3.8.4.jar:/usr/bin/../share/java/kafka/jersey-server-2.39.1.jar:/usr/bin/../share/java/kafka/commons-lang3-3.8.1.jar:/usr/bin/../share/java/kafka/jopt-simple-5.0.4.jar:/usr/bin/../share/java/kafka/error_prone_annotations-2.10.0.jar:/usr/bin/../share/java/kafka/lz4-java-1.8.0.jar:/usr/bin/../share/java/kafka/jetty-util-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/kafka-tools-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jakarta.activation-api-1.2.2.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-core-2.39.1.jar:/usr/bin/../share/java/kafka/checker-qual-3.19.0.jar:/usr/bin/../share/java/kafka/kafka-metadata-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/pcollections-4.0.1.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-base-2.13.5.jar:/usr/bin/../share/java/kafka/commons-logging-1.2.jar:/usr/bin/../share/java/kafka/jsr305-3.0.2.jar:/usr/bin/../share/java/kafka/jetty-io-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/netty-codec-4.1.100.Final.jar:/usr/bin/../share/java/kafka/netty-transport-native-unix-common-4.1.100.Final.jar:/usr/bin/../share/java/kafka/scala-collection-compat_2.13-2.10.0.jar:/usr/bin/../share/java/kafka/metrics-core-2.2.0.jar:/usr/bin/../share/java/kafka/kafka-streams-test-utils-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/kafka-streams-examples-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/netty-handler-4.1.100.Final.jar:/usr/bin/../share/java/kafka/commons-collections-3.2.2.jar:/usr/bin/../share/java/kafka/javassist-3.29.2-GA.jar:/usr/bin/../share/java/kafka/caffeine-2.9.3.jar:/usr/bin/../share/java/kafka/plexus-utils-3.3.1.jar:/usr/bin/../share/java/kafka/kafka-tools-api-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/activation-1.1.1.jar:/usr/bin/../share/java/kafka/netty-common-4.1.100.Final.jar:/usr/bin/../share/java/kafka/kafka-streams-scala_2.13-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/metrics-core-4.1.12.1.jar:/usr/bin/../share/java/kafka/jline-3.25.1.jar:/usr/bin/../share/java/kafka/jetty-client-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/kafka_2.13-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/connect-basic-auth-extension-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jetty-util-ajax-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/snappy-java-1.1.10.5.jar:/usr/bin/../share/java/kafka/kafka-shell-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/netty-transport-classes-epoll-4.1.100.Final.jar:/usr/bin/../share/java/confluent-telemetry/* (org.apache.zookeeper.ZooKeeper)
kafka | [2024-04-25 10:41:40,511] INFO Client environment:java.library.path=/usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.ZooKeeper)
policy-apex-pdp | connections.max.idle.ms = 540000
policy-apex-pdp | default.api.timeout.ms = 60000
policy-apex-pdp | enable.auto.commit = true
policy-apex-pdp | exclude.internal.topics = true
policy-apex-pdp | fetch.max.bytes = 52428800
policy-apex-pdp | fetch.max.wait.ms = 500
policy-apex-pdp | fetch.min.bytes = 1
policy-apex-pdp | group.id = 76090dad-2cb8-4045-86c4-b86ef46522aa
policy-apex-pdp | group.instance.id = null
policy-apex-pdp | heartbeat.interval.ms = 3000
policy-apex-pdp | interceptor.classes = []
policy-apex-pdp | internal.leave.group.on.close = true
policy-apex-pdp | internal.throw.on.fetch.stable.offset.unsupported = false
policy-apex-pdp | isolation.level = read_uncommitted
policy-apex-pdp | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
policy-apex-pdp | max.partition.fetch.bytes = 1048576
policy-apex-pdp | max.poll.interval.ms = 300000
policy-apex-pdp | max.poll.records = 500
policy-apex-pdp | metadata.max.age.ms = 300000
policy-apex-pdp | metric.reporters = []
policy-apex-pdp | metrics.num.samples = 2
policy-apex-pdp | metrics.recording.level = INFO
policy-apex-pdp | metrics.sample.window.ms = 30000
policy-apex-pdp | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor]
policy-apex-pdp | receive.buffer.bytes = 65536
policy-apex-pdp | reconnect.backoff.max.ms = 1000
policy-apex-pdp | reconnect.backoff.ms = 50
policy-apex-pdp | request.timeout.ms = 30000
policy-apex-pdp | retry.backoff.ms = 100
policy-apex-pdp | sasl.client.callback.handler.class = null
policy-apex-pdp | sasl.jaas.config = null
policy-apex-pdp | sasl.kerberos.kinit.cmd = /usr/bin/kinit
policy-apex-pdp | sasl.kerberos.min.time.before.relogin = 60000
policy-apex-pdp | sasl.kerberos.service.name = null
policy-apex-pdp | sasl.kerberos.ticket.renew.jitter = 0.05
policy-apex-pdp | sasl.kerberos.ticket.renew.window.factor = 0.8
policy-apex-pdp | sasl.login.callback.handler.class = null
policy-apex-pdp | sasl.login.class = null
policy-apex-pdp | sasl.login.connect.timeout.ms = null
policy-apex-pdp | sasl.login.read.timeout.ms = null
policy-apex-pdp | sasl.login.refresh.buffer.seconds = 300
policy-apex-pdp | sasl.login.refresh.min.period.seconds = 60
policy-apex-pdp | sasl.login.refresh.window.factor = 0.8
policy-apex-pdp | sasl.login.refresh.window.jitter = 0.05
policy-apex-pdp | sasl.login.retry.backoff.max.ms = 10000
policy-apex-pdp | sasl.login.retry.backoff.ms = 100
policy-apex-pdp | sasl.mechanism = GSSAPI
policy-apex-pdp | sasl.oauthbearer.clock.skew.seconds = 30
kafka | [2024-04-25 10:41:40,511] INFO Client environment:java.io.tmpdir=/tmp (org.apache.zookeeper.ZooKeeper)
kafka | [2024-04-25 10:41:40,511] INFO Client environment:java.compiler= (org.apache.zookeeper.ZooKeeper)
kafka | [2024-04-25 10:41:40,511] INFO Client environment:os.name=Linux (org.apache.zookeeper.ZooKeeper)
kafka | [2024-04-25 10:41:40,512] INFO Client environment:os.arch=amd64 (org.apache.zookeeper.ZooKeeper)
kafka | [2024-04-25 10:41:40,512] INFO Client environment:os.version=4.15.0-192-generic (org.apache.zookeeper.ZooKeeper)
kafka | [2024-04-25 10:41:40,512] INFO Client environment:user.name=appuser (org.apache.zookeeper.ZooKeeper)
kafka | [2024-04-25 10:41:40,512] INFO Client environment:user.home=/home/appuser (org.apache.zookeeper.ZooKeeper)
kafka | [2024-04-25 10:41:40,512] INFO Client environment:user.dir=/home/appuser (org.apache.zookeeper.ZooKeeper)
kafka | [2024-04-25 10:41:40,512] INFO Client environment:os.memory.free=1008MB (org.apache.zookeeper.ZooKeeper)
kafka | [2024-04-25 10:41:40,512] INFO Client environment:os.memory.max=1024MB (org.apache.zookeeper.ZooKeeper)
kafka | [2024-04-25 10:41:40,512] INFO Client environment:os.memory.total=1024MB (org.apache.zookeeper.ZooKeeper)
kafka | [2024-04-25 10:41:40,513] INFO Initiating client connection, connectString=zookeeper:2181 sessionTimeout=18000 watcher=kafka.zookeeper.ZooKeeperClient$ZooKeeperClientWatcher$@447a020 (org.apache.zookeeper.ZooKeeper)
kafka | [2024-04-25 10:41:40,516] INFO jute.maxbuffer value is 4194304 Bytes (org.apache.zookeeper.ClientCnxnSocket)
kafka | [2024-04-25 10:41:40,521] INFO zookeeper.request.timeout value is 0. feature enabled=false (org.apache.zookeeper.ClientCnxn)
kafka | [2024-04-25 10:41:40,523] INFO [ZooKeeperClient Kafka server] Waiting until connected. (kafka.zookeeper.ZooKeeperClient)
kafka | [2024-04-25 10:41:40,528] INFO Opening socket connection to server zookeeper/172.17.0.4:2181. (org.apache.zookeeper.ClientCnxn)
kafka | [2024-04-25 10:41:40,536] INFO Socket connection established, initiating session, client: /172.17.0.6:44950, server: zookeeper/172.17.0.4:2181 (org.apache.zookeeper.ClientCnxn)
kafka | [2024-04-25 10:41:40,549] INFO Session establishment complete on server zookeeper/172.17.0.4:2181, session id = 0x1000003605a0001, negotiated timeout = 18000 (org.apache.zookeeper.ClientCnxn)
kafka | [2024-04-25 10:41:40,558] INFO [ZooKeeperClient Kafka server] Connected. (kafka.zookeeper.ZooKeeperClient)
kafka | [2024-04-25 10:41:40,848] INFO Cluster ID = WEMOaayeQ5uYZKGI5dj_vQ (kafka.server.KafkaServer)
kafka | [2024-04-25 10:41:40,850] WARN No meta.properties file under dir /var/lib/kafka/data/meta.properties (kafka.server.BrokerMetadataCheckpoint)
kafka | [2024-04-25 10:41:40,894] INFO KafkaConfig values:
kafka | advertised.listeners = PLAINTEXT://kafka:9092,PLAINTEXT_HOST://localhost:29092
kafka | alter.config.policy.class.name = null
kafka | alter.log.dirs.replication.quota.window.num = 11
kafka | alter.log.dirs.replication.quota.window.size.seconds = 1
kafka | authorizer.class.name =
kafka | auto.create.topics.enable = true
kafka | auto.include.jmx.reporter = true
kafka | auto.leader.rebalance.enable = true
kafka | background.threads = 10
kafka | broker.heartbeat.interval.ms = 2000
kafka | broker.id = 1
kafka | broker.id.generation.enable = true
kafka | broker.rack = null
kafka | broker.session.timeout.ms = 9000
kafka | client.quota.callback.class = null
kafka | compression.type = producer
kafka | connection.failed.authentication.delay.ms = 100
kafka | connections.max.idle.ms = 600000
kafka | connections.max.reauth.ms = 0
kafka | control.plane.listener.name = null
kafka | controlled.shutdown.enable = true
kafka | controlled.shutdown.max.retries = 3
kafka | controlled.shutdown.retry.backoff.ms = 5000
kafka | controller.listener.names = null
grafana | logger=migrator t=2024-04-25T10:41:34.924362846Z level=info msg="Executing migration" id="Rename table annotation_tag to annotation_tag_v2 - v2"
mariadb | 2024-04-25 10:41:34+00:00 [Note] [Entrypoint]: Entrypoint script for MariaDB Server 1:10.10.2+maria~ubu2204 started.
kafka | controller.quorum.append.linger.ms = 25
policy-api | Waiting for mariadb port 3306...
grafana | logger=migrator t=2024-04-25T10:41:34.935634301Z level=info msg="Migration successfully executed" id="Rename table annotation_tag to annotation_tag_v2 - v2" duration=11.271145ms
policy-apex-pdp | sasl.oauthbearer.expected.audience = null
mariadb | 2024-04-25 10:41:34+00:00 [Note] [Entrypoint]: Switching to dedicated user 'mysql'
kafka | controller.quorum.election.backoff.max.ms = 1000
policy-api | mariadb (172.17.0.3:3306) open
policy-db-migrator | Waiting for mariadb port 3306...
grafana | logger=migrator t=2024-04-25T10:41:34.940453163Z level=info msg="Executing migration" id="Create annotation_tag table v3"
policy-apex-pdp | sasl.oauthbearer.expected.issuer = null
policy-pap | Waiting for mariadb port 3306...
mariadb | 2024-04-25 10:41:34+00:00 [Note] [Entrypoint]: Entrypoint script for MariaDB Server 1:10.10.2+maria~ubu2204 started.
kafka | controller.quorum.election.timeout.ms = 1000
policy-api | Waiting for policy-db-migrator port 6824...
prometheus | ts=2024-04-25T10:41:32.397Z caller=main.go:573 level=info msg="No time or size retention was set so using the default time retention" duration=15d
simulator | Policy simulator config file: /opt/app/policy/simulators/etc/mounted/simParameters.json
simulator | overriding logback.xml
grafana | logger=migrator t=2024-04-25T10:41:34.942177046Z level=info msg="Migration successfully executed" id="Create annotation_tag table v3" duration=1.723863ms
policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
policy-pap | mariadb (172.17.0.3:3306) open
zookeeper | ===> User
kafka | controller.quorum.fetch.timeout.ms = 2000
kafka | controller.quorum.request.timeout.ms = 2000
policy-api | policy-db-migrator (172.17.0.8:6824) open
prometheus | ts=2024-04-25T10:41:32.397Z caller=main.go:617 level=info msg="Starting Prometheus Server" mode=server version="(version=2.51.2, branch=HEAD, revision=b4c0ab52c3e9b940ab803581ddae9b3d9a452337)"
policy-db-migrator | nc: connect to mariadb (172.17.0.3) port 3306 (tcp) failed: Connection refused
simulator | 2024-04-25 10:41:31,712 INFO replacing 'HOST_NAME' with simulator in /opt/app/policy/simulators/etc/mounted/simParameters.json
grafana | logger=migrator t=2024-04-25T10:41:34.947729336Z level=info msg="Executing migration" id="create index UQE_annotation_tag_annotation_id_tag_id - Add unique index annotation_tag.annotation_id_tag_id V3"
policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
policy-pap | Waiting for kafka port 9092...
zookeeper | uid=1000(appuser) gid=1000(appuser) groups=1000(appuser)
kafka | controller.quorum.retry.backoff.ms = 20
kafka | controller.quorum.voters = []
policy-api | Policy api config file: /opt/app/policy/api/etc/apiParameters.yaml
prometheus | ts=2024-04-25T10:41:32.397Z caller=main.go:622 level=info build_context="(go=go1.22.2, platform=linux/amd64, user=root@b63f02a423d9, date=20240410-14:05:54, tags=netgo,builtinassets,stringlabels)"
policy-db-migrator | nc: connect to mariadb (172.17.0.3) port 3306 (tcp) failed: Connection refused
simulator | 2024-04-25 10:41:31,783 INFO org.onap.policy.models.simulators starting
grafana | logger=migrator t=2024-04-25T10:41:34.949225615Z level=info msg="Migration successfully executed" id="create index UQE_annotation_tag_annotation_id_tag_id - Add unique index annotation_tag.annotation_id_tag_id V3" duration=1.496379ms
policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
policy-pap | kafka (172.17.0.6:9092) open
zookeeper | ===> Configuring ...
kafka | controller.quota.window.num = 11
kafka | controller.quota.window.size.seconds = 1
policy-api |
prometheus | ts=2024-04-25T10:41:32.397Z caller=main.go:623 level=info host_details="(Linux 4.15.0-192-generic #203-Ubuntu SMP Wed Aug 10 17:40:03 UTC 2022 x86_64 prometheus (none))"
policy-db-migrator | nc: connect to mariadb (172.17.0.3) port 3306 (tcp) failed: Connection refused
simulator | 2024-04-25 10:41:31,783 INFO org.onap.policy.models.simulators starting CDS gRPC Server Properties
grafana | logger=migrator t=2024-04-25T10:41:34.95455520Z level=info msg="Executing migration" id="copy annotation_tag v2 to v3"
policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.url = null
policy-pap | Waiting for api port 6969...
zookeeper | ===> Running preflight checks ...
kafka | controller.socket.timeout.ms = 30000
mariadb | 2024-04-25 10:41:34+00:00 [Note] [Entrypoint]: Initializing database files
policy-api | . ____ _ __ _ _
prometheus | ts=2024-04-25T10:41:32.398Z caller=main.go:624 level=info fd_limits="(soft=1048576, hard=1048576)"
policy-db-migrator | nc: connect to mariadb (172.17.0.3) port 3306 (tcp) failed: Connection refused
simulator | 2024-04-25 10:41:31,952 INFO org.onap.policy.models.simulators starting org.onap.policy.simulators.AaiSimulatorJaxRs_RESOURCE_LOCATION
grafana | logger=migrator t=2024-04-25T10:41:34.954841257Z level=info msg="Migration successfully executed" id="copy annotation_tag v2 to v3" duration=286.058µs
policy-apex-pdp | sasl.oauthbearer.scope.claim.name = scope
policy-pap | api (172.17.0.9:6969) open
zookeeper | ===> Check if /var/lib/zookeeper/data is writable ...
kafka | create.topic.policy.class.name = null
mariadb | 2024-04-25 10:41:34 0 [Warning] mariadbd: io_uring_queue_init() failed with ENOSYS: check seccomp filters, and the kernel version (newer than 5.1 required)
policy-api | /\\ / ___'_ __ _ _(_)_ __ __ _ \ \ \ \
prometheus | ts=2024-04-25T10:41:32.398Z caller=main.go:625 level=info vm_limits="(soft=unlimited, hard=unlimited)"
policy-db-migrator | nc: connect to mariadb (172.17.0.3) port 3306 (tcp) failed: Connection refused
simulator | 2024-04-25 10:41:31,954 INFO org.onap.policy.models.simulators starting A&AI simulator
grafana | logger=migrator t=2024-04-25T10:41:34.961665779Z level=info msg="Executing migration" id="drop table annotation_tag_v2"
policy-apex-pdp | sasl.oauthbearer.sub.claim.name = sub
policy-pap | Policy pap config file: /opt/app/policy/pap/etc/papParameters.yaml
zookeeper | ===> Check if /var/lib/zookeeper/log is writable ...
kafka | default.replication.factor = 1
mariadb | 2024-04-25 10:41:34 0 [Warning] InnoDB: liburing disabled: falling back to innodb_use_native_aio=OFF
policy-api | ( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \
prometheus | ts=2024-04-25T10:41:32.403Z caller=web.go:568 level=info component=web msg="Start listening for connections" address=0.0.0.0:9090
policy-db-migrator | Connection to mariadb (172.17.0.3) 3306 port [tcp/mysql] succeeded!
simulator | 2024-04-25 10:41:32,056 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-2e61d218==org.glassfish.jersey.servlet.ServletContainer@60d118f1{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=A&AI simulator, host=0.0.0.0, port=6666, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@64a8c844{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@3f6db3fb{/,null,STOPPED}, connector=A&AI simulator@6f152006{HTTP/1.1, (http/1.1)}{0.0.0.0:6666}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-2e61d218==org.glassfish.jersey.servlet.ServletContainer@60d118f1{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: WAITED-START
grafana | logger=migrator t=2024-04-25T10:41:34.962426609Z level=info msg="Migration successfully executed" id="drop table annotation_tag_v2" duration=760.95µs
policy-apex-pdp | sasl.oauthbearer.token.endpoint.url = null
policy-pap | PDP group configuration file: /opt/app/policy/pap/etc/mounted/groups.json
zookeeper | ===> Launching ...
kafka | delegation.token.expiry.check.interval.ms = 3600000
mariadb | 2024-04-25 10:41:34 0 [Warning] 'default-authentication-plugin' is MySQL 5.6 / 5.7 compatible option. To be implemented in later versions.
policy-api | \\/ ___)| |_)| | | | | || (_| | ) ) ) )
prometheus | ts=2024-04-25T10:41:32.403Z caller=main.go:1129 level=info msg="Starting TSDB ..."
policy-db-migrator | 321 blocks
simulator | 2024-04-25 10:41:32,066 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-2e61d218==org.glassfish.jersey.servlet.ServletContainer@60d118f1{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=A&AI simulator, host=0.0.0.0, port=6666, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@64a8c844{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@3f6db3fb{/,null,STOPPED}, connector=A&AI simulator@6f152006{HTTP/1.1, (http/1.1)}{0.0.0.0:6666}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-2e61d218==org.glassfish.jersey.servlet.ServletContainer@60d118f1{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING
grafana | logger=migrator t=2024-04-25T10:41:34.967949518Z level=info msg="Executing migration" id="Update alert annotations and set TEXT to empty"
policy-apex-pdp | security.protocol = PLAINTEXT
policy-pap |
zookeeper | ===> Launching zookeeper ...
kafka | delegation.token.expiry.time.ms = 86400000
mariadb |
policy-api | ' |____| .__|_| |_|_| |_\__, | / / / /
prometheus | ts=2024-04-25T10:41:32.405Z caller=tls_config.go:313 level=info component=web msg="Listening on" address=[::]:9090
policy-db-migrator | Preparing upgrade release version: 0800
simulator | 2024-04-25 10:41:32,069 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-2e61d218==org.glassfish.jersey.servlet.ServletContainer@60d118f1{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=A&AI simulator, host=0.0.0.0, port=6666, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@64a8c844{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@3f6db3fb{/,null,STOPPED}, connector=A&AI simulator@6f152006{HTTP/1.1, (http/1.1)}{0.0.0.0:6666}, jettyThread=Thread[A&AI simulator-6666,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-2e61d218==org.glassfish.jersey.servlet.ServletContainer@60d118f1{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING
grafana | logger=migrator t=2024-04-25T10:41:34.968304707Z level=info msg="Migration successfully executed" id="Update alert annotations and set TEXT to empty" duration=355.939µs
policy-apex-pdp | security.providers = null
policy-pap | . ____ _ __ _ _
zookeeper | [2024-04-25 10:41:33,746] INFO Reading configuration from: /etc/kafka/zookeeper.properties (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
kafka | delegation.token.master.key = null
mariadb |
policy-api | =========|_|==============|___/=/_/_/_/
prometheus | ts=2024-04-25T10:41:32.405Z caller=tls_config.go:316 level=info component=web msg="TLS is disabled." http2=false address=[::]:9090
policy-db-migrator | Preparing upgrade release version: 0900
simulator | 2024-04-25 10:41:32,075 INFO jetty-11.0.20; built: 2024-01-29T21:04:22.394Z; git: 922f8dc188f7011e60d0361de585fd4ac4d63064; jvm 17.0.10+7-alpine-r0
grafana | logger=migrator t=2024-04-25T10:41:34.97241330Z level=info msg="Executing migration" id="Add created time to annotation table"
policy-apex-pdp | send.buffer.bytes = 131072
policy-pap | /\\ / ___'_ __ _ _(_)_ __ __ _ \ \ \ \
zookeeper | [2024-04-25 10:41:33,753] INFO clientPortAddress is 0.0.0.0:2181 (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
kafka | delegation.token.max.lifetime.ms = 604800000
mariadb | PLEASE REMEMBER TO SET A PASSWORD FOR THE MariaDB root USER !
policy-api | :: Spring Boot :: (v3.1.10)
prometheus | ts=2024-04-25T10:41:32.409Z caller=head.go:616 level=info component=tsdb msg="Replaying on-disk memory mappable chunks if any"
policy-db-migrator | Preparing upgrade release version: 1000
simulator | 2024-04-25 10:41:32,124 INFO Session workerName=node0
grafana | logger=migrator t=2024-04-25T10:41:34.978202327Z level=info msg="Migration successfully executed" id="Add created time to annotation table" duration=5.791957ms
policy-apex-pdp | session.timeout.ms = 45000
policy-pap | ( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \
zookeeper | [2024-04-25 10:41:33,753] INFO secureClientPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
kafka | delegation.token.secret.key = null
mariadb | To do so, start the server, then issue the following command:
policy-api |
prometheus | ts=2024-04-25T10:41:32.409Z caller=head.go:698 level=info component=tsdb msg="On-disk memory mappable chunks replay completed" duration=2.22µs
policy-db-migrator | Preparing upgrade release version: 1100
simulator | 2024-04-25 10:41:32,644 INFO Using GSON for REST calls
grafana | logger=migrator t=2024-04-25T10:41:34.982978808Z level=info msg="Executing migration" id="Add updated time to annotation table"
policy-apex-pdp | socket.connection.setup.timeout.max.ms = 30000
policy-pap | \\/ ___)| |_)| | | | | || (_| | ) ) ) )
zookeeper | [2024-04-25 10:41:33,753] INFO observerMasterPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
kafka | delete.records.purgatory.purge.interval.requests = 1
mariadb |
policy-api | [2024-04-25T10:41:48.682+00:00|INFO|Version|background-preinit] HV000001: Hibernate Validator 8.0.1.Final
prometheus | ts=2024-04-25T10:41:32.409Z caller=head.go:706 level=info component=tsdb msg="Replaying WAL, this may take a while"
policy-db-migrator | Preparing upgrade release version: 1200
simulator | 2024-04-25 10:41:32,727 INFO Started o.e.j.s.ServletContextHandler@3f6db3fb{/,null,AVAILABLE}
grafana | logger=migrator t=2024-04-25T10:41:34.986009894Z level=info msg="Migration successfully executed" id="Add updated time to annotation table" duration=3.027256ms
policy-apex-pdp | socket.connection.setup.timeout.ms = 10000
policy-pap | ' |____| .__|_| |_|_| |_\__, | / / / /
zookeeper | [2024-04-25 10:41:33,753] INFO metricsProvider.className is org.apache.zookeeper.metrics.impl.DefaultMetricsProvider (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
kafka | delete.topic.enable = true
mariadb | '/usr/bin/mysql_secure_installation'
policy-api | [2024-04-25T10:41:48.747+00:00|INFO|PolicyApiApplication|main] Starting PolicyApiApplication using Java 17.0.10 with PID 20 (/app/api.jar started by policy in /opt/app/policy/api/bin)
prometheus | ts=2024-04-25T10:41:32.409Z caller=head.go:778 level=info component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0
policy-db-migrator | Preparing upgrade release version: 1300
simulator | 2024-04-25 10:41:32,735 INFO Started A&AI simulator@6f152006{HTTP/1.1, (http/1.1)}{0.0.0.0:6666}
grafana | logger=migrator t=2024-04-25T10:41:34.995044092Z level=info msg="Executing migration" id="Add index for created in annotation table"
policy-apex-pdp | ssl.cipher.suites = null
policy-pap | =========|_|==============|___/=/_/_/_/
zookeeper | [2024-04-25 10:41:33,755] INFO autopurge.snapRetainCount set to 3 (org.apache.zookeeper.server.DatadirCleanupManager)
kafka | early.start.listeners = null
mariadb |
policy-api | [2024-04-25T10:41:48.749+00:00|INFO|PolicyApiApplication|main] No active profile set, falling back to 1 default profile: "default"
prometheus | ts=2024-04-25T10:41:32.409Z caller=head.go:815 level=info component=tsdb msg="WAL replay completed" checkpoint_replay_duration=28.591µs wal_replay_duration=776.998µs wbl_replay_duration=210ns total_replay_duration=837.719µs
policy-db-migrator | Done
simulator | 2024-04-25 10:41:32,741 INFO Started Server@64a8c844{STARTING}[11.0.20,sto=0] @1455ms
grafana | logger=migrator t=2024-04-25T10:41:34.996890139Z level=info msg="Migration successfully executed" id="Add index for created in annotation table" duration=1.845957ms
policy-apex-pdp | ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
policy-pap | :: Spring Boot :: (v3.1.10)
zookeeper | [2024-04-25 10:41:33,755] INFO autopurge.purgeInterval set to 0 (org.apache.zookeeper.server.DatadirCleanupManager)
kafka | fetch.max.bytes = 57671680
mariadb | which will also give you the option of removing the test
policy-api | [2024-04-25T10:41:50.616+00:00|INFO|RepositoryConfigurationDelegate|main] Bootstrapping Spring Data JPA repositories in DEFAULT mode.
prometheus | ts=2024-04-25T10:41:32.412Z caller=main.go:1150 level=info fs_type=EXT4_SUPER_MAGIC
policy-db-migrator | name version
simulator | 2024-04-25 10:41:32,741 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-2e61d218==org.glassfish.jersey.servlet.ServletContainer@60d118f1{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}}, swaggerId=null, toString()=JettyServletServer(name=A&AI simulator, host=0.0.0.0, port=6666, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@64a8c844{STARTED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@3f6db3fb{/,null,AVAILABLE}, connector=A&AI simulator@6f152006{HTTP/1.1, (http/1.1)}{0.0.0.0:6666}, jettyThread=Thread[A&AI simulator-6666,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-2e61d218==org.glassfish.jersey.servlet.ServletContainer@60d118f1{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}})]: pending time is 4327 ms.
grafana | logger=migrator t=2024-04-25T10:41:35.000801988Z level=info msg="Executing migration" id="Add index for updated in annotation table"
policy-apex-pdp | ssl.endpoint.identification.algorithm = https
policy-pap |
zookeeper | [2024-04-25 10:41:33,755] INFO Purge task is not scheduled. (org.apache.zookeeper.server.DatadirCleanupManager)
kafka | fetch.purgatory.purge.interval.requests = 1000
mariadb | databases and anonymous user created by default. This is
policy-api | [2024-04-25T10:41:50.693+00:00|INFO|RepositoryConfigurationDelegate|main] Finished Spring Data repository scanning in 67 ms. Found 6 JPA repository interfaces.
prometheus | ts=2024-04-25T10:41:32.412Z caller=main.go:1153 level=info msg="TSDB started"
policy-db-migrator | policyadmin 0
simulator | 2024-04-25 10:41:32,745 INFO org.onap.policy.models.simulators starting SDNC simulator
grafana | logger=migrator t=2024-04-25T10:41:35.002281406Z level=info msg="Migration successfully executed" id="Add index for updated in annotation table" duration=1.478147ms
policy-apex-pdp | ssl.engine.factory.class = null
policy-pap | [2024-04-25T10:42:01.355+00:00|INFO|Version|background-preinit] HV000001: Hibernate Validator 8.0.1.Final
zookeeper | [2024-04-25 10:41:33,755] WARN Either no config or no quorum defined in config, running in standalone mode (org.apache.zookeeper.server.quorum.QuorumPeerMain)
kafka | group.consumer.assignors = [org.apache.kafka.coordinator.group.assignor.RangeAssignor]
mariadb | strongly recommended for production servers.
policy-api | [2024-04-25T10:41:51.124+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.api.main.exception.ServiceExceptionHandler
prometheus | ts=2024-04-25T10:41:32.412Z caller=main.go:1335 level=info msg="Loading configuration file" filename=/etc/prometheus/prometheus.yml
policy-db-migrator | policyadmin: upgrade available: 0 -> 1300
simulator | 2024-04-25 10:41:32,747 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-bf1ec20==org.glassfish.jersey.servlet.ServletContainer@636d8afb{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SDNC simulator, host=0.0.0.0, port=6668, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@70efb718{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@b70da4c{/,null,STOPPED}, connector=SDNC simulator@c5ee75e{HTTP/1.1, (http/1.1)}{0.0.0.0:6668}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-bf1ec20==org.glassfish.jersey.servlet.ServletContainer@636d8afb{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: WAITED-START
grafana | logger=migrator t=2024-04-25T10:41:35.007781015Z level=info msg="Executing migration" id="Convert existing annotations from seconds to milliseconds"
policy-apex-pdp | ssl.key.password = null
policy-pap | [2024-04-25T10:42:01.425+00:00|INFO|PolicyPapApplication|main] Starting PolicyPapApplication using Java 17.0.10 with PID 32 (/app/pap.jar started by policy in /opt/app/policy/pap/bin)
zookeeper | [2024-04-25 10:41:33,756] INFO Log4j 1.2 jmx support not found; jmx disabled. (org.apache.zookeeper.jmx.ManagedUtil)
kafka | group.consumer.heartbeat.interval.ms = 5000
mariadb |
policy-api | [2024-04-25T10:41:51.125+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.api.main.exception.ServiceExceptionHandler
prometheus | ts=2024-04-25T10:41:32.413Z caller=main.go:1372 level=info msg="Completed loading of configuration file" filename=/etc/prometheus/prometheus.yml totalDuration=1.160988ms db_storage=1.61µs remote_storage=2.7µs web_handler=440ns query_engine=950ns scrape=308.437µs scrape_sd=205.245µs notify=29.06µs notify_sd=10.651µs rules=2.27µs tracing=7.33µs
policy-db-migrator | upgrade: 0 -> 1300
simulator | 2024-04-25 10:41:32,747 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-bf1ec20==org.glassfish.jersey.servlet.ServletContainer@636d8afb{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SDNC simulator, host=0.0.0.0, port=6668, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@70efb718{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@b70da4c{/,null,STOPPED}, connector=SDNC simulator@c5ee75e{HTTP/1.1, (http/1.1)}{0.0.0.0:6668}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-bf1ec20==org.glassfish.jersey.servlet.ServletContainer@636d8afb{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING
grafana | logger=migrator t=2024-04-25T10:41:35.008009971Z level=info msg="Migration successfully executed" id="Convert existing annotations from seconds to milliseconds" duration=229.666µs
policy-apex-pdp | ssl.keymanager.algorithm = SunX509
policy-pap | [2024-04-25T10:42:01.426+00:00|INFO|PolicyPapApplication|main] No active profile set, falling back to 1 default profile: "default"
zookeeper | [2024-04-25 10:41:33,756] INFO Reading configuration from: /etc/kafka/zookeeper.properties (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
kafka | group.consumer.max.heartbeat.interval.ms = 15000
mariadb | See the MariaDB Knowledgebase at https://mariadb.com/kb
policy-api | [2024-04-25T10:41:51.764+00:00|INFO|TomcatWebServer|main] Tomcat initialized with port(s): 6969 (http)
prometheus | ts=2024-04-25T10:41:32.413Z caller=main.go:1114 level=info msg="Server is ready to receive web requests."
policy-db-migrator |
simulator | 2024-04-25 10:41:32,763 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-bf1ec20==org.glassfish.jersey.servlet.ServletContainer@636d8afb{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SDNC simulator, host=0.0.0.0, port=6668, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@70efb718{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@b70da4c{/,null,STOPPED}, connector=SDNC simulator@c5ee75e{HTTP/1.1, (http/1.1)}{0.0.0.0:6668}, jettyThread=Thread[SDNC simulator-6668,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-bf1ec20==org.glassfish.jersey.servlet.ServletContainer@636d8afb{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING
grafana | logger=migrator t=2024-04-25T10:41:35.012794594Z level=info msg="Executing migration" id="Add epoch_end column"
policy-apex-pdp | ssl.keystore.certificate.chain = null
policy-pap | [2024-04-25T10:42:03.579+00:00|INFO|RepositoryConfigurationDelegate|main] Bootstrapping Spring Data JPA repositories in DEFAULT mode.
zookeeper | [2024-04-25 10:41:33,757] INFO clientPortAddress is 0.0.0.0:2181 (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
kafka | group.consumer.max.session.timeout.ms = 60000
mariadb |
policy-api | [2024-04-25T10:41:51.775+00:00|INFO|Http11NioProtocol|main] Initializing ProtocolHandler ["http-nio-6969"]
prometheus | ts=2024-04-25T10:41:32.413Z caller=manager.go:163 level=info component="rule manager" msg="Starting rule manager..."
policy-db-migrator | > upgrade 0100-jpapdpgroup_properties.sql
simulator | 2024-04-25 10:41:32,764 INFO jetty-11.0.20; built: 2024-01-29T21:04:22.394Z; git: 922f8dc188f7011e60d0361de585fd4ac4d63064; jvm 17.0.10+7-alpine-r0
grafana | logger=migrator t=2024-04-25T10:41:35.016838557Z level=info msg="Migration successfully executed" id="Add epoch_end column" duration=4.043793ms
policy-apex-pdp | ssl.keystore.key = null
policy-pap | [2024-04-25T10:42:03.679+00:00|INFO|RepositoryConfigurationDelegate|main] Finished Spring Data repository scanning in 89 ms. Found 7 JPA repository interfaces.
zookeeper | [2024-04-25 10:41:33,757] INFO secureClientPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
kafka | group.consumer.max.size = 2147483647
mariadb | Please report any problems at https://mariadb.org/jira
policy-api | [2024-04-25T10:41:51.777+00:00|INFO|StandardService|main] Starting service [Tomcat]
policy-db-migrator | --------------
simulator | 2024-04-25 10:41:32,772 INFO Session workerName=node0
grafana | logger=migrator t=2024-04-25T10:41:35.024365999Z level=info msg="Executing migration" id="Add index for epoch_end"
policy-apex-pdp | ssl.keystore.location = null
policy-pap | [2024-04-25T10:42:04.193+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.exception.ServiceExceptionHandler
zookeeper | [2024-04-25 10:41:33,757] INFO observerMasterPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
kafka | group.consumer.min.heartbeat.interval.ms = 5000
mariadb |
policy-api | [2024-04-25T10:41:51.778+00:00|INFO|StandardEngine|main] Starting Servlet engine: [Apache Tomcat/10.1.19]
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpgroup_properties (name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, PROPERTIES VARCHAR(255) DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) DEFAULT NULL)
simulator | 2024-04-25 10:41:32,831 INFO Using GSON for REST calls
grafana | logger=migrator t=2024-04-25T10:41:35.026171384Z level=info msg="Migration successfully executed" id="Add index for epoch_end" duration=1.812555ms
policy-apex-pdp | ssl.keystore.password = null
policy-pap | [2024-04-25T10:42:04.194+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.exception.ServiceExceptionHandler
zookeeper | [2024-04-25 10:41:33,757] INFO metricsProvider.className is org.apache.zookeeper.metrics.impl.DefaultMetricsProvider (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
kafka | group.consumer.min.session.timeout.ms = 45000
mariadb | The latest information about MariaDB is available at https://mariadb.org/.
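The repeated LocalVariableTableParameterNameDiscoverer warnings from policy-api and policy-pap above indicate the affected classes were compiled without javac's -parameters flag, so Spring falls back to the deprecated '-debug' class-file attribute for parameter name resolution. A minimal sketch of how that flag is usually enabled in a Maven build (hypothetical pom fragment; the actual policy builds may configure this elsewhere):

```xml
<!-- Hypothetical maven-compiler-plugin fragment: <parameters>true</parameters>
     passes -parameters to javac, retaining method parameter names for
     reflection and avoiding the deprecated '-debug' fallback. -->
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-compiler-plugin</artifactId>
  <configuration>
    <parameters>true</parameters>
  </configuration>
</plugin>
```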
policy-api | [2024-04-25T10:41:51.875+00:00|INFO|[/policy/api/v1]|main] Initializing Spring embedded WebApplicationContext
policy-db-migrator | --------------
simulator | 2024-04-25 10:41:32,841 INFO Started o.e.j.s.ServletContextHandler@b70da4c{/,null,AVAILABLE}
grafana | logger=migrator t=2024-04-25T10:41:35.03150166Z level=info msg="Executing migration" id="Make epoch_end the same as epoch"
policy-apex-pdp | ssl.keystore.type = JKS
policy-pap | [2024-04-25T10:42:04.842+00:00|INFO|TomcatWebServer|main] Tomcat initialized with port(s): 6969 (http)
zookeeper | [2024-04-25 10:41:33,757] INFO Starting server (org.apache.zookeeper.server.ZooKeeperServerMain)
kafka | group.consumer.session.timeout.ms = 45000
mariadb |
policy-api | [2024-04-25T10:41:51.876+00:00|INFO|ServletWebServerApplicationContext|main] Root WebApplicationContext: initialization completed in 3058 ms
policy-db-migrator |
simulator | 2024-04-25 10:41:32,842 INFO Started SDNC simulator@c5ee75e{HTTP/1.1, (http/1.1)}{0.0.0.0:6668}
grafana | logger=migrator t=2024-04-25T10:41:35.031765487Z level=info msg="Migration successfully executed" id="Make epoch_end the same as epoch" duration=263.267µs
policy-apex-pdp | ssl.protocol = TLSv1.3
policy-pap | [2024-04-25T10:42:04.853+00:00|INFO|Http11NioProtocol|main] Initializing ProtocolHandler ["http-nio-6969"]
zookeeper | [2024-04-25 10:41:33,767] INFO ServerMetrics initialized with provider org.apache.zookeeper.metrics.impl.DefaultMetricsProvider@77eca502 (org.apache.zookeeper.server.ServerMetrics)
kafka | group.coordinator.new.enable = false
mariadb | Consider joining MariaDB's strong and vibrant community:
policy-api | [2024-04-25T10:41:52.334+00:00|INFO|LogHelper|main] HHH000204: Processing PersistenceUnitInfo [name: default]
policy-db-migrator |
simulator | 2024-04-25 10:41:32,842 INFO Started Server@70efb718{STARTING}[11.0.20,sto=0] @1556ms
grafana | logger=migrator t=2024-04-25T10:41:35.037582515Z level=info msg="Executing migration" id="Move region to single row"
policy-apex-pdp | ssl.provider = null
policy-pap | [2024-04-25T10:42:04.855+00:00|INFO|StandardService|main] Starting service [Tomcat]
zookeeper | [2024-04-25 10:41:33,770] INFO ACL digest algorithm is: SHA1 (org.apache.zookeeper.server.auth.DigestAuthenticationProvider)
kafka | group.coordinator.threads = 1
mariadb | https://mariadb.org/get-involved/
policy-api | [2024-04-25T10:41:52.402+00:00|INFO|Version|main] HHH000412: Hibernate ORM core version 6.3.2.Final
policy-db-migrator | > upgrade 0110-jpapdpstatistics_enginestats.sql
simulator | 2024-04-25 10:41:32,842 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-bf1ec20==org.glassfish.jersey.servlet.ServletContainer@636d8afb{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}}, swaggerId=null, toString()=JettyServletServer(name=SDNC simulator, host=0.0.0.0, port=6668, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@70efb718{STARTED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@b70da4c{/,null,AVAILABLE}, connector=SDNC simulator@c5ee75e{HTTP/1.1, (http/1.1)}{0.0.0.0:6668}, jettyThread=Thread[SDNC simulator-6668,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-bf1ec20==org.glassfish.jersey.servlet.ServletContainer@636d8afb{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}})]: pending time is 4921 ms.
grafana | logger=migrator t=2024-04-25T10:41:35.038300424Z level=info msg="Migration successfully executed" id="Move region to single row" duration=720.179µs
policy-apex-pdp | ssl.secure.random.implementation = null
policy-pap | [2024-04-25T10:42:04.856+00:00|INFO|StandardEngine|main] Starting Servlet engine: [Apache Tomcat/10.1.19]
zookeeper | [2024-04-25 10:41:33,770] INFO zookeeper.DigestAuthenticationProvider.enabled = true (org.apache.zookeeper.server.auth.DigestAuthenticationProvider)
kafka | group.initial.rebalance.delay.ms = 3000
mariadb |
policy-api | [2024-04-25T10:41:52.445+00:00|INFO|RegionFactoryInitiator|main] HHH000026: Second-level cache disabled
policy-db-migrator | --------------
simulator | 2024-04-25 10:41:32,843 INFO org.onap.policy.models.simulators starting SO simulator
grafana | logger=migrator t=2024-04-25T10:41:35.043875717Z level=info msg="Executing migration" id="Remove index org_id_epoch from annotation table"
policy-apex-pdp | ssl.trustmanager.algorithm = PKIX
policy-pap | [2024-04-25T10:42:04.968+00:00|INFO|[/policy/pap/v1]|main] Initializing Spring embedded WebApplicationContext
policy-pap | [2024-04-25T10:42:04.968+00:00|INFO|ServletWebServerApplicationContext|main] Root WebApplicationContext: initialization completed in 3451 ms
kafka | group.max.session.timeout.ms = 1800000
mariadb | 2024-04-25 10:41:36+00:00 [Note] [Entrypoint]: Database files initialized
policy-api | [2024-04-25T10:41:52.696+00:00|INFO|SpringPersistenceUnitInfo|main] No LoadTimeWeaver setup: ignoring JPA class transformer
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpstatistics_enginestats (AVERAGEEXECUTIONTIME DOUBLE DEFAULT NULL, ENGINEID VARCHAR(255) DEFAULT NULL, ENGINETIMESTAMP BIGINT DEFAULT NULL, ENGINEWORKERSTATE INT DEFAULT NULL, EVENTCOUNT BIGINT DEFAULT NULL, LASTENTERTIME BIGINT DEFAULT NULL, LASTEXECUTIONTIME BIGINT DEFAULT NULL, LASTSTART BIGINT DEFAULT NULL, UPTIME BIGINT DEFAULT NULL, timeStamp datetime DEFAULT NULL, name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL)
simulator | 2024-04-25 10:41:32,846 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-4e70a728==org.glassfish.jersey.servlet.ServletContainer@3238e994{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SO simulator, host=0.0.0.0, port=6669, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@b7838a9{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@696f0212{/,null,STOPPED}, connector=SO simulator@4e858e0a{HTTP/1.1, (http/1.1)}{0.0.0.0:6669}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-4e70a728==org.glassfish.jersey.servlet.ServletContainer@3238e994{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: WAITED-START
grafana | logger=migrator t=2024-04-25T10:41:35.04478537Z level=info msg="Migration successfully executed" id="Remove index org_id_epoch from annotation table" duration=911.893µs
policy-apex-pdp | ssl.truststore.certificates = null
zookeeper | [2024-04-25 10:41:33,772] INFO zookeeper.snapshot.trust.empty : false (org.apache.zookeeper.server.persistence.FileTxnSnapLog)
policy-pap | [2024-04-25T10:42:05.395+00:00|INFO|LogHelper|main] HHH000204: Processing PersistenceUnitInfo [name: default]
kafka | group.max.size = 2147483647
mariadb | 2024-04-25 10:41:36+00:00 [Note] [Entrypoint]: Starting temporary server
policy-api | [2024-04-25T10:41:52.726+00:00|INFO|HikariDataSource|main] HikariPool-1 - Starting...
policy-db-migrator | --------------
simulator | 2024-04-25 10:41:32,846 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-4e70a728==org.glassfish.jersey.servlet.ServletContainer@3238e994{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SO simulator, host=0.0.0.0, port=6669, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@b7838a9{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@696f0212{/,null,STOPPED}, connector=SO simulator@4e858e0a{HTTP/1.1, (http/1.1)}{0.0.0.0:6669}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-4e70a728==org.glassfish.jersey.servlet.ServletContainer@3238e994{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING
grafana | logger=migrator t=2024-04-25T10:41:35.049311145Z level=info msg="Executing migration" id="Remove index org_id_dashboard_id_panel_id_epoch from annotation table"
policy-apex-pdp | ssl.truststore.location = null
zookeeper | [2024-04-25 10:41:33,786] INFO (org.apache.zookeeper.server.ZooKeeperServer)
policy-pap | [2024-04-25T10:42:05.453+00:00|INFO|Version|main] HHH000412: Hibernate ORM core version 5.6.15.Final
kafka | group.min.session.timeout.ms = 6000
mariadb | 2024-04-25 10:41:36+00:00 [Note] [Entrypoint]: Waiting for server startup
policy-api | [2024-04-25T10:41:52.828+00:00|INFO|HikariPool|main] HikariPool-1 - Added connection org.mariadb.jdbc.Connection@1f11f64e
policy-db-migrator |
simulator | 2024-04-25 10:41:32,847 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-4e70a728==org.glassfish.jersey.servlet.ServletContainer@3238e994{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SO simulator, host=0.0.0.0, port=6669, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@b7838a9{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@696f0212{/,null,STOPPED}, connector=SO simulator@4e858e0a{HTTP/1.1, (http/1.1)}{0.0.0.0:6669}, jettyThread=Thread[SO simulator-6669,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-4e70a728==org.glassfish.jersey.servlet.ServletContainer@3238e994{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING
grafana | logger=migrator t=2024-04-25T10:41:35.050195788Z level=info msg="Migration successfully executed" id="Remove index org_id_dashboard_id_panel_id_epoch from annotation table" duration=884.643µs
policy-apex-pdp | ssl.truststore.password = null
zookeeper | [2024-04-25 10:41:33,786] INFO ______ _ (org.apache.zookeeper.server.ZooKeeperServer)
policy-pap | [2024-04-25T10:42:05.849+00:00|INFO|HikariDataSource|main] HikariPool-1 - Starting...
kafka | initial.broker.registration.timeout.ms = 60000
mariadb | 2024-04-25 10:41:36 0 [Note] mariadbd (server 10.10.2-MariaDB-1:10.10.2+maria~ubu2204) starting as process 94 ...
policy-api | [2024-04-25T10:41:52.830+00:00|INFO|HikariDataSource|main] HikariPool-1 - Start completed.
policy-db-migrator |
simulator | 2024-04-25 10:41:32,847 INFO jetty-11.0.20; built: 2024-01-29T21:04:22.394Z; git: 922f8dc188f7011e60d0361de585fd4ac4d63064; jvm 17.0.10+7-alpine-r0
grafana | logger=migrator t=2024-04-25T10:41:35.055024531Z level=info msg="Executing migration" id="Add index for org_id_dashboard_id_epoch_end_epoch on annotation table"
policy-apex-pdp | ssl.truststore.type = JKS
zookeeper | [2024-04-25 10:41:33,786] INFO |___ / | | (org.apache.zookeeper.server.ZooKeeperServer)
policy-pap | [2024-04-25T10:42:05.959+00:00|INFO|HikariPool|main] HikariPool-1 - Added connection org.mariadb.jdbc.Connection@4ee5b2d9
kafka | inter.broker.listener.name = PLAINTEXT
mariadb | 2024-04-25 10:41:36 0 [Note] InnoDB: Compressed tables use zlib 1.2.11
policy-api | [2024-04-25T10:41:54.849+00:00|INFO|JtaPlatformInitiator|main] HHH000489: No JTA platform available (set 'hibernate.transaction.jta.platform' to enable JTA platform integration)
policy-db-migrator | > upgrade 0120-jpapdpsubgroup_policies.sql
simulator | 2024-04-25 10:41:32,869 INFO Session workerName=node0
grafana | logger=migrator t=2024-04-25T10:41:35.055942054Z level=info msg="Migration successfully executed" id="Add index for org_id_dashboard_id_epoch_end_epoch on annotation table" duration=917.083µs
policy-apex-pdp | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
zookeeper | [2024-04-25 10:41:33,786] INFO / / ___ ___ | | __ ___ ___ _ __ ___ _ __ (org.apache.zookeeper.server.ZooKeeperServer)
policy-pap | [2024-04-25T10:42:05.961+00:00|INFO|HikariDataSource|main] HikariPool-1 - Start completed.
kafka | inter.broker.protocol.version = 3.6-IV2
mariadb | 2024-04-25 10:41:36 0 [Note] InnoDB: Number of transaction pools: 1
policy-api | [2024-04-25T10:41:54.853+00:00|INFO|LocalContainerEntityManagerFactoryBean|main] Initialized JPA EntityManagerFactory for persistence unit 'default'
policy-db-migrator | --------------
simulator | 2024-04-25 10:41:32,924 INFO Using GSON for REST calls
grafana | logger=migrator t=2024-04-25T10:41:35.060703496Z level=info msg="Executing migration" id="Add index for org_id_epoch_end_epoch on annotation table"
policy-apex-pdp |
zookeeper | [2024-04-25 10:41:33,786] INFO / / / _ \ / _ \ | |/ / / _ \ / _ \ | '_ \ / _ \ | '__| (org.apache.zookeeper.server.ZooKeeperServer)
policy-pap | [2024-04-25T10:42:05.996+00:00|INFO|Dialect|main] HHH000400: Using dialect: org.hibernate.dialect.MariaDB106Dialect
kafka | kafka.metrics.polling.interval.secs = 10
mariadb | 2024-04-25 10:41:36 0 [Note] InnoDB: Using crc32 + pclmulqdq instructions
policy-api | [2024-04-25T10:41:55.839+00:00|WARN|ApiDatabaseInitializer|main] Detected multi-versioned type: policytypes/onap.policies.monitoring.tcagen2.v2.yaml
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpsubgroup_policies (name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, parentLocalName VARCHAR(120) DEFAULT NULL, localName VARCHAR(120) DEFAULT NULL, parentKeyVersion VARCHAR(15) DEFAULT NULL, parentKeyName VARCHAR(120) DEFAULT NULL)
simulator | 2024-04-25 10:41:32,936 INFO Started o.e.j.s.ServletContextHandler@696f0212{/,null,AVAILABLE}
grafana | logger=migrator t=2024-04-25T10:41:35.062085731Z level=info msg="Migration successfully executed" id="Add index for org_id_epoch_end_epoch on annotation table" duration=1.382035ms
policy-apex-pdp | [2024-04-25T10:42:12.623+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1
zookeeper | [2024-04-25 10:41:33,786] INFO / /__ | (_) | | (_) | | < | __/ | __/ | |_) | | __/ | | (org.apache.zookeeper.server.ZooKeeperServer)
policy-pap | [2024-04-25T10:42:07.555+00:00|INFO|JtaPlatformInitiator|main] HHH000490: Using JtaPlatform implementation: [org.hibernate.engine.transaction.jta.platform.internal.NoJtaPlatform]
kafka | kafka.metrics.reporters = []
mariadb | 2024-04-25 10:41:36 0 [Note] mariadbd: O_TMPFILE is not supported on /tmp (disabling future attempts)
policy-api | [2024-04-25T10:41:56.672+00:00|INFO|ApiDatabaseInitializer|main] Multi-versioned Service Template [onap.policies.Monitoring, onap.policies.monitoring.tcagen2]
policy-db-migrator | --------------
simulator | 2024-04-25 10:41:32,937 INFO Started SO simulator@4e858e0a{HTTP/1.1, (http/1.1)}{0.0.0.0:6669}
grafana | logger=migrator t=2024-04-25T10:41:35.067187661Z level=info msg="Executing migration" id="Remove index org_id_epoch_epoch_end from annotation table"
policy-apex-pdp | [2024-04-25T10:42:12.623+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5
zookeeper | [2024-04-25 10:41:33,786] INFO /_____| \___/ \___/ |_|\_\ \___| \___| | .__/ \___| |_| (org.apache.zookeeper.server.ZooKeeperServer)
policy-pap | [2024-04-25T10:42:07.568+00:00|INFO|LocalContainerEntityManagerFactoryBean|main] Initialized JPA EntityManagerFactory for persistence unit 'default'
kafka | leader.imbalance.check.interval.seconds = 300
mariadb | 2024-04-25 10:41:36 0 [Warning] mariadbd: io_uring_queue_init() failed with ENOSYS: check seccomp filters, and the kernel version (newer than 5.1 required)
policy-api | [2024-04-25T10:41:57.734+00:00|WARN|JpaBaseConfiguration$JpaWebConfiguration|main] spring.jpa.open-in-view is enabled by default. Therefore, database queries may be performed during view rendering. Explicitly configure spring.jpa.open-in-view to disable this warning
policy-db-migrator |
simulator | 2024-04-25 10:41:32,937 INFO Started Server@b7838a9{STARTING}[11.0.20,sto=0] @1651ms
grafana | logger=migrator t=2024-04-25T10:41:35.068518485Z level=info msg="Migration successfully executed" id="Remove index org_id_epoch_epoch_end from annotation table" duration=1.330174ms
policy-apex-pdp | [2024-04-25T10:42:12.623+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1714041732623
zookeeper | [2024-04-25 10:41:33,786] INFO | | (org.apache.zookeeper.server.ZooKeeperServer)
policy-pap | [2024-04-25T10:42:08.140+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.repository.PdpGroupRepository
kafka | leader.imbalance.per.broker.percentage = 10
mariadb | 2024-04-25 10:41:36 0 [Warning] InnoDB: liburing disabled: falling back to innodb_use_native_aio=OFF
policy-api | [2024-04-25T10:41:57.925+00:00|INFO|DefaultSecurityFilterChain|main] Will secure any request with [org.springframework.security.web.session.DisableEncodeUrlFilter@42ca6733, org.springframework.security.web.context.request.async.WebAsyncManagerIntegrationFilter@452d71e5, org.springframework.security.web.context.SecurityContextHolderFilter@328f46f9, org.springframework.security.web.header.HeaderWriterFilter@13acb4, org.springframework.security.web.authentication.logout.LogoutFilter@3c6c7782, org.springframework.security.web.authentication.www.BasicAuthenticationFilter@24209520, org.springframework.security.web.savedrequest.RequestCacheAwareFilter@ec28717, org.springframework.security.web.servletapi.SecurityContextHolderAwareRequestFilter@79aab764, org.springframework.security.web.authentication.AnonymousAuthenticationFilter@34c9b1dd, org.springframework.security.web.access.ExceptionTranslationFilter@408530d2, org.springframework.security.web.access.intercept.AuthorizationFilter@fffdd40]
policy-db-migrator |
simulator | 2024-04-25 10:41:32,938 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-4e70a728==org.glassfish.jersey.servlet.ServletContainer@3238e994{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}}, swaggerId=null, toString()=JettyServletServer(name=SO simulator, host=0.0.0.0, port=6669, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@b7838a9{STARTED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@696f0212{/,null,AVAILABLE}, connector=SO simulator@4e858e0a{HTTP/1.1, (http/1.1)}{0.0.0.0:6669}, jettyThread=Thread[SO simulator-6669,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-4e70a728==org.glassfish.jersey.servlet.ServletContainer@3238e994{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}})]: pending time is 4909 ms.
grafana | logger=migrator t=2024-04-25T10:41:35.073883031Z level=info msg="Executing migration" id="Add index for alert_id on annotation table"
policy-apex-pdp | [2024-04-25T10:42:12.624+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-76090dad-2cb8-4045-86c4-b86ef46522aa-2, groupId=76090dad-2cb8-4045-86c4-b86ef46522aa] Subscribed to topic(s): policy-pdp-pap
zookeeper | [2024-04-25 10:41:33,786] INFO |_| (org.apache.zookeeper.server.ZooKeeperServer)
policy-pap | [2024-04-25T10:42:08.562+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.repository.PolicyStatusRepository
kafka | listener.security.protocol.map = PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
mariadb | 2024-04-25 10:41:36 0 [Note] InnoDB: Initializing buffer pool, total size = 128.000MiB, chunk size = 2.000MiB
policy-api | [2024-04-25T10:41:58.721+00:00|INFO|EndpointLinksResolver|main] Exposing 3 endpoint(s) beneath base path ''
policy-db-migrator | > upgrade 0130-jpapdpsubgroup_properties.sql
simulator | 2024-04-25 10:41:32,938 INFO org.onap.policy.models.simulators starting VFC simulator
grafana | logger=migrator t=2024-04-25T10:41:35.075320879Z level=info msg="Migration successfully executed" id="Add index for alert_id on annotation table" duration=1.437088ms
policy-apex-pdp | [2024-04-25T10:42:12.624+00:00|INFO|InlineBusTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=58be8b45-fb0e-4d94-8b20-46794cb5e8f5, alive=false, publisher=null]]: starting
zookeeper | [2024-04-25 10:41:33,786] INFO (org.apache.zookeeper.server.ZooKeeperServer)
policy-pap | [2024-04-25T10:42:08.679+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.repository.PolicyAuditRepository
kafka | listeners = PLAINTEXT://0.0.0.0:9092,PLAINTEXT_HOST://0.0.0.0:29092
mariadb | 2024-04-25 10:41:36 0 [Note] InnoDB: Completed initialization of buffer pool
policy-api | [2024-04-25T10:41:58.821+00:00|INFO|Http11NioProtocol|main] Starting ProtocolHandler ["http-nio-6969"]
policy-db-migrator | --------------
simulator | 2024-04-25 10:41:32,941 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-47a86fbb==org.glassfish.jersey.servlet.ServletContainer@f8e3b478{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=VFC simulator, host=0.0.0.0, port=6670, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@f478a81{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@19553973{/,null,STOPPED}, connector=VFC simulator@1c3146bc{HTTP/1.1, (http/1.1)}{0.0.0.0:6670}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-47a86fbb==org.glassfish.jersey.servlet.ServletContainer@f8e3b478{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: WAITED-START
grafana | logger=migrator t=2024-04-25T10:41:35.080907101Z level=info msg="Executing migration" id="Increase tags column to length 4096"
policy-apex-pdp | [2024-04-25T10:42:12.638+00:00|INFO|ProducerConfig|main] ProducerConfig values:
zookeeper | [2024-04-25 10:41:33,788] INFO Server environment:zookeeper.version=3.8.4-9316c2a7a97e1666d8f4593f34dd6fc36ecc436c, built on 2024-02-12 22:16 UTC (org.apache.zookeeper.server.ZooKeeperServer)
policy-pap | [2024-04-25T10:42:08.952+00:00|INFO|ConsumerConfig|main] ConsumerConfig values:
kafka | log.cleaner.backoff.ms = 15000
mariadb | 2024-04-25 10:41:36 0 [Note] InnoDB: File system buffers for log disabled (block size=512 bytes)
policy-api | [2024-04-25T10:41:58.865+00:00|INFO|TomcatWebServer|main] Tomcat started on port(s): 6969 (http) with context path '/policy/api/v1'
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpsubgroup_properties (parentLocalName VARCHAR(120) DEFAULT NULL, localName VARCHAR(120) DEFAULT NULL, parentKeyVersion VARCHAR(15) DEFAULT NULL, parentKeyName VARCHAR(120) DEFAULT NULL, PROPERTIES VARCHAR(255) DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) DEFAULT NULL)
simulator | 2024-04-25 10:41:32,941 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-47a86fbb==org.glassfish.jersey.servlet.ServletContainer@f8e3b478{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=VFC simulator, host=0.0.0.0, port=6670, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@f478a81{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@19553973{/,null,STOPPED}, connector=VFC simulator@1c3146bc{HTTP/1.1, (http/1.1)}{0.0.0.0:6670}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-47a86fbb==org.glassfish.jersey.servlet.ServletContainer@f8e3b478{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING
grafana | logger=migrator t=2024-04-25T10:41:35.080980403Z level=info msg="Migration successfully executed" id="Increase tags column to length 4096" duration=74.182µs
policy-apex-pdp | acks = -1
zookeeper | [2024-04-25 10:41:33,788] INFO Server environment:host.name=344146e5e708 (org.apache.zookeeper.server.ZooKeeperServer)
policy-pap | allow.auto.create.topics = true
kafka | log.cleaner.dedupe.buffer.size = 134217728
mariadb | 2024-04-25 10:41:36 0 [Note] InnoDB: 128 rollback segments are active.
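The spring.jpa.open-in-view warning from policy-api above asks for the property to be set explicitly. A minimal sketch of the corresponding application.properties entry (assuming the usual Spring Boot configuration mechanism; where the policy images actually keep this setting is not shown in the log):

```properties
# Silences the JpaBaseConfiguration$JpaWebConfiguration warning by making the
# choice explicit; 'false' closes the JPA session before view rendering instead
# of holding it open for the whole request.
spring.jpa.open-in-view=false
```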
policy-api | [2024-04-25T10:41:58.886+00:00|INFO|PolicyApiApplication|main] Started PolicyApiApplication in 10.889 seconds (process running for 11.479)
policy-db-migrator | --------------
simulator | 2024-04-25 10:41:32,942 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-47a86fbb==org.glassfish.jersey.servlet.ServletContainer@f8e3b478{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=VFC simulator, host=0.0.0.0, port=6670, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@f478a81{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@19553973{/,null,STOPPED}, connector=VFC simulator@1c3146bc{HTTP/1.1, (http/1.1)}{0.0.0.0:6670}, jettyThread=Thread[VFC simulator-6670,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-47a86fbb==org.glassfish.jersey.servlet.ServletContainer@f8e3b478{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING
grafana | logger=migrator t=2024-04-25T10:41:35.08634938Z level=info msg="Executing migration" id="create test_data table"
policy-apex-pdp | auto.include.jmx.reporter = true
zookeeper | [2024-04-25 10:41:33,788] INFO Server environment:java.version=11.0.22 (org.apache.zookeeper.server.ZooKeeperServer)
policy-pap | auto.commit.interval.ms = 5000
kafka | log.cleaner.delete.retention.ms = 86400000
mariadb | 2024-04-25 10:41:36 0 [Note] InnoDB: Setting file './ibtmp1' size to 12.000MiB. Physically writing the file full; Please wait ...
policy-api | [2024-04-25T10:42:17.963+00:00|INFO|[/policy/api/v1]|http-nio-6969-exec-2] Initializing Spring DispatcherServlet 'dispatcherServlet'
policy-db-migrator |
simulator | 2024-04-25 10:41:32,943 INFO jetty-11.0.20; built: 2024-01-29T21:04:22.394Z; git: 922f8dc188f7011e60d0361de585fd4ac4d63064; jvm 17.0.10+7-alpine-r0
grafana | logger=migrator t=2024-04-25T10:41:35.087655093Z level=info msg="Migration successfully executed" id="create test_data table" duration=1.305223ms
policy-apex-pdp | batch.size = 16384
zookeeper | [2024-04-25 10:41:33,788] INFO Server environment:java.vendor=Azul Systems, Inc. (org.apache.zookeeper.server.ZooKeeperServer)
policy-pap | auto.include.jmx.reporter = true
kafka | log.cleaner.enable = true
mariadb | 2024-04-25 10:41:36 0 [Note] InnoDB: File './ibtmp1' size is now 12.000MiB.
policy-api | [2024-04-25T10:42:17.963+00:00|INFO|DispatcherServlet|http-nio-6969-exec-2] Initializing Servlet 'dispatcherServlet'
policy-db-migrator |
simulator | 2024-04-25 10:41:32,944 INFO Session workerName=node0
grafana | logger=migrator t=2024-04-25T10:41:35.092418175Z level=info msg="Executing migration" id="create dashboard_version table v1"
policy-apex-pdp | bootstrap.servers = [kafka:9092]
zookeeper | [2024-04-25 10:41:33,788] INFO Server environment:java.home=/usr/lib/jvm/java-11-zulu-openjdk-ca (org.apache.zookeeper.server.ZooKeeperServer)
policy-pap | auto.offset.reset = latest
kafka | log.cleaner.io.buffer.load.factor = 0.9
mariadb | 2024-04-25 10:41:36 0 [Note] InnoDB: log sequence number 46590; transaction id 14
policy-api | [2024-04-25T10:42:17.965+00:00|INFO|DispatcherServlet|http-nio-6969-exec-2] Completed initialization in 2 ms
policy-db-migrator | > upgrade 0140-jpapdpsubgroup_supportedpolicytypes.sql
simulator | 2024-04-25 10:41:32,982 INFO Using GSON for REST calls
grafana | logger=migrator t=2024-04-25T10:41:35.093666597Z level=info msg="Migration successfully executed" id="create dashboard_version table v1" duration=1.249102ms
policy-apex-pdp | buffer.memory = 33554432
policy-pap | bootstrap.servers = [kafka:9092]
kafka | log.cleaner.io.buffer.size = 524288
mariadb | 2024-04-25 10:41:36 0 [Note] Plugin 'FEEDBACK' is disabled.
policy-api | [2024-04-25T10:42:18.284+00:00|INFO|OrderedServiceImpl|http-nio-6969-exec-2] ***** OrderedServiceImpl implementers:
policy-db-migrator | --------------
simulator | 2024-04-25 10:41:32,990 INFO Started o.e.j.s.ServletContextHandler@19553973{/,null,AVAILABLE}
grafana | logger=migrator t=2024-04-25T10:41:35.098884709Z level=info msg="Executing migration" id="add index dashboard_version.dashboard_id"
policy-apex-pdp | client.dns.lookup = use_all_dns_ips
policy-pap | check.crcs = true
kafka | log.cleaner.io.max.bytes.per.second = 1.7976931348623157E308
mariadb | 2024-04-25 10:41:36 0 [Warning] 'default-authentication-plugin' is MySQL 5.6 / 5.7 compatible option. To be implemented in later versions.
policy-api | []
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpsubgroup_supportedpolicytypes (name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, parentLocalName VARCHAR(120) DEFAULT NULL, localName VARCHAR(120) DEFAULT NULL, parentKeyVersion VARCHAR(15) DEFAULT NULL, parentKeyName VARCHAR(120) DEFAULT NULL)
simulator | 2024-04-25 10:41:32,991 INFO Started VFC simulator@1c3146bc{HTTP/1.1, (http/1.1)}{0.0.0.0:6670}
grafana | logger=migrator t=2024-04-25T10:41:35.099868325Z level=info msg="Migration successfully executed" id="add index dashboard_version.dashboard_id" duration=983.556µs
policy-apex-pdp | client.id = producer-1
policy-pap | client.dns.lookup = use_all_dns_ips
kafka | log.cleaner.max.compaction.lag.ms = 9223372036854775807
mariadb | 2024-04-25 10:41:36 0 [Warning] 'user' entry 'root@mariadb' ignored in --skip-name-resolve mode.
policy-db-migrator | --------------
simulator | 2024-04-25 10:41:32,992 INFO Started Server@f478a81{STARTING}[11.0.20,sto=0] @1705ms
policy-apex-pdp | compression.type = none
policy-pap | client.id = consumer-ae8023b6-4521-455f-bfa2-c4d8e9909c4a-1
grafana | logger=migrator t=2024-04-25T10:41:35.10556543Z level=info msg="Executing migration" id="add unique index dashboard_version.dashboard_id and dashboard_version.version"
kafka | log.cleaner.min.cleanable.ratio = 0.5
mariadb | 2024-04-25 10:41:36 0 [Warning] 'proxies_priv' entry '@% root@mariadb' ignored in --skip-name-resolve mode.
policy-db-migrator |
simulator | 2024-04-25 10:41:32,992 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-47a86fbb==org.glassfish.jersey.servlet.ServletContainer@f8e3b478{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}}, swaggerId=null, toString()=JettyServletServer(name=VFC simulator, host=0.0.0.0, port=6670, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@f478a81{STARTED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@19553973{/,null,AVAILABLE}, connector=VFC simulator@1c3146bc{HTTP/1.1, (http/1.1)}{0.0.0.0:6670}, jettyThread=Thread[VFC simulator-6670,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-47a86fbb==org.glassfish.jersey.servlet.ServletContainer@f8e3b478{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}})]: pending time is 4950 ms.
policy-apex-pdp | connections.max.idle.ms = 540000
policy-pap | client.rack =
grafana | logger=migrator t=2024-04-25T10:41:35.106840472Z level=info msg="Migration successfully executed" id="add unique index dashboard_version.dashboard_id and dashboard_version.version" duration=1.275022ms
kafka | log.cleaner.min.compaction.lag.ms = 0
mariadb | 2024-04-25 10:41:36 0 [Note] mariadbd: ready for connections.
policy-db-migrator |
simulator | 2024-04-25 10:41:32,993 INFO org.onap.policy.models.simulators started
policy-apex-pdp | delivery.timeout.ms = 120000
policy-pap | connections.max.idle.ms = 540000
grafana | logger=migrator t=2024-04-25T10:41:35.110985479Z level=info msg="Executing migration" id="Set dashboard version to 1 where 0"
kafka | log.cleaner.threads = 1
mariadb | Version: '10.10.2-MariaDB-1:10.10.2+maria~ubu2204' socket: '/run/mysqld/mysqld.sock' port: 0 mariadb.org binary distribution
policy-db-migrator | > upgrade 0150-jpatoscacapabilityassignment_attributes.sql
policy-apex-pdp | enable.idempotence = true
policy-pap | default.api.timeout.ms = 60000
grafana | logger=migrator t=2024-04-25T10:41:35.111395889Z level=info msg="Migration successfully executed" id="Set dashboard version to 1 where 0" duration=409.53µs
kafka | log.cleanup.policy = [delete]
mariadb | 2024-04-25 10:41:37+00:00 [Note] [Entrypoint]: Temporary server started.
policy-db-migrator | --------------
policy-apex-pdp | interceptor.classes = []
policy-pap | enable.auto.commit = true
grafana | logger=migrator t=2024-04-25T10:41:35.118289724Z level=info msg="Executing migration" id="save existing dashboard data in dashboard_version table v1"
kafka | log.dir = /tmp/kafka-logs
mariadb | 2024-04-25 10:41:39+00:00 [Note] [Entrypoint]: Creating user policy_user
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilityassignment_attributes (name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, ATTRIBUTES LONGTEXT DEFAULT NULL, ATTRIBUTES_KEY VARCHAR(255) DEFAULT NULL)
policy-apex-pdp | key.serializer = class org.apache.kafka.common.serialization.StringSerializer
policy-pap | exclude.internal.topics = true
grafana | logger=migrator t=2024-04-25T10:41:35.119117456Z level=info msg="Migration successfully executed" id="save existing dashboard data in dashboard_version table v1" duration=827.212µs
kafka | log.dirs = /var/lib/kafka/data
mariadb | 2024-04-25 10:41:39+00:00 [Note] [Entrypoint]: Securing system users (equivalent to running mysql_secure_installation)
policy-db-migrator | --------------
policy-apex-pdp | linger.ms = 0
policy-pap | fetch.max.bytes = 52428800
grafana | logger=migrator t=2024-04-25T10:41:35.123846386Z level=info msg="Executing migration" id="alter dashboard_version.data to mediumtext v1"
kafka | log.flush.interval.messages = 9223372036854775807
mariadb |
policy-db-migrator |
policy-apex-pdp | max.block.ms = 60000
policy-pap | fetch.max.wait.ms = 500
grafana | logger=migrator t=2024-04-25T10:41:35.12401391Z level=info msg="Migration successfully executed" id="alter dashboard_version.data to mediumtext v1" duration=167.434µs
kafka | log.flush.interval.ms = null
mariadb |
policy-db-migrator |
policy-apex-pdp | max.in.flight.requests.per.connection = 5
policy-pap | fetch.min.bytes = 1
grafana | logger=migrator t=2024-04-25T10:41:35.129312206Z level=info msg="Executing migration" id="create team table"
kafka | log.flush.offset.checkpoint.interval.ms = 60000
mariadb | 2024-04-25 10:41:39+00:00 [Warn] [Entrypoint]: /usr/local/bin/docker-entrypoint.sh: ignoring /docker-entrypoint-initdb.d/db.conf
policy-db-migrator | > upgrade 0160-jpatoscacapabilityassignment_metadata.sql
policy-apex-pdp | max.request.size = 1048576
policy-pap | group.id = ae8023b6-4521-455f-bfa2-c4d8e9909c4a
grafana | logger=migrator t=2024-04-25T10:41:35.130118847Z level=info msg="Migration successfully executed" id="create team table" duration=805.631µs
kafka | log.flush.scheduler.interval.ms = 9223372036854775807
mariadb | 2024-04-25 10:41:39+00:00 [Note] [Entrypoint]: /usr/local/bin/docker-entrypoint.sh: running /docker-entrypoint-initdb.d/db.sh
policy-db-migrator | --------------
policy-apex-pdp | metadata.max.age.ms = 300000
policy-pap | group.instance.id = null
grafana | logger=migrator t=2024-04-25T10:41:35.168408783Z level=info msg="Executing migration" id="add index team.org_id"
kafka | log.flush.start.offset.checkpoint.interval.ms = 60000
mariadb | #!/bin/bash -xv
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilityassignment_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL)
policy-apex-pdp | metadata.max.idle.ms = 300000
policy-pap | heartbeat.interval.ms = 3000
grafana | logger=migrator t=2024-04-25T10:41:35.171704757Z level=info msg="Migration successfully executed" id="add index team.org_id" duration=3.291614ms
zookeeper | [2024-04-25 10:41:33,788] INFO Server environment:java.class.path=/usr/bin/../share/java/kafka/scala-logging_2.13-3.9.4.jar:/usr/bin/../share/java/kafka/jersey-common-2.39.1.jar:/usr/bin/../share/java/kafka/swagger-annotations-2.2.8.jar:/usr/bin/../share/java/kafka/connect-json-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/kafka-storage-api-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/commons-validator-1.7.jar:/usr/bin/../share/java/kafka/javax.servlet-api-3.1.0.jar:/usr/bin/../share/java/kafka/aopalliance-repackaged-2.6.1.jar:/usr/bin/../share/java/kafka/netty-transport-4.1.100.Final.jar:/usr/bin/../share/java/kafka/rocksdbjni-7.9.2.jar:/usr/bin/../share/java/kafka/jetty-http-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/kafka-log4j-appender-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jackson-annotations-2.13.5.jar:/usr/bin/../share/java/kafka/commons-io-2.11.0.jar:/usr/bin/../share/java/kafka/kafka-clients-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/javax.activation-api-1.2.0.jar:/usr/bin/../share/java/kafka/jetty-security-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/commons-cli-1.4.jar:/usr/bin/../share/java/kafka/jetty-server-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/slf4j-reload4j-1.7.36.jar:/usr/bin/../share/java/kafka/scala-reflect-2.13.11.jar:/usr/bin/../share/java/kafka/kafka-server-common-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/kafka-group-coordinator-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/netty-buffer-4.1.100.Final.jar:/usr/bin/../share/java/kafka/connect-runtime-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/trogdor-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/connect-api-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jakarta.ws.rs-api-2.1.6.jar:/usr/bin/../share/java/kafka/jakarta.annotation-api-1.3.5.jar:/usr/bin/../share/java/kafka/kafka-streams-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/scala-java8-compat_2.13-1.0.2.jar:/usr/bin/../share/java/kafka/javax.ws.rs-api-2.1.1.jar:/usr/bin/../share/java/kafka/zookeeper-jute-3.8.4.jar:/usr/bin/../share/java/kafka/netty-resolver-4.1.100.Final.jar:/usr/bin/../share/java/kafka/hk2-api-2.6.1.jar:/usr/bin/../share/java/kafka/kafka-storage-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/zstd-jni-1.5.5-1.jar:/usr/bin/../share/java/kafka/jackson-dataformat-csv-2.13.5.jar:/usr/bin/../share/java/kafka/kafka.jar:/usr/bin/../share/java/kafka/scala-library-2.13.11.jar:/usr/bin/../share/java/kafka/jakarta.inject-2.6.1.jar:/usr/bin/../share/java/kafka/jakarta.xml.bind-api-2.3.3.jar:/usr/bin/../share/java/kafka/jetty-continuation-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/connect-transforms-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jose4j-0.9.4.jar:/usr/bin/../share/java/kafka/hk2-locator-2.6.1.jar:/usr/bin/../share/java/kafka/reflections-0.10.2.jar:/usr/bin/../share/java/kafka/slf4j-api-1.7.36.jar:/usr/bin/../share/java/kafka/paranamer-2.8.jar:/usr/bin/../share/java/kafka/commons-beanutils-1.9.4.jar:/usr/bin/../share/java/kafka/jaxb-api-2.3.1.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-2.39.1.jar:/usr/bin/../share/java/kafka/hk2-utils-2.6.1.jar:/usr/bin/../share/java/kafka/jackson-module-scala_2.13-2.13.5.jar:/usr/bin/../share/java/kafka/reload4j-1.2.25.jar:/usr/bin/../share/java/kafka/connect-mirror-client-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jetty-servlet-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/jackson-core-2.13.5.jar:/usr/bin/../share/java/kafka/jersey-hk2-2.39.1.jar:/usr/bin/../share/java/kafka/jackson-databind-2.13.5.jar:/usr/bin/../share/java/kafka/jetty-servlets-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/jersey-client-2.39.1.jar:/usr/bin/../share/java/kafka/osgi-resource-locator-1.0.3.jar:/usr/bin/../share/java/kafka/commons-digester-2.1.jar:/usr/bin/../share/java/kafka/netty-transport-native-epoll-4.1.100.Final.jar:/usr/bin/../share/java/kafka/argparse4j-0.7.0.jar:/usr/bin/../share/java/kafka/connect-mirror-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jackson-datatype-jdk8-2.13.5.jar:/usr/bin/../share/java/kafka/audience-annotations-0.12.0.jar:/usr/bin/../share/java/kafka/jackson-module-jaxb-annotations-2.13.5.jar:/usr/bin/../share/java/kafka/kafka-raft-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/javax.annotation-api-1.3.2.jar:/usr/bin/../share/java/kafka/maven-artifact-3.8.8.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-json-provider-2.13.5.jar:/usr/bin/../share/java/kafka/jakarta.validation-api-2.0.2.jar:/usr/bin/../share/java/kafka/zookeeper-3.8.4.jar:/usr/bin/../share/java/kafka/jersey-server-2.39.1.jar:/usr/bin/../share/java/kafka/commons-lang3-3.8.1.jar:/usr/bin/../share/java/kafka/jopt-simple-5.0.4.jar:/usr/bin/../share/java/kafka/error_prone_annotations-2.10.0.jar:/usr/bin/../share/java/kafka/lz4-java-1.8.0.jar:/usr/bin/../share/java/kafka/jetty-util-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/kafka-tools-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jakarta.activation-api-1.2.2.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-core-2.39.1.jar:/usr/bin/../share/java/kafka/checker-qual-3.19.0.jar:/usr/bin/../share/java/kafka/kafka-metadata-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/pcollections-4.0.1.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-base-2.13.5.jar:/usr/bin/../share/java/kafka/commons-logging-1.2.jar:/usr/bin/../share/java/kafka/jsr305-3.0.2.jar:/usr/bin/../share/java/kafka/jetty-io-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/netty-codec-4.1.100.Final.jar:/usr/bin/../share/java/kafka/netty-transport-native-unix-common-4.1.100.Final.jar:/usr/bin/../share/java/kafka/scala-collection-compat_2.13-2.10.0.jar:/usr/bin/../share/java/kafka/metrics-core-2.2.0.jar:/usr/bin/../share/java/kafka/kafka-streams-test-utils-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/kafka-streams-examples-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/netty-handler-4.1.100.Final.jar:/usr/bin/../share/java/kafka/commons-collections-3.2.2.jar:/usr/bin/../share/java/kafka/javassist-3.29.2-GA.jar:/usr/bin/../share/java/kafka/caffeine-2.9.3.jar:/usr/bin/../share/java/kafka/plexus-utils-3.3.1.jar:/usr/bin/../share/java/kafka/kafka-tools-api-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/activation-1.1.1.jar:/usr/bin/../share/java/kafka/netty-common-4.1.100.Final.jar:/usr/bin/../share/java/kafka/kafka-streams-scala_2.13-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/metrics-core-4.1.12.1.jar:/usr/bin/../share/java/kafka/jline-3.25.1.jar:/usr/bin/../share/java/kafka/jetty-client-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/kafka_2.13-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/connect-basic-auth-extension-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jetty-util-ajax-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/snappy-java-1.1.10.5.jar:/usr/bin/../share/java/kafka/kafka-shell-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/netty-transport-classes-epoll-4.1.100.Final.jar:/usr/bin/../share/java/confluent-telemetry/* (org.apache.zookeeper.server.ZooKeeperServer)
kafka | log.index.interval.bytes = 4096
mariadb | # Copyright 2019,2021 AT&T Intellectual Property. All rights reserved
policy-db-migrator | --------------
policy-apex-pdp | metric.reporters = []
policy-pap | interceptor.classes = []
grafana | logger=migrator t=2024-04-25T10:41:35.178553142Z level=info msg="Executing migration" id="add unique index team_org_id_name"
zookeeper | [2024-04-25 10:41:33,788] INFO Server environment:java.library.path=/usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.server.ZooKeeperServer)
kafka | log.index.size.max.bytes = 10485760
mariadb | # Modifications Copyright (c) 2022 Nordix Foundation.
policy-db-migrator |
policy-apex-pdp | metrics.num.samples = 2
policy-pap | internal.leave.group.on.close = true
grafana | logger=migrator t=2024-04-25T10:41:35.179804414Z level=info msg="Migration successfully executed" id="add unique index team_org_id_name" duration=1.251302ms
kafka | log.local.retention.bytes = -2
kafka | log.local.retention.ms = -2
mariadb | #
policy-db-migrator |
policy-apex-pdp | metrics.recording.level = INFO
policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false
grafana | logger=migrator t=2024-04-25T10:41:35.185654313Z level=info msg="Executing migration" id="Add column uid in team"
kafka | log.message.downconversion.enable = true
kafka | log.message.format.version = 3.0-IV1
mariadb | # Licensed under the Apache License, Version 2.0 (the "License");
policy-db-migrator | > upgrade 0170-jpatoscacapabilityassignment_occurrences.sql
policy-apex-pdp | metrics.sample.window.ms = 30000
policy-pap | isolation.level = read_uncommitted
grafana | logger=migrator t=2024-04-25T10:41:35.190749733Z level=info msg="Migration successfully executed" id="Add column uid in team" duration=5.09412ms
kafka | log.message.timestamp.after.max.ms = 9223372036854775807
zookeeper | [2024-04-25 10:41:33,788] INFO Server environment:java.io.tmpdir=/tmp (org.apache.zookeeper.server.ZooKeeperServer)
mariadb | # you may not use this file except in compliance with the License.
policy-db-migrator | --------------
policy-apex-pdp | partitioner.adaptive.partitioning.enable = true
policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
grafana | logger=migrator t=2024-04-25T10:41:35.196086349Z level=info msg="Executing migration" id="Update uid column values in team"
kafka | log.message.timestamp.before.max.ms = 9223372036854775807
zookeeper | [2024-04-25 10:41:33,788] INFO Server environment:java.compiler= (org.apache.zookeeper.server.ZooKeeperServer)
mariadb | # You may obtain a copy of the License at
mariadb | #
policy-apex-pdp | partitioner.availability.timeout.ms = 0
policy-pap | max.partition.fetch.bytes = 1048576
grafana | logger=migrator t=2024-04-25T10:41:35.196278284Z level=info msg="Migration successfully executed" id="Update uid column values in team" duration=191.5µs
zookeeper | [2024-04-25 10:41:33,788] INFO Server environment:os.name=Linux (org.apache.zookeeper.server.ZooKeeperServer)
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilityassignment_occurrences (name VARCHAR(120) NULL, version VARCHAR(20) NULL, OCCURRENCES INT DEFAULT NULL)
mariadb | # http://www.apache.org/licenses/LICENSE-2.0
kafka | log.message.timestamp.difference.max.ms = 9223372036854775807
policy-apex-pdp | partitioner.class = null
policy-pap | max.poll.interval.ms = 300000
grafana | logger=migrator t=2024-04-25T10:41:35.202290977Z level=info msg="Executing migration" id="Add unique index team_org_id_uid"
zookeeper | [2024-04-25 10:41:33,788] INFO Server environment:os.arch=amd64 (org.apache.zookeeper.server.ZooKeeperServer)
policy-db-migrator | --------------
mariadb | #
kafka | log.message.timestamp.type = CreateTime
policy-apex-pdp | partitioner.ignore.keys = false
policy-pap | max.poll.records = 500
grafana | logger=migrator t=2024-04-25T10:41:35.203417327Z level=info msg="Migration successfully executed" id="Add unique index team_org_id_uid" duration=1.127729ms
zookeeper | [2024-04-25 10:41:33,788] INFO Server environment:os.version=4.15.0-192-generic (org.apache.zookeeper.server.ZooKeeperServer)
policy-db-migrator |
mariadb | # Unless required by applicable law or agreed to in writing, software
kafka | log.preallocate = false
policy-apex-pdp | receive.buffer.bytes = 32768
policy-pap | metadata.max.age.ms = 300000
grafana | logger=migrator t=2024-04-25T10:41:35.208530607Z level=info msg="Executing migration" id="create team member table"
zookeeper | [2024-04-25 10:41:33,788] INFO Server environment:user.name=appuser (org.apache.zookeeper.server.ZooKeeperServer)
policy-db-migrator |
mariadb | # distributed under the License is distributed on an "AS IS" BASIS,
kafka | log.retention.bytes = -1
policy-apex-pdp | reconnect.backoff.max.ms = 1000
policy-pap | metric.reporters = []
grafana | logger=migrator t=2024-04-25T10:41:35.209932852Z level=info msg="Migration successfully executed" id="create team member table" duration=1.401515ms
zookeeper | [2024-04-25 10:41:33,789] INFO Server environment:user.home=/home/appuser (org.apache.zookeeper.server.ZooKeeperServer)
policy-db-migrator | > upgrade 0180-jpatoscacapabilityassignment_properties.sql
mariadb | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
kafka | log.retention.check.interval.ms = 300000
policy-apex-pdp | reconnect.backoff.ms = 50
policy-pap | metrics.num.samples = 2
grafana | logger=migrator t=2024-04-25T10:41:35.214892379Z level=info msg="Executing migration" id="add index team_member.org_id"
zookeeper | [2024-04-25 10:41:33,789] INFO Server environment:user.dir=/home/appuser (org.apache.zookeeper.server.ZooKeeperServer)
policy-db-migrator | --------------
mariadb | # See the License for the specific language governing permissions and
kafka | log.retention.hours = 168
policy-apex-pdp | request.timeout.ms = 30000
policy-pap | metrics.recording.level = INFO
grafana | logger=migrator t=2024-04-25T10:41:35.216028278Z level=info msg="Migration successfully executed" id="add index team_member.org_id" duration=1.135789ms
zookeeper | [2024-04-25 10:41:33,789] INFO Server environment:os.memory.free=491MB (org.apache.zookeeper.server.ZooKeeperServer)
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilityassignment_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGTEXT NULL, PROPERTIES_KEY VARCHAR(255) NULL)
mariadb | # limitations under the License.
kafka | log.retention.minutes = null
policy-apex-pdp | retries = 2147483647
policy-pap | metrics.sample.window.ms = 30000
grafana | logger=migrator t=2024-04-25T10:41:35.221876897Z level=info msg="Executing migration" id="add unique index team_member_org_id_team_id_user_id"
zookeeper | [2024-04-25 10:41:33,789] INFO Server environment:os.memory.max=512MB (org.apache.zookeeper.server.ZooKeeperServer)
policy-db-migrator | --------------
mariadb |
kafka | log.retention.ms = null
policy-apex-pdp | retry.backoff.ms = 100
policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor]
grafana | logger=migrator t=2024-04-25T10:41:35.22314599Z level=info msg="Migration successfully executed" id="add unique index team_member_org_id_team_id_user_id" duration=1.270923ms
zookeeper | [2024-04-25 10:41:33,789] INFO Server environment:os.memory.total=512MB (org.apache.zookeeper.server.ZooKeeperServer)
policy-db-migrator |
mariadb | for db in migration pooling policyadmin operationshistory clampacm policyclamp
mariadb | do
grafana | logger=migrator t=2024-04-25T10:41:35.230546708Z level=info msg="Executing migration" id="add index team_member.team_id"
zookeeper | [2024-04-25 10:41:33,789] INFO zookeeper.enableEagerACLCheck = false (org.apache.zookeeper.server.ZooKeeperServer)
policy-db-migrator |
kafka | log.roll.hours = 168
policy-apex-pdp | sasl.client.callback.handler.class = null
mariadb | mysql -uroot -p"${MYSQL_ROOT_PASSWORD}" --execute "CREATE DATABASE IF NOT EXISTS ${db};"
mariadb | mysql -uroot -p"${MYSQL_ROOT_PASSWORD}" --execute "GRANT ALL PRIVILEGES ON \`${db}\`.* TO '${MYSQL_USER}'@'%' ;"
grafana | logger=migrator t=2024-04-25T10:41:35.231602425Z level=info msg="Migration successfully executed" id="add index team_member.team_id" duration=1.055887ms
zookeeper | [2024-04-25 10:41:33,789] INFO zookeeper.digest.enabled = true (org.apache.zookeeper.server.ZooKeeperServer)
policy-db-migrator | > upgrade 0190-jpatoscacapabilitytype_metadata.sql
kafka | log.roll.jitter.hours = 0
policy-apex-pdp | sasl.jaas.config = null
mariadb | done
mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp
grafana | logger=migrator t=2024-04-25T10:41:35.238113301Z level=info msg="Executing migration" id="Add column email to team table"
zookeeper | [2024-04-25 10:41:33,789] INFO zookeeper.closeSessionTxn.enabled = true (org.apache.zookeeper.server.ZooKeeperServer)
policy-db-migrator | --------------
kafka | log.roll.jitter.ms = null
policy-apex-pdp | sasl.kerberos.kinit.cmd = /usr/bin/kinit
mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS migration;'
mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `migration`.* TO '\''policy_user'\''@'\''%'\'' ;'
grafana | logger=migrator t=2024-04-25T10:41:35.242655647Z level=info msg="Migration successfully executed" id="Add column email to team table" duration=4.542166ms
zookeeper | [2024-04-25 10:41:33,790] INFO zookeeper.flushDelay = 0 ms (org.apache.zookeeper.server.ZooKeeperServer)
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilitytype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL)
kafka | log.roll.ms = null
policy-apex-pdp | sasl.kerberos.min.time.before.relogin = 60000
mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp
mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS pooling;'
grafana | logger=migrator t=2024-04-25T10:41:35.248573198Z level=info msg="Executing migration" id="Add column external to team_member table"
zookeeper | [2024-04-25 10:41:33,790] INFO zookeeper.maxWriteQueuePollTime = 0 ms (org.apache.zookeeper.server.ZooKeeperServer)
policy-db-migrator | --------------
kafka | log.segment.bytes = 1073741824
policy-apex-pdp | sasl.kerberos.service.name = null
mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `pooling`.* TO '\''policy_user'\''@'\''%'\'' ;'
mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp
grafana | logger=migrator t=2024-04-25T10:41:35.253303918Z level=info msg="Migration successfully executed" id="Add column external to team_member table" duration=4.7301ms
zookeeper | [2024-04-25 10:41:33,790] INFO zookeeper.maxBatchSize=1000 (org.apache.zookeeper.server.ZooKeeperServer)
policy-db-migrator |
kafka | log.segment.delete.delay.ms = 60000
policy-apex-pdp | sasl.kerberos.ticket.renew.jitter = 0.05
mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS policyadmin;'
mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `policyadmin`.* TO '\''policy_user'\''@'\''%'\'' ;'
grafana | logger=migrator t=2024-04-25T10:41:35.260234136Z level=info msg="Executing migration" id="Add column permission to team_member table"
zookeeper | [2024-04-25 10:41:33,790] INFO zookeeper.intBufferStartingSizeBytes = 1024 (org.apache.zookeeper.server.ZooKeeperServer)
policy-db-migrator |
kafka | max.connection.creation.rate = 2147483647
policy-apex-pdp | sasl.kerberos.ticket.renew.window.factor = 0.8
mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp
mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS operationshistory;'
grafana | logger=migrator t=2024-04-25T10:41:35.264770022Z level=info msg="Migration successfully executed" id="Add column permission to team_member table" duration=4.533886ms
zookeeper | [2024-04-25 10:41:33,791] INFO Weighed connection throttling is disabled (org.apache.zookeeper.server.BlueThrottle)
policy-db-migrator | > upgrade 0200-jpatoscacapabilitytype_properties.sql
kafka | max.connections = 2147483647
policy-apex-pdp | sasl.login.callback.handler.class = null
policy-pap | receive.buffer.bytes = 65536
mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `operationshistory`.* TO '\''policy_user'\''@'\''%'\'' ;'
grafana | logger=migrator t=2024-04-25T10:41:35.268756173Z level=info msg="Executing migration" id="create dashboard acl table"
zookeeper | [2024-04-25 10:41:33,792] INFO minSessionTimeout set to 4000 ms (org.apache.zookeeper.server.ZooKeeperServer)
policy-db-migrator | --------------
kafka | max.connections.per.ip = 2147483647
policy-apex-pdp | sasl.login.class = null
policy-pap | reconnect.backoff.max.ms = 1000
mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp
grafana | logger=migrator t=2024-04-25T10:41:35.269780289Z level=info msg="Migration successfully executed" id="create dashboard acl table" duration=1.022826ms
zookeeper | [2024-04-25 10:41:33,792] INFO maxSessionTimeout set to 40000 ms (org.apache.zookeeper.server.ZooKeeperServer)
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilitytype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL)
kafka | max.connections.per.ip.overrides =
policy-apex-pdp | sasl.login.connect.timeout.ms = null
policy-pap | reconnect.backoff.ms = 50
mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS clampacm;'
grafana | logger=migrator t=2024-04-25T10:41:35.277008254Z level=info msg="Executing migration" id="add index dashboard_acl_dashboard_id"
zookeeper | [2024-04-25 10:41:33,793] INFO getData response cache size is initialized with value 400. (org.apache.zookeeper.server.ResponseCache)
policy-db-migrator | --------------
kafka | max.incremental.fetch.session.cache.slots = 1000
policy-apex-pdp | sasl.login.read.timeout.ms = null
policy-pap | request.timeout.ms = 30000
mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `clampacm`.* TO '\''policy_user'\''@'\''%'\'' ;'
grafana | logger=migrator t=2024-04-25T10:41:35.27844125Z level=info msg="Migration successfully executed" id="add index dashboard_acl_dashboard_id" duration=1.433097ms
zookeeper | [2024-04-25 10:41:33,793] INFO getChildren response cache size is initialized with value 400. (org.apache.zookeeper.server.ResponseCache)
policy-db-migrator |
kafka | message.max.bytes = 1048588
policy-apex-pdp | sasl.login.refresh.buffer.seconds = 300
policy-pap | retry.backoff.ms = 100
mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp
grafana | logger=migrator t=2024-04-25T10:41:35.284209968Z level=info msg="Executing migration" id="add unique index dashboard_acl_dashboard_id_user_id"
zookeeper | [2024-04-25 10:41:33,794] INFO zookeeper.pathStats.slotCapacity = 60 (org.apache.zookeeper.server.util.RequestPathMetricsCollector)
policy-db-migrator |
kafka | metadata.log.dir = null
policy-apex-pdp | sasl.login.refresh.min.period.seconds = 60
policy-pap | sasl.client.callback.handler.class = null
mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS policyclamp;'
grafana | logger=migrator t=2024-04-25T10:41:35.285809298Z level=info msg="Migration successfully executed" id="add unique index dashboard_acl_dashboard_id_user_id" duration=1.59966ms
zookeeper | [2024-04-25 10:41:33,794] INFO zookeeper.pathStats.slotDuration = 15 (org.apache.zookeeper.server.util.RequestPathMetricsCollector)
policy-db-migrator | > upgrade 0210-jpatoscadatatype_constraints.sql
kafka | metadata.log.max.record.bytes.between.snapshots = 20971520
policy-apex-pdp | sasl.login.refresh.window.factor = 0.8
policy-pap | sasl.jaas.config = null
mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `policyclamp`.* TO '\''policy_user'\''@'\''%'\'' ;'
grafana | logger=migrator t=2024-04-25T10:41:35.290050127Z level=info msg="Executing migration" id="add unique index dashboard_acl_dashboard_id_team_id"
zookeeper | [2024-04-25 10:41:33,794] INFO zookeeper.pathStats.maxDepth = 6 (org.apache.zookeeper.server.util.RequestPathMetricsCollector)
policy-db-migrator | --------------
kafka | metadata.log.max.snapshot.interval.ms = 3600000
policy-apex-pdp | sasl.login.refresh.window.jitter = 0.05
policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit
mariadb |
grafana | logger=migrator t=2024-04-25T10:41:35.291560535Z level=info msg="Migration successfully executed" id="add unique index dashboard_acl_dashboard_id_team_id" duration=1.510437ms
zookeeper | [2024-04-25 10:41:33,794] INFO zookeeper.pathStats.initialDelay = 5 (org.apache.zookeeper.server.util.RequestPathMetricsCollector)
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscadatatype_constraints (name VARCHAR(120) NULL, version VARCHAR(20) NULL, CONSTRAINTS VARCHAR(255) NULL)
kafka | metadata.log.segment.bytes = 1073741824
policy-apex-pdp | sasl.login.retry.backoff.max.ms = 10000
policy-pap | sasl.kerberos.min.time.before.relogin = 60000
mariadb | mysql -uroot -p"${MYSQL_ROOT_PASSWORD}" --execute "FLUSH PRIVILEGES;"
grafana | logger=migrator t=2024-04-25T10:41:35.299410525Z level=info msg="Executing migration" id="add index dashboard_acl_user_id"
zookeeper | [2024-04-25 10:41:33,794] INFO zookeeper.pathStats.delay = 5 (org.apache.zookeeper.server.util.RequestPathMetricsCollector)
policy-db-migrator | --------------
kafka | metadata.log.segment.min.bytes = 8388608
policy-apex-pdp | sasl.login.retry.backoff.ms = 100
policy-pap | sasl.kerberos.service.name = null
mariadb | + mysql -uroot -psecret --execute 'FLUSH PRIVILEGES;'
grafana | logger=migrator t=2024-04-25T10:41:35.300350539Z level=info msg="Migration successfully executed" id="add index dashboard_acl_user_id" duration=940.224µs
zookeeper | [2024-04-25 10:41:33,794] INFO zookeeper.pathStats.enabled = false (org.apache.zookeeper.server.util.RequestPathMetricsCollector)
policy-db-migrator |
kafka | metadata.log.segment.ms = 604800000
policy-apex-pdp | sasl.mechanism = GSSAPI
policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05
mariadb | mysql -u"${MYSQL_USER}" -p"${MYSQL_PASSWORD}" -f policyclamp < /tmp/policy-clamp-create-tables.sql
grafana | logger=migrator t=2024-04-25T10:41:35.305402558Z level=info msg="Executing migration" id="add index dashboard_acl_team_id"
zookeeper | [2024-04-25 10:41:33,797] INFO The max bytes for all large requests are set to 104857600 (org.apache.zookeeper.server.ZooKeeperServer)
policy-db-migrator |
kafka | metadata.max.idle.interval.ms = 500
policy-apex-pdp | sasl.oauthbearer.clock.skew.seconds = 30
policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8
mariadb | + mysql -upolicy_user -ppolicy_user -f policyclamp
grafana | logger=migrator t=2024-04-25T10:41:35.306884096Z level=info msg="Migration successfully executed" id="add index dashboard_acl_team_id" duration=1.482068ms
zookeeper | [2024-04-25 10:41:33,797] INFO The large request threshold is set to -1 (org.apache.zookeeper.server.ZooKeeperServer)
policy-db-migrator | > upgrade 0220-jpatoscadatatype_metadata.sql
kafka | metadata.max.retention.bytes = 104857600
policy-apex-pdp | sasl.oauthbearer.expected.audience = null
policy-pap | sasl.login.callback.handler.class = null
mariadb |
grafana | logger=migrator t=2024-04-25T10:41:35.313702259Z level=info msg="Executing migration" id="add index dashboard_acl_org_id_role"
zookeeper | [2024-04-25 10:41:33,797] INFO zookeeper.enforce.auth.enabled = false (org.apache.zookeeper.server.AuthenticationHelper)
policy-db-migrator | --------------
kafka | metadata.max.retention.ms = 604800000
policy-apex-pdp | sasl.oauthbearer.expected.issuer = null
mariadb | 2024-04-25 10:41:39+00:00 [Note] [Entrypoint]: Stopping temporary server
zookeeper | [2024-04-25 10:41:33,797] INFO zookeeper.enforce.auth.schemes = [] (org.apache.zookeeper.server.AuthenticationHelper)
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscadatatype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL)
kafka | metric.reporters = []
policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
grafana | logger=migrator t=2024-04-25T10:41:35.315172627Z level=info msg="Migration successfully executed" id="add index dashboard_acl_org_id_role" duration=1.470488ms
policy-pap | sasl.login.class = null
mariadb | 2024-04-25 10:41:39 0 [Note] mariadbd (initiated by: unknown): Normal shutdown
zookeeper | [2024-04-25 10:41:33,797] INFO Created server with tickTime 2000 ms minSessionTimeout 4000 ms maxSessionTimeout 40000 ms clientPortListenBacklog -1 datadir /var/lib/zookeeper/log/version-2 snapdir /var/lib/zookeeper/data/version-2 (org.apache.zookeeper.server.ZooKeeperServer)
policy-db-migrator | --------------
kafka | metrics.num.samples = 2
policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
grafana | logger=migrator t=2024-04-25T10:41:35.321993641Z level=info msg="Executing migration" id="add index dashboard_permission"
policy-pap | sasl.login.connect.timeout.ms = null
policy-pap | sasl.login.read.timeout.ms = null
zookeeper | [2024-04-25 10:41:33,817] INFO Logging initialized @532ms to org.eclipse.jetty.util.log.Slf4jLog (org.eclipse.jetty.util.log)
policy-db-migrator |
kafka | metrics.recording.level = INFO
policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
grafana | logger=migrator t=2024-04-25T10:41:35.32348376Z level=info msg="Migration successfully executed" id="add index dashboard_permission" duration=1.489898ms
policy-pap | sasl.login.refresh.buffer.seconds = 300
policy-pap | sasl.login.refresh.min.period.seconds = 60
zookeeper | [2024-04-25 10:41:33,901] WARN
o.e.j.s.ServletContextHandler@6d5620ce{/,null,STOPPED} contextPath ends with /* (org.eclipse.jetty.server.handler.ContextHandler) policy-db-migrator | kafka | metrics.sample.window.ms = 30000 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.url = null grafana | logger=migrator t=2024-04-25T10:41:35.32822084Z level=info msg="Executing migration" id="save default acl rules in dashboard_acl table" mariadb | 2024-04-25 10:41:39 0 [Note] InnoDB: FTS optimize thread exiting. policy-pap | sasl.login.refresh.window.factor = 0.8 zookeeper | [2024-04-25 10:41:33,901] WARN Empty contextPath (org.eclipse.jetty.server.handler.ContextHandler) policy-db-migrator | > upgrade 0230-jpatoscadatatype_properties.sql kafka | min.insync.replicas = 1 policy-apex-pdp | sasl.oauthbearer.scope.claim.name = scope grafana | logger=migrator t=2024-04-25T10:41:35.329044751Z level=info msg="Migration successfully executed" id="save default acl rules in dashboard_acl table" duration=823.091µs mariadb | 2024-04-25 10:41:39 0 [Note] InnoDB: Starting shutdown... 
policy-pap | sasl.login.refresh.window.jitter = 0.05 zookeeper | [2024-04-25 10:41:33,920] INFO jetty-9.4.54.v20240208; built: 2024-02-08T19:42:39.027Z; git: cef3fbd6d736a21e7d541a5db490381d95a2047d; jvm 11.0.22+7-LTS (org.eclipse.jetty.server.Server) policy-db-migrator | -------------- kafka | node.id = 1 policy-apex-pdp | sasl.oauthbearer.sub.claim.name = sub mariadb | 2024-04-25 10:41:39 0 [Note] InnoDB: Dumping buffer pool(s) to /var/lib/mysql/ib_buffer_pool policy-pap | sasl.login.retry.backoff.max.ms = 10000 grafana | logger=migrator t=2024-04-25T10:41:35.33411174Z level=info msg="Executing migration" id="delete acl rules for deleted dashboards and folders" zookeeper | [2024-04-25 10:41:33,949] INFO DefaultSessionIdManager workerName=node0 (org.eclipse.jetty.server.session) policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscadatatype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL) kafka | num.io.threads = 8 policy-apex-pdp | sasl.oauthbearer.token.endpoint.url = null mariadb | 2024-04-25 10:41:39 0 [Note] InnoDB: Buffer pool(s) dump completed at 240425 10:41:39 policy-pap | sasl.login.retry.backoff.ms = 100 grafana | logger=migrator t=2024-04-25T10:41:35.334396567Z level=info msg="Migration successfully executed" id="delete acl rules for deleted dashboards and folders" duration=284.177µs zookeeper | [2024-04-25 10:41:33,949] INFO No SessionScavenger set, using defaults (org.eclipse.jetty.server.session) policy-db-migrator | -------------- kafka | num.network.threads = 3 mariadb | 2024-04-25 10:41:40 0 [Note] InnoDB: Removed temporary tablespace data file: "./ibtmp1" policy-apex-pdp | security.protocol = PLAINTEXT policy-pap | sasl.mechanism = GSSAPI grafana | logger=migrator t=2024-04-25T10:41:35.337803995Z level=info msg="Executing migration" id="create tag table" zookeeper | [2024-04-25 10:41:33,950] INFO node0 Scavenging every 660000ms (org.eclipse.jetty.server.session) 
policy-db-migrator | kafka | num.partitions = 1 mariadb | 2024-04-25 10:41:40 0 [Note] InnoDB: Shutdown completed; log sequence number 330657; transaction id 298 policy-apex-pdp | security.providers = null policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 grafana | logger=migrator t=2024-04-25T10:41:35.338991975Z level=info msg="Migration successfully executed" id="create tag table" duration=1.187119ms zookeeper | [2024-04-25 10:41:33,953] WARN ServletContext@o.e.j.s.ServletContextHandler@6d5620ce{/,null,STARTING} has uncovered http methods for path: /* (org.eclipse.jetty.security.SecurityHandler) policy-db-migrator | kafka | num.recovery.threads.per.data.dir = 1 mariadb | 2024-04-25 10:41:40 0 [Note] mariadbd: Shutdown complete policy-apex-pdp | send.buffer.bytes = 131072 policy-pap | sasl.oauthbearer.expected.audience = null grafana | logger=migrator t=2024-04-25T10:41:35.343635053Z level=info msg="Executing migration" id="add index tag.key_value" zookeeper | [2024-04-25 10:41:33,960] INFO Started o.e.j.s.ServletContextHandler@6d5620ce{/,null,AVAILABLE} (org.eclipse.jetty.server.handler.ContextHandler) policy-db-migrator | > upgrade 0240-jpatoscanodetemplate_metadata.sql kafka | num.replica.alter.log.dirs.threads = null mariadb | policy-apex-pdp | socket.connection.setup.timeout.max.ms = 30000 grafana | logger=migrator t=2024-04-25T10:41:35.345138381Z level=info msg="Migration successfully executed" id="add index tag.key_value" duration=1.505668ms zookeeper | [2024-04-25 10:41:33,974] INFO Started ServerConnector@4d1bf319{HTTP/1.1, (http/1.1)}{0.0.0.0:8080} (org.eclipse.jetty.server.AbstractConnector) policy-pap | sasl.oauthbearer.expected.issuer = null policy-db-migrator | -------------- kafka | num.replica.fetchers = 1 mariadb | 2024-04-25 10:41:40+00:00 [Note] [Entrypoint]: Temporary server stopped policy-apex-pdp | socket.connection.setup.timeout.ms = 10000 grafana | logger=migrator t=2024-04-25T10:41:35.350750864Z level=info msg="Executing migration" 
id="create login attempt table" zookeeper | [2024-04-25 10:41:33,974] INFO Started @689ms (org.eclipse.jetty.server.Server) policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscanodetemplate_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) kafka | offset.metadata.max.bytes = 4096 mariadb | policy-apex-pdp | ssl.cipher.suites = null grafana | logger=migrator t=2024-04-25T10:41:35.351524075Z level=info msg="Migration successfully executed" id="create login attempt table" duration=772.691µs zookeeper | [2024-04-25 10:41:33,975] INFO Started AdminServer on address 0.0.0.0, port 8080 and command URL /commands (org.apache.zookeeper.server.admin.JettyAdminServer) policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 policy-db-migrator | -------------- kafka | offsets.commit.required.acks = -1 mariadb | 2024-04-25 10:41:40+00:00 [Note] [Entrypoint]: MariaDB init process done. Ready for start up. policy-apex-pdp | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] grafana | logger=migrator t=2024-04-25T10:41:35.355203298Z level=info msg="Executing migration" id="add index login_attempt.username" zookeeper | [2024-04-25 10:41:33,980] INFO Using org.apache.zookeeper.server.NIOServerCnxnFactory as server connection factory (org.apache.zookeeper.server.ServerCnxnFactory) policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 policy-db-migrator | kafka | offsets.commit.timeout.ms = 5000 mariadb | policy-apex-pdp | ssl.endpoint.identification.algorithm = https grafana | logger=migrator t=2024-04-25T10:41:35.356678415Z level=info msg="Migration successfully executed" id="add index login_attempt.username" duration=1.474827ms zookeeper | [2024-04-25 10:41:33,981] WARN maxCnxns is not configured, using default value 0. 
(org.apache.zookeeper.server.ServerCnxnFactory) policy-pap | sasl.oauthbearer.jwks.endpoint.url = null policy-db-migrator | kafka | offsets.load.buffer.size = 5242880 mariadb | 2024-04-25 10:41:40 0 [Note] mariadbd (server 10.10.2-MariaDB-1:10.10.2+maria~ubu2204) starting as process 1 ... policy-apex-pdp | ssl.engine.factory.class = null grafana | logger=migrator t=2024-04-25T10:41:35.361435457Z level=info msg="Executing migration" id="drop index IDX_login_attempt_username - v1" zookeeper | [2024-04-25 10:41:33,982] INFO Configuring NIO connection handler with 10s sessionless connection timeout, 2 selector thread(s), 16 worker threads, and 64 kB direct buffers. (org.apache.zookeeper.server.NIOServerCnxnFactory) policy-pap | sasl.oauthbearer.scope.claim.name = scope policy-db-migrator | > upgrade 0250-jpatoscanodetemplate_properties.sql kafka | offsets.retention.check.interval.ms = 600000 mariadb | 2024-04-25 10:41:40 0 [Note] InnoDB: Compressed tables use zlib 1.2.11 policy-apex-pdp | ssl.key.password = null grafana | logger=migrator t=2024-04-25T10:41:35.362828513Z level=info msg="Migration successfully executed" id="drop index IDX_login_attempt_username - v1" duration=1.392786ms zookeeper | [2024-04-25 10:41:33,984] INFO binding to port 0.0.0.0/0.0.0.0:2181 (org.apache.zookeeper.server.NIOServerCnxnFactory) policy-pap | sasl.oauthbearer.sub.claim.name = sub policy-db-migrator | -------------- kafka | offsets.retention.minutes = 10080 mariadb | 2024-04-25 10:41:40 0 [Note] InnoDB: Number of transaction pools: 1 policy-apex-pdp | ssl.keymanager.algorithm = SunX509 grafana | logger=migrator t=2024-04-25T10:41:35.367354448Z level=info msg="Executing migration" id="Rename table login_attempt to login_attempt_tmp_qwerty - v1" zookeeper | [2024-04-25 10:41:33,998] INFO Using org.apache.zookeeper.server.watch.WatchManager as watch manager (org.apache.zookeeper.server.watch.WatchManagerFactory) policy-pap | sasl.oauthbearer.token.endpoint.url = null policy-db-migrator | 
CREATE TABLE IF NOT EXISTS jpatoscanodetemplate_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGTEXT NULL, PROPERTIES_KEY VARCHAR(255) NULL) kafka | offsets.topic.compression.codec = 0 mariadb | 2024-04-25 10:41:40 0 [Note] InnoDB: Using crc32 + pclmulqdq instructions policy-apex-pdp | ssl.keystore.certificate.chain = null grafana | logger=migrator t=2024-04-25T10:41:35.381858418Z level=info msg="Migration successfully executed" id="Rename table login_attempt to login_attempt_tmp_qwerty - v1" duration=14.50385ms zookeeper | [2024-04-25 10:41:33,998] INFO Using org.apache.zookeeper.server.watch.WatchManager as watch manager (org.apache.zookeeper.server.watch.WatchManagerFactory) policy-pap | security.protocol = PLAINTEXT policy-db-migrator | -------------- kafka | offsets.topic.num.partitions = 50 mariadb | 2024-04-25 10:41:40 0 [Note] mariadbd: O_TMPFILE is not supported on /tmp (disabling future attempts) policy-apex-pdp | ssl.keystore.key = null grafana | logger=migrator t=2024-04-25T10:41:35.38663429Z level=info msg="Executing migration" id="create login_attempt v2" zookeeper | [2024-04-25 10:41:33,999] INFO zookeeper.snapshotSizeFactor = 0.33 (org.apache.zookeeper.server.ZKDatabase) policy-pap | security.providers = null policy-db-migrator | kafka | offsets.topic.replication.factor = 1 mariadb | 2024-04-25 10:41:40 0 [Warning] mariadbd: io_uring_queue_init() failed with ENOSYS: check seccomp filters, and the kernel version (newer than 5.1 required) policy-apex-pdp | ssl.keystore.location = null grafana | logger=migrator t=2024-04-25T10:41:35.387235355Z level=info msg="Migration successfully executed" id="create login_attempt v2" duration=599.925µs zookeeper | [2024-04-25 10:41:33,999] INFO zookeeper.commitLogCount=500 (org.apache.zookeeper.server.ZKDatabase) policy-pap | send.buffer.bytes = 131072 policy-db-migrator | kafka | offsets.topic.segment.bytes = 104857600 mariadb | 2024-04-25 10:41:40 0 [Warning] InnoDB: liburing disabled: 
falling back to innodb_use_native_aio=OFF policy-apex-pdp | ssl.keystore.password = null grafana | logger=migrator t=2024-04-25T10:41:35.39366032Z level=info msg="Executing migration" id="create index IDX_login_attempt_username - v2" zookeeper | [2024-04-25 10:41:34,003] INFO zookeeper.snapshot.compression.method = CHECKED (org.apache.zookeeper.server.persistence.SnapStream) zookeeper | [2024-04-25 10:41:34,003] INFO Snapshotting: 0x0 to /var/lib/zookeeper/data/version-2/snapshot.0 (org.apache.zookeeper.server.persistence.FileTxnSnapLog) policy-db-migrator | > upgrade 0260-jpatoscanodetype_metadata.sql kafka | password.encoder.cipher.algorithm = AES/CBC/PKCS5Padding mariadb | 2024-04-25 10:41:40 0 [Note] InnoDB: Initializing buffer pool, total size = 128.000MiB, chunk size = 2.000MiB policy-apex-pdp | ssl.keystore.type = JKS grafana | logger=migrator t=2024-04-25T10:41:35.395163967Z level=info msg="Migration successfully executed" id="create index IDX_login_attempt_username - v2" duration=1.505948ms zookeeper | [2024-04-25 10:41:34,006] INFO Snapshot loaded in 8 ms, highest zxid is 0x0, digest is 1371985504 (org.apache.zookeeper.server.ZKDatabase) policy-db-migrator | -------------- policy-pap | session.timeout.ms = 45000 kafka | password.encoder.iterations = 4096 mariadb | 2024-04-25 10:41:40 0 [Note] InnoDB: Completed initialization of buffer pool policy-apex-pdp | ssl.protocol = TLSv1.3 grafana | logger=migrator t=2024-04-25T10:41:35.40036283Z level=info msg="Executing migration" id="copy login_attempt v1 to v2" zookeeper | [2024-04-25 10:41:34,007] INFO Snapshotting: 0x0 to /var/lib/zookeeper/data/version-2/snapshot.0 (org.apache.zookeeper.server.persistence.FileTxnSnapLog) policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscanodetype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) policy-pap | socket.connection.setup.timeout.max.ms = 30000 kafka | password.encoder.key.length = 128 
mariadb | 2024-04-25 10:41:40 0 [Note] InnoDB: File system buffers for log disabled (block size=512 bytes) policy-apex-pdp | ssl.provider = null grafana | logger=migrator t=2024-04-25T10:41:35.400912114Z level=info msg="Migration successfully executed" id="copy login_attempt v1 to v2" duration=548.594µs zookeeper | [2024-04-25 10:41:34,007] INFO Snapshot taken in 1 ms (org.apache.zookeeper.server.ZooKeeperServer) policy-db-migrator | -------------- policy-pap | socket.connection.setup.timeout.ms = 10000 kafka | password.encoder.keyfactory.algorithm = null mariadb | 2024-04-25 10:41:40 0 [Note] InnoDB: 128 rollback segments are active. policy-apex-pdp | ssl.secure.random.implementation = null grafana | logger=migrator t=2024-04-25T10:41:35.405862751Z level=info msg="Executing migration" id="drop login_attempt_tmp_qwerty" zookeeper | [2024-04-25 10:41:34,018] INFO zookeeper.request_throttler.shutdownTimeout = 10000 ms (org.apache.zookeeper.server.RequestThrottler) policy-db-migrator | policy-pap | ssl.cipher.suites = null kafka | password.encoder.old.secret = null mariadb | 2024-04-25 10:41:40 0 [Note] InnoDB: Setting file './ibtmp1' size to 12.000MiB. Physically writing the file full; Please wait ... policy-apex-pdp | ssl.trustmanager.algorithm = PKIX grafana | logger=migrator t=2024-04-25T10:41:35.406914717Z level=info msg="Migration successfully executed" id="drop login_attempt_tmp_qwerty" duration=1.050617ms zookeeper | [2024-04-25 10:41:34,017] INFO PrepRequestProcessor (sid:0) started, reconfigEnabled=false (org.apache.zookeeper.server.PrepRequestProcessor) policy-db-migrator | policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] kafka | password.encoder.secret = null mariadb | 2024-04-25 10:41:40 0 [Note] InnoDB: File './ibtmp1' size is now 12.000MiB. 
policy-apex-pdp | ssl.truststore.certificates = null grafana | logger=migrator t=2024-04-25T10:41:35.413111016Z level=info msg="Executing migration" id="create user auth table" zookeeper | [2024-04-25 10:41:34,033] INFO Using checkIntervalMs=60000 maxPerMinute=10000 maxNeverUsedIntervalMs=0 (org.apache.zookeeper.server.ContainerManager) policy-db-migrator | > upgrade 0270-jpatoscanodetype_properties.sql policy-pap | ssl.endpoint.identification.algorithm = https kafka | principal.builder.class = class org.apache.kafka.common.security.authenticator.DefaultKafkaPrincipalBuilder mariadb | 2024-04-25 10:41:40 0 [Note] InnoDB: log sequence number 330657; transaction id 299 policy-apex-pdp | ssl.truststore.location = null grafana | logger=migrator t=2024-04-25T10:41:35.414366117Z level=info msg="Migration successfully executed" id="create user auth table" duration=1.254291ms zookeeper | [2024-04-25 10:41:34,033] INFO ZooKeeper audit is disabled. (org.apache.zookeeper.audit.ZKAuditProvider) policy-db-migrator | -------------- policy-pap | ssl.engine.factory.class = null kafka | process.roles = [] mariadb | 2024-04-25 10:41:40 0 [Note] Plugin 'FEEDBACK' is disabled. 
policy-apex-pdp | ssl.truststore.password = null grafana | logger=migrator t=2024-04-25T10:41:35.42076236Z level=info msg="Executing migration" id="create index IDX_user_auth_auth_module_auth_id - v1" zookeeper | [2024-04-25 10:41:39,304] INFO Creating new log file: log.1 (org.apache.zookeeper.server.persistence.FileTxnLog) policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscanodetype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL) policy-pap | ssl.key.password = null kafka | producer.id.expiration.check.interval.ms = 600000 mariadb | 2024-04-25 10:41:40 0 [Note] InnoDB: Loading buffer pool(s) from /var/lib/mysql/ib_buffer_pool policy-apex-pdp | ssl.truststore.type = JKS grafana | logger=migrator t=2024-04-25T10:41:35.421718845Z level=info msg="Migration successfully executed" id="create index IDX_user_auth_auth_module_auth_id - v1" duration=956.415µs policy-db-migrator | -------------- policy-pap | ssl.keymanager.algorithm = SunX509 kafka | producer.id.expiration.ms = 86400000 mariadb | 2024-04-25 10:41:40 0 [Warning] 'default-authentication-plugin' is MySQL 5.6 / 5.7 compatible option. To be implemented in later versions. policy-apex-pdp | transaction.timeout.ms = 60000 grafana | logger=migrator t=2024-04-25T10:41:35.428102458Z level=info msg="Executing migration" id="alter user_auth.auth_id to length 190" grafana | logger=migrator t=2024-04-25T10:41:35.42820213Z level=info msg="Migration successfully executed" id="alter user_auth.auth_id to length 190" duration=98.232µs policy-pap | ssl.keystore.certificate.chain = null kafka | producer.purgatory.purge.interval.requests = 1000 mariadb | 2024-04-25 10:41:40 0 [Warning] You need to use --log-bin to make --expire-logs-days or --binlog-expire-logs-seconds work. 
policy-apex-pdp | transactional.id = null grafana | logger=migrator t=2024-04-25T10:41:35.433879835Z level=info msg="Executing migration" id="Add OAuth access token to user_auth" grafana | logger=migrator t=2024-04-25T10:41:35.44188991Z level=info msg="Migration successfully executed" id="Add OAuth access token to user_auth" duration=8.012285ms policy-pap | ssl.keystore.key = null kafka | queued.max.request.bytes = -1 mariadb | 2024-04-25 10:41:40 0 [Note] Server socket created on IP: '0.0.0.0'. policy-apex-pdp | value.serializer = class org.apache.kafka.common.serialization.StringSerializer grafana | logger=migrator t=2024-04-25T10:41:35.447901423Z level=info msg="Executing migration" id="Add OAuth refresh token to user_auth" grafana | logger=migrator t=2024-04-25T10:41:35.455566139Z level=info msg="Migration successfully executed" id="Add OAuth refresh token to user_auth" duration=7.662236ms policy-pap | ssl.keystore.location = null kafka | queued.max.requests = 500 mariadb | 2024-04-25 10:41:40 0 [Note] Server socket created on IP: '::'. policy-apex-pdp | grafana | logger=migrator t=2024-04-25T10:41:35.461294755Z level=info msg="Executing migration" id="Add OAuth token type to user_auth" grafana | logger=migrator t=2024-04-25T10:41:35.464852895Z level=info msg="Migration successfully executed" id="Add OAuth token type to user_auth" duration=3.55716ms policy-pap | ssl.keystore.password = null kafka | quota.window.num = 11 mariadb | 2024-04-25 10:41:40 0 [Note] mariadbd: ready for connections. policy-apex-pdp | [2024-04-25T10:42:12.649+00:00|INFO|KafkaProducer|main] [Producer clientId=producer-1] Instantiated an idempotent producer. 
grafana | logger=migrator t=2024-04-25T10:41:35.469170656Z level=info msg="Executing migration" id="Add OAuth expiry to user_auth" grafana | logger=migrator t=2024-04-25T10:41:35.474508681Z level=info msg="Migration successfully executed" id="Add OAuth expiry to user_auth" duration=5.337385ms policy-pap | ssl.keystore.type = JKS kafka | quota.window.size.seconds = 1 mariadb | Version: '10.10.2-MariaDB-1:10.10.2+maria~ubu2204' socket: '/run/mysqld/mysqld.sock' port: 3306 mariadb.org binary distribution policy-apex-pdp | [2024-04-25T10:42:12.666+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 grafana | logger=migrator t=2024-04-25T10:41:35.479770866Z level=info msg="Executing migration" id="Add index to user_id column in user_auth" grafana | logger=migrator t=2024-04-25T10:41:35.480731871Z level=info msg="Migration successfully executed" id="Add index to user_id column in user_auth" duration=961.086µs policy-pap | ssl.protocol = TLSv1.3 kafka | remote.log.index.file.cache.total.size.bytes = 1073741824 mariadb | 2024-04-25 10:41:40 0 [Note] InnoDB: Buffer pool(s) load completed at 240425 10:41:40 policy-apex-pdp | [2024-04-25T10:42:12.667+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 grafana | logger=migrator t=2024-04-25T10:41:35.486833746Z level=info msg="Executing migration" id="Add OAuth ID token to user_auth" grafana | logger=migrator t=2024-04-25T10:41:35.49482511Z level=info msg="Migration successfully executed" id="Add OAuth ID token to user_auth" duration=7.990264ms kafka | remote.log.manager.task.interval.ms = 30000 mariadb | 2024-04-25 10:41:41 3 [Warning] Aborted connection 3 to db: 'unconnected' user: 'unauthenticated' host: '172.17.0.10' (This connection closed normally without authentication) policy-apex-pdp | [2024-04-25T10:42:12.667+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1714041732666 grafana | logger=migrator t=2024-04-25T10:41:35.542379543Z level=info msg="Executing migration" id="create server_lock table" grafana | 
logger=migrator t=2024-04-25T10:41:35.543763888Z level=info msg="Migration successfully executed" id="create server_lock table" duration=1.383585ms policy-pap | ssl.provider = null kafka | remote.log.manager.task.retry.backoff.max.ms = 30000 mariadb | 2024-04-25 10:41:41 4 [Warning] Aborted connection 4 to db: 'unconnected' user: 'unauthenticated' host: '172.17.0.11' (This connection closed normally without authentication) policy-apex-pdp | [2024-04-25T10:42:12.668+00:00|INFO|InlineKafkaTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=58be8b45-fb0e-4d94-8b20-46794cb5e8f5, alive=false, publisher=KafkaPublisherWrapper []]]: KAFKA SINK created grafana | logger=migrator t=2024-04-25T10:41:35.551085035Z level=info msg="Executing migration" id="add index server_lock.operation_uid" policy-db-migrator | policy-pap | ssl.secure.random.implementation = null policy-pap | ssl.trustmanager.algorithm = PKIX mariadb | 2024-04-25 10:41:41 5 [Warning] Aborted connection 5 to db: 'unconnected' user: 'unauthenticated' host: '172.17.0.9' (This connection closed normally without authentication) policy-apex-pdp | [2024-04-25T10:42:12.668+00:00|INFO|ServiceManager|main] service manager starting set alive grafana | logger=migrator t=2024-04-25T10:41:35.552624824Z level=info msg="Migration successfully executed" id="add index server_lock.operation_uid" duration=1.538009ms grafana | logger=migrator t=2024-04-25T10:41:35.558377701Z level=info msg="Executing migration" id="create user auth token table" policy-pap | ssl.truststore.certificates = null policy-pap | ssl.truststore.location = null mariadb | 2024-04-25 10:41:41 6 [Warning] Aborted connection 6 to db: 'unconnected' user: 'unauthenticated' host: '172.17.0.8' (This connection closed normally without authentication) policy-apex-pdp | [2024-04-25T10:42:12.668+00:00|INFO|ServiceManager|main] service manager starting register pdp status context object policy-db-migrator | 
policy-pap | ssl.truststore.password = null policy-pap | ssl.truststore.type = JKS grafana | logger=migrator t=2024-04-25T10:41:35.559824808Z level=info msg="Migration successfully executed" id="create user auth token table" duration=1.446437ms policy-apex-pdp | [2024-04-25T10:42:12.670+00:00|INFO|ServiceManager|main] service manager starting topic sinks policy-db-migrator | > upgrade 0280-jpatoscapolicy_metadata.sql policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-pap | grafana | logger=migrator t=2024-04-25T10:41:35.566269052Z level=info msg="Executing migration" id="add unique index user_auth_token.auth_token" policy-apex-pdp | [2024-04-25T10:42:12.670+00:00|INFO|ServiceManager|main] service manager starting Pdp Status publisher policy-db-migrator | -------------- policy-pap | [2024-04-25T10:42:09.123+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 policy-pap | [2024-04-25T10:42:09.124+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 grafana | logger=migrator t=2024-04-25T10:41:35.567799992Z level=info msg="Migration successfully executed" id="add unique index user_auth_token.auth_token" duration=1.5305ms kafka | remote.log.manager.task.retry.backoff.ms = 500 policy-pap | [2024-04-25T10:42:09.124+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1714041729122 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicy_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) policy-db-migrator | -------------- policy-apex-pdp | [2024-04-25T10:42:12.672+00:00|INFO|ServiceManager|main] service manager starting Register pdp update listener kafka | remote.log.manager.task.retry.jitter = 0.2 grafana | logger=migrator t=2024-04-25T10:41:35.571435184Z level=info msg="Executing migration" id="add unique index user_auth_token.prev_auth_token" policy-db-migrator | policy-apex-pdp | 
[2024-04-25T10:42:12.673+00:00|INFO|ServiceManager|main] service manager starting Register pdp state change request dispatcher policy-pap | [2024-04-25T10:42:09.126+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-ae8023b6-4521-455f-bfa2-c4d8e9909c4a-1, groupId=ae8023b6-4521-455f-bfa2-c4d8e9909c4a] Subscribed to topic(s): policy-pdp-pap kafka | remote.log.manager.thread.pool.size = 10 policy-db-migrator | policy-apex-pdp | [2024-04-25T10:42:12.673+00:00|INFO|ServiceManager|main] service manager starting Message Dispatcher policy-pap | [2024-04-25T10:42:09.127+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: grafana | logger=migrator t=2024-04-25T10:41:35.572400248Z level=info msg="Migration successfully executed" id="add unique index user_auth_token.prev_auth_token" duration=963.324µs kafka | remote.log.metadata.custom.metadata.max.bytes = 128 policy-db-migrator | > upgrade 0290-jpatoscapolicy_properties.sql policy-apex-pdp | [2024-04-25T10:42:12.673+00:00|INFO|TopicBase|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=76090dad-2cb8-4045-86c4-b86ef46522aa, consumerInstance=policy-apex-pdp, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-pdp-pap,5,main], topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: registering org.onap.policy.common.endpoints.listeners.MessageTypeDispatcher@607fbe09 policy-pap | allow.auto.create.topics = true grafana | logger=migrator t=2024-04-25T10:41:35.576975335Z level=info msg="Executing migration" id="add index user_auth_token.user_id" kafka | remote.log.metadata.manager.class.name = 
org.apache.kafka.server.log.remote.metadata.storage.TopicBasedRemoteLogMetadataManager
policy-db-migrator | --------------
policy-apex-pdp | [2024-04-25T10:42:12.674+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=76090dad-2cb8-4045-86c4-b86ef46522aa, consumerInstance=policy-apex-pdp, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-pdp-pap,5,main], topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: register: start not attempted
policy-pap | auto.commit.interval.ms = 5000
grafana | logger=migrator t=2024-04-25T10:41:35.578392682Z level=info msg="Migration successfully executed" id="add index user_auth_token.user_id" duration=1.415907ms
kafka | remote.log.metadata.manager.class.path = null
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicy_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGTEXT NULL, PROPERTIES_KEY VARCHAR(255) NULL)
policy-apex-pdp | [2024-04-25T10:42:12.674+00:00|INFO|ServiceManager|main] service manager starting Create REST server
policy-pap | auto.include.jmx.reporter = true
policy-db-migrator | --------------
grafana | logger=migrator t=2024-04-25T10:41:35.584615031Z level=info msg="Executing migration" id="Add revoked_at to the user auth token"
policy-apex-pdp | [2024-04-25T10:42:12.686+00:00|INFO|OrderedServiceImpl|Timer-0] ***** OrderedServiceImpl implementers:
policy-pap | auto.offset.reset = latest
kafka | remote.log.metadata.manager.impl.prefix = rlmm.config.
policy-db-migrator |
grafana | logger=migrator t=2024-04-25T10:41:35.593153438Z level=info msg="Migration successfully executed" id="Add revoked_at to the user auth token" duration=8.537647ms
policy-apex-pdp | []
policy-pap | bootstrap.servers = [kafka:9092]
kafka | remote.log.metadata.manager.listener.name = null
policy-db-migrator |
grafana | logger=migrator t=2024-04-25T10:41:35.599929231Z level=info msg="Executing migration" id="add index user_auth_token.revoked_at"
policy-apex-pdp | [2024-04-25T10:42:12.689+00:00|INFO|network|Timer-0] [OUT|KAFKA|policy-pdp-pap]
policy-pap | check.crcs = true
kafka | remote.log.reader.max.pending.tasks = 100
policy-db-migrator | > upgrade 0300-jpatoscapolicy_targets.sql
grafana | logger=migrator t=2024-04-25T10:41:35.600982508Z level=info msg="Migration successfully executed" id="add index user_auth_token.revoked_at" duration=1.052767ms
policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"aa144a7e-52ae-494b-b63a-e594dbe15484","timestampMs":1714041732673,"name":"apex-48c572ef-ecee-4a67-903e-0092df74361b","pdpGroup":"defaultGroup"}
policy-pap | client.dns.lookup = use_all_dns_ips
kafka | remote.log.reader.threads = 10
policy-db-migrator | --------------
grafana | logger=migrator t=2024-04-25T10:41:35.606217911Z level=info msg="Executing migration" id="create cache_data table"
policy-apex-pdp | [2024-04-25T10:42:12.854+00:00|INFO|ServiceManager|main] service manager starting Rest Server
policy-pap | client.id = consumer-policy-pap-2
kafka | remote.log.storage.manager.class.name = null
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicy_targets (name VARCHAR(120) NULL, version VARCHAR(20) NULL)
grafana | logger=migrator t=2024-04-25T10:41:35.607763051Z level=info msg="Migration successfully executed" id="create cache_data table" duration=1.5446ms
policy-apex-pdp | [2024-04-25T10:42:12.855+00:00|INFO|ServiceManager|main] service manager starting
policy-pap | client.rack =
kafka | remote.log.storage.manager.class.path = null
policy-db-migrator | --------------
grafana | logger=migrator t=2024-04-25T10:41:35.614097152Z level=info msg="Executing migration" id="add unique index cache_data.cache_key"
policy-apex-pdp | [2024-04-25T10:42:12.855+00:00|INFO|ServiceManager|main] service manager starting REST RestServerParameters
policy-pap | connections.max.idle.ms = 540000
kafka | remote.log.storage.manager.impl.prefix = rsm.config.
policy-db-migrator |
grafana | logger=migrator t=2024-04-25T10:41:35.615097018Z level=info msg="Migration successfully executed" id="add unique index cache_data.cache_key" duration=999.896µs
policy-apex-pdp | [2024-04-25T10:42:12.855+00:00|INFO|JettyServletServer|main] JettyJerseyServer [Jerseyservlets={/metrics=io.prometheus.client.servlet.jakarta.exporter.MetricsServlet-21694e53==io.prometheus.client.servlet.jakarta.exporter.MetricsServlet@2326051b{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:null,STOPPED}, /*=org.glassfish.jersey.servlet.ServletContainer-46074492==org.glassfish.jersey.servlet.ServletContainer@705041b0{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=swagger-6969, toString()=JettyServletServer(name=RestServerParameters, host=0.0.0.0, port=6969, sniHostCheck=false, user=policyadmin, password=zb!XztG34, contextPath=/, jettyServer=Server@72c927f1{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@1ac85b0c{/,null,STOPPED}, connector=RestServerParameters@53ab0286{HTTP/1.1, (http/1.1)}{0.0.0.0:6969}, jettyThread=null, servlets={/metrics=io.prometheus.client.servlet.jakarta.exporter.MetricsServlet-21694e53==io.prometheus.client.servlet.jakarta.exporter.MetricsServlet@2326051b{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:null,STOPPED}, /*=org.glassfish.jersey.servlet.ServletContainer-46074492==org.glassfish.jersey.servlet.ServletContainer@705041b0{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING
policy-pap | default.api.timeout.ms = 60000
kafka | remote.log.storage.system.enable = false
policy-db-migrator |
grafana | logger=migrator t=2024-04-25T10:41:35.618998617Z level=info msg="Executing migration" id="create short_url table v1"
policy-apex-pdp | [2024-04-25T10:42:12.865+00:00|INFO|ServiceManager|main] service manager started
policy-pap | enable.auto.commit = true
kafka | replica.fetch.backoff.ms = 1000
policy-db-migrator | > upgrade 0310-jpatoscapolicytype_metadata.sql
grafana | logger=migrator t=2024-04-25T10:41:35.620421444Z level=info msg="Migration successfully executed" id="create short_url table v1" duration=1.422927ms
policy-apex-pdp | [2024-04-25T10:42:12.866+00:00|INFO|ServiceManager|main] service manager started
policy-pap | exclude.internal.topics = true
kafka | replica.fetch.max.bytes = 1048576
policy-db-migrator | --------------
grafana | logger=migrator t=2024-04-25T10:41:35.627205067Z level=info msg="Executing migration" id="add index short_url.org_id-uid"
policy-apex-pdp | [2024-04-25T10:42:12.866+00:00|INFO|ApexStarterMain|main] Started policy-apex-pdp service successfully.
policy-pap | fetch.max.bytes = 52428800
kafka | replica.fetch.min.bytes = 1
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicytype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL)
grafana | logger=migrator t=2024-04-25T10:41:35.629213978Z level=info msg="Migration successfully executed" id="add index short_url.org_id-uid" duration=2.008301ms
policy-apex-pdp | [2024-04-25T10:42:12.866+00:00|INFO|JettyServletServer|RestServerParameters-6969] JettyJerseyServer [Jerseyservlets={/metrics=io.prometheus.client.servlet.jakarta.exporter.MetricsServlet-21694e53==io.prometheus.client.servlet.jakarta.exporter.MetricsServlet@2326051b{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:null,STOPPED}, /*=org.glassfish.jersey.servlet.ServletContainer-46074492==org.glassfish.jersey.servlet.ServletContainer@705041b0{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=swagger-6969, toString()=JettyServletServer(name=RestServerParameters, host=0.0.0.0, port=6969, sniHostCheck=false, user=policyadmin, password=zb!XztG34, contextPath=/, jettyServer=Server@72c927f1{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@1ac85b0c{/,null,STOPPED}, connector=RestServerParameters@53ab0286{HTTP/1.1, (http/1.1)}{0.0.0.0:6969}, jettyThread=Thread[RestServerParameters-6969,5,main], servlets={/metrics=io.prometheus.client.servlet.jakarta.exporter.MetricsServlet-21694e53==io.prometheus.client.servlet.jakarta.exporter.MetricsServlet@2326051b{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:null,STOPPED}, /*=org.glassfish.jersey.servlet.ServletContainer-46074492==org.glassfish.jersey.servlet.ServletContainer@705041b0{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING
policy-pap | fetch.max.wait.ms = 500
kafka | replica.fetch.response.max.bytes = 10485760
policy-db-migrator | --------------
grafana | logger=migrator t=2024-04-25T10:41:35.634593356Z level=info msg="Executing migration" id="alter table short_url alter column created_by type to bigint"
policy-apex-pdp | [2024-04-25T10:42:13.000+00:00|INFO|Metadata|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-76090dad-2cb8-4045-86c4-b86ef46522aa-2, groupId=76090dad-2cb8-4045-86c4-b86ef46522aa] Cluster ID: WEMOaayeQ5uYZKGI5dj_vQ
policy-pap | fetch.min.bytes = 1
kafka | replica.fetch.wait.max.ms = 500
policy-db-migrator |
grafana | logger=migrator t=2024-04-25T10:41:35.634671278Z level=info msg="Migration successfully executed" id="alter table short_url alter column created_by type to bigint" duration=79.782µs
policy-apex-pdp | [2024-04-25T10:42:13.000+00:00|INFO|Metadata|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Cluster ID: WEMOaayeQ5uYZKGI5dj_vQ
policy-pap | group.id = policy-pap
kafka | replica.high.watermark.checkpoint.interval.ms = 5000
policy-db-migrator |
grafana | logger=migrator t=2024-04-25T10:41:35.639649254Z level=info msg="Executing migration" id="delete alert_definition table"
policy-apex-pdp | [2024-04-25T10:42:13.002+00:00|INFO|TransactionManager|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] ProducerId set to 2 with epoch 0
policy-pap | group.instance.id = null
policy-db-migrator | > upgrade 0320-jpatoscapolicytype_properties.sql
grafana | logger=migrator t=2024-04-25T10:41:35.639730796Z level=info msg="Migration successfully executed" id="delete alert_definition table" duration=79.522µs
kafka | replica.lag.time.max.ms = 30000
policy-apex-pdp | [2024-04-25T10:42:13.002+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-76090dad-2cb8-4045-86c4-b86ef46522aa-2, groupId=76090dad-2cb8-4045-86c4-b86ef46522aa] Discovered group coordinator kafka:9092 (id: 2147483646 rack: null)
policy-pap | heartbeat.interval.ms = 3000
policy-db-migrator | --------------
grafana | logger=migrator t=2024-04-25T10:41:35.646402247Z level=info msg="Executing migration" id="recreate alert_definition table"
kafka | replica.selector.class = null
policy-apex-pdp | [2024-04-25T10:42:13.012+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-76090dad-2cb8-4045-86c4-b86ef46522aa-2, groupId=76090dad-2cb8-4045-86c4-b86ef46522aa] (Re-)joining group
policy-pap | interceptor.classes = []
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicytype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL)
grafana | logger=migrator t=2024-04-25T10:41:35.647872544Z level=info msg="Migration successfully executed" id="recreate alert_definition table" duration=1.469517ms
kafka | replica.socket.receive.buffer.bytes = 65536
policy-apex-pdp | [2024-04-25T10:42:13.028+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-76090dad-2cb8-4045-86c4-b86ef46522aa-2, groupId=76090dad-2cb8-4045-86c4-b86ef46522aa] Request joining group due to: need to re-join with the given member-id: consumer-76090dad-2cb8-4045-86c4-b86ef46522aa-2-ea339298-3e8b-490e-80bd-394833829db7
policy-pap | internal.leave.group.on.close = true
policy-db-migrator | --------------
grafana | logger=migrator t=2024-04-25T10:41:35.652965314Z level=info msg="Executing migration" id="add index in alert_definition on org_id and title columns"
kafka | replica.socket.timeout.ms = 30000
policy-apex-pdp | [2024-04-25T10:42:13.028+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-76090dad-2cb8-4045-86c4-b86ef46522aa-2, groupId=76090dad-2cb8-4045-86c4-b86ef46522aa] Request joining group due to: rebalance failed due to 'The group member needs to have a valid member id before actually entering a consumer group.' (MemberIdRequiredException)
policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false
policy-db-migrator |
grafana | logger=migrator t=2024-04-25T10:41:35.654570615Z level=info msg="Migration successfully executed" id="add index in alert_definition on org_id and title columns" duration=1.605431ms
kafka | replication.quota.window.num = 11
policy-apex-pdp | [2024-04-25T10:42:13.028+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-76090dad-2cb8-4045-86c4-b86ef46522aa-2, groupId=76090dad-2cb8-4045-86c4-b86ef46522aa] (Re-)joining group
policy-pap | isolation.level = read_uncommitted
policy-db-migrator |
grafana | logger=migrator t=2024-04-25T10:41:35.660942467Z level=info msg="Executing migration" id="add index in alert_definition on org_id and uid columns"
kafka | replication.quota.window.size.seconds = 1
policy-apex-pdp | [2024-04-25T10:42:13.479+00:00|INFO|YamlMessageBodyHandler|RestServerParameters-6969] Accepting YAML for REST calls
policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
policy-db-migrator | > upgrade 0330-jpatoscapolicytype_targets.sql
grafana | logger=migrator t=2024-04-25T10:41:35.662024325Z level=info msg="Migration successfully executed" id="add index in alert_definition on org_id and uid columns" duration=1.081968ms
kafka | request.timeout.ms = 30000
policy-apex-pdp | [2024-04-25T10:42:13.480+00:00|INFO|GsonMessageBodyHandler|RestServerParameters-6969] Using GSON for REST calls
policy-pap | max.partition.fetch.bytes = 1048576
policy-db-migrator | --------------
grafana | logger=migrator t=2024-04-25T10:41:35.66886338Z level=info msg="Executing migration" id="alter alert_definition table data column to mediumtext in mysql"
kafka | reserved.broker.max.id = 1000
policy-apex-pdp | [2024-04-25T10:42:16.039+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-76090dad-2cb8-4045-86c4-b86ef46522aa-2, groupId=76090dad-2cb8-4045-86c4-b86ef46522aa] Successfully joined group with generation Generation{generationId=1, memberId='consumer-76090dad-2cb8-4045-86c4-b86ef46522aa-2-ea339298-3e8b-490e-80bd-394833829db7', protocol='range'}
policy-pap | max.poll.interval.ms = 300000
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicytype_targets (name VARCHAR(120) NULL, version VARCHAR(20) NULL)
grafana | logger=migrator t=2024-04-25T10:41:35.668956042Z level=info msg="Migration successfully executed" id="alter alert_definition table data column to mediumtext in mysql" duration=92.932µs
kafka | sasl.client.callback.handler.class = null
policy-apex-pdp | [2024-04-25T10:42:16.047+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-76090dad-2cb8-4045-86c4-b86ef46522aa-2, groupId=76090dad-2cb8-4045-86c4-b86ef46522aa] Finished assignment for group at generation 1: {consumer-76090dad-2cb8-4045-86c4-b86ef46522aa-2-ea339298-3e8b-490e-80bd-394833829db7=Assignment(partitions=[policy-pdp-pap-0])}
policy-pap | max.poll.records = 500
policy-db-migrator | --------------
grafana | logger=migrator t=2024-04-25T10:41:35.674734129Z level=info msg="Executing migration" id="drop index in alert_definition on org_id and title columns"
kafka | sasl.enabled.mechanisms = [GSSAPI]
policy-apex-pdp | [2024-04-25T10:42:16.070+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-76090dad-2cb8-4045-86c4-b86ef46522aa-2, groupId=76090dad-2cb8-4045-86c4-b86ef46522aa] Successfully synced group in generation Generation{generationId=1, memberId='consumer-76090dad-2cb8-4045-86c4-b86ef46522aa-2-ea339298-3e8b-490e-80bd-394833829db7', protocol='range'}
policy-pap | metadata.max.age.ms = 300000
policy-db-migrator |
grafana | logger=migrator t=2024-04-25T10:41:35.676286718Z level=info msg="Migration successfully executed" id="drop index in alert_definition on org_id and title columns" duration=1.552709ms
kafka | sasl.jaas.config = null
policy-apex-pdp | [2024-04-25T10:42:16.072+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-76090dad-2cb8-4045-86c4-b86ef46522aa-2, groupId=76090dad-2cb8-4045-86c4-b86ef46522aa] Notifying assignor about the new Assignment(partitions=[policy-pdp-pap-0])
policy-pap | metric.reporters = []
policy-db-migrator |
grafana | logger=migrator t=2024-04-25T10:41:35.680950838Z level=info msg="Executing migration" id="drop index in alert_definition on org_id and uid columns"
kafka | sasl.kerberos.kinit.cmd = /usr/bin/kinit
policy-apex-pdp | [2024-04-25T10:42:16.079+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-76090dad-2cb8-4045-86c4-b86ef46522aa-2, groupId=76090dad-2cb8-4045-86c4-b86ef46522aa] Adding newly assigned partitions: policy-pdp-pap-0
policy-pap | metrics.num.samples = 2
policy-db-migrator | > upgrade 0340-jpatoscapolicytype_triggers.sql
grafana | logger=migrator t=2024-04-25T10:41:35.682383465Z level=info msg="Migration successfully executed" id="drop index in alert_definition on org_id and uid columns" duration=1.432437ms
kafka | sasl.kerberos.min.time.before.relogin = 60000
policy-apex-pdp | [2024-04-25T10:42:16.089+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-76090dad-2cb8-4045-86c4-b86ef46522aa-2, groupId=76090dad-2cb8-4045-86c4-b86ef46522aa] Found no committed offset for partition policy-pdp-pap-0
policy-pap | metrics.recording.level = INFO
policy-db-migrator | --------------
grafana | logger=migrator t=2024-04-25T10:41:35.689203338Z level=info msg="Executing migration" id="add unique index in alert_definition on org_id and title columns"
kafka | sasl.kerberos.principal.to.local.rules = [DEFAULT]
policy-apex-pdp | [2024-04-25T10:42:16.106+00:00|INFO|SubscriptionState|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-76090dad-2cb8-4045-86c4-b86ef46522aa-2, groupId=76090dad-2cb8-4045-86c4-b86ef46522aa] Resetting offset for partition policy-pdp-pap-0 to position FetchPosition{offset=1, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[kafka:9092 (id: 1 rack: null)], epoch=0}}.
policy-pap | metrics.sample.window.ms = 30000
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicytype_triggers (name VARCHAR(120) NULL, version VARCHAR(20) NULL, TRIGGERS VARCHAR(255) NULL)
grafana | logger=migrator t=2024-04-25T10:41:35.690947763Z level=info msg="Migration successfully executed" id="add unique index in alert_definition on org_id and title columns" duration=1.744295ms
kafka | sasl.kerberos.service.name = null
policy-apex-pdp | [2024-04-25T10:42:32.673+00:00|INFO|network|Timer-0] [OUT|KAFKA|policy-pdp-pap]
policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor]
policy-db-migrator | --------------
grafana | logger=migrator t=2024-04-25T10:41:35.696042093Z level=info msg="Executing migration" id="add unique index in alert_definition on org_id and uid columns"
kafka | sasl.kerberos.ticket.renew.jitter = 0.05
policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"0fa1495a-5b77-4869-a732-547adcbc362c","timestampMs":1714041752672,"name":"apex-48c572ef-ecee-4a67-903e-0092df74361b","pdpGroup":"defaultGroup"}
policy-pap | receive.buffer.bytes = 65536
policy-db-migrator |
grafana | logger=migrator t=2024-04-25T10:41:35.697170942Z level=info msg="Migration successfully executed" id="add unique index in alert_definition on org_id and uid columns" duration=1.128099ms
kafka | sasl.kerberos.ticket.renew.window.factor = 0.8
policy-apex-pdp | [2024-04-25T10:42:32.699+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
policy-pap | reconnect.backoff.max.ms = 1000
policy-db-migrator |
grafana | logger=migrator t=2024-04-25T10:41:35.701185614Z level=info msg="Executing migration" id="Add column paused in alert_definition"
kafka | sasl.login.callback.handler.class = null
policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"0fa1495a-5b77-4869-a732-547adcbc362c","timestampMs":1714041752672,"name":"apex-48c572ef-ecee-4a67-903e-0092df74361b","pdpGroup":"defaultGroup"}
policy-pap | reconnect.backoff.ms = 50
policy-db-migrator | > upgrade 0350-jpatoscaproperty_constraints.sql
grafana | logger=migrator t=2024-04-25T10:41:35.7069081Z level=info msg="Migration successfully executed" id="Add column paused in alert_definition" duration=5.723036ms
kafka | sasl.login.class = null
policy-apex-pdp | [2024-04-25T10:42:32.703+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS
policy-pap | request.timeout.ms = 30000
policy-db-migrator | --------------
grafana | logger=migrator t=2024-04-25T10:41:35.713273123Z level=info msg="Executing migration" id="drop alert_definition table"
kafka | sasl.login.connect.timeout.ms = null
policy-apex-pdp | [2024-04-25T10:42:32.866+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
policy-pap | retry.backoff.ms = 100
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscaproperty_constraints (parentLocalName VARCHAR(120) NULL, localName VARCHAR(120) NULL, parentKeyVersion VARCHAR(15) NULL, parentKeyName VARCHAR(120) NULL, CONSTRAINTS VARCHAR(255) NULL)
grafana | logger=migrator t=2024-04-25T10:41:35.714272408Z level=info msg="Migration successfully executed" id="drop alert_definition table" duration=998.295µs
kafka | sasl.login.read.timeout.ms = null
policy-apex-pdp | {"source":"pap-af9137d4-c462-4753-8fd3-bdb6b1fa2cb4","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"messageName":"PDP_UPDATE","requestId":"6ebfae58-4293-4386-b05b-c7000ecfd79f","timestampMs":1714041752796,"name":"apex-48c572ef-ecee-4a67-903e-0092df74361b","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
policy-pap | sasl.client.callback.handler.class = null
policy-db-migrator | --------------
grafana | logger=migrator t=2024-04-25T10:41:35.719974853Z level=info msg="Executing migration" id="delete alert_definition_version table"
kafka | sasl.login.refresh.buffer.seconds = 300
policy-apex-pdp | [2024-04-25T10:42:32.878+00:00|WARN|Registry|KAFKA-source-policy-pdp-pap] replacing previously registered: object:pdp/status/publisher
policy-pap | sasl.jaas.config = null
policy-db-migrator |
grafana | logger=migrator t=2024-04-25T10:41:35.720082246Z level=info msg="Migration successfully executed" id="delete alert_definition_version table" duration=107.673µs
kafka | sasl.login.refresh.min.period.seconds = 60
policy-apex-pdp | [2024-04-25T10:42:32.880+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [OUT|KAFKA|policy-pdp-pap]
policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit
policy-db-migrator |
grafana | logger=migrator t=2024-04-25T10:41:35.724798667Z level=info msg="Executing migration" id="recreate alert_definition_version table"
kafka | sasl.login.refresh.window.factor = 0.8
policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"6ebfae58-4293-4386-b05b-c7000ecfd79f","responseStatus":"SUCCESS","responseMessage":"Pdp update successful."},"messageName":"PDP_STATUS","requestId":"c6c6ff3e-5ca5-4702-be12-6a402ad60812","timestampMs":1714041752879,"name":"apex-48c572ef-ecee-4a67-903e-0092df74361b","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
policy-pap | sasl.kerberos.min.time.before.relogin = 60000
policy-db-migrator | > upgrade 0360-jpatoscaproperty_metadata.sql
grafana | logger=migrator t=2024-04-25T10:41:35.726330495Z level=info msg="Migration successfully executed" id="recreate alert_definition_version table" duration=1.531288ms
kafka | sasl.login.refresh.window.jitter = 0.05
policy-apex-pdp | [2024-04-25T10:42:32.880+00:00|INFO|network|Timer-1] [OUT|KAFKA|policy-pdp-pap]
policy-pap | sasl.kerberos.service.name = null
policy-db-migrator | --------------
grafana | logger=migrator t=2024-04-25T10:41:35.731737463Z level=info msg="Executing migration" id="add index in alert_definition_version table on alert_definition_id and version columns"
kafka | sasl.login.retry.backoff.max.ms = 10000
policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","policies":[],"messageName":"PDP_STATUS","requestId":"a220bbc8-0a2a-4e9c-95c6-992c77aee261","timestampMs":1714041752879,"name":"apex-48c572ef-ecee-4a67-903e-0092df74361b","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscaproperty_metadata (parentLocalName VARCHAR(120) NULL, localName VARCHAR(120) NULL, parentKeyVersion VARCHAR(15) NULL, parentKeyName VARCHAR(120) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL)
grafana | logger=migrator t=2024-04-25T10:41:35.733449557Z level=info msg="Migration successfully executed" id="add index in alert_definition_version table on alert_definition_id and version columns" duration=1.712224ms
kafka | sasl.login.retry.backoff.ms = 100
policy-apex-pdp | [2024-04-25T10:42:32.893+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8
policy-db-migrator | --------------
grafana | logger=migrator t=2024-04-25T10:41:35.739198814Z level=info msg="Executing migration" id="add index in alert_definition_version table on alert_definition_uid and version columns"
kafka | sasl.mechanism.controller.protocol = GSSAPI
policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"6ebfae58-4293-4386-b05b-c7000ecfd79f","responseStatus":"SUCCESS","responseMessage":"Pdp update successful."},"messageName":"PDP_STATUS","requestId":"c6c6ff3e-5ca5-4702-be12-6a402ad60812","timestampMs":1714041752879,"name":"apex-48c572ef-ecee-4a67-903e-0092df74361b","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
policy-pap | sasl.login.callback.handler.class = null
policy-db-migrator |
grafana | logger=migrator t=2024-04-25T10:41:35.740952029Z level=info msg="Migration successfully executed" id="add index in alert_definition_version table on alert_definition_uid and version columns" duration=1.752235ms
kafka | sasl.mechanism.inter.broker.protocol = GSSAPI
policy-apex-pdp | [2024-04-25T10:42:32.893+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS
policy-pap | sasl.login.class = null
policy-db-migrator |
grafana | logger=migrator t=2024-04-25T10:41:35.748275045Z level=info msg="Executing migration" id="alter alert_definition_version table data column to mediumtext in mysql"
kafka | sasl.oauthbearer.clock.skew.seconds = 30
policy-apex-pdp | [2024-04-25T10:42:32.893+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
policy-pap | sasl.login.connect.timeout.ms = null
policy-db-migrator | > upgrade 0370-jpatoscarelationshiptype_metadata.sql
grafana | logger=migrator t=2024-04-25T10:41:35.748340497Z level=info msg="Migration successfully executed" id="alter alert_definition_version table data column to mediumtext in mysql" duration=65.782µs
kafka | sasl.oauthbearer.expected.audience = null
policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","policies":[],"messageName":"PDP_STATUS","requestId":"a220bbc8-0a2a-4e9c-95c6-992c77aee261","timestampMs":1714041752879,"name":"apex-48c572ef-ecee-4a67-903e-0092df74361b","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
policy-pap | sasl.login.read.timeout.ms = null
policy-db-migrator | --------------
grafana | logger=migrator t=2024-04-25T10:41:35.753321694Z level=info msg="Executing migration" id="drop alert_definition_version table"
kafka | sasl.oauthbearer.expected.issuer = null
policy-apex-pdp | [2024-04-25T10:42:32.894+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS
policy-pap | sasl.login.refresh.buffer.seconds = 300
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarelationshiptype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL)
grafana | logger=migrator t=2024-04-25T10:41:35.754724899Z level=info msg="Migration successfully executed" id="drop alert_definition_version table" duration=1.401855ms
kafka | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
policy-apex-pdp | [2024-04-25T10:42:32.910+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
policy-pap | sasl.login.refresh.min.period.seconds = 60
policy-db-migrator | --------------
grafana | logger=migrator t=2024-04-25T10:41:35.760816126Z level=info msg="Executing migration" id="create alert_instance table"
kafka | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
policy-apex-pdp | {"source":"pap-af9137d4-c462-4753-8fd3-bdb6b1fa2cb4","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"c4d7f70b-26e2-41cb-907b-c7984f55b821","timestampMs":1714041752797,"name":"apex-48c572ef-ecee-4a67-903e-0092df74361b","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
policy-pap | sasl.login.refresh.window.factor = 0.8
policy-db-migrator |
kafka | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
policy-apex-pdp | [2024-04-25T10:42:32.913+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [OUT|KAFKA|policy-pdp-pap]
grafana | logger=migrator t=2024-04-25T10:41:35.762416336Z level=info msg="Migration successfully executed" id="create alert_instance table" duration=1.596171ms
policy-pap | sasl.login.refresh.window.jitter = 0.05
policy-db-migrator |
kafka | sasl.oauthbearer.jwks.endpoint.url = null
policy-apex-pdp | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpStateChange","policies":[],"response":{"responseTo":"c4d7f70b-26e2-41cb-907b-c7984f55b821","responseStatus":"SUCCESS","responseMessage":"State changed to active. No policies found."},"messageName":"PDP_STATUS","requestId":"4ea67bf2-9839-4197-ab46-ed6c3acb2c53","timestampMs":1714041752912,"name":"apex-48c572ef-ecee-4a67-903e-0092df74361b","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
grafana | logger=migrator t=2024-04-25T10:41:35.768557022Z level=info msg="Executing migration" id="add index in alert_instance table on def_org_id, def_uid and current_state columns"
policy-pap | sasl.login.retry.backoff.max.ms = 10000
policy-db-migrator | > upgrade 0380-jpatoscarelationshiptype_properties.sql
kafka | sasl.oauthbearer.scope.claim.name = scope
policy-apex-pdp | [2024-04-25T10:42:32.923+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
grafana | logger=migrator t=2024-04-25T10:41:35.770281487Z level=info msg="Migration successfully executed" id="add index in alert_instance table on def_org_id, def_uid and current_state columns" duration=1.723675ms
policy-pap | sasl.login.retry.backoff.ms = 100
policy-db-migrator | --------------
kafka | sasl.oauthbearer.sub.claim.name = sub
policy-apex-pdp | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpStateChange","policies":[],"response":{"responseTo":"c4d7f70b-26e2-41cb-907b-c7984f55b821","responseStatus":"SUCCESS","responseMessage":"State changed to active. No policies found."},"messageName":"PDP_STATUS","requestId":"4ea67bf2-9839-4197-ab46-ed6c3acb2c53","timestampMs":1714041752912,"name":"apex-48c572ef-ecee-4a67-903e-0092df74361b","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
grafana | logger=migrator t=2024-04-25T10:41:35.775582382Z level=info msg="Executing migration" id="add index in alert_instance table on def_org_id, current_state columns"
policy-pap | sasl.mechanism = GSSAPI
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarelationshiptype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL)
kafka | sasl.oauthbearer.token.endpoint.url = null
policy-apex-pdp | [2024-04-25T10:42:32.924+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS
grafana | logger=migrator t=2024-04-25T10:41:35.776921436Z level=info msg="Migration successfully executed" id="add index in alert_instance table on def_org_id, current_state columns" duration=1.339664ms
policy-pap | sasl.oauthbearer.clock.skew.seconds = 30
policy-db-migrator | --------------
kafka | sasl.server.callback.handler.class = null
policy-apex-pdp | [2024-04-25T10:42:32.969+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
grafana | logger=migrator t=2024-04-25T10:41:35.782647552Z level=info msg="Executing migration" id="add column current_state_end to alert_instance"
policy-pap | sasl.oauthbearer.expected.audience = null
policy-db-migrator |
kafka | sasl.server.max.receive.size = 524288
policy-apex-pdp | {"source":"pap-af9137d4-c462-4753-8fd3-bdb6b1fa2cb4","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"5e9e39dd-7384-4924-84ba-e31746826a81","timestampMs":1714041752933,"name":"apex-48c572ef-ecee-4a67-903e-0092df74361b","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
grafana | logger=migrator t=2024-04-25T10:41:35.791300743Z level=info msg="Migration successfully executed" id="add column current_state_end to alert_instance" duration=8.650961ms
policy-pap | sasl.oauthbearer.expected.issuer = null
policy-db-migrator |
kafka | security.inter.broker.protocol = PLAINTEXT
policy-apex-pdp | [2024-04-25T10:42:32.973+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [OUT|KAFKA|policy-pdp-pap]
grafana | logger=migrator t=2024-04-25T10:41:35.798614829Z level=info msg="Executing migration" id="remove index def_org_id, def_uid, current_state on alert_instance"
policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
policy-db-migrator | > upgrade 0390-jpatoscarequirement_metadata.sql
kafka | security.providers = null
policy-apex-pdp | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"5e9e39dd-7384-4924-84ba-e31746826a81","responseStatus":"SUCCESS","responseMessage":"Pdp already updated"},"messageName":"PDP_STATUS","requestId":"e2d900bb-8aba-4cb6-9a8b-2eab60d13d50","timestampMs":1714041752973,"name":"apex-48c572ef-ecee-4a67-903e-0092df74361b","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
grafana | logger=migrator t=2024-04-25T10:41:35.799351678Z level=info msg="Migration successfully executed" id="remove index def_org_id, def_uid, current_state on alert_instance" duration=736.409µs
policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
policy-db-migrator | --------------
kafka | server.max.startup.time.ms = 9223372036854775807
policy-apex-pdp | [2024-04-25T10:42:32.982+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
grafana | logger=migrator t=2024-04-25T10:41:35.804805677Z level=info msg="Executing migration" id="remove index def_org_id, current_state on alert_instance"
policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarequirement_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL)
kafka | socket.connection.setup.timeout.max.ms = 30000
policy-apex-pdp | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"5e9e39dd-7384-4924-84ba-e31746826a81","responseStatus":"SUCCESS","responseMessage":"Pdp already updated"},"messageName":"PDP_STATUS","requestId":"e2d900bb-8aba-4cb6-9a8b-2eab60d13d50","timestampMs":1714041752973,"name":"apex-48c572ef-ecee-4a67-903e-0092df74361b","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
grafana | logger=migrator t=2024-04-25T10:41:35.806674165Z level=info msg="Migration successfully executed" id="remove index def_org_id, current_state on alert_instance" duration=1.870718ms
policy-pap | sasl.oauthbearer.jwks.endpoint.url = null
policy-db-migrator | --------------
kafka | socket.connection.setup.timeout.ms = 10000
policy-apex-pdp | [2024-04-25T10:42:32.983+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS
grafana | logger=migrator t=2024-04-25T10:41:35.813337085Z level=info msg="Executing migration" id="rename def_org_id to rule_org_id in alert_instance"
policy-pap | sasl.oauthbearer.scope.claim.name = scope
policy-db-migrator |
kafka | socket.listen.backlog.size = 50
policy-apex-pdp | [2024-04-25T10:42:56.166+00:00|INFO|RequestLog|qtp1863100050-33] 172.17.0.5 - policyadmin [25/Apr/2024:10:42:56 +0000] "GET /metrics HTTP/1.1" 200 10650 "-" "Prometheus/2.51.2"
grafana | logger=migrator t=2024-04-25T10:41:35.84059952Z level=info msg="Migration successfully executed" id="rename def_org_id to rule_org_id in alert_instance" duration=27.263396ms
policy-pap | sasl.oauthbearer.sub.claim.name = sub
policy-db-migrator |
kafka | socket.receive.buffer.bytes = 102400
policy-apex-pdp | [2024-04-25T10:43:56.083+00:00|INFO|RequestLog|qtp1863100050-28] 172.17.0.5 - policyadmin [25/Apr/2024:10:43:56 +0000] "GET /metrics HTTP/1.1" 200 10649 "-" "Prometheus/2.51.2"
grafana | logger=migrator t=2024-04-25T10:41:35.872131104Z level=info msg="Executing migration" id="rename def_uid to rule_uid in alert_instance"
policy-pap | sasl.oauthbearer.token.endpoint.url = null
policy-db-migrator | > upgrade 0400-jpatoscarequirement_occurrences.sql
kafka | socket.request.max.bytes = 104857600
grafana | logger=migrator t=2024-04-25T10:41:35.899147834Z level=info msg="Migration successfully executed" id="rename def_uid to rule_uid in alert_instance" duration=27.01707ms
policy-pap | security.protocol = PLAINTEXT
policy-db-migrator | --------------
kafka | socket.send.buffer.bytes = 102400
grafana | logger=migrator t=2024-04-25T10:41:35.902386746Z level=info msg="Executing migration" id="add index rule_org_id, rule_uid, current_state on alert_instance"
policy-pap | security.providers = null
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarequirement_occurrences (name VARCHAR(120) NULL, version VARCHAR(20) NULL, OCCURRENCES INT DEFAULT NULL)
kafka | ssl.cipher.suites = []
grafana | logger=migrator t=2024-04-25T10:41:35.903207097Z level=info msg="Migration successfully executed" id="add index rule_org_id, rule_uid, current_state on alert_instance" duration=820.231µs
policy-pap | send.buffer.bytes = 131072
policy-db-migrator | --------------
kafka | ssl.client.auth = none
grafana | logger=migrator t=2024-04-25T10:41:35.908303877Z level=info msg="Executing migration" id="add index rule_org_id, current_state on alert_instance"
policy-pap | session.timeout.ms = 45000
policy-db-migrator |
kafka | ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
grafana | logger=migrator t=2024-04-25T10:41:35.91036022Z level=info msg="Migration successfully executed" id="add index rule_org_id, current_state on alert_instance" duration=2.054643ms
policy-pap | socket.connection.setup.timeout.max.ms = 30000
policy-db-migrator |
kafka | ssl.endpoint.identification.algorithm = https
grafana | logger=migrator
t=2024-04-25T10:41:35.916296701Z level=info msg="Executing migration" id="add current_reason column related to current_state" policy-pap | socket.connection.setup.timeout.ms = 10000 policy-db-migrator | > upgrade 0410-jpatoscarequirement_properties.sql kafka | ssl.engine.factory.class = null grafana | logger=migrator t=2024-04-25T10:41:35.922503859Z level=info msg="Migration successfully executed" id="add current_reason column related to current_state" duration=6.207558ms policy-pap | ssl.cipher.suites = null policy-db-migrator | -------------- kafka | ssl.key.password = null grafana | logger=migrator t=2024-04-25T10:41:35.929051117Z level=info msg="Executing migration" id="add result_fingerprint column to alert_instance" policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarequirement_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGTEXT NULL, PROPERTIES_KEY VARCHAR(255) NULL) kafka | ssl.keymanager.algorithm = SunX509 grafana | logger=migrator t=2024-04-25T10:41:35.934947747Z level=info msg="Migration successfully executed" id="add result_fingerprint column to alert_instance" duration=5.89557ms policy-pap | ssl.endpoint.identification.algorithm = https policy-db-migrator | -------------- kafka | ssl.keystore.certificate.chain = null grafana | logger=migrator t=2024-04-25T10:41:35.941480114Z level=info msg="Executing migration" id="create alert_rule table" policy-pap | ssl.engine.factory.class = null policy-db-migrator | kafka | ssl.keystore.key = null grafana | logger=migrator t=2024-04-25T10:41:35.942553432Z level=info msg="Migration successfully executed" id="create alert_rule table" duration=1.072818ms policy-pap | ssl.key.password = null policy-db-migrator | kafka | ssl.keystore.location = null grafana | logger=migrator t=2024-04-25T10:41:35.947924748Z level=info msg="Executing migration" id="add index in alert_rule on org_id and title columns" policy-pap | 
ssl.keymanager.algorithm = SunX509 policy-db-migrator | > upgrade 0420-jpatoscaservicetemplate_metadata.sql kafka | ssl.keystore.password = null grafana | logger=migrator t=2024-04-25T10:41:35.949676813Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id and title columns" duration=1.746775ms policy-pap | ssl.keystore.certificate.chain = null policy-db-migrator | -------------- kafka | ssl.keystore.type = JKS grafana | logger=migrator t=2024-04-25T10:41:35.955797039Z level=info msg="Executing migration" id="add index in alert_rule on org_id and uid columns" policy-pap | ssl.keystore.key = null policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscaservicetemplate_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) kafka | ssl.principal.mapping.rules = DEFAULT grafana | logger=migrator t=2024-04-25T10:41:35.956913118Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id and uid columns" duration=1.115789ms policy-pap | ssl.keystore.location = null policy-db-migrator | -------------- kafka | ssl.protocol = TLSv1.3 grafana | logger=migrator t=2024-04-25T10:41:35.963582848Z level=info msg="Executing migration" id="add index in alert_rule on org_id, namespace_uid, group_uid columns" policy-pap | ssl.keystore.password = null policy-db-migrator | kafka | ssl.provider = null grafana | logger=migrator t=2024-04-25T10:41:35.965584079Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, namespace_uid, group_uid columns" duration=2.000961ms policy-pap | ssl.keystore.type = JKS policy-db-migrator | kafka | ssl.secure.random.implementation = null grafana | logger=migrator t=2024-04-25T10:41:35.971324975Z level=info msg="Executing migration" id="alter alert_rule table data column to mediumtext in mysql" policy-pap | ssl.protocol = TLSv1.3 policy-db-migrator | > upgrade 
0430-jpatoscatopologytemplate_inputs.sql kafka | ssl.trustmanager.algorithm = PKIX grafana | logger=migrator t=2024-04-25T10:41:35.971412888Z level=info msg="Migration successfully executed" id="alter alert_rule table data column to mediumtext in mysql" duration=88.833µs policy-pap | ssl.provider = null policy-db-migrator | -------------- kafka | ssl.truststore.certificates = null grafana | logger=migrator t=2024-04-25T10:41:35.977124573Z level=info msg="Executing migration" id="add column for to alert_rule" policy-pap | ssl.secure.random.implementation = null policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscatopologytemplate_inputs (parentLocalName VARCHAR(120) NULL, localName VARCHAR(120) NULL, parentKeyVersion VARCHAR(15) NULL, parentKeyName VARCHAR(120) NULL, INPUTS LONGBLOB DEFAULT NULL, INPUTS_KEY VARCHAR(255) NULL) kafka | ssl.truststore.location = null grafana | logger=migrator t=2024-04-25T10:41:35.986462681Z level=info msg="Migration successfully executed" id="add column for to alert_rule" duration=9.336148ms policy-pap | ssl.trustmanager.algorithm = PKIX policy-db-migrator | -------------- kafka | ssl.truststore.password = null grafana | logger=migrator t=2024-04-25T10:41:35.992896926Z level=info msg="Executing migration" id="add column annotations to alert_rule" policy-pap | ssl.truststore.certificates = null policy-db-migrator | kafka | ssl.truststore.type = JKS grafana | logger=migrator t=2024-04-25T10:41:35.997133444Z level=info msg="Migration successfully executed" id="add column annotations to alert_rule" duration=4.237477ms policy-pap | ssl.truststore.location = null policy-db-migrator | kafka | transaction.abort.timed.out.transaction.cleanup.interval.ms = 10000 grafana | logger=migrator t=2024-04-25T10:41:36.002642324Z level=info msg="Executing migration" id="add column labels to alert_rule" policy-pap | ssl.truststore.password = null policy-db-migrator | > upgrade 0440-pdpgroup_pdpsubgroup.sql kafka | transaction.max.timeout.ms = 900000 
grafana | logger=migrator t=2024-04-25T10:41:36.008555158Z level=info msg="Migration successfully executed" id="add column labels to alert_rule" duration=5.912284ms policy-pap | ssl.truststore.type = JKS policy-db-migrator | -------------- kafka | transaction.partition.verification.enable = true grafana | logger=migrator t=2024-04-25T10:41:36.014500438Z level=info msg="Executing migration" id="remove unique index from alert_rule on org_id, title columns" policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpgroup_pdpsubgroup (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_PDPGROUP_PDPSUBGROUP (name, version, parentLocalName, localName, parentKeyVersion, parentKeyName)) kafka | transaction.remove.expired.transaction.cleanup.interval.ms = 3600000 grafana | logger=migrator t=2024-04-25T10:41:36.015522025Z level=info msg="Migration successfully executed" id="remove unique index from alert_rule on org_id, title columns" duration=1.021298ms policy-pap | policy-db-migrator | -------------- kafka | transaction.state.log.load.buffer.size = 5242880 grafana | logger=migrator t=2024-04-25T10:41:36.022277554Z level=info msg="Executing migration" id="add index in alert_rule on org_id, namespase_uid and title columns" policy-pap | [2024-04-25T10:42:09.133+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 policy-db-migrator | kafka | transaction.state.log.min.isr = 2 grafana | logger=migrator t=2024-04-25T10:41:36.023966659Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, namespase_uid and title columns" duration=1.688206ms policy-pap | [2024-04-25T10:42:09.133+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 policy-db-migrator | kafka | 
transaction.state.log.num.partitions = 50 grafana | logger=migrator t=2024-04-25T10:41:36.0289445Z level=info msg="Executing migration" id="add dashboard_uid column to alert_rule" policy-pap | [2024-04-25T10:42:09.133+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1714041729133 policy-db-migrator | > upgrade 0450-pdpgroup.sql kafka | transaction.state.log.replication.factor = 3 grafana | logger=migrator t=2024-04-25T10:41:36.036762457Z level=info msg="Migration successfully executed" id="add dashboard_uid column to alert_rule" duration=7.815677ms policy-pap | [2024-04-25T10:42:09.133+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-policy-pap-2, groupId=policy-pap] Subscribed to topic(s): policy-pdp-pap policy-db-migrator | -------------- kafka | transaction.state.log.segment.bytes = 104857600 grafana | logger=migrator t=2024-04-25T10:41:36.041182594Z level=info msg="Executing migration" id="add panel_id column to alert_rule" policy-pap | [2024-04-25T10:42:09.464+00:00|INFO|PapDatabaseInitializer|main] Created initial pdpGroup in DB - PdpGroups(groups=[PdpGroup(name=defaultGroup, description=The default group that registers all supported policy types and pdps., pdpGroupState=ACTIVE, properties=null, pdpSubgroups=[PdpSubGroup(pdpType=apex, supportedPolicyTypes=[onap.policies.controlloop.operational.common.Apex 1.0.0, onap.policies.native.Apex 1.0.0], policies=[], currentInstanceCount=0, desiredInstanceCount=1, properties=null, pdpInstances=null)])]) from /opt/app/policy/pap/etc/mounted/groups.json policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpgroup (`DESCRIPTION` VARCHAR(255) NULL, PDPGROUPSTATE INT DEFAULT NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_PDPGROUP (name, version)) kafka | transactional.id.expiration.ms = 604800000 grafana | logger=migrator t=2024-04-25T10:41:36.04709382Z level=info msg="Migration successfully executed" id="add panel_id column to alert_rule" duration=5.910426ms policy-db-migrator | 
-------------- kafka | unclean.leader.election.enable = false policy-pap | [2024-04-25T10:42:09.614+00:00|WARN|JpaBaseConfiguration$JpaWebConfiguration|main] spring.jpa.open-in-view is enabled by default. Therefore, database queries may be performed during view rendering. Explicitly configure spring.jpa.open-in-view to disable this warning policy-db-migrator | grafana | logger=migrator t=2024-04-25T10:41:36.054162447Z level=info msg="Executing migration" id="add index in alert_rule on org_id, dashboard_uid and panel_id columns" kafka | unstable.api.versions.enable = false policy-pap | [2024-04-25T10:42:09.831+00:00|INFO|DefaultSecurityFilterChain|main] Will secure any request with [org.springframework.security.web.session.DisableEncodeUrlFilter@3b1137b0, org.springframework.security.web.context.request.async.WebAsyncManagerIntegrationFilter@21ba0d33, org.springframework.security.web.context.SecurityContextHolderFilter@3c20e9d6, org.springframework.security.web.header.HeaderWriterFilter@4c9d833, org.springframework.security.web.authentication.logout.LogoutFilter@8c18bde, org.springframework.security.web.authentication.www.BasicAuthenticationFilter@6d9ee75a, org.springframework.security.web.savedrequest.RequestCacheAwareFilter@42805abe, org.springframework.security.web.servletapi.SecurityContextHolderAwareRequestFilter@152035eb, org.springframework.security.web.authentication.AnonymousAuthenticationFilter@afb7b03, org.springframework.security.web.access.ExceptionTranslationFilter@7836c79, org.springframework.security.web.access.intercept.AuthorizationFilter@78ea700f] policy-db-migrator | grafana | logger=migrator t=2024-04-25T10:41:36.055303028Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, dashboard_uid and panel_id columns" duration=1.140461ms kafka | zookeeper.clientCnxnSocket = null policy-pap | [2024-04-25T10:42:10.628+00:00|INFO|EndpointLinksResolver|main] Exposing 3 endpoint(s) beneath base path '' policy-db-migrator | 
> upgrade 0460-pdppolicystatus.sql grafana | logger=migrator t=2024-04-25T10:41:36.059022836Z level=info msg="Executing migration" id="add rule_group_idx column to alert_rule" kafka | zookeeper.connect = zookeeper:2181 policy-pap | [2024-04-25T10:42:10.734+00:00|INFO|Http11NioProtocol|main] Starting ProtocolHandler ["http-nio-6969"] policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-25T10:41:36.069781451Z level=info msg="Migration successfully executed" id="add rule_group_idx column to alert_rule" duration=10.758705ms kafka | zookeeper.connection.timeout.ms = null policy-pap | [2024-04-25T10:42:10.761+00:00|INFO|TomcatWebServer|main] Tomcat started on port(s): 6969 (http) with context path '/policy/pap/v1' policy-db-migrator | CREATE TABLE IF NOT EXISTS pdppolicystatus (DEPLOY BOOLEAN DEFAULT 0, PDPGROUP VARCHAR(255) DEFAULT NULL, PDPTYPE VARCHAR(255) DEFAULT NULL, STATE INT DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, PRIMARY KEY PK_PDPPOLICYSTATUS (parentLocalName, localName, parentKeyVersion, parentKeyName)) grafana | logger=migrator t=2024-04-25T10:41:36.075445431Z level=info msg="Executing migration" id="add is_paused column to alert_rule table" kafka | zookeeper.max.in.flight.requests = 10 policy-pap | [2024-04-25T10:42:10.778+00:00|INFO|ServiceManager|main] Policy PAP starting policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-25T10:41:36.08109802Z level=info msg="Migration successfully executed" id="add is_paused column to alert_rule table" duration=5.650919ms kafka | zookeeper.metadata.migration.enable = false policy-pap | [2024-04-25T10:42:10.778+00:00|INFO|ServiceManager|main] Policy PAP starting Meter Registry policy-db-migrator | grafana | logger=migrator t=2024-04-25T10:41:36.084540321Z level=info msg="Executing migration" id="fix 
is_paused column for alert_rule table" kafka | zookeeper.metadata.migration.min.batch.size = 200 policy-pap | [2024-04-25T10:42:10.778+00:00|INFO|ServiceManager|main] Policy PAP starting PAP parameters policy-db-migrator | grafana | logger=migrator t=2024-04-25T10:41:36.084779597Z level=info msg="Migration successfully executed" id="fix is_paused column for alert_rule table" duration=238.416µs kafka | zookeeper.session.timeout.ms = 18000 policy-pap | [2024-04-25T10:42:10.779+00:00|INFO|ServiceManager|main] Policy PAP starting Pdp Heartbeat Listener policy-db-migrator | > upgrade 0470-pdp.sql grafana | logger=migrator t=2024-04-25T10:41:36.093952421Z level=info msg="Executing migration" id="create alert_rule_version table" kafka | zookeeper.set.acl = false policy-pap | [2024-04-25T10:42:10.779+00:00|INFO|ServiceManager|main] Policy PAP starting Response Request ID Dispatcher policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-25T10:41:36.095202373Z level=info msg="Migration successfully executed" id="create alert_rule_version table" duration=1.245322ms kafka | zookeeper.ssl.cipher.suites = null policy-pap | [2024-04-25T10:42:10.780+00:00|INFO|ServiceManager|main] Policy PAP starting Heartbeat Request ID Dispatcher policy-db-migrator | CREATE TABLE IF NOT EXISTS pdp (HEALTHY INT DEFAULT NULL, MESSAGE VARCHAR(255) DEFAULT NULL, PDPSTATE INT DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_PDP (parentLocalName, localName, parentKeyVersion, parentKeyName)) grafana | logger=migrator t=2024-04-25T10:41:36.101540121Z level=info msg="Executing migration" id="add index in alert_rule_version table on rule_org_id, rule_uid and version columns" kafka | zookeeper.ssl.client.enable = false policy-pap | [2024-04-25T10:42:10.780+00:00|INFO|ServiceManager|main] Policy PAP starting Response Message Dispatcher policy-db-migrator | 
-------------- grafana | logger=migrator t=2024-04-25T10:41:36.103530044Z level=info msg="Migration successfully executed" id="add index in alert_rule_version table on rule_org_id, rule_uid and version columns" duration=1.989083ms kafka | zookeeper.ssl.crl.enable = false policy-pap | [2024-04-25T10:42:10.782+00:00|INFO|TopicBase|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=ae8023b6-4521-455f-bfa2-c4d8e9909c4a, consumerInstance=policy-pap, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: registering org.onap.policy.common.endpoints.listeners.MessageTypeDispatcher@4271b748 policy-db-migrator | grafana | logger=migrator t=2024-04-25T10:41:36.108264379Z level=info msg="Executing migration" id="add index in alert_rule_version table on rule_org_id, rule_namespace_uid and rule_group columns" kafka | zookeeper.ssl.enabled.protocols = null policy-pap | [2024-04-25T10:42:10.795+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=ae8023b6-4521-455f-bfa2-c4d8e9909c4a, consumerInstance=policy-pap, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting policy-db-migrator | grafana | 
logger=migrator t=2024-04-25T10:41:36.109524132Z level=info msg="Migration successfully executed" id="add index in alert_rule_version table on rule_org_id, rule_namespace_uid and rule_group columns" duration=1.260833ms kafka | zookeeper.ssl.endpoint.identification.algorithm = HTTPS policy-pap | [2024-04-25T10:42:10.795+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: policy-db-migrator | > upgrade 0480-pdpstatistics.sql grafana | logger=migrator t=2024-04-25T10:41:36.116777535Z level=info msg="Executing migration" id="alter alert_rule_version table data column to mediumtext in mysql" kafka | zookeeper.ssl.keystore.location = null policy-pap | allow.auto.create.topics = true policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-25T10:41:36.117101253Z level=info msg="Migration successfully executed" id="alter alert_rule_version table data column to mediumtext in mysql" duration=324.599µs kafka | zookeeper.ssl.keystore.password = null policy-pap | auto.commit.interval.ms = 5000 policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpstatistics (PDPGROUPNAME VARCHAR(120) NULL, PDPSUBGROUPNAME VARCHAR(120) NULL, POLICYDEPLOYCOUNT BIGINT DEFAULT NULL, POLICYDEPLOYFAILCOUNT BIGINT DEFAULT NULL, POLICYDEPLOYSUCCESSCOUNT BIGINT DEFAULT NULL, POLICYEXECUTEDCOUNT BIGINT DEFAULT NULL, POLICYEXECUTEDFAILCOUNT BIGINT DEFAULT NULL, POLICYEXECUTEDSUCCESSCOUNT BIGINT DEFAULT NULL, timeStamp datetime NOT NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_PDPSTATISTICS (timeStamp, name, version)) grafana | logger=migrator t=2024-04-25T10:41:36.122316001Z level=info msg="Executing migration" id="add column for to alert_rule_version" kafka | zookeeper.ssl.keystore.type = null policy-pap | auto.include.jmx.reporter = true policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-25T10:41:36.132615473Z level=info msg="Migration successfully executed" id="add column for to alert_rule_version" duration=10.299752ms kafka | 
zookeeper.ssl.ocsp.enable = false policy-pap | auto.offset.reset = latest policy-db-migrator | grafana | logger=migrator t=2024-04-25T10:41:36.138146609Z level=info msg="Executing migration" id="add column annotations to alert_rule_version" kafka | zookeeper.ssl.protocol = TLSv1.2 policy-pap | bootstrap.servers = [kafka:9092] policy-db-migrator | grafana | logger=migrator t=2024-04-25T10:41:36.143172113Z level=info msg="Migration successfully executed" id="add column annotations to alert_rule_version" duration=5.023894ms kafka | zookeeper.ssl.truststore.location = null policy-pap | check.crcs = true policy-db-migrator | > upgrade 0490-pdpsubgroup_pdp.sql grafana | logger=migrator t=2024-04-25T10:41:36.149047428Z level=info msg="Executing migration" id="add column labels to alert_rule_version" kafka | zookeeper.ssl.truststore.password = null policy-pap | client.dns.lookup = use_all_dns_ips policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-25T10:41:36.15551311Z level=info msg="Migration successfully executed" id="add column labels to alert_rule_version" duration=6.464282ms kafka | zookeeper.ssl.truststore.type = null policy-pap | client.id = consumer-ae8023b6-4521-455f-bfa2-c4d8e9909c4a-3 grafana | logger=migrator t=2024-04-25T10:41:36.160902272Z level=info msg="Executing migration" id="add rule_group_idx column to alert_rule_version" policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpsubgroup_pdp (pdpParentKeyName VARCHAR(120) NOT NULL, pdpParentKeyVersion VARCHAR(15) NOT NULL, pdpParentLocalName VARCHAR(120) NOT NULL, pdpLocalName VARCHAR(120) NOT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_PDPSUBGROUP_PDP (pdpParentKeyName, pdpParentKeyVersion, pdpParentLocalName, pdpLocalName, parentLocalName, localName, parentKeyVersion, parentKeyName)) kafka | (kafka.server.KafkaConfig) policy-pap | client.rack = grafana | 
logger=migrator t=2024-04-25T10:41:36.167492697Z level=info msg="Migration successfully executed" id="add rule_group_idx column to alert_rule_version" duration=6.590075ms policy-db-migrator | -------------- kafka | [2024-04-25 10:41:40,921] INFO [ThrottledChannelReaper-Fetch]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper) policy-pap | connections.max.idle.ms = 540000 grafana | logger=migrator t=2024-04-25T10:41:36.171625875Z level=info msg="Executing migration" id="add is_paused column to alert_rule_versions table" policy-db-migrator | kafka | [2024-04-25 10:41:40,924] INFO [ThrottledChannelReaper-Produce]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper) policy-pap | default.api.timeout.ms = 60000 grafana | logger=migrator t=2024-04-25T10:41:36.177976394Z level=info msg="Migration successfully executed" id="add is_paused column to alert_rule_versions table" duration=6.349709ms policy-db-migrator | kafka | [2024-04-25 10:41:40,925] INFO [ThrottledChannelReaper-Request]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper) policy-pap | enable.auto.commit = true grafana | logger=migrator t=2024-04-25T10:41:36.183571572Z level=info msg="Executing migration" id="fix is_paused column for alert_rule_version table" policy-db-migrator | > upgrade 0500-pdpsubgroup.sql kafka | [2024-04-25 10:41:40,927] INFO [ThrottledChannelReaper-ControllerMutation]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper) policy-pap | exclude.internal.topics = true grafana | logger=migrator t=2024-04-25T10:41:36.183799938Z level=info msg="Migration successfully executed" id="fix is_paused column for alert_rule_version table" duration=228.946µs policy-db-migrator | -------------- kafka | [2024-04-25 10:41:40,953] INFO Loading logs from log dirs ArraySeq(/var/lib/kafka/data) (kafka.log.LogManager) policy-pap | fetch.max.bytes = 52428800 grafana | logger=migrator t=2024-04-25T10:41:36.188756909Z level=info msg="Executing migration" 
id=create_alert_configuration_table policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpsubgroup (CURRENTINSTANCECOUNT INT DEFAULT NULL, DESIREDINSTANCECOUNT INT DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_PDPSUBGROUP (parentLocalName, localName, parentKeyVersion, parentKeyName)) kafka | [2024-04-25 10:41:40,957] INFO No logs found to be loaded in /var/lib/kafka/data (kafka.log.LogManager) policy-pap | fetch.max.wait.ms = 500 grafana | logger=migrator t=2024-04-25T10:41:36.189609522Z level=info msg="Migration successfully executed" id=create_alert_configuration_table duration=850.983µs policy-db-migrator | -------------- kafka | [2024-04-25 10:41:40,964] INFO Loaded 0 logs in 10ms (kafka.log.LogManager) policy-pap | fetch.min.bytes = 1 grafana | logger=migrator t=2024-04-25T10:41:36.224344191Z level=info msg="Executing migration" id="Add column default in alert_configuration" policy-db-migrator | kafka | [2024-04-25 10:41:40,966] INFO Starting log cleanup with a period of 300000 ms. (kafka.log.LogManager) policy-pap | group.id = ae8023b6-4521-455f-bfa2-c4d8e9909c4a grafana | logger=migrator t=2024-04-25T10:41:36.230554305Z level=info msg="Migration successfully executed" id="Add column default in alert_configuration" duration=6.210934ms policy-db-migrator | kafka | [2024-04-25 10:41:40,967] INFO Starting log flusher with a default period of 9223372036854775807 ms. 
(kafka.log.LogManager) policy-pap | group.instance.id = null grafana | logger=migrator t=2024-04-25T10:41:36.23751575Z level=info msg="Executing migration" id="alert alert_configuration alertmanager_configuration column from TEXT to MEDIUMTEXT if mysql" policy-db-migrator | > upgrade 0510-toscacapabilityassignment.sql kafka | [2024-04-25 10:41:40,977] INFO Starting the log cleaner (kafka.log.LogCleaner) policy-pap | heartbeat.interval.ms = 3000 grafana | logger=migrator t=2024-04-25T10:41:36.237621532Z level=info msg="Migration successfully executed" id="alert alert_configuration alertmanager_configuration column from TEXT to MEDIUMTEXT if mysql" duration=107.242µs policy-db-migrator | -------------- kafka | [2024-04-25 10:41:41,018] INFO [kafka-log-cleaner-thread-0]: Starting (kafka.log.LogCleaner$CleanerThread) policy-pap | interceptor.classes = [] grafana | logger=migrator t=2024-04-25T10:41:36.257463897Z level=info msg="Executing migration" id="add column org_id in alert_configuration" policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilityassignment (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, type_name VARCHAR(255) NULL, type_version VARCHAR(255) NULL, PRIMARY KEY PK_TOSCACAPABILITYASSIGNMENT(name, version)) kafka | [2024-04-25 10:41:41,035] INFO [feature-zk-node-event-process-thread]: Starting (kafka.server.FinalizedFeatureChangeListener$ChangeNotificationProcessorThread) policy-pap | internal.leave.group.on.close = true grafana | logger=migrator t=2024-04-25T10:41:36.263811525Z level=info msg="Migration successfully executed" id="add column org_id in alert_configuration" duration=6.347488ms policy-db-migrator | -------------- kafka | [2024-04-25 10:41:41,048] INFO Feature ZK node at path: /feature does not exist (kafka.server.FinalizedFeatureChangeListener) policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false 
grafana | logger=migrator t=2024-04-25T10:41:36.268656364Z level=info msg="Executing migration" id="add index in alert_configuration table on org_id column" policy-db-migrator | kafka | [2024-04-25 10:41:41,086] INFO [zk-broker-1-to-controller-forwarding-channel-manager]: Starting (kafka.server.BrokerToControllerRequestThread) policy-pap | isolation.level = read_uncommitted grafana | logger=migrator t=2024-04-25T10:41:36.269753302Z level=info msg="Migration successfully executed" id="add index in alert_configuration table on org_id column" duration=1.096969ms policy-db-migrator | kafka | [2024-04-25 10:41:41,389] INFO Updated connection-accept-rate max connection creation rate to 2147483647 (kafka.network.ConnectionQuotas) policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer grafana | logger=migrator t=2024-04-25T10:41:36.274730054Z level=info msg="Executing migration" id="add configuration_hash column to alert_configuration" policy-db-migrator | > upgrade 0520-toscacapabilityassignments.sql kafka | [2024-04-25 10:41:41,407] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1] Created data-plane acceptor and processors for endpoint : ListenerName(PLAINTEXT) (kafka.network.SocketServer) policy-pap | max.partition.fetch.bytes = 1048576 grafana | logger=migrator t=2024-04-25T10:41:36.280962859Z level=info msg="Migration successfully executed" id="add configuration_hash column to alert_configuration" duration=6.231445ms policy-db-migrator | -------------- kafka | [2024-04-25 10:41:41,407] INFO Updated connection-accept-rate max connection creation rate to 2147483647 (kafka.network.ConnectionQuotas) policy-pap | max.poll.interval.ms = 300000 grafana | logger=migrator t=2024-04-25T10:41:36.284336259Z level=info msg="Executing migration" id=create_ngalert_configuration_table policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilityassignments (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY 
PK_TOSCACAPABILITYASSIGNMENTS (name, version)) kafka | [2024-04-25 10:41:41,413] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1] Created data-plane acceptor and processors for endpoint : ListenerName(PLAINTEXT_HOST) (kafka.network.SocketServer) policy-pap | max.poll.records = 500 grafana | logger=migrator t=2024-04-25T10:41:36.284976086Z level=info msg="Migration successfully executed" id=create_ngalert_configuration_table duration=638.847µs policy-db-migrator | -------------- policy-pap | metadata.max.age.ms = 300000 grafana | logger=migrator t=2024-04-25T10:41:36.288496929Z level=info msg="Executing migration" id="add index in ngalert_configuration on org_id column" kafka | [2024-04-25 10:41:41,417] INFO [zk-broker-1-to-controller-alter-partition-channel-manager]: Starting (kafka.server.BrokerToControllerRequestThread) policy-db-migrator | policy-pap | metric.reporters = [] grafana | logger=migrator t=2024-04-25T10:41:36.28932141Z level=info msg="Migration successfully executed" id="add index in ngalert_configuration on org_id column" duration=824.532µs kafka | [2024-04-25 10:41:41,439] INFO [ExpirationReaper-1-Produce]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) policy-db-migrator | policy-pap | metrics.num.samples = 2 grafana | logger=migrator t=2024-04-25T10:41:36.293579463Z level=info msg="Executing migration" id="add column send_alerts_to in ngalert_configuration" kafka | [2024-04-25 10:41:41,440] INFO [ExpirationReaper-1-Fetch]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) policy-db-migrator | > upgrade 0530-toscacapabilityassignments_toscacapabilityassignment.sql policy-pap | metrics.recording.level = INFO grafana | logger=migrator t=2024-04-25T10:41:36.304380399Z level=info msg="Migration successfully executed" id="add column send_alerts_to in ngalert_configuration" duration=10.800816ms kafka | [2024-04-25 10:41:41,442] INFO [ExpirationReaper-1-DeleteRecords]: Starting 
(kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) policy-db-migrator | -------------- policy-pap | metrics.sample.window.ms = 30000 grafana | logger=migrator t=2024-04-25T10:41:36.308348144Z level=info msg="Executing migration" id="create provenance_type table" kafka | [2024-04-25 10:41:41,444] INFO [ExpirationReaper-1-ElectLeader]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilityassignments_toscacapabilityassignment (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCACAPABILITYASSIGNMENTS_TOSCACAPABILITYASSIGNMENT (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] grafana | logger=migrator t=2024-04-25T10:41:36.309067963Z level=info msg="Migration successfully executed" id="create provenance_type table" duration=719.608µs kafka | [2024-04-25 10:41:41,445] INFO [ExpirationReaper-1-RemoteFetch]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) policy-db-migrator | -------------- policy-pap | receive.buffer.bytes = 65536 kafka | [2024-04-25 10:41:41,460] INFO [LogDirFailureHandler]: Starting (kafka.server.ReplicaManager$LogDirFailureHandler) grafana | logger=migrator t=2024-04-25T10:41:36.313780758Z level=info msg="Executing migration" id="add index to uniquify (record_key, record_type, org_id) columns" policy-db-migrator | policy-pap | reconnect.backoff.max.ms = 1000 kafka | [2024-04-25 10:41:41,464] INFO [AddPartitionsToTxnSenderThread-1]: Starting (kafka.server.AddPartitionsToTxnManager) grafana | logger=migrator 
t=2024-04-25T10:41:36.314946408Z level=info msg="Migration successfully executed" id="add index to uniquify (record_key, record_type, org_id) columns" duration=1.16568ms policy-db-migrator | policy-pap | reconnect.backoff.ms = 50 kafka | [2024-04-25 10:41:41,488] INFO Creating /brokers/ids/1 (is it secure? false) (kafka.zk.KafkaZkClient) grafana | logger=migrator t=2024-04-25T10:41:36.319467318Z level=info msg="Executing migration" id="create alert_image table" policy-db-migrator | > upgrade 0540-toscacapabilitytype.sql policy-pap | request.timeout.ms = 30000 kafka | [2024-04-25 10:41:41,513] INFO Stat of the created znode at /brokers/ids/1 is: 27,27,1714041701502,1714041701502,1,0,0,72057608539340801,258,0,27 grafana | logger=migrator t=2024-04-25T10:41:36.320354361Z level=info msg="Migration successfully executed" id="create alert_image table" duration=886.883µs policy-db-migrator | -------------- policy-pap | retry.backoff.ms = 100 kafka | (kafka.zk.KafkaZkClient) grafana | logger=migrator t=2024-04-25T10:41:36.323386331Z level=info msg="Executing migration" id="add unique index on token to alert_image table" policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilitytype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCACAPABILITYTYPE (name, version)) policy-pap | sasl.client.callback.handler.class = null kafka | [2024-04-25 10:41:41,516] INFO Registered broker 1 at path /brokers/ids/1 with addresses: PLAINTEXT://kafka:9092,PLAINTEXT_HOST://localhost:29092, czxid (broker epoch): 27 (kafka.zk.KafkaZkClient) grafana | logger=migrator t=2024-04-25T10:41:36.324476981Z level=info msg="Migration successfully executed" id="add unique index on token to alert_image table" duration=1.09065ms policy-db-migrator | -------------- policy-pap | sasl.jaas.config = null kafka | [2024-04-25 10:41:41,573] INFO [ControllerEventThread 
controllerId=1] Starting (kafka.controller.ControllerEventManager$ControllerEventThread) grafana | logger=migrator t=2024-04-25T10:41:36.327652605Z level=info msg="Executing migration" id="support longer URLs in alert_image table" policy-db-migrator | policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit kafka | [2024-04-25 10:41:41,580] INFO [ExpirationReaper-1-topic]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) policy-pap | sasl.kerberos.min.time.before.relogin = 60000 grafana | logger=migrator t=2024-04-25T10:41:36.327826629Z level=info msg="Migration successfully executed" id="support longer URLs in alert_image table" duration=173.634µs policy-db-migrator | kafka | [2024-04-25 10:41:41,588] INFO [ExpirationReaper-1-Heartbeat]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) grafana | logger=migrator t=2024-04-25T10:41:36.330747056Z level=info msg="Executing migration" id=create_alert_configuration_history_table kafka | [2024-04-25 10:41:41,589] INFO [ExpirationReaper-1-Rebalance]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) policy-pap | sasl.kerberos.service.name = null policy-db-migrator | > upgrade 0550-toscacapabilitytypes.sql grafana | logger=migrator t=2024-04-25T10:41:36.331777583Z level=info msg="Migration successfully executed" id=create_alert_configuration_history_table duration=1.029977ms kafka | [2024-04-25 10:41:41,593] INFO Successfully created /controller_epoch with initial epoch 0 (kafka.zk.KafkaZkClient) policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-25T10:41:36.336117739Z level=info msg="Executing migration" id="drop non-unique orgID index on alert_configuration" kafka | [2024-04-25 10:41:41,603] INFO [Controller id=1] 1 successfully elected as the controller. 
Epoch incremented to 1 and epoch zk version is now 1 (kafka.controller.KafkaController) policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilitytypes (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCACAPABILITYTYPES (name, version)) grafana | logger=migrator t=2024-04-25T10:41:36.337266709Z level=info msg="Migration successfully executed" id="drop non-unique orgID index on alert_configuration" duration=1.14915ms kafka | [2024-04-25 10:41:41,605] INFO [GroupCoordinator 1]: Starting up. (kafka.coordinator.group.GroupCoordinator) policy-pap | sasl.login.callback.handler.class = null policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-25T10:41:36.341208204Z level=info msg="Executing migration" id="drop unique orgID index on alert_configuration if exists" kafka | [2024-04-25 10:41:41,609] INFO [Controller id=1] Creating FeatureZNode at path: /feature with contents: FeatureZNode(2,Enabled,Map()) (kafka.controller.KafkaController) policy-pap | sasl.login.class = null policy-db-migrator | grafana | logger=migrator t=2024-04-25T10:41:36.342267741Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop unique orgID index on alert_configuration if exists" kafka | [2024-04-25 10:41:41,609] INFO [GroupCoordinator 1]: Startup complete. 
(kafka.coordinator.group.GroupCoordinator) policy-pap | sasl.login.connect.timeout.ms = null policy-db-migrator | grafana | logger=migrator t=2024-04-25T10:41:36.346889204Z level=info msg="Executing migration" id="extract alertmanager configuration history to separate table" kafka | [2024-04-25 10:41:41,612] INFO Feature ZK node created at path: /feature (kafka.server.FinalizedFeatureChangeListener) policy-pap | sasl.login.read.timeout.ms = null policy-db-migrator | > upgrade 0560-toscacapabilitytypes_toscacapabilitytype.sql grafana | logger=migrator t=2024-04-25T10:41:36.347314085Z level=info msg="Migration successfully executed" id="extract alertmanager configuration history to separate table" duration=423.161µs kafka | [2024-04-25 10:41:41,626] INFO [TransactionCoordinator id=1] Starting up. (kafka.coordinator.transaction.TransactionCoordinator) policy-pap | sasl.login.refresh.buffer.seconds = 300 policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-25T10:41:36.35129558Z level=info msg="Executing migration" id="add unique index on orgID to alert_configuration" kafka | [2024-04-25 10:41:41,631] INFO [TxnMarkerSenderThread-1]: Starting (kafka.coordinator.transaction.TransactionMarkerChannelManager) policy-pap | sasl.login.refresh.min.period.seconds = 60 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilitytypes_toscacapabilitytype (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCACAPABILITYTYPES_TOSCACAPABILITYTYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) grafana | logger=migrator t=2024-04-25T10:41:36.352102882Z level=info msg="Migration successfully executed" id="add unique index on orgID to alert_configuration" duration=807.162µs kafka | [2024-04-25 10:41:41,632] INFO 
[TransactionCoordinator id=1] Startup complete. (kafka.coordinator.transaction.TransactionCoordinator) policy-pap | sasl.login.refresh.window.factor = 0.8 policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-25T10:41:36.35505245Z level=info msg="Executing migration" id="add last_applied column to alert_configuration_history" kafka | [2024-04-25 10:41:41,646] INFO [MetadataCache brokerId=1] Updated cache from existing None to latest Features(version=3.6-IV2, finalizedFeatures={}, finalizedFeaturesEpoch=0). (kafka.server.metadata.ZkMetadataCache) policy-pap | sasl.login.refresh.window.jitter = 0.05 policy-db-migrator | grafana | logger=migrator t=2024-04-25T10:41:36.361728536Z level=info msg="Migration successfully executed" id="add last_applied column to alert_configuration_history" duration=6.675486ms kafka | [2024-04-25 10:41:41,646] INFO [Controller id=1] Registering handlers (kafka.controller.KafkaController) policy-pap | sasl.login.retry.backoff.max.ms = 10000 policy-db-migrator | grafana | logger=migrator t=2024-04-25T10:41:36.364782147Z level=info msg="Executing migration" id="create library_element table v1" kafka | [2024-04-25 10:41:41,652] INFO [Controller id=1] Deleting log dir event notifications (kafka.controller.KafkaController) policy-pap | sasl.login.retry.backoff.ms = 100 policy-db-migrator | > upgrade 0570-toscadatatype.sql grafana | logger=migrator t=2024-04-25T10:41:36.365885367Z level=info msg="Migration successfully executed" id="create library_element table v1" duration=1.10266ms kafka | [2024-04-25 10:41:41,657] INFO [Controller id=1] Deleting isr change notifications (kafka.controller.KafkaController) policy-pap | sasl.mechanism = GSSAPI policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-25T10:41:36.370983131Z level=info msg="Executing migration" id="add index library_element org_id-folder_id-name-kind" kafka | [2024-04-25 10:41:41,659] INFO [Controller id=1] Initializing controller context 
(kafka.controller.KafkaController) policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscadatatype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCADATATYPE (name, version)) grafana | logger=migrator t=2024-04-25T10:41:36.372178803Z level=info msg="Migration successfully executed" id="add index library_element org_id-folder_id-name-kind" duration=1.195482ms kafka | [2024-04-25 10:41:41,670] INFO [ExpirationReaper-1-AlterAcls]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) policy-pap | sasl.oauthbearer.expected.audience = null policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-25T10:41:36.375403398Z level=info msg="Executing migration" id="create library_element_connection table v1" kafka | [2024-04-25 10:41:41,674] INFO [Controller id=1] Initialized broker epochs cache: HashMap(1 -> 27) (kafka.controller.KafkaController) policy-pap | sasl.oauthbearer.expected.issuer = null policy-db-migrator | grafana | logger=migrator t=2024-04-25T10:41:36.376487517Z level=info msg="Migration successfully executed" id="create library_element_connection table v1" duration=1.083789ms kafka | [2024-04-25 10:41:41,679] DEBUG [Controller id=1] Register BrokerModifications handler for Set(1) (kafka.controller.KafkaController) policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 policy-db-migrator | grafana | logger=migrator t=2024-04-25T10:41:36.382402464Z level=info msg="Executing migration" id="add index library_element_connection element_id-kind-connection_id" kafka | [2024-04-25 10:41:41,685] DEBUG [Channel manager on controller 1]: Controller 1 trying to connect to broker 1 (kafka.controller.ControllerChannelManager) policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 policy-db-migrator | > upgrade 
0580-toscadatatypes.sql grafana | logger=migrator t=2024-04-25T10:41:36.383533434Z level=info msg="Migration successfully executed" id="add index library_element_connection element_id-kind-connection_id" duration=1.130779ms kafka | [2024-04-25 10:41:41,694] INFO [RequestSendThread controllerId=1] Starting (kafka.controller.RequestSendThread) policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-25T10:41:36.386690237Z level=info msg="Executing migration" id="add unique index library_element org_id_uid" kafka | [2024-04-25 10:41:41,695] INFO [/config/changes-event-process-thread]: Starting (kafka.common.ZkNodeChangeNotificationListener$ChangeEventProcessThread) policy-pap | sasl.oauthbearer.jwks.endpoint.url = null policy-db-migrator | CREATE TABLE IF NOT EXISTS toscadatatypes (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCADATATYPES (name, version)) grafana | logger=migrator t=2024-04-25T10:41:36.387704693Z level=info msg="Migration successfully executed" id="add unique index library_element org_id_uid" duration=1.014586ms kafka | [2024-04-25 10:41:41,695] INFO [Controller id=1] Currently active brokers in the cluster: Set(1) (kafka.controller.KafkaController) policy-pap | sasl.oauthbearer.scope.claim.name = scope policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-25T10:41:36.391146465Z level=info msg="Executing migration" id="increase max description length to 2048" kafka | [2024-04-25 10:41:41,696] INFO [Controller id=1] Currently shutting brokers in the cluster: HashSet() (kafka.controller.KafkaController) policy-pap | sasl.oauthbearer.sub.claim.name = sub policy-db-migrator | grafana | logger=migrator t=2024-04-25T10:41:36.391175986Z level=info msg="Migration successfully executed" id="increase max description length to 2048" duration=30.261µs kafka | [2024-04-25 10:41:41,696] INFO [Controller id=1] Current list of topics in 
the cluster: HashSet() (kafka.controller.KafkaController) policy-pap | sasl.oauthbearer.token.endpoint.url = null policy-db-migrator | grafana | logger=migrator t=2024-04-25T10:41:36.397302528Z level=info msg="Executing migration" id="alter library_element model to mediumtext" kafka | [2024-04-25 10:41:41,697] INFO [Controller id=1] Fetching topic deletions in progress (kafka.controller.KafkaController) policy-pap | security.protocol = PLAINTEXT policy-db-migrator | > upgrade 0590-toscadatatypes_toscadatatype.sql grafana | logger=migrator t=2024-04-25T10:41:36.397476743Z level=info msg="Migration successfully executed" id="alter library_element model to mediumtext" duration=173.634µs kafka | [2024-04-25 10:41:41,699] INFO [Controller id=1] List of topics to be deleted: (kafka.controller.KafkaController) policy-pap | security.providers = null policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-25T10:41:36.403086161Z level=info msg="Executing migration" id="clone move dashboard alerts to unified alerting" kafka | [2024-04-25 10:41:41,699] INFO [Controller id=1] List of topics ineligible for deletion: (kafka.controller.KafkaController) policy-pap | send.buffer.bytes = 131072 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscadatatypes_toscadatatype (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCADATATYPES_TOSCADATATYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) grafana | logger=migrator t=2024-04-25T10:41:36.403593794Z level=info msg="Migration successfully executed" id="clone move dashboard alerts to unified alerting" duration=506.313µs kafka | [2024-04-25 10:41:41,699] INFO [Controller id=1] Initializing topic deletion manager (kafka.controller.KafkaController) policy-pap | 
session.timeout.ms = 45000 policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-25T10:41:36.407434055Z level=info msg="Executing migration" id="create data_keys table" kafka | [2024-04-25 10:41:41,700] INFO [Topic Deletion Manager 1] Initializing manager with initial deletions: Set(), initial ineligible deletions: HashSet() (kafka.controller.TopicDeletionManager) policy-pap | socket.connection.setup.timeout.max.ms = 30000 policy-db-migrator | grafana | logger=migrator t=2024-04-25T10:41:36.409719786Z level=info msg="Migration successfully executed" id="create data_keys table" duration=2.284821ms kafka | [2024-04-25 10:41:41,700] INFO [Controller id=1] Sending update metadata request (kafka.controller.KafkaController) policy-pap | socket.connection.setup.timeout.ms = 10000 policy-db-migrator | grafana | logger=migrator t=2024-04-25T10:41:36.413778874Z level=info msg="Executing migration" id="create secrets table" kafka | [2024-04-25 10:41:41,703] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet(1) for 0 partitions (state.change.logger) policy-pap | ssl.cipher.suites = null policy-db-migrator | > upgrade 0600-toscanodetemplate.sql grafana | logger=migrator t=2024-04-25T10:41:36.415366476Z level=info msg="Migration successfully executed" id="create secrets table" duration=1.586541ms kafka | [2024-04-25 10:41:41,709] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1] Enabling request processing. 
(kafka.network.SocketServer) policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-25T10:41:36.420926853Z level=info msg="Executing migration" id="rename data_keys name column to id" kafka | [2024-04-25 10:41:41,710] INFO [ReplicaStateMachine controllerId=1] Initializing replica state (kafka.controller.ZkReplicaStateMachine) policy-pap | ssl.endpoint.identification.algorithm = https policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetemplate (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, type_name VARCHAR(255) NULL, type_version VARCHAR(255) NULL, capabilitiesName VARCHAR(120) NULL, capabilitiesVersion VARCHAR(20) NULL, requirementsVersion VARCHAR(20) NULL, requirementsName VARCHAR(120) NULL, PRIMARY KEY PK_TOSCANODETEMPLATE (name, version)) grafana | logger=migrator t=2024-04-25T10:41:36.456446073Z level=info msg="Migration successfully executed" id="rename data_keys name column to id" duration=35.51799ms kafka | [2024-04-25 10:41:41,711] INFO [ReplicaStateMachine controllerId=1] Triggering online replica state changes (kafka.controller.ZkReplicaStateMachine) policy-pap | ssl.engine.factory.class = null policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-25T10:41:36.46914684Z level=info msg="Executing migration" id="add name column into data_keys" kafka | [2024-04-25 10:41:41,714] INFO Awaiting socket connections on 0.0.0.0:9092. 
(kafka.network.DataPlaneAcceptor) policy-pap | ssl.key.password = null policy-db-migrator | grafana | logger=migrator t=2024-04-25T10:41:36.476273788Z level=info msg="Migration successfully executed" id="add name column into data_keys" duration=7.126509ms kafka | [2024-04-25 10:41:41,715] INFO [ReplicaStateMachine controllerId=1] Triggering offline replica state changes (kafka.controller.ZkReplicaStateMachine) policy-pap | ssl.keymanager.algorithm = SunX509 policy-db-migrator | grafana | logger=migrator t=2024-04-25T10:41:36.487866144Z level=info msg="Executing migration" id="copy data_keys id column values into name" kafka | [2024-04-25 10:41:41,715] DEBUG [ReplicaStateMachine controllerId=1] Started replica state machine with initial state -> HashMap() (kafka.controller.ZkReplicaStateMachine) policy-pap | ssl.keystore.certificate.chain = null policy-db-migrator | > upgrade 0610-toscanodetemplates.sql grafana | logger=migrator t=2024-04-25T10:41:36.488034799Z level=info msg="Migration successfully executed" id="copy data_keys id column values into name" duration=169.255µs kafka | [2024-04-25 10:41:41,716] INFO [PartitionStateMachine controllerId=1] Initializing partition state (kafka.controller.ZkPartitionStateMachine) policy-pap | ssl.keystore.key = null policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-25T10:41:36.501759162Z level=info msg="Executing migration" id="rename data_keys name column to label" kafka | [2024-04-25 10:41:41,717] INFO [PartitionStateMachine controllerId=1] Triggering online partition state changes (kafka.controller.ZkPartitionStateMachine) policy-pap | ssl.keystore.location = null policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetemplates (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCANODETEMPLATES (name, version)) grafana | logger=migrator t=2024-04-25T10:41:36.5356826Z level=info msg="Migration successfully executed" id="rename data_keys name column to label" 
duration=33.924948ms kafka | [2024-04-25 10:41:41,717] INFO [Controller id=1, targetBrokerId=1] Node 1 disconnected. (org.apache.kafka.clients.NetworkClient) policy-pap | ssl.keystore.password = null policy-pap | ssl.keystore.type = JKS policy-db-migrator | -------------- policy-db-migrator | kafka | [2024-04-25 10:41:41,719] INFO Awaiting socket connections on 0.0.0.0:29092. (kafka.network.DataPlaneAcceptor) grafana | logger=migrator t=2024-04-25T10:41:36.567605945Z level=info msg="Executing migration" id="rename data_keys id column back to name" policy-pap | ssl.protocol = TLSv1.3 policy-db-migrator | kafka | [2024-04-25 10:41:41,720] WARN [Controller id=1, targetBrokerId=1] Connection to node 1 (kafka/172.17.0.6:9092) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient) grafana | logger=migrator t=2024-04-25T10:41:36.607291595Z level=info msg="Migration successfully executed" id="rename data_keys id column back to name" duration=39.68798ms policy-db-migrator | > upgrade 0620-toscanodetemplates_toscanodetemplate.sql policy-pap | ssl.provider = null kafka | [2024-04-25 10:41:41,723] DEBUG [PartitionStateMachine controllerId=1] Started partition state machine with initial state -> HashMap() (kafka.controller.ZkPartitionStateMachine) policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-25T10:41:36.610552361Z level=info msg="Executing migration" id="create kv_store table v1" policy-pap | ssl.secure.random.implementation = null kafka | [2024-04-25 10:41:41,723] WARN [RequestSendThread controllerId=1] Controller 1's connection to broker kafka:9092 (id: 1 rack: null) was unsuccessful (kafka.controller.RequestSendThread) policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetemplates_toscanodetemplate (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) 
NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCANODETEMPLATES_TOSCANODETEMPLATE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) grafana | logger=migrator t=2024-04-25T10:41:36.611216798Z level=info msg="Migration successfully executed" id="create kv_store table v1" duration=664.707µs policy-pap | ssl.trustmanager.algorithm = PKIX kafka | java.io.IOException: Connection to kafka:9092 (id: 1 rack: null) failed. policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-25T10:41:36.614943488Z level=info msg="Executing migration" id="add index kv_store.org_id-namespace-key" policy-pap | ssl.truststore.certificates = null kafka | at org.apache.kafka.clients.NetworkClientUtils.awaitReady(NetworkClientUtils.java:70) policy-db-migrator | grafana | logger=migrator t=2024-04-25T10:41:36.615728028Z level=info msg="Migration successfully executed" id="add index kv_store.org_id-namespace-key" duration=784.53µs policy-pap | ssl.truststore.location = null kafka | at kafka.controller.RequestSendThread.brokerReady(ControllerChannelManager.scala:298) policy-db-migrator | grafana | logger=migrator t=2024-04-25T10:41:36.624175032Z level=info msg="Executing migration" id="update dashboard_uid and panel_id from existing annotations" policy-pap | ssl.truststore.password = null kafka | at kafka.controller.RequestSendThread.doWork(ControllerChannelManager.scala:251) policy-db-migrator | > upgrade 0630-toscanodetype.sql grafana | logger=migrator t=2024-04-25T10:41:36.624623943Z level=info msg="Migration successfully executed" id="update dashboard_uid and panel_id from existing annotations" duration=446.171µs policy-pap | ssl.truststore.type = JKS kafka | at org.apache.kafka.server.util.ShutdownableThread.run(ShutdownableThread.java:130) policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-25T10:41:36.629825071Z level=info msg="Executing migration" id="create permission table" policy-pap | 
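The `Connection to node 1 ... could not be established` warning and the `java.io.IOException` above are transient: the controller retries until the broker's listener finishes starting, and the log shows the connection succeeding shortly after (`Controller 1 connected to kafka:9092`). The retry cadence is governed by the `reconnect.backoff.ms = 50` and `reconnect.backoff.max.ms = 1000` settings visible in the interleaved consumer config. A minimal sketch of that schedule (illustrative helper, not ONAP or Kafka client code; the real client also applies random jitter, omitted here):

```python
def reconnect_backoff_schedule(base_ms: int, max_ms: int, failures: int) -> list[int]:
    """Backoff doubles per consecutive connection failure, capped at max_ms.

    Mirrors the deterministic core of the Kafka client's exponential
    reconnect backoff, using the values from the config dump above:
    reconnect.backoff.ms = 50, reconnect.backoff.max.ms = 1000.
    """
    return [min(base_ms * (2 ** n), max_ms) for n in range(failures)]

# First retries come quickly, then the delay saturates at the configured cap.
schedule = reconnect_backoff_schedule(50, 1000, 7)
```

So a broker that takes a second or two to open its listener, as here, costs only a handful of short retries before the controller connects.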
value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer kafka | [2024-04-25 10:41:41,723] INFO [Controller id=1] Ready to serve as the new controller with epoch 1 (kafka.controller.KafkaController) policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, requirementsVersion VARCHAR(20) NULL, requirementsName VARCHAR(120) NULL, PRIMARY KEY PK_TOSCANODETYPE (name, version)) grafana | logger=migrator t=2024-04-25T10:41:36.631273499Z level=info msg="Migration successfully executed" id="create permission table" duration=1.448398ms policy-pap | kafka | [2024-04-25 10:41:41,729] INFO [Controller id=1, targetBrokerId=1] Client requested connection close from node 1 (org.apache.kafka.clients.NetworkClient) policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-25T10:41:36.63620343Z level=info msg="Executing migration" id="add unique index permission.role_id" policy-pap | [2024-04-25T10:42:10.801+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 kafka | [2024-04-25 10:41:41,735] INFO Kafka version: 7.6.1-ccs (org.apache.kafka.common.utils.AppInfoParser) policy-db-migrator | grafana | logger=migrator t=2024-04-25T10:41:36.63732075Z level=info msg="Migration successfully executed" id="add unique index permission.role_id" duration=1.11738ms policy-pap | [2024-04-25T10:42:10.801+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 kafka | [2024-04-25 10:41:41,735] INFO Kafka commitId: 11e81ad2a49db00b1d2b8c731409cd09e563de67 (org.apache.kafka.common.utils.AppInfoParser) policy-db-migrator | grafana | logger=migrator t=2024-04-25T10:41:36.643275797Z level=info msg="Executing migration" id="add unique index role_id_action_scope" policy-pap | [2024-04-25T10:42:10.801+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1714041730801 kafka | [2024-04-25 
10:41:41,735] INFO Kafka startTimeMs: 1714041701729 (org.apache.kafka.common.utils.AppInfoParser) policy-db-migrator | > upgrade 0640-toscanodetypes.sql grafana | logger=migrator t=2024-04-25T10:41:36.644387346Z level=info msg="Migration successfully executed" id="add unique index role_id_action_scope" duration=1.111809ms policy-pap | [2024-04-25T10:42:10.802+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-ae8023b6-4521-455f-bfa2-c4d8e9909c4a-3, groupId=ae8023b6-4521-455f-bfa2-c4d8e9909c4a] Subscribed to topic(s): policy-pdp-pap kafka | [2024-04-25 10:41:41,736] INFO [Controller id=1] Partitions undergoing preferred replica election: (kafka.controller.KafkaController) policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-25T10:41:36.648010842Z level=info msg="Executing migration" id="create role table" policy-pap | [2024-04-25T10:42:10.802+00:00|INFO|ServiceManager|main] Policy PAP starting Heartbeat Message Dispatcher kafka | [2024-04-25 10:41:41,736] INFO [Controller id=1] Partitions that completed preferred replica election: (kafka.controller.KafkaController) policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetypes (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCANODETYPES (name, version)) grafana | logger=migrator t=2024-04-25T10:41:36.648941847Z level=info msg="Migration successfully executed" id="create role table" duration=930.985µs policy-pap | [2024-04-25T10:42:10.802+00:00|INFO|TopicBase|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=policy-pap, consumerInstance=da0a1662-3ba4-4ead-84f8-5e792fa1262c, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-heartbeat, 
effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: registering org.onap.policy.common.endpoints.listeners.MessageTypeDispatcher@4bc9451b kafka | [2024-04-25 10:41:41,736] INFO [KafkaServer id=1] started (kafka.server.KafkaServer) policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-25T10:41:36.652532882Z level=info msg="Executing migration" id="add column display_name" policy-pap | [2024-04-25T10:42:10.802+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=policy-pap, consumerInstance=da0a1662-3ba4-4ead-84f8-5e792fa1262c, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-heartbeat, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting kafka | [2024-04-25 10:41:41,737] INFO [Controller id=1] Skipping preferred replica election for partitions due to topic deletion: (kafka.controller.KafkaController) policy-db-migrator | grafana | logger=migrator t=2024-04-25T10:41:36.660403751Z level=info msg="Migration successfully executed" id="add column display_name" duration=7.869808ms policy-pap | [2024-04-25T10:42:10.803+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: kafka | [2024-04-25 10:41:41,737] INFO [Controller id=1] Resuming preferred replica election for partitions: (kafka.controller.KafkaController) policy-db-migrator | grafana | logger=migrator t=2024-04-25T10:41:36.666115552Z level=info msg="Executing migration" id="add column group_name" policy-pap | allow.auto.create.topics = true kafka | [2024-04-25 10:41:41,739] INFO [Controller id=1] Starting replica leader election (PREFERRED) for partitions 
triggered by ZkTriggered (kafka.controller.KafkaController) policy-db-migrator | > upgrade 0650-toscanodetypes_toscanodetype.sql grafana | logger=migrator t=2024-04-25T10:41:36.673766324Z level=info msg="Migration successfully executed" id="add column group_name" duration=7.650352ms policy-pap | auto.commit.interval.ms = 5000 kafka | [2024-04-25 10:41:41,752] INFO [Controller id=1] Starting the controller scheduler (kafka.controller.KafkaController) policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-25T10:41:36.680001659Z level=info msg="Executing migration" id="add index role.org_id" policy-pap | auto.include.jmx.reporter = true kafka | [2024-04-25 10:41:41,834] INFO [RequestSendThread controllerId=1] Controller 1 connected to kafka:9092 (id: 1 rack: null) for sending state change requests (kafka.controller.RequestSendThread) policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetypes_toscanodetype (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCANODETYPES_TOSCANODETYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) grafana | logger=migrator t=2024-04-25T10:41:36.681290863Z level=info msg="Migration successfully executed" id="add index role.org_id" duration=1.288244ms policy-pap | auto.offset.reset = latest kafka | [2024-04-25 10:41:41,898] INFO [zk-broker-1-to-controller-forwarding-channel-manager]: Recorded new controller, from now on will use node kafka:9092 (id: 1 rack: null) (kafka.server.BrokerToControllerRequestThread) policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-25T10:41:36.687327442Z level=info msg="Executing migration" id="add unique index role_org_id_name" policy-pap | bootstrap.servers = [kafka:9092] kafka | [2024-04-25 10:41:41,899] TRACE 
[Controller id=1 epoch=1] Received response UpdateMetadataResponseData(errorCode=0) for request UPDATE_METADATA with correlation id 0 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger) policy-db-migrator | grafana | logger=migrator t=2024-04-25T10:41:36.688836323Z level=info msg="Migration successfully executed" id="add unique index role_org_id_name" duration=1.508381ms policy-pap | check.crcs = true kafka | [2024-04-25 10:41:41,920] INFO [zk-broker-1-to-controller-alter-partition-channel-manager]: Recorded new controller, from now on will use node kafka:9092 (id: 1 rack: null) (kafka.server.BrokerToControllerRequestThread) policy-db-migrator | grafana | logger=migrator t=2024-04-25T10:41:36.696916397Z level=info msg="Executing migration" id="add index role_org_id_uid" policy-pap | client.dns.lookup = use_all_dns_ips kafka | [2024-04-25 10:41:46,754] INFO [Controller id=1] Processing automatic preferred replica leader election (kafka.controller.KafkaController) policy-db-migrator | > upgrade 0660-toscaparameter.sql grafana | logger=migrator t=2024-04-25T10:41:36.698080557Z level=info msg="Migration successfully executed" id="add index role_org_id_uid" duration=1.1638ms policy-pap | client.id = consumer-policy-pap-4 kafka | [2024-04-25 10:41:46,755] TRACE [Controller id=1] Checking need to trigger auto leader balancing (kafka.controller.KafkaController) policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-25T10:41:36.704222279Z level=info msg="Executing migration" id="create team role table" policy-pap | client.rack = kafka | [2024-04-25 10:42:11,305] INFO Creating topic policy-pdp-pap with configuration {} and initial partition assignment HashMap(0 -> ArrayBuffer(1)) (kafka.zk.AdminZkClient) grafana | logger=migrator t=2024-04-25T10:41:36.705109624Z level=info msg="Migration successfully executed" id="create team role table" duration=887.035µs policy-db-migrator | CREATE TABLE IF NOT EXISTS toscaparameter (VALUE VARCHAR(255) NULL, 
parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCAPARAMETER (parentLocalName, localName, parentKeyVersion, parentKeyName)) policy-pap | connections.max.idle.ms = 540000 kafka | [2024-04-25 10:42:11,333] INFO Creating topic __consumer_offsets with configuration {compression.type=producer, cleanup.policy=compact, segment.bytes=104857600} and initial partition assignment HashMap(0 -> ArrayBuffer(1), 1 -> ArrayBuffer(1), 2 -> ArrayBuffer(1), 3 -> ArrayBuffer(1), 4 -> ArrayBuffer(1), 5 -> ArrayBuffer(1), 6 -> ArrayBuffer(1), 7 -> ArrayBuffer(1), 8 -> ArrayBuffer(1), 9 -> ArrayBuffer(1), 10 -> ArrayBuffer(1), 11 -> ArrayBuffer(1), 12 -> ArrayBuffer(1), 13 -> ArrayBuffer(1), 14 -> ArrayBuffer(1), 15 -> ArrayBuffer(1), 16 -> ArrayBuffer(1), 17 -> ArrayBuffer(1), 18 -> ArrayBuffer(1), 19 -> ArrayBuffer(1), 20 -> ArrayBuffer(1), 21 -> ArrayBuffer(1), 22 -> ArrayBuffer(1), 23 -> ArrayBuffer(1), 24 -> ArrayBuffer(1), 25 -> ArrayBuffer(1), 26 -> ArrayBuffer(1), 27 -> ArrayBuffer(1), 28 -> ArrayBuffer(1), 29 -> ArrayBuffer(1), 30 -> ArrayBuffer(1), 31 -> ArrayBuffer(1), 32 -> ArrayBuffer(1), 33 -> ArrayBuffer(1), 34 -> ArrayBuffer(1), 35 -> ArrayBuffer(1), 36 -> ArrayBuffer(1), 37 -> ArrayBuffer(1), 38 -> ArrayBuffer(1), 39 -> ArrayBuffer(1), 40 -> ArrayBuffer(1), 41 -> ArrayBuffer(1), 42 -> ArrayBuffer(1), 43 -> ArrayBuffer(1), 44 -> ArrayBuffer(1), 45 -> ArrayBuffer(1), 46 -> ArrayBuffer(1), 47 -> ArrayBuffer(1), 48 -> ArrayBuffer(1), 49 -> ArrayBuffer(1)) (kafka.zk.AdminZkClient) grafana | logger=migrator t=2024-04-25T10:41:36.712051457Z level=info msg="Executing migration" id="add index team_role.org_id" policy-db-migrator | -------------- policy-pap | default.api.timeout.ms = 60000 kafka | [2024-04-25 10:42:11,356] DEBUG [Controller id=1] There is no producerId block yet (Zk path version 0), creating the 
first block (kafka.controller.KafkaController) grafana | logger=migrator t=2024-04-25T10:41:36.713391753Z level=info msg="Migration successfully executed" id="add index team_role.org_id" duration=1.337366ms policy-db-migrator | policy-pap | enable.auto.commit = true kafka | [2024-04-25 10:42:11,359] INFO [Controller id=1] Acquired new producerId block ProducerIdsBlock(assignedBrokerId=1, firstProducerId=0, size=1000) by writing to Zk with path version 1 (kafka.controller.KafkaController) grafana | logger=migrator t=2024-04-25T10:41:36.719109794Z level=info msg="Executing migration" id="add unique index team_role_org_id_team_id_role_id" policy-db-migrator | policy-pap | exclude.internal.topics = true grafana | logger=migrator t=2024-04-25T10:41:36.720273755Z level=info msg="Migration successfully executed" id="add unique index team_role_org_id_team_id_role_id" duration=1.16408ms kafka | [2024-04-25 10:42:11,383] INFO [Controller id=1] New topics: [Set(policy-pdp-pap, __consumer_offsets)], deleted topics: [HashSet()], new partition replica assignment [Set(TopicIdReplicaAssignment(policy-pdp-pap,Some(I_pe41tISTqFeXFGby1riA),Map(policy-pdp-pap-0 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=))), TopicIdReplicaAssignment(__consumer_offsets,Some(hVyfsWO8T6yln1x6wUyXKg),HashMap(__consumer_offsets-22 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-30 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-25 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-35 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-37 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-38 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-13 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-8 -> 
ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-21 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-4 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-27 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-7 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-9 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-46 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-41 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-33 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-23 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-49 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-47 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-16 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-28 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-31 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-36 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-42 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-3 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-18 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-15 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-24 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-17 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), 
__consumer_offsets-48 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-19 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-11 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-2 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-43 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-6 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-14 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-20 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-0 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-44 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-39 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-12 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-45 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-1 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-5 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-26 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-29 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-34 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-10 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-32 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-40 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=))))] (kafka.controller.KafkaController) policy-db-migrator | > upgrade 
0670-toscapolicies.sql policy-pap | fetch.max.bytes = 52428800 grafana | logger=migrator t=2024-04-25T10:41:36.725230106Z level=info msg="Executing migration" id="add index team_role.team_id" kafka | [2024-04-25 10:42:11,384] INFO [Controller id=1] New partition creation callback for __consumer_offsets-22,__consumer_offsets-30,__consumer_offsets-25,__consumer_offsets-35,__consumer_offsets-38,__consumer_offsets-13,__consumer_offsets-8,__consumer_offsets-21,__consumer_offsets-4,__consumer_offsets-27,__consumer_offsets-7,__consumer_offsets-9,__consumer_offsets-46,__consumer_offsets-41,__consumer_offsets-33,__consumer_offsets-23,__consumer_offsets-49,__consumer_offsets-47,__consumer_offsets-16,__consumer_offsets-28,__consumer_offsets-31,__consumer_offsets-36,__consumer_offsets-42,__consumer_offsets-3,__consumer_offsets-18,__consumer_offsets-37,policy-pdp-pap-0,__consumer_offsets-15,__consumer_offsets-24,__consumer_offsets-17,__consumer_offsets-48,__consumer_offsets-19,__consumer_offsets-11,__consumer_offsets-2,__consumer_offsets-43,__consumer_offsets-6,__consumer_offsets-14,__consumer_offsets-20,__consumer_offsets-0,__consumer_offsets-44,__consumer_offsets-39,__consumer_offsets-12,__consumer_offsets-45,__consumer_offsets-1,__consumer_offsets-5,__consumer_offsets-26,__consumer_offsets-29,__consumer_offsets-34,__consumer_offsets-10,__consumer_offsets-32,__consumer_offsets-40 (kafka.controller.KafkaController) policy-db-migrator | -------------- policy-pap | fetch.max.wait.ms = 500 grafana | logger=migrator t=2024-04-25T10:41:36.726258643Z level=info msg="Migration successfully executed" id="add index team_role.team_id" duration=1.027667ms kafka | [2024-04-25 10:42:11,386] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-22 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicies (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY 
PK_TOSCAPOLICIES (name, version)) policy-pap | fetch.min.bytes = 1 grafana | logger=migrator t=2024-04-25T10:41:36.731182863Z level=info msg="Executing migration" id="create user role table" kafka | [2024-04-25 10:42:11,387] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-30 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-db-migrator | -------------- policy-pap | group.id = policy-pap grafana | logger=migrator t=2024-04-25T10:41:36.732504059Z level=info msg="Migration successfully executed" id="create user role table" duration=1.321646ms kafka | [2024-04-25 10:42:11,387] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-25 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-db-migrator | policy-pap | group.instance.id = null grafana | logger=migrator t=2024-04-25T10:41:36.739266187Z level=info msg="Executing migration" id="add index user_role.org_id" kafka | [2024-04-25 10:42:11,387] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-35 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-db-migrator | policy-pap | heartbeat.interval.ms = 3000 grafana | logger=migrator t=2024-04-25T10:41:36.740741296Z level=info msg="Migration successfully executed" id="add index user_role.org_id" duration=1.478489ms kafka | [2024-04-25 10:42:11,387] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-38 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-db-migrator | > upgrade 0680-toscapolicies_toscapolicy.sql policy-pap | interceptor.classes = [] kafka | [2024-04-25 10:42:11,387] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-13 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-db-migrator | -------------- grafana | logger=migrator 
t=2024-04-25T10:41:36.748029759Z level=info msg="Executing migration" id="add unique index user_role_org_id_user_id_role_id" policy-pap | internal.leave.group.on.close = true kafka | [2024-04-25 10:42:11,387] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-8 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicies_toscapolicy (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCAPOLICIES_TOSCAPOLICY (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) grafana | logger=migrator t=2024-04-25T10:41:36.749307293Z level=info msg="Migration successfully executed" id="add unique index user_role_org_id_user_id_role_id" duration=1.278944ms policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false kafka | [2024-04-25 10:42:11,387] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-21 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-25T10:41:36.755173919Z level=info msg="Executing migration" id="add index user_role.user_id" policy-pap | isolation.level = read_uncommitted kafka | [2024-04-25 10:42:11,387] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-4 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-db-migrator | grafana | logger=migrator t=2024-04-25T10:41:36.756626287Z level=info msg="Migration successfully executed" id="add index user_role.user_id" duration=1.452097ms policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer kafka | [2024-04-25 
10:42:11,387] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-27 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-db-migrator | grafana | logger=migrator t=2024-04-25T10:41:36.761067735Z level=info msg="Executing migration" id="create builtin role table" policy-pap | max.partition.fetch.bytes = 1048576 kafka | [2024-04-25 10:42:11,387] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-7 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-db-migrator | > upgrade 0690-toscapolicy.sql grafana | logger=migrator t=2024-04-25T10:41:36.761998319Z level=info msg="Migration successfully executed" id="create builtin role table" duration=934.225µs policy-pap | max.poll.interval.ms = 300000 kafka | [2024-04-25 10:42:11,387] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-9 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-25T10:41:36.768525891Z level=info msg="Executing migration" id="add index builtin_role.role_id" policy-pap | max.poll.records = 500 kafka | [2024-04-25 10:42:11,387] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-46 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicy (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, type_name VARCHAR(255) NULL, type_version VARCHAR(255) NULL, PRIMARY KEY PK_TOSCAPOLICY (name, version)) grafana | logger=migrator t=2024-04-25T10:41:36.769756754Z level=info msg="Migration successfully executed" id="add index builtin_role.role_id" duration=1.231973ms policy-pap | metadata.max.age.ms = 300000 kafka | [2024-04-25 
10:42:11,387] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-41 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-25T10:41:36.773663607Z level=info msg="Executing migration" id="add index builtin_role.name" policy-pap | metric.reporters = [] kafka | [2024-04-25 10:42:11,388] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-33 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-db-migrator | grafana | logger=migrator t=2024-04-25T10:41:36.774798448Z level=info msg="Migration successfully executed" id="add index builtin_role.name" duration=1.134771ms policy-pap | metrics.num.samples = 2 kafka | [2024-04-25 10:42:11,388] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-23 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-db-migrator | grafana | logger=migrator t=2024-04-25T10:41:36.779632865Z level=info msg="Executing migration" id="Add column org_id to builtin_role table" policy-pap | metrics.recording.level = INFO kafka | [2024-04-25 10:42:11,388] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-49 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-db-migrator | > upgrade 0700-toscapolicytype.sql grafana | logger=migrator t=2024-04-25T10:41:36.788542522Z level=info msg="Migration successfully executed" id="Add column org_id to builtin_role table" duration=8.909207ms policy-pap | metrics.sample.window.ms = 30000 policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-25T10:41:36.793760569Z level=info msg="Executing migration" id="add index builtin_role.org_id" kafka | [2024-04-25 10:42:11,388] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-47 state from NonExistentPartition to NewPartition 
with assigned replicas 1 (state.change.logger) policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicytype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCAPOLICYTYPE (name, version)) kafka | [2024-04-25 10:42:11,388] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-16 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-pap | receive.buffer.bytes = 65536 grafana | logger=migrator t=2024-04-25T10:41:36.79453378Z level=info msg="Migration successfully executed" id="add index builtin_role.org_id" duration=773.161µs policy-db-migrator | -------------- policy-pap | reconnect.backoff.max.ms = 1000 grafana | logger=migrator t=2024-04-25T10:41:36.798899335Z level=info msg="Executing migration" id="add unique index builtin_role_org_id_role_id_role" kafka | [2024-04-25 10:42:11,388] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-28 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-db-migrator | policy-pap | reconnect.backoff.ms = 50 grafana | logger=migrator t=2024-04-25T10:41:36.80096128Z level=info msg="Migration successfully executed" id="add unique index builtin_role_org_id_role_id_role" duration=2.065305ms kafka | [2024-04-25 10:42:11,388] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-31 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-db-migrator | policy-pap | request.timeout.ms = 30000 grafana | logger=migrator t=2024-04-25T10:41:36.80435859Z level=info msg="Executing migration" id="Remove unique index role_org_id_uid" kafka | [2024-04-25 
10:42:11,388] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-36 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-db-migrator | > upgrade 0710-toscapolicytypes.sql policy-pap | retry.backoff.ms = 100 kafka | [2024-04-25 10:42:11,388] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-42 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) grafana | logger=migrator t=2024-04-25T10:41:36.805372367Z level=info msg="Migration successfully executed" id="Remove unique index role_org_id_uid" duration=1.013707ms policy-db-migrator | -------------- policy-pap | sasl.client.callback.handler.class = null kafka | [2024-04-25 10:42:11,388] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-3 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) grafana | logger=migrator t=2024-04-25T10:41:36.8084978Z level=info msg="Executing migration" id="add unique index role.uid" policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicytypes (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCAPOLICYTYPES (name, version)) policy-pap | sasl.jaas.config = null kafka | [2024-04-25 10:42:11,388] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-18 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) grafana | logger=migrator t=2024-04-25T10:41:36.809524597Z level=info msg="Migration successfully executed" id="add unique index role.uid" duration=1.024556ms policy-db-migrator | -------------- policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit kafka | [2024-04-25 10:42:11,389] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-37 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) grafana | logger=migrator t=2024-04-25T10:41:36.815041883Z level=info 
msg="Executing migration" id="create seed assignment table" policy-db-migrator | policy-pap | sasl.kerberos.min.time.before.relogin = 60000 kafka | [2024-04-25 10:42:11,389] INFO [Controller id=1 epoch=1] Changed partition policy-pdp-pap-0 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) grafana | logger=migrator t=2024-04-25T10:41:36.815788912Z level=info msg="Migration successfully executed" id="create seed assignment table" duration=744.409µs policy-db-migrator | policy-pap | sasl.kerberos.service.name = null kafka | [2024-04-25 10:42:11,389] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-15 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) grafana | logger=migrator t=2024-04-25T10:41:36.818968856Z level=info msg="Executing migration" id="add unique index builtin_role_role_name" policy-db-migrator | > upgrade 0720-toscapolicytypes_toscapolicytype.sql policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 kafka | [2024-04-25 10:42:11,389] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-24 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) grafana | logger=migrator t=2024-04-25T10:41:36.820571929Z level=info msg="Migration successfully executed" id="add unique index builtin_role_role_name" duration=1.602653ms policy-db-migrator | -------------- policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 kafka | [2024-04-25 10:42:11,389] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-17 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) grafana | logger=migrator t=2024-04-25T10:41:36.8240215Z level=info msg="Executing migration" id="add column hidden to role table" policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicytypes_toscapolicytype (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) 
NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCAPOLICYTYPES_TOSCAPOLICYTYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion))
policy-pap | sasl.login.callback.handler.class = null
kafka | [2024-04-25 10:42:11,389] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-48 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
grafana | logger=migrator t=2024-04-25T10:41:36.833951483Z level=info msg="Migration successfully executed" id="add column hidden to role table" duration=9.931673ms
policy-db-migrator | --------------
policy-pap | sasl.login.class = null
kafka | [2024-04-25 10:42:11,389] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-19 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
grafana | logger=migrator t=2024-04-25T10:41:36.83948794Z level=info msg="Executing migration" id="permission kind migration"
policy-db-migrator |
policy-pap | sasl.login.connect.timeout.ms = null
kafka | [2024-04-25 10:42:11,389] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-11 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
grafana | logger=migrator t=2024-04-25T10:41:36.847176112Z level=info msg="Migration successfully executed" id="permission kind migration" duration=7.688412ms
policy-db-migrator |
policy-pap | sasl.login.read.timeout.ms = null
kafka | [2024-04-25 10:42:11,389] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-2 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
grafana | logger=migrator t=2024-04-25T10:41:36.850551433Z level=info msg="Executing migration" id="permission attribute migration"
policy-db-migrator | > upgrade 0730-toscaproperty.sql
policy-pap | sasl.login.refresh.buffer.seconds = 300
kafka | [2024-04-25 10:42:11,389] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-43 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
grafana | logger=migrator t=2024-04-25T10:41:36.858187394Z level=info msg="Migration successfully executed" id="permission attribute migration" duration=7.635471ms
policy-db-migrator | --------------
policy-pap | sasl.login.refresh.min.period.seconds = 60
kafka | [2024-04-25 10:42:11,390] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-6 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
grafana | logger=migrator t=2024-04-25T10:41:36.861427451Z level=info msg="Executing migration" id="permission identifier migration"
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscaproperty (DEFAULTVALUE VARCHAR(255) DEFAULT NULL, `DESCRIPTION` VARCHAR(255) DEFAULT NULL, ENTRYSCHEMA LONGBLOB DEFAULT NULL, REQUIRED BOOLEAN DEFAULT 0, STATUS INT DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, PRIMARY KEY PK_TOSCAPROPERTY (parentLocalName, localName, parentKeyVersion, parentKeyName))
policy-pap | sasl.login.refresh.window.factor = 0.8
kafka | [2024-04-25 10:42:11,390] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-14 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
grafana | logger=migrator t=2024-04-25T10:41:36.869098593Z level=info msg="Migration successfully executed" id="permission identifier migration" duration=7.670713ms
policy-db-migrator | --------------
policy-pap | sasl.login.refresh.window.jitter = 0.05
kafka | [2024-04-25 10:42:11,390] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-20 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
grafana | logger=migrator t=2024-04-25T10:41:36.89204045Z level=info msg="Executing migration" id="add permission identifier index"
policy-db-migrator |
policy-pap | sasl.login.retry.backoff.max.ms = 10000
kafka | [2024-04-25 10:42:11,390] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-0 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
grafana | logger=migrator t=2024-04-25T10:41:36.893756666Z level=info msg="Migration successfully executed" id="add permission identifier index" duration=1.712595ms
policy-db-migrator |
policy-pap | sasl.login.retry.backoff.ms = 100
kafka | [2024-04-25 10:42:11,390] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-44 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
grafana | logger=migrator t=2024-04-25T10:41:36.897441113Z level=info msg="Executing migration" id="add permission action scope role_id index"
policy-db-migrator | > upgrade 0740-toscarelationshiptype.sql
policy-pap | sasl.mechanism = GSSAPI
kafka | [2024-04-25 10:42:11,390] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-39 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
grafana | logger=migrator t=2024-04-25T10:41:36.89923096Z level=info msg="Migration successfully executed" id="add permission action scope role_id index" duration=1.789347ms
policy-db-migrator | --------------
policy-pap | sasl.oauthbearer.clock.skew.seconds = 30
kafka | [2024-04-25 10:42:11,390] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-12 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
grafana | logger=migrator t=2024-04-25T10:41:36.902850766Z level=info msg="Executing migration" id="remove permission role_id action scope index"
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarelationshiptype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCARELATIONSHIPTYPE (name, version))
policy-pap | sasl.oauthbearer.expected.audience = null
kafka | [2024-04-25 10:42:11,390] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-45 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
grafana | logger=migrator t=2024-04-25T10:41:36.903857543Z level=info msg="Migration successfully executed" id="remove permission role_id action scope index" duration=1.006247ms
policy-db-migrator | --------------
policy-pap | sasl.oauthbearer.expected.issuer = null
kafka | [2024-04-25 10:42:11,390] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-1 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
grafana | logger=migrator t=2024-04-25T10:41:36.909474152Z level=info msg="Executing migration" id="create query_history table v1"
policy-db-migrator |
policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
kafka | [2024-04-25 10:42:11,390] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-5 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
grafana | logger=migrator t=2024-04-25T10:41:36.910342645Z level=info msg="Migration successfully executed" id="create query_history table v1" duration=868.162µs
policy-db-migrator |
policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
kafka | [2024-04-25 10:42:11,390] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-26 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
grafana | logger=migrator t=2024-04-25T10:41:36.91357913Z level=info msg="Executing migration" id="add index query_history.org_id-created_by-datasource_uid"
policy-db-migrator | > upgrade 0750-toscarelationshiptypes.sql
policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
kafka | [2024-04-25 10:42:11,390] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-29 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
grafana | logger=migrator t=2024-04-25T10:41:36.915217734Z level=info msg="Migration successfully executed" id="add index query_history.org_id-created_by-datasource_uid" duration=1.638014ms
policy-db-migrator | --------------
policy-pap | sasl.oauthbearer.jwks.endpoint.url = null
kafka | [2024-04-25 10:42:11,391] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-34 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
grafana | logger=migrator t=2024-04-25T10:41:36.918741967Z level=info msg="Executing migration" id="alter table query_history alter column created_by type to bigint"
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarelationshiptypes (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCARELATIONSHIPTYPES (name, version))
grafana | logger=migrator t=2024-04-25T10:41:36.918819969Z level=info msg="Migration successfully executed" id="alter table query_history alter column created_by type to bigint" duration=78.032µs
kafka | [2024-04-25 10:42:11,391] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-10 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
policy-pap | sasl.oauthbearer.scope.claim.name = scope
policy-db-migrator | --------------
grafana | logger=migrator t=2024-04-25T10:41:36.924458408Z level=info msg="Executing migration" id="rbac disabled migrator"
kafka | [2024-04-25 10:42:11,391] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-32 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
policy-pap | sasl.oauthbearer.sub.claim.name = sub
policy-db-migrator |
grafana | logger=migrator t=2024-04-25T10:41:36.924487729Z level=info msg="Migration successfully executed" id="rbac disabled migrator" duration=31.821µs
kafka | [2024-04-25 10:42:11,391] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-40 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
policy-pap | sasl.oauthbearer.token.endpoint.url = null
policy-db-migrator |
grafana | logger=migrator t=2024-04-25T10:41:36.929379458Z level=info msg="Executing migration" id="teams permissions migration"
kafka | [2024-04-25 10:42:11,391] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger)
policy-pap | security.protocol = PLAINTEXT
policy-db-migrator | > upgrade 0760-toscarelationshiptypes_toscarelationshiptype.sql
grafana | logger=migrator t=2024-04-25T10:41:36.930007805Z level=info msg="Migration successfully executed" id="teams permissions migration" duration=628.857µs
kafka | [2024-04-25 10:42:11,397] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-32 from NonExistentReplica to NewReplica (state.change.logger)
policy-pap | security.providers = null
policy-db-migrator | --------------
grafana | logger=migrator t=2024-04-25T10:41:36.933676092Z level=info msg="Executing migration" id="dashboard permissions"
kafka | [2024-04-25 10:42:11,397] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-5 from NonExistentReplica to NewReplica (state.change.logger)
policy-pap | send.buffer.bytes = 131072
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarelationshiptypes_toscarelationshiptype (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCARELATIONSHIPTYPES_TOSCARELATIONSHIPTYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion))
grafana | logger=migrator t=2024-04-25T10:41:36.934481004Z level=info msg="Migration successfully executed" id="dashboard permissions" duration=805.512µs
kafka | [2024-04-25 10:42:11,397] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-44 from NonExistentReplica to NewReplica (state.change.logger)
policy-pap | session.timeout.ms = 45000
policy-db-migrator | --------------
grafana | logger=migrator t=2024-04-25T10:41:36.938295254Z level=info msg="Executing migration" id="dashboard permissions uid scopes"
kafka | [2024-04-25 10:42:11,397] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-48 from NonExistentReplica to NewReplica (state.change.logger)
policy-pap | socket.connection.setup.timeout.max.ms = 30000
policy-db-migrator |
grafana | logger=migrator t=2024-04-25T10:41:36.938926691Z level=info msg="Migration successfully executed" id="dashboard permissions uid scopes" duration=631.707µs
kafka | [2024-04-25 10:42:11,398] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-46 from NonExistentReplica to NewReplica (state.change.logger)
policy-pap | socket.connection.setup.timeout.ms = 10000
grafana | logger=migrator t=2024-04-25T10:41:36.943907723Z level=info msg="Executing migration" id="drop managed folder create actions"
policy-pap | ssl.cipher.suites = null
policy-db-migrator |
grafana | logger=migrator t=2024-04-25T10:41:36.944104178Z level=info msg="Migration successfully executed" id="drop managed folder create actions" duration=196.455µs
policy-db-migrator | > upgrade 0770-toscarequirement.sql
kafka | [2024-04-25 10:42:11,398] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-20 from NonExistentReplica to NewReplica (state.change.logger)
policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
grafana | logger=migrator t=2024-04-25T10:41:36.94796773Z level=info msg="Executing migration" id="alerting notification permissions"
policy-db-migrator | --------------
kafka | [2024-04-25 10:42:11,398] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition policy-pdp-pap-0 from NonExistentReplica to NewReplica (state.change.logger)
policy-pap | ssl.endpoint.identification.algorithm = https
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarequirement (CAPABILITY VARCHAR(255) NULL, `DESCRIPTION` VARCHAR(255) NULL, NODE VARCHAR(255) NULL, RELATIONSHIP VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, type_name VARCHAR(255) NULL, type_version VARCHAR(255) NULL, PRIMARY KEY PK_TOSCAREQUIREMENT (name, version))
kafka | [2024-04-25 10:42:11,405] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-43 from NonExistentReplica to NewReplica (state.change.logger)
grafana | logger=migrator t=2024-04-25T10:41:36.948660279Z level=info msg="Migration successfully executed" id="alerting notification permissions" duration=692.029µs
policy-pap | ssl.engine.factory.class = null
policy-db-migrator | --------------
grafana | logger=migrator t=2024-04-25T10:41:36.952662714Z level=info msg="Executing migration" id="create query_history_star table v1"
kafka | [2024-04-25 10:42:11,405] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-24 from NonExistentReplica to NewReplica (state.change.logger)
policy-pap | ssl.key.password = null
policy-db-migrator |
grafana | logger=migrator t=2024-04-25T10:41:36.953837326Z level=info msg="Migration successfully executed" id="create query_history_star table v1" duration=1.173952ms
kafka | [2024-04-25 10:42:11,405] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-6 from NonExistentReplica to NewReplica (state.change.logger)
policy-pap | ssl.keymanager.algorithm = SunX509
policy-db-migrator |
grafana | logger=migrator t=2024-04-25T10:41:36.958729995Z level=info msg="Executing migration" id="add index query_history.user_id-query_uid"
kafka | [2024-04-25 10:42:11,405] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-18 from NonExistentReplica to NewReplica (state.change.logger)
policy-pap | ssl.keystore.certificate.chain = null
policy-db-migrator | > upgrade 0780-toscarequirements.sql
grafana | logger=migrator t=2024-04-25T10:41:36.960373649Z level=info msg="Migration successfully executed" id="add index query_history.user_id-query_uid" duration=1.642864ms
kafka | [2024-04-25 10:42:11,405] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-21 from NonExistentReplica to NewReplica (state.change.logger)
policy-pap | ssl.keystore.key = null
policy-db-migrator | --------------
grafana | logger=migrator t=2024-04-25T10:41:36.964377694Z level=info msg="Executing migration" id="add column org_id in query_history_star"
kafka | [2024-04-25 10:42:11,405] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-1 from NonExistentReplica to NewReplica (state.change.logger)
policy-pap | ssl.keystore.location = null
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarequirements (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCAREQUIREMENTS (name, version))
grafana | logger=migrator t=2024-04-25T10:41:36.972177781Z level=info msg="Migration successfully executed" id="add column org_id in query_history_star" duration=7.799727ms
kafka | [2024-04-25 10:42:11,405] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-14 from NonExistentReplica to NewReplica (state.change.logger)
policy-pap | ssl.keystore.password = null
policy-db-migrator | --------------
grafana | logger=migrator t=2024-04-25T10:41:36.975849598Z level=info msg="Executing migration" id="alter table query_history_star_mig column user_id type to bigint"
kafka | [2024-04-25 10:42:11,405] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-34 from NonExistentReplica to NewReplica (state.change.logger)
policy-pap | ssl.keystore.type = JKS
policy-db-migrator |
grafana | logger=migrator t=2024-04-25T10:41:36.975920399Z level=info msg="Migration successfully executed" id="alter table query_history_star_mig column user_id type to bigint" duration=70.661µs
kafka | [2024-04-25 10:42:11,405] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-16 from NonExistentReplica to NewReplica (state.change.logger)
policy-pap | ssl.protocol = TLSv1.3
policy-db-migrator |
grafana | logger=migrator t=2024-04-25T10:41:36.979777522Z level=info msg="Executing migration" id="create correlation table v1"
kafka | [2024-04-25 10:42:11,405] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-29 from NonExistentReplica to NewReplica (state.change.logger)
policy-pap | ssl.provider = null
policy-db-migrator | > upgrade 0790-toscarequirements_toscarequirement.sql
grafana | logger=migrator t=2024-04-25T10:41:36.980805439Z level=info msg="Migration successfully executed" id="create correlation table v1" duration=1.027187ms
kafka | [2024-04-25 10:42:11,405] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-11 from NonExistentReplica to NewReplica (state.change.logger)
policy-pap | ssl.secure.random.implementation = null
policy-db-migrator | --------------
grafana | logger=migrator t=2024-04-25T10:41:36.987572609Z level=info msg="Executing migration" id="add index correlations.uid"
kafka | [2024-04-25 10:42:11,406] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-0 from NonExistentReplica to NewReplica (state.change.logger)
policy-pap | ssl.trustmanager.algorithm = PKIX
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarequirements_toscarequirement (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCAREQUIREMENTS_TOSCAREQUIREMENT (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion))
grafana | logger=migrator t=2024-04-25T10:41:36.989215372Z level=info msg="Migration successfully executed" id="add index correlations.uid" duration=1.646322ms
kafka | [2024-04-25 10:42:11,406] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-22 from NonExistentReplica to NewReplica (state.change.logger)
policy-pap | ssl.truststore.certificates = null
policy-db-migrator | --------------
grafana | logger=migrator t=2024-04-25T10:41:36.99330515Z level=info msg="Executing migration" id="add index correlations.source_uid"
kafka | [2024-04-25 10:42:11,406] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-47 from NonExistentReplica to NewReplica (state.change.logger)
policy-pap | ssl.truststore.location = null
policy-db-migrator |
grafana | logger=migrator t=2024-04-25T10:41:36.994944853Z level=info msg="Migration successfully executed" id="add index correlations.source_uid" duration=1.639323ms
kafka | [2024-04-25 10:42:11,406] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-36 from NonExistentReplica to NewReplica (state.change.logger)
policy-pap | ssl.truststore.password = null
policy-db-migrator |
kafka | [2024-04-25 10:42:11,406] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-28 from NonExistentReplica to NewReplica (state.change.logger)
grafana | logger=migrator t=2024-04-25T10:41:36.999071423Z level=info msg="Executing migration" id="add correlation config column"
policy-pap | ssl.truststore.type = JKS
policy-db-migrator | > upgrade 0800-toscaservicetemplate.sql
kafka | [2024-04-25 10:42:11,406] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-42 from NonExistentReplica to NewReplica (state.change.logger)
grafana | logger=migrator t=2024-04-25T10:41:37.007171157Z level=info msg="Migration successfully executed" id="add correlation config column" duration=8.098834ms
policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
policy-db-migrator | --------------
kafka | [2024-04-25 10:42:11,406] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-9 from NonExistentReplica to NewReplica (state.change.logger)
grafana | logger=migrator t=2024-04-25T10:41:37.012292453Z level=info msg="Executing migration" id="drop index IDX_correlation_uid - v1"
policy-pap |
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscaservicetemplate (`DESCRIPTION` VARCHAR(255) NULL, TOSCADEFINITIONSVERSION VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, capabilityTypesVersion VARCHAR(20) NULL, capabilityTypesName VARCHAR(120) NULL, dataTypesName VARCHAR(120) NULL, dataTypesVersion VARCHAR(20) NULL, nodeTypesVersion VARCHAR(20) NULL, nodeTypesName VARCHAR(120) NULL, policyTypesName VARCHAR(120) NULL, policyTypesVersion VARCHAR(20) NULL, relationshipTypesVersion VARCHAR(20) NULL, relationshipTypesName VARCHAR(120) NULL, topologyTemplateLocalName VARCHAR(120) NULL, topologyTemplateParentKeyName VARCHAR(120) NULL, topologyTemplateParentKeyVersion VARCHAR(15) NULL, topologyTemplateParentLocalName VARCHAR(120) NULL, PRIMARY KEY PK_TOSCASERVICETEMPLATE (name, version))
kafka | [2024-04-25 10:42:11,406] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-37 from NonExistentReplica to NewReplica (state.change.logger)
grafana | logger=migrator t=2024-04-25T10:41:37.013288159Z level=info msg="Migration successfully executed" id="drop index IDX_correlation_uid - v1" duration=994.056µs
policy-pap | [2024-04-25T10:42:10.807+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1
policy-db-migrator | --------------
kafka | [2024-04-25 10:42:11,406] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-13 from NonExistentReplica to NewReplica (state.change.logger)
grafana | logger=migrator t=2024-04-25T10:41:37.017061964Z level=info msg="Executing migration" id="drop index IDX_correlation_source_uid - v1"
policy-pap | [2024-04-25T10:42:10.807+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5
policy-db-migrator |
kafka | [2024-04-25 10:42:11,406] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-30 from NonExistentReplica to NewReplica (state.change.logger)
policy-pap | [2024-04-25T10:42:10.807+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1714041730807
policy-db-migrator |
grafana | logger=migrator t=2024-04-25T10:41:37.018294275Z level=info msg="Migration successfully executed" id="drop index IDX_correlation_source_uid - v1" duration=1.231051ms
kafka | [2024-04-25 10:42:11,406] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-35 from NonExistentReplica to NewReplica (state.change.logger)
policy-pap | [2024-04-25T10:42:10.807+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Subscribed to topic(s): policy-pdp-pap
policy-db-migrator | > upgrade 0810-toscatopologytemplate.sql
grafana | logger=migrator t=2024-04-25T10:41:37.023212589Z level=info msg="Executing migration" id="Rename table correlation to correlation_tmp_qwerty - v1"
kafka | [2024-04-25 10:42:11,406] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-39 from NonExistentReplica to NewReplica (state.change.logger)
policy-db-migrator | --------------
grafana | logger=migrator t=2024-04-25T10:41:37.045567841Z level=info msg="Migration successfully executed" id="Rename table correlation to correlation_tmp_qwerty - v1" duration=22.356212ms
policy-pap | [2024-04-25T10:42:10.807+00:00|INFO|ServiceManager|main] Policy PAP starting topics
kafka | [2024-04-25 10:42:11,406] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-12 from NonExistentReplica to NewReplica (state.change.logger)
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscatopologytemplate (`description` VARCHAR(255) NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, nodeTemplatessVersion VARCHAR(20) NULL, nodeTemplatesName VARCHAR(120) NULL, policyVersion VARCHAR(20) NULL, policyName VARCHAR(120) NULL, PRIMARY KEY PK_TOSCATOPOLOGYTEMPLATE (parentLocalName, localName, parentKeyVersion, parentKeyName))
grafana | logger=migrator t=2024-04-25T10:41:37.049583182Z level=info msg="Executing migration" id="create correlation v2"
kafka | [2024-04-25 10:42:11,406] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-27 from NonExistentReplica to NewReplica (state.change.logger)
policy-db-migrator | --------------
policy-pap | [2024-04-25T10:42:10.807+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=policy-pap, consumerInstance=da0a1662-3ba4-4ead-84f8-5e792fa1262c, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-heartbeat,5,main], topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-heartbeat, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting
grafana | logger=migrator t=2024-04-25T10:41:37.050364372Z level=info msg="Migration successfully executed" id="create correlation v2" duration=780.75µs
kafka | [2024-04-25 10:42:11,406] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-45 from NonExistentReplica to NewReplica (state.change.logger)
policy-db-migrator |
policy-pap | [2024-04-25T10:42:10.807+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=ae8023b6-4521-455f-bfa2-c4d8e9909c4a, consumerInstance=policy-pap, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-pdp-pap,5,main], topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting
grafana | logger=migrator t=2024-04-25T10:41:37.05427687Z level=info msg="Executing migration" id="create index IDX_correlation_uid - v2"
kafka | [2024-04-25 10:42:11,407] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-19 from NonExistentReplica to NewReplica (state.change.logger)
policy-db-migrator |
grafana | logger=migrator t=2024-04-25T10:41:37.055017769Z level=info msg="Migration successfully executed" id="create index IDX_correlation_uid - v2" duration=740.859µs
kafka | [2024-04-25 10:42:11,407] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-49 from NonExistentReplica to NewReplica (state.change.logger)
policy-pap | [2024-04-25T10:42:10.808+00:00|INFO|InlineBusTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=f5a9b953-4e28-428f-887b-ecf3b35c91ee, alive=false, publisher=null]]: starting
policy-db-migrator | > upgrade 0820-toscatrigger.sql
grafana | logger=migrator t=2024-04-25T10:41:37.059873971Z level=info msg="Executing migration" id="create index IDX_correlation_source_uid - v2"
kafka | [2024-04-25 10:42:11,407] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-40 from NonExistentReplica to NewReplica (state.change.logger)
policy-pap | [2024-04-25T10:42:10.823+00:00|INFO|ProducerConfig|main] ProducerConfig values:
policy-db-migrator | --------------
grafana | logger=migrator t=2024-04-25T10:41:37.061631995Z level=info msg="Migration successfully executed" id="create index IDX_correlation_source_uid - v2" duration=1.757674ms
kafka | [2024-04-25 10:42:11,407] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-41 from NonExistentReplica to NewReplica (state.change.logger)
policy-pap | acks = -1
grafana | logger=migrator t=2024-04-25T10:41:37.065727868Z level=info msg="Executing migration" id="create index IDX_correlation_org_id - v2"
kafka | [2024-04-25 10:42:11,407] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-38 from NonExistentReplica to NewReplica (state.change.logger)
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscatrigger (ACTION VARCHAR(255) NULL, toscaCondition LONGBLOB DEFAULT NULL, toscaConstraint LONGBLOB DEFAULT NULL, `DESCRIPTION` VARCHAR(255) NULL, EVALUATIONS INT DEFAULT NULL, EVENTTYPE VARCHAR(255) NULL, METHOD VARCHAR(255) NULL, `PERIOD` LONGBLOB DEFAULT NULL, SCHEDULE LONGBLOB DEFAULT NULL, TARGETFILTER LONGBLOB DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_TOSCATRIGGER (parentLocalName, localName, parentKeyVersion, parentKeyName))
policy-pap | auto.include.jmx.reporter = true
grafana | logger=migrator t=2024-04-25T10:41:37.066804335Z level=info msg="Migration successfully executed" id="create index IDX_correlation_org_id - v2" duration=1.077267ms
kafka | [2024-04-25 10:42:11,407] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-8 from NonExistentReplica to NewReplica (state.change.logger)
policy-db-migrator | --------------
policy-pap | batch.size = 16384
grafana | logger=migrator t=2024-04-25T10:41:37.070754724Z level=info msg="Executing migration" id="copy correlation v1 to v2"
policy-db-migrator |
policy-pap | bootstrap.servers = [kafka:9092]
grafana | logger=migrator t=2024-04-25T10:41:37.071031181Z level=info msg="Migration successfully executed" id="copy correlation v1 to v2" duration=276.317µs
kafka | [2024-04-25 10:42:11,407] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-7 from NonExistentReplica to NewReplica (state.change.logger)
policy-db-migrator |
policy-pap | buffer.memory = 33554432
grafana | logger=migrator t=2024-04-25T10:41:37.075888704Z level=info msg="Executing migration" id="drop correlation_tmp_qwerty"
kafka | [2024-04-25 10:42:11,407] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-33 from NonExistentReplica to NewReplica (state.change.logger)
policy-db-migrator | > upgrade 0830-FK_ToscaNodeTemplate_capabilitiesName.sql
policy-pap | client.dns.lookup = use_all_dns_ips
grafana | logger=migrator t=2024-04-25T10:41:37.077087874Z level=info msg="Migration successfully executed" id="drop correlation_tmp_qwerty" duration=1.19864ms
kafka | [2024-04-25 10:42:11,407] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-25 from NonExistentReplica to NewReplica (state.change.logger)
policy-db-migrator | --------------
policy-pap | client.id = producer-1
grafana | logger=migrator t=2024-04-25T10:41:37.08212604Z level=info msg="Executing migration" id="add provisioning column"
kafka | [2024-04-25 10:42:11,407] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-31 from NonExistentReplica to NewReplica (state.change.logger)
policy-db-migrator | CREATE INDEX FK_ToscaNodeTemplate_capabilitiesName ON toscanodetemplate(capabilitiesName, capabilitiesVersion)
policy-pap | compression.type = none
grafana | logger=migrator t=2024-04-25T10:41:37.093038965Z level=info msg="Migration successfully executed" id="add provisioning column" duration=10.913265ms
kafka | [2024-04-25 10:42:11,407] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-23 from NonExistentReplica to NewReplica (state.change.logger)
policy-db-migrator | --------------
policy-pap | connections.max.idle.ms = 540000
grafana | logger=migrator t=2024-04-25T10:41:37.096663667Z level=info msg="Executing migration" id="create entity_events table"
kafka | [2024-04-25 10:42:11,407] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-10 from NonExistentReplica to NewReplica (state.change.logger)
policy-db-migrator |
policy-pap | delivery.timeout.ms = 120000
kafka | [2024-04-25 10:42:11,407] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-2 from NonExistentReplica to NewReplica (state.change.logger)
policy-db-migrator |
grafana | logger=migrator t=2024-04-25T10:41:37.097480787Z level=info msg="Migration successfully executed" id="create entity_events table" duration=818.09µs
policy-pap | enable.idempotence = true
kafka | [2024-04-25 10:42:11,407] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-17 from NonExistentReplica to NewReplica (state.change.logger)
policy-db-migrator | > upgrade 0840-FK_ToscaNodeTemplate_requirementsName.sql
grafana | logger=migrator t=2024-04-25T10:41:37.102457282Z level=info msg="Executing migration" id="create dashboard public config v1"
policy-pap | interceptor.classes = []
kafka | [2024-04-25 10:42:11,408] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-4 from NonExistentReplica to NewReplica (state.change.logger)
policy-db-migrator | --------------
grafana | logger=migrator t=2024-04-25T10:41:37.103459467Z level=info msg="Migration successfully executed" id="create dashboard public config v1" duration=1.001825ms
policy-pap | key.serializer = class org.apache.kafka.common.serialization.StringSerializer
kafka | [2024-04-25 10:42:11,408] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-15 from NonExistentReplica to NewReplica (state.change.logger)
policy-db-migrator | CREATE INDEX FK_ToscaNodeTemplate_requirementsName ON toscanodetemplate(requirementsName, requirementsVersion)
grafana | logger=migrator t=2024-04-25T10:41:37.107489908Z level=info msg="Executing migration" id="drop index UQE_dashboard_public_config_uid - v1"
policy-pap | linger.ms = 0
kafka | [2024-04-25 10:42:11,408] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-26 from NonExistentReplica to NewReplica (state.change.logger)
policy-db-migrator | --------------
grafana | logger=migrator t=2024-04-25T10:41:37.108154126Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop index UQE_dashboard_public_config_uid - v1"
policy-pap | max.block.ms = 60000
policy-db-migrator |
kafka | [2024-04-25 10:42:11,408] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-3 from NonExistentReplica to NewReplica (state.change.logger)
grafana | logger=migrator t=2024-04-25T10:41:37.1119052Z level=info msg="Executing migration" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v1"
policy-pap | max.in.flight.requests.per.connection = 5
policy-db-migrator |
kafka | [2024-04-25 10:42:11,408] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger)
grafana | logger=migrator t=2024-04-25T10:41:37.112560796Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v1"
policy-pap | max.request.size = 1048576
policy-db-migrator | > upgrade 0850-FK_ToscaNodeType_requirementsName.sql
kafka | [2024-04-25 10:42:11,604] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-22 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
grafana | logger=migrator t=2024-04-25T10:41:37.118113746Z level=info msg="Executing migration" id="Drop old dashboard public config table"
policy-pap | metadata.max.age.ms = 300000
policy-db-migrator | --------------
grafana | logger=migrator t=2024-04-25T10:41:37.118846334Z level=info msg="Migration successfully executed" id="Drop old dashboard public config table" duration=730.398µs
policy-db-migrator | CREATE INDEX FK_ToscaNodeType_requirementsName ON toscanodetype(requirementsName, requirementsVersion)
kafka | [2024-04-25 10:42:11,604] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-30 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
grafana | logger=migrator t=2024-04-25T10:41:37.122679811Z level=info msg="Executing migration" id="recreate dashboard public config v1"
kafka | [2024-04-25 10:42:11,604] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-25 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
policy-pap | metadata.max.idle.ms = 300000
policy-db-migrator | --------------
grafana | logger=migrator t=2024-04-25T10:41:37.123689156Z level=info msg="Migration successfully executed" id="recreate dashboard public config v1" duration=1.008565ms
kafka | [2024-04-25 10:42:11,604] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-35 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
policy-pap | metric.reporters = []
policy-db-migrator |
grafana | logger=migrator t=2024-04-25T10:41:37.128828735Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_uid - v1"
kafka | [2024-04-25 10:42:11,605] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-38 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
policy-pap | metrics.num.samples = 2
policy-db-migrator |
grafana | logger=migrator t=2024-04-25T10:41:37.130493548Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_uid - v1" duration=1.667243ms
kafka | [2024-04-25 10:42:11,605] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-13 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED,
partitionEpoch=0) (state.change.logger) policy-pap | metrics.recording.level = INFO policy-db-migrator | > upgrade 0860-FK_ToscaServiceTemplate_capabilityTypesName.sql grafana | logger=migrator t=2024-04-25T10:41:37.134751824Z level=info msg="Executing migration" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v1" kafka | [2024-04-25 10:42:11,605] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-8 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-pap | metrics.sample.window.ms = 30000 policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-25T10:41:37.136492468Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v1" duration=1.740364ms kafka | [2024-04-25 10:42:11,605] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-21 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-pap | partitioner.adaptive.partitioning.enable = true grafana | logger=migrator t=2024-04-25T10:41:37.141075903Z level=info msg="Executing migration" id="drop index UQE_dashboard_public_config_uid - v2" kafka | [2024-04-25 10:42:11,605] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-4 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-db-migrator | CREATE INDEX FK_ToscaServiceTemplate_capabilityTypesName ON toscaservicetemplate(capabilityTypesName, capabilityTypesVersion) policy-pap | 
partitioner.availability.timeout.ms = 0 kafka | [2024-04-25 10:42:11,605] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-27 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-25T10:41:37.142237173Z level=info msg="Migration successfully executed" id="drop index UQE_dashboard_public_config_uid - v2" duration=1.1625ms policy-pap | partitioner.class = null kafka | [2024-04-25 10:42:11,605] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-7 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-db-migrator | grafana | logger=migrator t=2024-04-25T10:41:37.14691453Z level=info msg="Executing migration" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v2" policy-pap | partitioner.ignore.keys = false kafka | [2024-04-25 10:42:11,606] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-9 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-db-migrator | grafana | logger=migrator t=2024-04-25T10:41:37.147908825Z level=info msg="Migration successfully executed" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v2" duration=993.645µs policy-pap | receive.buffer.bytes = 32768 kafka | [2024-04-25 10:42:11,606] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-46 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, 
isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-db-migrator | > upgrade 0870-FK_ToscaServiceTemplate_dataTypesName.sql grafana | logger=migrator t=2024-04-25T10:41:37.15170074Z level=info msg="Executing migration" id="Drop public config table" policy-pap | reconnect.backoff.max.ms = 1000 kafka | [2024-04-25 10:42:11,606] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-41 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-25T10:41:37.152440149Z level=info msg="Migration successfully executed" id="Drop public config table" duration=738.919µs policy-pap | reconnect.backoff.ms = 50 kafka | [2024-04-25 10:42:11,606] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-33 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-db-migrator | CREATE INDEX FK_ToscaServiceTemplate_dataTypesName ON toscaservicetemplate(dataTypesName, dataTypesVersion) grafana | logger=migrator t=2024-04-25T10:41:37.156073341Z level=info msg="Executing migration" id="Recreate dashboard public config v2" policy-pap | request.timeout.ms = 30000 kafka | [2024-04-25 10:42:11,607] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-23 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-db-migrator | -------------- grafana | logger=migrator 
t=2024-04-25T10:41:37.157198519Z level=info msg="Migration successfully executed" id="Recreate dashboard public config v2" duration=1.125108ms policy-pap | retries = 2147483647 kafka | [2024-04-25 10:42:11,607] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-49 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-db-migrator | grafana | logger=migrator t=2024-04-25T10:41:37.162228465Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_uid - v2" policy-pap | retry.backoff.ms = 100 kafka | [2024-04-25 10:42:11,607] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-47 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-db-migrator | grafana | logger=migrator t=2024-04-25T10:41:37.163292142Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_uid - v2" duration=1.063297ms policy-pap | sasl.client.callback.handler.class = null kafka | [2024-04-25 10:42:11,607] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-16 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-db-migrator | > upgrade 0880-FK_ToscaServiceTemplate_nodeTypesName.sql grafana | logger=migrator t=2024-04-25T10:41:37.166895173Z level=info msg="Executing migration" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v2" policy-pap | sasl.jaas.config = null kafka | [2024-04-25 10:42:11,607] INFO [Controller id=1 epoch=1] Changed partition 
__consumer_offsets-28 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-25T10:41:37.16799575Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v2" duration=1.100167ms policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit kafka | [2024-04-25 10:42:11,607] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-31 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-db-migrator | CREATE INDEX FK_ToscaServiceTemplate_nodeTypesName ON toscaservicetemplate(nodeTypesName, nodeTypesVersion) policy-pap | sasl.kerberos.min.time.before.relogin = 60000 kafka | [2024-04-25 10:42:11,608] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-36 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) grafana | logger=migrator t=2024-04-25T10:41:37.172543555Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_access_token - v2" policy-db-migrator | -------------- policy-pap | sasl.kerberos.service.name = null kafka | [2024-04-25 10:42:11,608] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-42 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) grafana | logger=migrator 
t=2024-04-25T10:41:37.173603752Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_access_token - v2" duration=1.059956ms policy-db-migrator | policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 grafana | logger=migrator t=2024-04-25T10:41:37.177312935Z level=info msg="Executing migration" id="Rename table dashboard_public_config to dashboard_public - v2" kafka | [2024-04-25 10:42:11,608] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-3 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-db-migrator | policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 grafana | logger=migrator t=2024-04-25T10:41:37.200706874Z level=info msg="Migration successfully executed" id="Rename table dashboard_public_config to dashboard_public - v2" duration=23.392949ms kafka | [2024-04-25 10:42:11,608] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-18 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-db-migrator | > upgrade 0890-FK_ToscaServiceTemplate_policyTypesName.sql policy-pap | sasl.login.callback.handler.class = null grafana | logger=migrator t=2024-04-25T10:41:37.204607371Z level=info msg="Executing migration" id="add annotations_enabled column" kafka | [2024-04-25 10:42:11,608] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-37 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-db-migrator | -------------- policy-pap | sasl.login.class = 
null grafana | logger=migrator t=2024-04-25T10:41:37.212689095Z level=info msg="Migration successfully executed" id="add annotations_enabled column" duration=8.079194ms kafka | [2024-04-25 10:42:11,609] INFO [Controller id=1 epoch=1] Changed partition policy-pdp-pap-0 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-db-migrator | CREATE INDEX FK_ToscaServiceTemplate_policyTypesName ON toscaservicetemplate(policyTypesName, policyTypesVersion) policy-pap | sasl.login.connect.timeout.ms = null grafana | logger=migrator t=2024-04-25T10:41:37.255936682Z level=info msg="Executing migration" id="add time_selection_enabled column" kafka | [2024-04-25 10:42:11,609] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-15 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-db-migrator | -------------- policy-pap | sasl.login.read.timeout.ms = null grafana | logger=migrator t=2024-04-25T10:41:37.267783891Z level=info msg="Migration successfully executed" id="add time_selection_enabled column" duration=11.850479ms kafka | [2024-04-25 10:42:11,609] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-24 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-db-migrator | policy-pap | sasl.login.refresh.buffer.seconds = 300 grafana | logger=migrator t=2024-04-25T10:41:37.271956025Z level=info msg="Executing migration" id="delete orphaned public dashboards" kafka | [2024-04-25 10:42:11,609] INFO [Controller id=1 epoch=1] 
Changed partition __consumer_offsets-17 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-pap | sasl.login.refresh.min.period.seconds = 60 kafka | [2024-04-25 10:42:11,609] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-48 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-db-migrator | grafana | logger=migrator t=2024-04-25T10:41:37.272216682Z level=info msg="Migration successfully executed" id="delete orphaned public dashboards" duration=213.725µs policy-pap | sasl.login.refresh.window.factor = 0.8 kafka | [2024-04-25 10:42:11,609] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-19 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-db-migrator | > upgrade 0900-FK_ToscaServiceTemplate_relationshipTypesName.sql grafana | logger=migrator t=2024-04-25T10:41:37.275768441Z level=info msg="Executing migration" id="add share column" policy-pap | sasl.login.refresh.window.jitter = 0.05 policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-25T10:41:37.284005768Z level=info msg="Migration successfully executed" id="add share column" duration=8.236717ms kafka | [2024-04-25 10:42:11,609] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-11 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-pap | 
sasl.login.retry.backoff.max.ms = 10000 policy-db-migrator | CREATE INDEX FK_ToscaServiceTemplate_relationshipTypesName ON toscaservicetemplate(relationshipTypesName, relationshipTypesVersion) grafana | logger=migrator t=2024-04-25T10:41:37.288993634Z level=info msg="Executing migration" id="backfill empty share column fields with default of public" kafka | [2024-04-25 10:42:11,609] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-2 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-pap | sasl.login.retry.backoff.ms = 100 policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-25T10:41:37.289148777Z level=info msg="Migration successfully executed" id="backfill empty share column fields with default of public" duration=155.653µs kafka | [2024-04-25 10:42:11,609] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-43 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-pap | sasl.mechanism = GSSAPI grafana | logger=migrator t=2024-04-25T10:41:37.293865346Z level=info msg="Executing migration" id="create file table" kafka | [2024-04-25 10:42:11,609] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-6 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-db-migrator | policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 grafana | logger=migrator t=2024-04-25T10:41:37.295333843Z level=info msg="Migration successfully executed" id="create file table" duration=1.469467ms kafka | 
[2024-04-25 10:42:11,609] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-14 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-db-migrator | policy-pap | sasl.oauthbearer.expected.audience = null grafana | logger=migrator t=2024-04-25T10:41:37.299105328Z level=info msg="Executing migration" id="file table idx: path natural pk" kafka | [2024-04-25 10:42:11,609] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-20 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-db-migrator | > upgrade 0910-FK_ToscaTopologyTemplate_nodeTemplatesName.sql policy-pap | sasl.oauthbearer.expected.issuer = null grafana | logger=migrator t=2024-04-25T10:41:37.30077967Z level=info msg="Migration successfully executed" id="file table idx: path natural pk" duration=1.673482ms kafka | [2024-04-25 10:42:11,609] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-0 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-db-migrator | -------------- policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 grafana | logger=migrator t=2024-04-25T10:41:37.304436753Z level=info msg="Executing migration" id="file table idx: parent_folder_path_hash fast folder retrieval" kafka | [2024-04-25 10:42:11,610] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-44 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, 
brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-db-migrator | CREATE INDEX FK_ToscaTopologyTemplate_nodeTemplatesName ON toscatopologytemplate(nodeTemplatesName, nodeTemplatessVersion) policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 grafana | logger=migrator t=2024-04-25T10:41:37.3055615Z level=info msg="Migration successfully executed" id="file table idx: parent_folder_path_hash fast folder retrieval" duration=1.124048ms kafka | [2024-04-25 10:42:11,610] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-39 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-db-migrator | -------------- policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 grafana | logger=migrator t=2024-04-25T10:41:37.310421913Z level=info msg="Executing migration" id="create file_meta table" kafka | [2024-04-25 10:42:11,610] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-12 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-db-migrator | policy-pap | sasl.oauthbearer.jwks.endpoint.url = null grafana | logger=migrator t=2024-04-25T10:41:37.311191872Z level=info msg="Migration successfully executed" id="create file_meta table" duration=769.929µs kafka | [2024-04-25 10:42:11,610] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-45 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-db-migrator | policy-pap | 
sasl.oauthbearer.scope.claim.name = scope grafana | logger=migrator t=2024-04-25T10:41:37.314729031Z level=info msg="Executing migration" id="file table idx: path key" kafka | [2024-04-25 10:42:11,610] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-1 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-db-migrator | > upgrade 0920-FK_ToscaTopologyTemplate_policyName.sql policy-pap | sasl.oauthbearer.sub.claim.name = sub grafana | logger=migrator t=2024-04-25T10:41:37.315760767Z level=info msg="Migration successfully executed" id="file table idx: path key" duration=1.031446ms kafka | [2024-04-25 10:42:11,610] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-5 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-db-migrator | -------------- policy-pap | sasl.oauthbearer.token.endpoint.url = null grafana | logger=migrator t=2024-04-25T10:41:37.321094732Z level=info msg="Executing migration" id="set path collation in file table" kafka | [2024-04-25 10:42:11,610] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-26 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-db-migrator | CREATE INDEX FK_ToscaTopologyTemplate_policyName ON toscatopologytemplate(policyName, policyVersion) policy-pap | security.protocol = PLAINTEXT grafana | logger=migrator t=2024-04-25T10:41:37.321162953Z level=info msg="Migration successfully executed" id="set path collation in file table" duration=68.321µs kafka | 
[2024-04-25 10:42:11,610] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-29 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-db-migrator | -------------- policy-pap | security.providers = null grafana | logger=migrator t=2024-04-25T10:41:37.324647291Z level=info msg="Executing migration" id="migrate contents column to mediumblob for MySQL" policy-db-migrator | kafka | [2024-04-25 10:42:11,610] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-34 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-pap | send.buffer.bytes = 131072 grafana | logger=migrator t=2024-04-25T10:41:37.324722783Z level=info msg="Migration successfully executed" id="migrate contents column to mediumblob for MySQL" duration=75.751µs policy-db-migrator | kafka | [2024-04-25 10:42:11,610] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-10 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-pap | socket.connection.setup.timeout.max.ms = 30000 grafana | logger=migrator t=2024-04-25T10:41:37.32939592Z level=info msg="Executing migration" id="managed permissions migration" policy-db-migrator | > upgrade 0940-PdpPolicyStatus_PdpGroup.sql kafka | [2024-04-25 10:42:11,610] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-32 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), 
leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-pap | socket.connection.setup.timeout.ms = 10000 grafana | logger=migrator t=2024-04-25T10:41:37.329946164Z level=info msg="Migration successfully executed" id="managed permissions migration" duration=553.154µs policy-db-migrator | -------------- kafka | [2024-04-25 10:42:11,610] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-40 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-pap | ssl.cipher.suites = null grafana | logger=migrator t=2024-04-25T10:41:37.33336313Z level=info msg="Executing migration" id="managed folder permissions alert actions migration" policy-db-migrator | CREATE INDEX PdpPolicyStatus_PdpGroup ON pdppolicystatus(PDPGROUP) kafka | [2024-04-25 10:42:11,613] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=13, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-13 (state.change.logger) policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] grafana | logger=migrator t=2024-04-25T10:41:37.333555325Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions migration" duration=191.784µs policy-db-migrator | -------------- kafka | [2024-04-25 10:42:11,613] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=46, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition 
__consumer_offsets-46 (state.change.logger) policy-pap | ssl.endpoint.identification.algorithm = https grafana | logger=migrator t=2024-04-25T10:41:37.337319629Z level=info msg="Executing migration" id="RBAC action name migrator" policy-db-migrator | kafka | [2024-04-25 10:42:11,613] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=9, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-9 (state.change.logger) policy-pap | ssl.engine.factory.class = null grafana | logger=migrator t=2024-04-25T10:41:37.338651523Z level=info msg="Migration successfully executed" id="RBAC action name migrator" duration=1.327284ms policy-db-migrator | kafka | [2024-04-25 10:42:11,613] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=42, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-42 (state.change.logger) policy-pap | ssl.key.password = null grafana | logger=migrator t=2024-04-25T10:41:37.342224302Z level=info msg="Executing migration" id="Add UID column to playlist" policy-db-migrator | > upgrade 0950-TscaServiceTemplatetopologyTemplateParentLocalName.sql kafka | [2024-04-25 10:42:11,613] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=21, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-21 (state.change.logger) policy-pap | 
ssl.keymanager.algorithm = SunX509 grafana | logger=migrator t=2024-04-25T10:41:37.351332461Z level=info msg="Migration successfully executed" id="Add UID column to playlist" duration=9.107319ms policy-db-migrator | -------------- kafka | [2024-04-25 10:42:11,613] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=17, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-17 (state.change.logger) policy-pap | ssl.keystore.certificate.chain = null grafana | logger=migrator t=2024-04-25T10:41:37.355942678Z level=info msg="Executing migration" id="Update uid column values in playlist" policy-db-migrator | CREATE INDEX TscaServiceTemplatetopologyTemplateParentLocalName ON toscaservicetemplate(topologyTemplateParentLocalName, topologyTemplateLocalName, topologyTemplateParentKeyVersion, topologyTemplateParentKeyName) kafka | [2024-04-25 10:42:11,614] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=30, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-30 (state.change.logger) policy-pap | ssl.keystore.key = null grafana | logger=migrator t=2024-04-25T10:41:37.356093721Z level=info msg="Migration successfully executed" id="Update uid column values in playlist" duration=151.453µs policy-db-migrator | -------------- kafka | [2024-04-25 10:42:11,614] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=26, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, 
replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-26 (state.change.logger) policy-pap | ssl.keystore.location = null grafana | logger=migrator t=2024-04-25T10:41:37.359825015Z level=info msg="Executing migration" id="Add index for uid in playlist" policy-db-migrator | kafka | [2024-04-25 10:42:11,614] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=5, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-5 (state.change.logger) policy-pap | ssl.keystore.password = null grafana | logger=migrator t=2024-04-25T10:41:37.360973494Z level=info msg="Migration successfully executed" id="Add index for uid in playlist" duration=1.147879ms policy-db-migrator | kafka | [2024-04-25 10:42:11,614] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=38, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-38 (state.change.logger) policy-pap | ssl.keystore.type = JKS grafana | logger=migrator t=2024-04-25T10:41:37.364424611Z level=info msg="Executing migration" id="update group index for alert rules" policy-db-migrator | > upgrade 0960-FK_ToscaNodeTemplate_capabilitiesName.sql kafka | [2024-04-25 10:42:11,614] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=1, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, 
leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-1 (state.change.logger) policy-pap | ssl.protocol = TLSv1.3 grafana | logger=migrator t=2024-04-25T10:41:37.364826661Z level=info msg="Migration successfully executed" id="update group index for alert rules" duration=401.75µs policy-db-migrator | -------------- kafka | [2024-04-25 10:42:11,614] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=34, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-34 (state.change.logger) policy-pap | ssl.provider = null grafana | logger=migrator t=2024-04-25T10:41:37.369390806Z level=info msg="Executing migration" id="managed folder permissions alert actions repeated migration" policy-db-migrator | ALTER TABLE toscanodetemplate ADD CONSTRAINT FK_ToscaNodeTemplate_capabilitiesName FOREIGN KEY (capabilitiesName, capabilitiesVersion) REFERENCES toscacapabilityassignments (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT kafka | [2024-04-25 10:42:11,614] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=16, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-16 (state.change.logger) policy-pap | ssl.secure.random.implementation = null grafana | logger=migrator t=2024-04-25T10:41:37.369585901Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions repeated migration" duration=192.955µs policy-db-migrator | -------------- kafka | [2024-04-25 10:42:11,614] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request 
LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=45, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-45 (state.change.logger) policy-pap | ssl.trustmanager.algorithm = PKIX grafana | logger=migrator t=2024-04-25T10:41:37.376132275Z level=info msg="Executing migration" id="admin only folder/dashboard permission" policy-db-migrator | kafka | [2024-04-25 10:42:11,614] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=12, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-12 (state.change.logger) policy-pap | ssl.truststore.certificates = null grafana | logger=migrator t=2024-04-25T10:41:37.37670888Z level=info msg="Migration successfully executed" id="admin only folder/dashboard permission" duration=576.954µs policy-db-migrator | kafka | [2024-04-25 10:42:11,614] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=41, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-41 (state.change.logger) policy-pap | ssl.truststore.location = null grafana | logger=migrator t=2024-04-25T10:41:37.380254389Z level=info msg="Executing migration" id="add action column to seed_assignment" policy-db-migrator | > upgrade 0970-FK_ToscaNodeTemplate_requirementsName.sql policy-pap | ssl.truststore.password = null kafka | [2024-04-25 10:42:11,614] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr 
request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=24, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-24 (state.change.logger) grafana | logger=migrator t=2024-04-25T10:41:37.390377943Z level=info msg="Migration successfully executed" id="add action column to seed_assignment" duration=10.122784ms policy-db-migrator | -------------- policy-pap | ssl.truststore.type = JKS kafka | [2024-04-25 10:42:11,614] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=20, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-20 (state.change.logger) grafana | logger=migrator t=2024-04-25T10:41:37.396018025Z level=info msg="Executing migration" id="add scope column to seed_assignment" policy-db-migrator | ALTER TABLE toscanodetemplate ADD CONSTRAINT FK_ToscaNodeTemplate_requirementsName FOREIGN KEY (requirementsName, requirementsVersion) REFERENCES toscarequirements (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT policy-pap | transaction.timeout.ms = 60000 kafka | [2024-04-25 10:42:11,614] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=49, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-49 (state.change.logger) grafana | logger=migrator t=2024-04-25T10:41:37.405232448Z level=info msg="Migration successfully executed" id="add scope column to seed_assignment" duration=9.214533ms 
policy-db-migrator | -------------- policy-pap | transactional.id = null kafka | [2024-04-25 10:42:11,614] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-0 (state.change.logger) grafana | logger=migrator t=2024-04-25T10:41:37.408823448Z level=info msg="Executing migration" id="remove unique index builtin_role_role_name before nullable update" policy-db-migrator | policy-pap | value.serializer = class org.apache.kafka.common.serialization.StringSerializer kafka | [2024-04-25 10:42:11,614] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=29, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-29 (state.change.logger) policy-db-migrator | grafana | logger=migrator t=2024-04-25T10:41:37.409585447Z level=info msg="Migration successfully executed" id="remove unique index builtin_role_role_name before nullable update" duration=761.689µs policy-pap | kafka | [2024-04-25 10:42:11,614] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=25, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-25 (state.change.logger) policy-db-migrator | > upgrade 0980-FK_ToscaNodeType_requirementsName.sql grafana | logger=migrator t=2024-04-25T10:41:37.413878315Z level=info msg="Executing 
migration" id="update seed_assignment role_name column to nullable" policy-pap | [2024-04-25T10:42:10.836+00:00|INFO|KafkaProducer|main] [Producer clientId=producer-1] Instantiated an idempotent producer. kafka | [2024-04-25 10:42:11,614] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=8, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-8 (state.change.logger) policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-25T10:41:37.48324533Z level=info msg="Migration successfully executed" id="update seed_assignment role_name column to nullable" duration=69.340333ms policy-pap | [2024-04-25T10:42:10.850+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 kafka | [2024-04-25 10:42:11,614] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=37, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-37 (state.change.logger) policy-db-migrator | ALTER TABLE toscanodetype ADD CONSTRAINT FK_ToscaNodeType_requirementsName FOREIGN KEY (requirementsName, requirementsVersion) REFERENCES toscarequirements (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT grafana | logger=migrator t=2024-04-25T10:41:37.48960383Z level=info msg="Executing migration" id="add unique index builtin_role_name back" policy-pap | [2024-04-25T10:42:10.850+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 kafka | [2024-04-25 10:42:11,614] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', 
partitionIndex=4, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-4 (state.change.logger) policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-25T10:41:37.490441181Z level=info msg="Migration successfully executed" id="add unique index builtin_role_name back" duration=837.93µs policy-pap | [2024-04-25T10:42:10.850+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1714041730850 kafka | [2024-04-25 10:42:11,614] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=33, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-33 (state.change.logger) policy-db-migrator | grafana | logger=migrator t=2024-04-25T10:41:37.497093137Z level=info msg="Executing migration" id="add unique index builtin_role_action_scope" policy-pap | [2024-04-25T10:42:10.851+00:00|INFO|InlineKafkaTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=f5a9b953-4e28-428f-887b-ecf3b35c91ee, alive=false, publisher=KafkaPublisherWrapper []]]: KAFKA SINK created kafka | [2024-04-25 10:42:11,615] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=15, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-15 (state.change.logger) policy-db-migrator | grafana | logger=migrator t=2024-04-25T10:41:37.497894818Z level=info msg="Migration successfully executed" id="add unique index 
builtin_role_action_scope" duration=801.431µs policy-pap | [2024-04-25T10:42:10.851+00:00|INFO|InlineBusTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=f5dcada6-d339-46c9-afaa-a3ea8e9d3895, alive=false, publisher=null]]: starting kafka | [2024-04-25 10:42:11,615] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=48, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-48 (state.change.logger) policy-db-migrator | > upgrade 0990-FK_ToscaServiceTemplate_capabilityTypesName.sql grafana | logger=migrator t=2024-04-25T10:41:37.50277517Z level=info msg="Executing migration" id="add primary key to seed_assigment" policy-pap | [2024-04-25T10:42:10.851+00:00|INFO|ProducerConfig|main] ProducerConfig values: kafka | [2024-04-25 10:42:11,615] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=11, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-11 (state.change.logger) policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-25T10:41:37.523489082Z level=info msg="Migration successfully executed" id="add primary key to seed_assigment" duration=20.712052ms policy-pap | acks = -1 kafka | [2024-04-25 10:42:11,615] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=44, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, 
leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-44 (state.change.logger) policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT FK_ToscaServiceTemplate_capabilityTypesName FOREIGN KEY (capabilityTypesName, capabilityTypesVersion) REFERENCES toscacapabilitytypes (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT grafana | logger=migrator t=2024-04-25T10:41:37.529345459Z level=info msg="Executing migration" id="add origin column to seed_assignment" policy-pap | auto.include.jmx.reporter = true kafka | [2024-04-25 10:42:11,615] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=23, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-23 (state.change.logger) policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-25T10:41:37.535602316Z level=info msg="Migration successfully executed" id="add origin column to seed_assignment" duration=6.257267ms policy-pap | batch.size = 16384 kafka | [2024-04-25 10:42:11,615] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=19, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-19 (state.change.logger) policy-db-migrator | grafana | logger=migrator t=2024-04-25T10:41:37.539158235Z level=info msg="Executing migration" id="add origin to plugin seed_assignment" policy-pap | bootstrap.servers = [kafka:9092] kafka | [2024-04-25 10:42:11,615] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=32, 
controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-32 (state.change.logger) policy-db-migrator | grafana | logger=migrator t=2024-04-25T10:41:37.539382162Z level=info msg="Migration successfully executed" id="add origin to plugin seed_assignment" duration=224.577µs policy-pap | buffer.memory = 33554432 kafka | [2024-04-25 10:42:11,615] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=28, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-28 (state.change.logger) policy-db-migrator | > upgrade 1000-FK_ToscaServiceTemplate_dataTypesName.sql grafana | logger=migrator t=2024-04-25T10:41:37.542651354Z level=info msg="Executing migration" id="prevent seeding OnCall access" policy-pap | client.dns.lookup = use_all_dns_ips kafka | [2024-04-25 10:42:11,615] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=7, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-7 (state.change.logger) policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-25T10:41:37.542772907Z level=info msg="Migration successfully executed" id="prevent seeding OnCall access" duration=121.533µs policy-pap | client.id = producer-2 kafka | [2024-04-25 10:42:11,615] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=40, controllerEpoch=1, leader=1, 
leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-40 (state.change.logger) policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT FK_ToscaServiceTemplate_dataTypesName FOREIGN KEY (dataTypesName, dataTypesVersion) REFERENCES toscadatatypes (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT grafana | logger=migrator t=2024-04-25T10:41:37.546166612Z level=info msg="Executing migration" id="managed folder permissions alert actions repeated fixed migration" policy-pap | compression.type = none policy-db-migrator | -------------- kafka | [2024-04-25 10:42:11,615] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=3, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-3 (state.change.logger) grafana | logger=migrator t=2024-04-25T10:41:37.546315586Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions repeated fixed migration" duration=148.924µs policy-pap | connections.max.idle.ms = 540000 policy-db-migrator | kafka | [2024-04-25 10:42:11,615] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=36, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-36 (state.change.logger) grafana | logger=migrator t=2024-04-25T10:41:37.550645165Z level=info msg="Executing migration" id="managed folder permissions library panel actions migration" policy-pap | delivery.timeout.ms = 120000 policy-db-migrator | kafka 
| [2024-04-25 10:42:11,615] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=47, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-47 (state.change.logger) grafana | logger=migrator t=2024-04-25T10:41:37.550803609Z level=info msg="Migration successfully executed" id="managed folder permissions library panel actions migration" duration=158.914µs policy-pap | enable.idempotence = true policy-db-migrator | > upgrade 1010-FK_ToscaServiceTemplate_nodeTypesName.sql kafka | [2024-04-25 10:42:11,615] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=14, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-14 (state.change.logger) grafana | logger=migrator t=2024-04-25T10:41:37.557414775Z level=info msg="Executing migration" id="migrate external alertmanagers to datsourcse" policy-pap | interceptor.classes = [] policy-db-migrator | -------------- kafka | [2024-04-25 10:42:11,615] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=43, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-43 (state.change.logger) grafana | logger=migrator t=2024-04-25T10:41:37.557563399Z level=info msg="Migration successfully executed" id="migrate external alertmanagers to datsourcse" duration=148.504µs policy-pap | key.serializer = class 
org.apache.kafka.common.serialization.StringSerializer policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT FK_ToscaServiceTemplate_nodeTypesName FOREIGN KEY (nodeTypesName, nodeTypesVersion) REFERENCES toscanodetypes (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT kafka | [2024-04-25 10:42:11,615] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=10, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-10 (state.change.logger) grafana | logger=migrator t=2024-04-25T10:41:37.562527684Z level=info msg="Executing migration" id="create folder table" policy-pap | linger.ms = 0 policy-db-migrator | -------------- kafka | [2024-04-25 10:42:11,616] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=22, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-22 (state.change.logger) grafana | logger=migrator t=2024-04-25T10:41:37.563341454Z level=info msg="Migration successfully executed" id="create folder table" duration=814.18µs policy-pap | max.block.ms = 60000 policy-db-migrator | kafka | [2024-04-25 10:42:11,616] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=18, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-18 (state.change.logger) grafana | logger=migrator t=2024-04-25T10:41:37.568507384Z level=info 
msg="Executing migration" id="Add index for parent_uid" policy-pap | max.in.flight.requests.per.connection = 5 policy-db-migrator | kafka | [2024-04-25 10:42:11,617] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=31, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-31 (state.change.logger) grafana | logger=migrator t=2024-04-25T10:41:37.569506649Z level=info msg="Migration successfully executed" id="Add index for parent_uid" duration=998.975µs policy-pap | max.request.size = 1048576 policy-db-migrator | > upgrade 1020-FK_ToscaServiceTemplate_policyTypesName.sql kafka | [2024-04-25 10:42:11,617] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=27, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-27 (state.change.logger) grafana | logger=migrator t=2024-04-25T10:41:37.596254332Z level=info msg="Executing migration" id="Add unique index for folder.uid and folder.org_id" policy-pap | metadata.max.age.ms = 300000 policy-db-migrator | -------------- kafka | [2024-04-25 10:42:11,617] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=39, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-39 (state.change.logger) grafana | logger=migrator t=2024-04-25T10:41:37.597321069Z level=info msg="Migration successfully executed" 
id="Add unique index for folder.uid and folder.org_id" duration=1.068247ms policy-pap | metadata.max.idle.ms = 300000 policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT FK_ToscaServiceTemplate_policyTypesName FOREIGN KEY (policyTypesName, policyTypesVersion) REFERENCES toscapolicytypes (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT kafka | [2024-04-25 10:42:11,617] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=6, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-6 (state.change.logger) grafana | logger=migrator t=2024-04-25T10:41:37.600656182Z level=info msg="Executing migration" id="Update folder title length" policy-pap | metric.reporters = [] policy-db-migrator | -------------- kafka | [2024-04-25 10:42:11,617] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=35, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-35 (state.change.logger) grafana | logger=migrator t=2024-04-25T10:41:37.600685103Z level=info msg="Migration successfully executed" id="Update folder title length" duration=33.471µs policy-pap | metrics.num.samples = 2 policy-db-migrator | kafka | [2024-04-25 10:42:11,617] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='policy-pdp-pap', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition policy-pdp-pap-0 
(state.change.logger) grafana | logger=migrator t=2024-04-25T10:41:37.605354921Z level=info msg="Executing migration" id="Add unique index for folder.title and folder.parent_uid" policy-pap | metrics.recording.level = INFO policy-db-migrator | kafka | [2024-04-25 10:42:11,617] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=2, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-2 (state.change.logger) grafana | logger=migrator t=2024-04-25T10:41:37.606393377Z level=info msg="Migration successfully executed" id="Add unique index for folder.title and folder.parent_uid" duration=1.040585ms policy-pap | metrics.sample.window.ms = 30000 policy-db-migrator | > upgrade 1030-FK_ToscaServiceTemplate_relationshipTypesName.sql kafka | [2024-04-25 10:42:11,619] INFO [Controller id=1 epoch=1] Sending LeaderAndIsr request to broker 1 with 51 become-leader and 0 become-follower partitions (state.change.logger) grafana | logger=migrator t=2024-04-25T10:41:37.613624989Z level=info msg="Executing migration" id="Remove unique index for folder.title and folder.parent_uid" policy-pap | partitioner.adaptive.partitioning.enable = true policy-db-migrator | -------------- kafka | [2024-04-25 10:42:11,622] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet(1) for 51 partitions (state.change.logger) grafana | logger=migrator t=2024-04-25T10:41:37.615460845Z level=info msg="Migration successfully executed" id="Remove unique index for folder.title and folder.parent_uid" duration=1.839536ms policy-pap | partitioner.availability.timeout.ms = 0 policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT FK_ToscaServiceTemplate_relationshipTypesName FOREIGN KEY (relationshipTypesName, relationshipTypesVersion) 
REFERENCES toscarelationshiptypes (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT kafka | [2024-04-25 10:42:11,624] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-32 from NewReplica to OnlineReplica (state.change.logger) grafana | logger=migrator t=2024-04-25T10:41:37.619187709Z level=info msg="Executing migration" id="Add unique index for title, parent_uid, and org_id" policy-pap | partitioner.class = null policy-db-migrator | -------------- kafka | [2024-04-25 10:42:11,624] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-5 from NewReplica to OnlineReplica (state.change.logger) kafka | [2024-04-25 10:42:11,624] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-44 from NewReplica to OnlineReplica (state.change.logger) policy-pap | partitioner.ignore.keys = false policy-db-migrator | grafana | logger=migrator t=2024-04-25T10:41:37.621184949Z level=info msg="Migration successfully executed" id="Add unique index for title, parent_uid, and org_id" duration=1.99714ms kafka | [2024-04-25 10:42:11,625] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-48 from NewReplica to OnlineReplica (state.change.logger) policy-pap | receive.buffer.bytes = 32768 policy-db-migrator | grafana | logger=migrator t=2024-04-25T10:41:37.624862881Z level=info msg="Executing migration" id="Sync dashboard and folder table" kafka | [2024-04-25 10:42:11,625] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-46 from NewReplica to OnlineReplica (state.change.logger) policy-pap | reconnect.backoff.max.ms = 1000 policy-db-migrator | > upgrade 1040-FK_ToscaTopologyTemplate_nodeTemplatesName.sql kafka | [2024-04-25 10:42:11,625] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-20 from NewReplica to OnlineReplica (state.change.logger) 
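The 1030/1040 migration scripts above add composite foreign keys with `ON UPDATE RESTRICT ON DELETE RESTRICT`. As a minimal sketch of that RESTRICT semantics (using sqlite3 with simplified table shapes, not the migrator's actual MariaDB schema):

```python
import sqlite3

# Hypothetical miniature of the FK scripts above: a child table referencing
# a composite (name, version) key with RESTRICT delete semantics.
conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # FK enforcement is off by default in SQLite
conn.execute(
    "CREATE TABLE toscanodetemplates (name TEXT, version TEXT, PRIMARY KEY (name, version))")
conn.execute("""CREATE TABLE toscatopologytemplate (
    id INTEGER PRIMARY KEY,
    nodeTemplatesName TEXT, nodeTemplatesVersion TEXT,
    FOREIGN KEY (nodeTemplatesName, nodeTemplatesVersion)
        REFERENCES toscanodetemplates (name, version)
        ON UPDATE RESTRICT ON DELETE RESTRICT)""")
conn.execute("INSERT INTO toscanodetemplates VALUES ('tmpl', '1.0.0')")
conn.execute("INSERT INTO toscatopologytemplate VALUES (1, 'tmpl', '1.0.0')")
try:
    # RESTRICT: the parent row cannot be deleted while a child row references it
    conn.execute("DELETE FROM toscanodetemplates WHERE name = 'tmpl'")
    blocked = False
except sqlite3.IntegrityError:
    blocked = True
print(blocked)  # True
```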
policy-pap | reconnect.backoff.ms = 50 grafana | logger=migrator t=2024-04-25T10:41:37.625555789Z level=info msg="Migration successfully executed" id="Sync dashboard and folder table" duration=694.128µs policy-db-migrator | -------------- kafka | [2024-04-25 10:42:11,625] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition policy-pdp-pap-0 from NewReplica to OnlineReplica (state.change.logger) grafana | logger=migrator t=2024-04-25T10:41:37.630635447Z level=info msg="Executing migration" id="Remove ghost folders from the folder table" policy-pap | request.timeout.ms = 30000 policy-db-migrator | ALTER TABLE toscatopologytemplate ADD CONSTRAINT FK_ToscaTopologyTemplate_nodeTemplatesName FOREIGN KEY (nodeTemplatesName, nodeTemplatessVersion) REFERENCES toscanodetemplates (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT kafka | [2024-04-25 10:42:11,625] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-43 from NewReplica to OnlineReplica (state.change.logger) grafana | logger=migrator t=2024-04-25T10:41:37.630915404Z level=info msg="Migration successfully executed" id="Remove ghost folders from the folder table" duration=279.997µs policy-pap | retries = 2147483647 policy-db-migrator | -------------- kafka | [2024-04-25 10:42:11,625] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-24 from NewReplica to OnlineReplica (state.change.logger) grafana | logger=migrator t=2024-04-25T10:41:37.633926949Z level=info msg="Executing migration" id="Remove unique index UQE_folder_uid_org_id" policy-pap | retry.backoff.ms = 100 policy-db-migrator | kafka | [2024-04-25 10:42:11,625] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-6 from NewReplica to OnlineReplica (state.change.logger) grafana | logger=migrator t=2024-04-25T10:41:37.635248123Z level=info msg="Migration successfully executed" id="Remove unique index 
UQE_folder_uid_org_id" duration=1.322754ms policy-pap | sasl.client.callback.handler.class = null policy-db-migrator | kafka | [2024-04-25 10:42:11,625] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-18 from NewReplica to OnlineReplica (state.change.logger) grafana | logger=migrator t=2024-04-25T10:41:37.639049828Z level=info msg="Executing migration" id="Add unique index UQE_folder_org_id_uid" policy-pap | sasl.jaas.config = null policy-db-migrator | > upgrade 1050-FK_ToscaTopologyTemplate_policyName.sql kafka | [2024-04-25 10:42:11,625] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-21 from NewReplica to OnlineReplica (state.change.logger) grafana | logger=migrator t=2024-04-25T10:41:37.640384642Z level=info msg="Migration successfully executed" id="Add unique index UQE_folder_org_id_uid" duration=1.338194ms policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit policy-db-migrator | -------------- kafka | [2024-04-25 10:42:11,625] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-1 from NewReplica to OnlineReplica (state.change.logger) grafana | logger=migrator t=2024-04-25T10:41:37.644376402Z level=info msg="Executing migration" id="Remove unique index UQE_folder_title_parent_uid_org_id" policy-pap | sasl.kerberos.min.time.before.relogin = 60000 policy-db-migrator | ALTER TABLE toscatopologytemplate ADD CONSTRAINT FK_ToscaTopologyTemplate_policyName FOREIGN KEY (policyName, policyVersion) REFERENCES toscapolicies (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT kafka | [2024-04-25 10:42:11,625] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-14 from NewReplica to OnlineReplica (state.change.logger) grafana | logger=migrator t=2024-04-25T10:41:37.64547405Z level=info msg="Migration successfully executed" id="Remove unique index UQE_folder_title_parent_uid_org_id" duration=1.098467ms 
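The grafana migrator lines above follow a fixed pattern: log "Executing migration", run it, then log "Migration successfully executed" with the measured duration. A minimal sketch of that runner pattern (the function and migration ids here are illustrative, not Grafana's code):

```python
import time

# Sketch of the migrator pattern seen in the grafana log lines: run each
# migration, time it, and log the id with its duration.
def run_migrations(migrations):
    performed = 0
    for migration_id, fn in migrations:
        print(f'Executing migration id="{migration_id}"')
        start = time.perf_counter()
        fn()  # the migration body (a DDL statement in the real migrator)
        duration = time.perf_counter() - start
        print(f'Migration successfully executed id="{migration_id}" duration={duration:.6f}s')
        performed += 1
    return performed

count = run_migrations([
    ("create example table", lambda: None),
    ("add example index", lambda: None),
])
print(f"migrations completed performed={count}")  # performed=2
```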
policy-pap | sasl.kerberos.service.name = null policy-db-migrator | -------------- kafka | [2024-04-25 10:42:11,625] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-34 from NewReplica to OnlineReplica (state.change.logger) grafana | logger=migrator t=2024-04-25T10:41:37.653349828Z level=info msg="Executing migration" id="Add unique index UQE_folder_org_id_parent_uid_title" policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 policy-db-migrator | kafka | [2024-04-25 10:42:11,625] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-16 from NewReplica to OnlineReplica (state.change.logger) grafana | logger=migrator t=2024-04-25T10:41:37.65539817Z level=info msg="Migration successfully executed" id="Add unique index UQE_folder_org_id_parent_uid_title" duration=2.045811ms policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 policy-db-migrator | kafka | [2024-04-25 10:42:11,625] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-29 from NewReplica to OnlineReplica (state.change.logger) grafana | logger=migrator t=2024-04-25T10:41:37.66139464Z level=info msg="Executing migration" id="Remove index IDX_folder_parent_uid_org_id" policy-pap | sasl.login.callback.handler.class = null policy-db-migrator | > upgrade 1060-TscaServiceTemplatetopologyTemplateParentLocalName.sql kafka | [2024-04-25 10:42:11,625] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-11 from NewReplica to OnlineReplica (state.change.logger) grafana | logger=migrator t=2024-04-25T10:41:37.66294134Z level=info msg="Migration successfully executed" id="Remove index IDX_folder_parent_uid_org_id" duration=1.54761ms policy-pap | sasl.login.class = null policy-db-migrator | -------------- kafka | [2024-04-25 10:42:11,625] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-0 from NewReplica to 
OnlineReplica (state.change.logger) grafana | logger=migrator t=2024-04-25T10:41:37.669221437Z level=info msg="Executing migration" id="create anon_device table" policy-pap | sasl.login.connect.timeout.ms = null kafka | [2024-04-25 10:42:11,625] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-22 from NewReplica to OnlineReplica (state.change.logger) policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT TscaServiceTemplatetopologyTemplateParentLocalName FOREIGN KEY (topologyTemplateParentLocalName, topologyTemplateLocalName, topologyTemplateParentKeyVersion, topologyTemplateParentKeyName) REFERENCES toscatopologytemplate (parentLocalName, localName, parentKeyVersion, parentKeyName) ON UPDATE RESTRICT ON DELETE RESTRICT policy-db-migrator | -------------- policy-pap | sasl.login.read.timeout.ms = null kafka | [2024-04-25 10:42:11,625] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-47 from NewReplica to OnlineReplica (state.change.logger) policy-db-migrator | policy-db-migrator | policy-pap | sasl.login.refresh.buffer.seconds = 300 kafka | [2024-04-25 10:42:11,625] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-36 from NewReplica to OnlineReplica (state.change.logger) policy-db-migrator | > upgrade 0100-pdp.sql policy-db-migrator | -------------- policy-pap | sasl.login.refresh.min.period.seconds = 60 kafka | [2024-04-25 10:42:11,625] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-28 from NewReplica to OnlineReplica (state.change.logger) policy-db-migrator | ALTER TABLE pdp ADD COLUMN LASTUPDATE datetime NOT NULL DEFAULT CURRENT_TIMESTAMP AFTER HEALTHY policy-db-migrator | -------------- policy-pap | sasl.login.refresh.window.factor = 0.8 kafka | [2024-04-25 10:42:11,625] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-42 from 
NewReplica to OnlineReplica (state.change.logger) policy-db-migrator | policy-db-migrator | policy-pap | sasl.login.refresh.window.jitter = 0.05 kafka | [2024-04-25 10:42:11,625] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-9 from NewReplica to OnlineReplica (state.change.logger) policy-db-migrator | > upgrade 0110-idx_tsidx1.sql policy-db-migrator | -------------- policy-pap | sasl.login.retry.backoff.max.ms = 10000 kafka | [2024-04-25 10:42:11,625] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-37 from NewReplica to OnlineReplica (state.change.logger) policy-db-migrator | CREATE INDEX IDX_TSIDX1 ON pdpstatistics(timeStamp, name, version) policy-db-migrator | -------------- policy-pap | sasl.login.retry.backoff.ms = 100 kafka | [2024-04-25 10:42:11,625] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-13 from NewReplica to OnlineReplica (state.change.logger) policy-db-migrator | policy-db-migrator | policy-pap | sasl.mechanism = GSSAPI kafka | [2024-04-25 10:42:11,625] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-30 from NewReplica to OnlineReplica (state.change.logger) policy-db-migrator | > upgrade 0120-pk_pdpstatistics.sql policy-db-migrator | -------------- policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 kafka | [2024-04-25 10:42:11,625] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-35 from NewReplica to OnlineReplica (state.change.logger) policy-db-migrator | ALTER TABLE pdpstatistics DROP PRIMARY KEY policy-db-migrator | -------------- policy-pap | sasl.oauthbearer.expected.audience = null kafka | [2024-04-25 10:42:11,625] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-39 from NewReplica to OnlineReplica (state.change.logger) policy-db-migrator | policy-db-migrator | policy-pap | 
sasl.oauthbearer.expected.issuer = null kafka | [2024-04-25 10:42:11,625] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-12 from NewReplica to OnlineReplica (state.change.logger) policy-db-migrator | > upgrade 0130-pdpstatistics.sql grafana | logger=migrator t=2024-04-25T10:41:37.670809787Z level=info msg="Migration successfully executed" id="create anon_device table" duration=1.58866ms policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 kafka | [2024-04-25 10:42:11,625] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-27 from NewReplica to OnlineReplica (state.change.logger) policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-25T10:41:37.677430834Z level=info msg="Executing migration" id="add unique index anon_device.device_id" policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 kafka | [2024-04-25 10:42:11,625] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-45 from NewReplica to OnlineReplica (state.change.logger) policy-db-migrator | ALTER TABLE pdpstatistics ADD COLUMN POLICYUNDEPLOYCOUNT BIGINT DEFAULT NULL AFTER POLICYEXECUTEDSUCCESSCOUNT, ADD COLUMN POLICYUNDEPLOYFAILCOUNT BIGINT DEFAULT NULL, ADD COLUMN POLICYUNDEPLOYSUCCESSCOUNT BIGINT DEFAULT NULL, ADD COLUMN ID BIGINT NOT NULL grafana | logger=migrator t=2024-04-25T10:41:37.678912921Z level=info msg="Migration successfully executed" id="add unique index anon_device.device_id" duration=1.502488ms policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 kafka | [2024-04-25 10:42:11,625] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-19 from NewReplica to OnlineReplica (state.change.logger) policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-25T10:41:37.685684112Z level=info msg="Executing migration" id="add index anon_device.updated_at" policy-pap | 
sasl.oauthbearer.jwks.endpoint.url = null kafka | [2024-04-25 10:42:11,625] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-49 from NewReplica to OnlineReplica (state.change.logger) policy-db-migrator | grafana | logger=migrator t=2024-04-25T10:41:37.687611499Z level=info msg="Migration successfully executed" id="add index anon_device.updated_at" duration=1.927238ms policy-pap | sasl.oauthbearer.scope.claim.name = scope kafka | [2024-04-25 10:42:11,625] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-40 from NewReplica to OnlineReplica (state.change.logger) policy-db-migrator | grafana | logger=migrator t=2024-04-25T10:41:37.694478153Z level=info msg="Executing migration" id="create signing_key table" policy-pap | sasl.oauthbearer.sub.claim.name = sub kafka | [2024-04-25 10:42:11,625] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-41 from NewReplica to OnlineReplica (state.change.logger) policy-db-migrator | > upgrade 0140-pk_pdpstatistics.sql grafana | logger=migrator t=2024-04-25T10:41:37.695493648Z level=info msg="Migration successfully executed" id="create signing_key table" duration=1.015805ms policy-pap | sasl.oauthbearer.token.endpoint.url = null kafka | [2024-04-25 10:42:11,625] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-38 from NewReplica to OnlineReplica (state.change.logger) policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-25T10:41:37.699692714Z level=info msg="Executing migration" id="add unique index signing_key.key_id" policy-pap | security.protocol = PLAINTEXT kafka | [2024-04-25 10:42:11,625] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-8 from NewReplica to OnlineReplica (state.change.logger) policy-db-migrator | UPDATE pdpstatistics as p JOIN (SELECT name, version, timeStamp, ROW_NUMBER() OVER (ORDER 
BY timeStamp ASC) AS row_num FROM pdpstatistics GROUP BY name, version, timeStamp) AS t ON (p.name=t.name AND p.version=t.version AND p.timeStamp = t.timeStamp) SET p.id=t.row_num grafana | logger=migrator t=2024-04-25T10:41:37.701667653Z level=info msg="Migration successfully executed" id="add unique index signing_key.key_id" duration=1.975289ms policy-pap | security.providers = null kafka | [2024-04-25 10:42:11,625] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-7 from NewReplica to OnlineReplica (state.change.logger) policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-25T10:41:37.705440978Z level=info msg="Executing migration" id="set legacy alert migration status in kvstore" policy-pap | send.buffer.bytes = 131072 kafka | [2024-04-25 10:42:11,625] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-33 from NewReplica to OnlineReplica (state.change.logger) policy-db-migrator | grafana | logger=migrator t=2024-04-25T10:41:37.706687809Z level=info msg="Migration successfully executed" id="set legacy alert migration status in kvstore" duration=1.249341ms policy-pap | socket.connection.setup.timeout.max.ms = 30000 kafka | [2024-04-25 10:42:11,625] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-25 from NewReplica to OnlineReplica (state.change.logger) policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-25T10:41:37.71147945Z level=info msg="Executing migration" id="migrate record of created folders during legacy migration to kvstore" policy-db-migrator | ALTER TABLE pdpstatistics ADD CONSTRAINT PK_PDPSTATISTICS PRIMARY KEY (ID, name, version) policy-pap | socket.connection.setup.timeout.ms = 10000 kafka | [2024-04-25 10:42:11,625] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-31 from NewReplica to OnlineReplica (state.change.logger) grafana | 
logger=migrator t=2024-04-25T10:41:37.712068215Z level=info msg="Migration successfully executed" id="migrate record of created folders during legacy migration to kvstore" duration=589.105µs policy-db-migrator | -------------- policy-pap | ssl.cipher.suites = null kafka | [2024-04-25 10:42:11,625] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-23 from NewReplica to OnlineReplica (state.change.logger) grafana | logger=migrator t=2024-04-25T10:41:37.716205869Z level=info msg="Executing migration" id="Add folder_uid for dashboard" policy-db-migrator | policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] kafka | [2024-04-25 10:42:11,625] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-10 from NewReplica to OnlineReplica (state.change.logger) grafana | logger=migrator t=2024-04-25T10:41:37.727911804Z level=info msg="Migration successfully executed" id="Add folder_uid for dashboard" duration=11.707265ms policy-db-migrator | policy-pap | ssl.endpoint.identification.algorithm = https kafka | [2024-04-25 10:42:11,625] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-2 from NewReplica to OnlineReplica (state.change.logger) grafana | logger=migrator t=2024-04-25T10:41:37.731545915Z level=info msg="Executing migration" id="Populate dashboard folder_uid column" policy-db-migrator | > upgrade 0150-pdpstatistics.sql policy-pap | ssl.engine.factory.class = null kafka | [2024-04-25 10:42:11,625] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-17 from NewReplica to OnlineReplica (state.change.logger) grafana | logger=migrator t=2024-04-25T10:41:37.732222241Z level=info msg="Migration successfully executed" id="Populate dashboard folder_uid column" duration=677.706µs policy-db-migrator | -------------- policy-pap | ssl.key.password = null kafka | [2024-04-25 10:42:11,626] TRACE [Controller id=1 epoch=1] 
Changed state of replica 1 for partition __consumer_offsets-4 from NewReplica to OnlineReplica (state.change.logger) grafana | logger=migrator t=2024-04-25T10:41:37.740482139Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_folder_uid_title" policy-db-migrator | ALTER TABLE pdpstatistics MODIFY COLUMN timeStamp datetime(6) NULL policy-pap | ssl.keymanager.algorithm = SunX509 kafka | [2024-04-25 10:42:11,626] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-15 from NewReplica to OnlineReplica (state.change.logger) grafana | logger=migrator t=2024-04-25T10:41:37.741948397Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_folder_uid_title" duration=1.466238ms policy-db-migrator | -------------- policy-pap | ssl.keystore.certificate.chain = null kafka | [2024-04-25 10:42:11,626] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-26 from NewReplica to OnlineReplica (state.change.logger) grafana | logger=migrator t=2024-04-25T10:41:37.746757598Z level=info msg="Executing migration" id="Delete unique index for dashboard_org_id_folder_id_title" policy-db-migrator | policy-db-migrator | kafka | [2024-04-25 10:42:11,626] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-3 from NewReplica to OnlineReplica (state.change.logger) grafana | logger=migrator t=2024-04-25T10:41:37.747876286Z level=info msg="Migration successfully executed" id="Delete unique index for dashboard_org_id_folder_id_title" duration=1.118968ms policy-pap | ssl.keystore.key = null policy-db-migrator | > upgrade 0160-jpapdpstatistics_enginestats.sql kafka | [2024-04-25 10:42:11,626] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger) grafana | logger=migrator t=2024-04-25T10:41:37.751398364Z level=info msg="Executing migration" id="Delete 
unique index for dashboard_org_id_folder_uid_title" policy-pap | ssl.keystore.location = null policy-db-migrator | -------------- kafka | [2024-04-25 10:42:11,629] INFO [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 for 51 partitions (state.change.logger) grafana | logger=migrator t=2024-04-25T10:41:37.752518993Z level=info msg="Migration successfully executed" id="Delete unique index for dashboard_org_id_folder_uid_title" duration=1.120969ms policy-pap | ssl.keystore.password = null policy-db-migrator | ALTER TABLE jpapdpstatistics_enginestats ADD COLUMN ID BIGINT DEFAULT NULL AFTER UPTIME kafka | [2024-04-25 10:42:11,630] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='policy-pdp-pap', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) grafana | logger=migrator t=2024-04-25T10:41:37.758643957Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_folder_uid_title_is_folder" policy-pap | ssl.keystore.type = JKS policy-db-migrator | -------------- kafka | [2024-04-25 10:42:11,631] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=13, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) grafana | logger=migrator t=2024-04-25T10:41:37.759861527Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_folder_uid_title_is_folder" duration=1.21737ms policy-pap | ssl.protocol = TLSv1.3 policy-db-migrator | kafka | [2024-04-25 10:42:11,631] TRACE [Broker id=1] Received LeaderAndIsr request 
LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=46, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) grafana | logger=migrator t=2024-04-25T10:41:37.763327184Z level=info msg="Executing migration" id="Restore index for dashboard_org_id_folder_id_title" policy-pap | ssl.provider = null policy-db-migrator | kafka | [2024-04-25 10:42:11,631] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=9, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) grafana | logger=migrator t=2024-04-25T10:41:37.764462673Z level=info msg="Migration successfully executed" id="Restore index for dashboard_org_id_folder_id_title" duration=1.134919ms policy-pap | ssl.secure.random.implementation = null policy-db-migrator | > upgrade 0170-jpapdpstatistics_enginestats.sql kafka | [2024-04-25 10:42:11,631] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=42, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) grafana | logger=migrator t=2024-04-25T10:41:37.76792876Z level=info msg="Executing migration" id="create sso_setting table" policy-pap | ssl.trustmanager.algorithm = PKIX policy-db-migrator | -------------- kafka | [2024-04-25 10:42:11,631] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=21, controllerEpoch=1, leader=1, 
leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) grafana | logger=migrator t=2024-04-25T10:41:37.768948675Z level=info msg="Migration successfully executed" id="create sso_setting table" duration=1.019195ms policy-pap | ssl.truststore.certificates = null policy-db-migrator | UPDATE jpapdpstatistics_enginestats a kafka | [2024-04-25 10:42:11,631] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=17, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) grafana | logger=migrator t=2024-04-25T10:41:37.776362492Z level=info msg="Executing migration" id="copy kvstore migration status to each org" policy-pap | ssl.truststore.location = null policy-db-migrator | JOIN pdpstatistics b kafka | [2024-04-25 10:42:11,631] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=30, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) grafana | logger=migrator t=2024-04-25T10:41:37.777155252Z level=info msg="Migration successfully executed" id="copy kvstore migration status to each org" duration=796.52µs policy-pap | ssl.truststore.password = null policy-db-migrator | ON a.name = b.name AND a.version = b.version AND a.timeStamp = b.timeStamp kafka | [2024-04-25 10:42:11,632] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=26, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], 
partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) grafana | logger=migrator t=2024-04-25T10:41:37.780610309Z level=info msg="Executing migration" id="add back entry for orgid=0 migrated status" policy-pap | ssl.truststore.type = JKS policy-db-migrator | SET a.id = b.id kafka | [2024-04-25 10:42:11,632] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=5, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) grafana | logger=migrator t=2024-04-25T10:41:37.780914797Z level=info msg="Migration successfully executed" id="add back entry for orgid=0 migrated status" duration=305.348µs policy-pap | transaction.timeout.ms = 60000 policy-db-migrator | -------------- kafka | [2024-04-25 10:42:11,632] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=38, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) grafana | logger=migrator t=2024-04-25T10:41:37.784559688Z level=info msg="Executing migration" id="alter kv_store.value to longtext" policy-pap | transactional.id = null policy-db-migrator | kafka | [2024-04-25 10:42:11,632] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=1, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 
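The 0140-pk_pdpstatistics.sql step above backfills a surrogate `id` by numbering rows with `ROW_NUMBER() OVER (ORDER BY timeStamp ASC)`, and 0170 then copies those ids into `jpapdpstatistics_enginestats` via a join. A small sqlite3 sketch of the same ROW_NUMBER backfill technique (simplified schema; requires SQLite >= 3.25 for window functions):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE pdpstatistics (name TEXT, version TEXT, timeStamp TEXT, id INTEGER)")
rows = [("pdp-a", "1.0", "2024-04-25 10:00:00"),
        ("pdp-b", "1.0", "2024-04-25 10:00:05"),
        ("pdp-a", "1.0", "2024-04-25 10:00:10")]
conn.executemany("INSERT INTO pdpstatistics (name, version, timeStamp) VALUES (?, ?, ?)", rows)
# Same idea as the migrator's UPDATE ... JOIN: number the rows by timeStamp,
# then write each row number back as the surrogate id.
numbered = conn.execute(
    "SELECT rowid, ROW_NUMBER() OVER (ORDER BY timeStamp ASC) FROM pdpstatistics").fetchall()
conn.executemany("UPDATE pdpstatistics SET id = ? WHERE rowid = ?",
                 [(n, rid) for rid, n in numbered])
ids = [r[0] for r in conn.execute("SELECT id FROM pdpstatistics ORDER BY timeStamp")]
print(ids)  # [1, 2, 3]
```

Once every row has a unique id, the migrator can add `PK_PDPSTATISTICS PRIMARY KEY (ID, name, version)` as shown in the 0140 script.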
grafana | logger=migrator t=2024-04-25T10:41:37.784706432Z level=info msg="Migration successfully executed" id="alter kv_store.value to longtext" duration=147.544µs policy-pap | value.serializer = class org.apache.kafka.common.serialization.StringSerializer policy-db-migrator | kafka | [2024-04-25 10:42:11,632] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=34, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) grafana | logger=migrator t=2024-04-25T10:41:37.789690227Z level=info msg="Executing migration" id="add notification_settings column to alert_rule table" policy-pap | policy-db-migrator | > upgrade 0180-jpapdpstatistics_enginestats.sql kafka | [2024-04-25 10:42:11,632] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=16, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) grafana | logger=migrator t=2024-04-25T10:41:37.802706705Z level=info msg="Migration successfully executed" id="add notification_settings column to alert_rule table" duration=13.017388ms policy-pap | [2024-04-25T10:42:10.852+00:00|INFO|KafkaProducer|main] [Producer clientId=producer-2] Instantiated an idempotent producer. 
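The "Instantiated an idempotent producer" line above matches the producer settings dumped earlier in the policy-pap config listing. A plain dict restating a few of those logged values (this is not PAP source code; `enable.idempotence` is an assumption implied by the log message rather than a value visible in the dump):

```python
# Selected producer settings as printed in the policy-pap config dump above.
producer_config = {
    "enable.idempotence": True,  # assumption: implied by "Instantiated an idempotent producer"
    "retries": 2147483647,       # Integer.MAX_VALUE, as shown in the config dump
    "retry.backoff.ms": 100,
    "request.timeout.ms": 30000,
    "transaction.timeout.ms": 60000,
    "value.serializer": "org.apache.kafka.common.serialization.StringSerializer",
}
print(producer_config["retries"])  # 2147483647
```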
policy-db-migrator | -------------- kafka | [2024-04-25 10:42:11,632] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=45, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) grafana | logger=migrator t=2024-04-25T10:41:37.805983207Z level=info msg="Executing migration" id="add notification_settings column to alert_rule_version table" policy-pap | [2024-04-25T10:42:10.854+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 policy-db-migrator | ALTER TABLE jpapdpstatistics_enginestats DROP COLUMN timeStamp kafka | [2024-04-25 10:42:11,633] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=12, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) grafana | logger=migrator t=2024-04-25T10:41:37.813649249Z level=info msg="Migration successfully executed" id="add notification_settings column to alert_rule_version table" duration=7.663102ms policy-pap | [2024-04-25T10:42:10.854+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 policy-db-migrator | -------------- kafka | [2024-04-25 10:42:11,633] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=41, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) grafana | logger=migrator t=2024-04-25T10:41:37.817501117Z level=info msg="Executing migration" id="removing scope from 
alert.instances:read action migration" policy-pap | [2024-04-25T10:42:10.854+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1714041730854 policy-db-migrator | kafka | [2024-04-25 10:42:11,633] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=24, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) grafana | logger=migrator t=2024-04-25T10:41:37.817813955Z level=info msg="Migration successfully executed" id="removing scope from alert.instances:read action migration" duration=312.478µs policy-pap | [2024-04-25T10:42:10.854+00:00|INFO|InlineKafkaTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=f5dcada6-d339-46c9-afaa-a3ea8e9d3895, alive=false, publisher=KafkaPublisherWrapper []]]: KAFKA SINK created policy-db-migrator | kafka | [2024-04-25 10:42:11,633] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=20, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) grafana | logger=migrator t=2024-04-25T10:41:37.822286238Z level=info msg="migrations completed" performed=548 skipped=0 duration=4.310600336s policy-pap | [2024-04-25T10:42:10.854+00:00|INFO|ServiceManager|main] Policy PAP starting PAP Activator policy-db-migrator | > upgrade 0190-jpapolicyaudit.sql kafka | [2024-04-25 10:42:11,633] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=49, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], 
removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) grafana | logger=sqlstore t=2024-04-25T10:41:37.831429427Z level=info msg="Created default admin" user=admin policy-pap | [2024-04-25T10:42:10.855+00:00|INFO|ServiceManager|main] Policy PAP starting PDP publisher policy-db-migrator | -------------- grafana | logger=sqlstore t=2024-04-25T10:41:37.831710824Z level=info msg="Created default organization" kafka | [2024-04-25 10:42:11,633] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-pap | [2024-04-25T10:42:10.856+00:00|INFO|ServiceManager|main] Policy PAP starting Policy Notification publisher policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapolicyaudit (ACTION INT DEFAULT NULL, PDPGROUP VARCHAR(255) NULL, PDPTYPE VARCHAR(255) NULL, TIMESTAMP datetime DEFAULT NULL, USER VARCHAR(255) NULL, ID BIGINT NOT NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_JPAPOLICYAUDIT (ID, name, version)) grafana | logger=secrets t=2024-04-25T10:41:37.835988971Z level=info msg="Envelope encryption state" enabled=true currentprovider=secretKey.v1 kafka | [2024-04-25 10:42:11,633] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=29, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-pap | [2024-04-25T10:42:10.856+00:00|INFO|ServiceManager|main] Policy PAP starting PDP update timers policy-db-migrator | -------------- grafana | 
logger=plugin.store t=2024-04-25T10:41:37.856552519Z level=info msg="Loading plugins..." kafka | [2024-04-25 10:42:11,634] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=25, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-pap | [2024-04-25T10:42:10.858+00:00|INFO|ServiceManager|main] Policy PAP starting PDP state-change timers policy-db-migrator | grafana | logger=local.finder t=2024-04-25T10:41:37.894655527Z level=warn msg="Skipping finding plugins as directory does not exist" path=/usr/share/grafana/plugins-bundled kafka | [2024-04-25 10:42:11,634] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=8, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-pap | [2024-04-25T10:42:10.859+00:00|INFO|TimerManager|Thread-9] timer manager update started policy-db-migrator | grafana | logger=plugin.store t=2024-04-25T10:41:37.894683978Z level=info msg="Plugins loaded" count=55 duration=38.132099ms kafka | [2024-04-25 10:42:11,634] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=37, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-pap | [2024-04-25T10:42:10.862+00:00|INFO|ServiceManager|main] Policy PAP starting PDP modification lock policy-db-migrator | > upgrade 0200-JpaPolicyAuditIndex_timestamp.sql grafana | 
logger=query_data t=2024-04-25T10:41:37.901488009Z level=info msg="Query Service initialization" kafka | [2024-04-25 10:42:11,634] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=4, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-pap | [2024-04-25T10:42:10.862+00:00|INFO|ServiceManager|main] Policy PAP starting PDP modification requests policy-db-migrator | -------------- grafana | logger=live.push_http t=2024-04-25T10:41:37.907565692Z level=info msg="Live Push Gateway initialization" kafka | [2024-04-25 10:42:11,634] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=33, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-pap | [2024-04-25T10:42:10.862+00:00|INFO|TimerManager|Thread-10] timer manager state-change started policy-db-migrator | CREATE INDEX JpaPolicyAuditIndex_timestamp ON jpapolicyaudit(TIMESTAMP) grafana | logger=ngalert.migration t=2024-04-25T10:41:37.913703856Z level=info msg=Starting kafka | [2024-04-25 10:42:11,634] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=15, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-pap | [2024-04-25T10:42:10.862+00:00|INFO|ServiceManager|main] Policy PAP starting PDP expiration timer policy-db-migrator | -------------- grafana | logger=ngalert.migration 
t=2024-04-25T10:41:37.914019664Z level=info msg="Applying transition" currentType=Legacy desiredType=UnifiedAlerting cleanOnDowngrade=false cleanOnUpgrade=false kafka | [2024-04-25 10:42:11,634] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=48, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-pap | [2024-04-25T10:42:10.866+00:00|INFO|ServiceManager|main] Policy PAP started policy-db-migrator | grafana | logger=ngalert.migration orgID=1 t=2024-04-25T10:41:37.914345662Z level=info msg="Migrating alerts for organisation" kafka | [2024-04-25 10:42:11,635] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=11, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-pap | [2024-04-25T10:42:10.868+00:00|INFO|PolicyPapApplication|main] Started PolicyPapApplication in 10.318 seconds (process running for 10.967) policy-db-migrator | grafana | logger=ngalert.migration orgID=1 t=2024-04-25T10:41:37.914835825Z level=info msg="Alerts found to migrate" alerts=0 kafka | [2024-04-25 10:42:11,635] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=44, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-pap | [2024-04-25T10:42:11.309+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, 
groupId=policy-pap] Error while fetching metadata with correlation id 2 : {policy-pdp-pap=UNKNOWN_TOPIC_OR_PARTITION} policy-db-migrator | > upgrade 0210-sequence.sql grafana | logger=ngalert.migration t=2024-04-25T10:41:37.91621553Z level=info msg="Completed alerting migration" kafka | [2024-04-25 10:42:11,635] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=23, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-pap | [2024-04-25T10:42:11.310+00:00|INFO|Metadata|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Cluster ID: WEMOaayeQ5uYZKGI5dj_vQ policy-db-migrator | -------------- grafana | logger=ngalert.state.manager t=2024-04-25T10:41:37.960978985Z level=info msg="Running in alternative execution of Error/NoData mode" kafka | [2024-04-25 10:42:11,635] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=19, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-pap | [2024-04-25T10:42:11.310+00:00|INFO|Metadata|kafka-producer-network-thread | producer-2] [Producer clientId=producer-2] Cluster ID: WEMOaayeQ5uYZKGI5dj_vQ policy-db-migrator | CREATE TABLE IF NOT EXISTS sequence (SEQ_NAME VARCHAR(50) NOT NULL, SEQ_COUNT DECIMAL(38) DEFAULT NULL, PRIMARY KEY PK_SEQUENCE (SEQ_NAME)) grafana | logger=infra.usagestats.collector t=2024-04-25T10:41:37.962612807Z level=info msg="registering usage stat providers" usageStatsProvidersLen=2 kafka | [2024-04-25 10:42:11,635] TRACE [Broker id=1] Received LeaderAndIsr request 
LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=32, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-pap | [2024-04-25T10:42:11.310+00:00|INFO|Metadata|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Cluster ID: WEMOaayeQ5uYZKGI5dj_vQ policy-db-migrator | -------------- grafana | logger=provisioning.datasources t=2024-04-25T10:41:37.96435661Z level=info msg="inserting datasource from configuration" name=PolicyPrometheus uid=dkSf71fnz kafka | [2024-04-25 10:42:11,635] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=28, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-pap | [2024-04-25T10:42:11.366+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-ae8023b6-4521-455f-bfa2-c4d8e9909c4a-3, groupId=ae8023b6-4521-455f-bfa2-c4d8e9909c4a] Error while fetching metadata with correlation id 2 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-db-migrator | grafana | logger=provisioning.alerting t=2024-04-25T10:41:37.978808834Z level=info msg="starting to provision alerting" kafka | [2024-04-25 10:42:11,636] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=7, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-pap | [2024-04-25T10:42:11.366+00:00|INFO|Metadata|KAFKA-source-policy-pdp-pap] [Consumer 
clientId=consumer-ae8023b6-4521-455f-bfa2-c4d8e9909c4a-3, groupId=ae8023b6-4521-455f-bfa2-c4d8e9909c4a] Cluster ID: WEMOaayeQ5uYZKGI5dj_vQ policy-db-migrator | grafana | logger=provisioning.alerting t=2024-04-25T10:41:37.978826314Z level=info msg="finished to provision alerting" kafka | [2024-04-25 10:42:11,636] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=40, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-pap | [2024-04-25T10:42:11.433+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 4 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-db-migrator | > upgrade 0220-sequence.sql grafana | logger=grafanaStorageLogger t=2024-04-25T10:41:37.979098182Z level=info msg="Storage starting" kafka | [2024-04-25 10:42:11,636] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=3, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-pap | [2024-04-25T10:42:11.446+00:00|INFO|TransactionManager|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] ProducerId set to 1 with epoch 0 policy-db-migrator | -------------- grafana | logger=ngalert.state.manager t=2024-04-25T10:41:37.980502617Z level=info msg="Warming state cache for startup" kafka | [2024-04-25 10:42:11,636] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=36, controllerEpoch=1, leader=1, leaderEpoch=0, 
isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-pap | [2024-04-25T10:42:11.447+00:00|INFO|TransactionManager|kafka-producer-network-thread | producer-2] [Producer clientId=producer-2] ProducerId set to 0 with epoch 0 policy-db-migrator | INSERT INTO sequence(SEQ_NAME, SEQ_COUNT) VALUES('SEQ_GEN', (SELECT IFNULL(max(id),0) FROM pdpstatistics)) grafana | logger=ngalert.multiorg.alertmanager t=2024-04-25T10:41:37.982757993Z level=info msg="Starting MultiOrg Alertmanager" kafka | [2024-04-25 10:42:11,636] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=47, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-pap | [2024-04-25T10:42:11.493+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-ae8023b6-4521-455f-bfa2-c4d8e9909c4a-3, groupId=ae8023b6-4521-455f-bfa2-c4d8e9909c4a] Error while fetching metadata with correlation id 4 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-db-migrator | -------------- grafana | logger=http.server t=2024-04-25T10:41:37.983737268Z level=info msg="HTTP Server Listen" address=[::]:3000 protocol=http subUrl= socket= kafka | [2024-04-25 10:42:11,636] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=14, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-pap | [2024-04-25T10:42:11.569+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer 
clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 6 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-db-migrator | grafana | logger=sqlstore.transactions t=2024-04-25T10:41:38.010376111Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=0 code="database is locked" policy-pap | [2024-04-25T10:42:11.609+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-ae8023b6-4521-455f-bfa2-c4d8e9909c4a-3, groupId=ae8023b6-4521-455f-bfa2-c4d8e9909c4a] Error while fetching metadata with correlation id 6 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} kafka | [2024-04-25 10:42:11,637] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=43, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-db-migrator | grafana | logger=sqlstore.transactions t=2024-04-25T10:41:38.021758126Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=1 code="database is locked" policy-pap | [2024-04-25T10:42:11.679+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 8 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} kafka | [2024-04-25 10:42:11,637] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=10, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-db-migrator | > upgrade 0100-jpatoscapolicy_targets.sql grafana | logger=plugins.update.checker 
t=2024-04-25T10:41:38.067343425Z level=info msg="Update check succeeded" duration=86.612093ms policy-pap | [2024-04-25T10:42:11.720+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-ae8023b6-4521-455f-bfa2-c4d8e9909c4a-3, groupId=ae8023b6-4521-455f-bfa2-c4d8e9909c4a] Error while fetching metadata with correlation id 8 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} kafka | [2024-04-25 10:42:11,637] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=22, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-db-migrator | -------------- grafana | logger=ngalert.state.manager t=2024-04-25T10:41:38.080389159Z level=info msg="State cache has been initialized" states=0 duration=99.880983ms policy-pap | [2024-04-25T10:42:11.788+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 10 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} kafka | [2024-04-25 10:42:11,637] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=18, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-db-migrator | ALTER TABLE jpatoscapolicy_targets ADD COLUMN toscaPolicyName VARCHAR(120) NOT NULL, ADD COLUMN toscaPolicyVersion VARCHAR(20) NOT NULL, ADD CONSTRAINT PK_JPATOSCAPOLICY_TARGETS PRIMARY KEY (toscaPolicyName, toscaPolicyVersion) grafana | logger=ngalert.scheduler t=2024-04-25T10:41:38.080558893Z level=info msg="Starting scheduler" tickInterval=10s maxAttempts=1 policy-pap | 
[2024-04-25T10:42:11.831+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-ae8023b6-4521-455f-bfa2-c4d8e9909c4a-3, groupId=ae8023b6-4521-455f-bfa2-c4d8e9909c4a] Error while fetching metadata with correlation id 10 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} kafka | [2024-04-25 10:42:11,637] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=31, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-db-migrator | -------------- grafana | logger=ticker t=2024-04-25T10:41:38.080661185Z level=info msg=starting first_tick=2024-04-25T10:41:40Z policy-pap | [2024-04-25T10:42:11.896+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 12 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} kafka | [2024-04-25 10:42:11,637] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=27, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-db-migrator | grafana | logger=grafana.update.checker t=2024-04-25T10:41:38.085358574Z level=info msg="Update check succeeded" duration=103.623906ms policy-pap | [2024-04-25T10:42:11.939+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-ae8023b6-4521-455f-bfa2-c4d8e9909c4a-3, groupId=ae8023b6-4521-455f-bfa2-c4d8e9909c4a] Error while fetching metadata with correlation id 12 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} kafka | [2024-04-25 10:42:11,637] TRACE [Broker id=1] Received LeaderAndIsr request 
LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=39, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-db-migrator | grafana | logger=provisioning.dashboard t=2024-04-25T10:41:38.10795854Z level=info msg="starting to provision dashboards" policy-pap | [2024-04-25T10:42:12.003+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 14 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} kafka | [2024-04-25 10:42:11,638] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=6, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-db-migrator | > upgrade 0110-jpatoscapolicytype_targets.sql grafana | logger=sqlstore.transactions t=2024-04-25T10:41:38.153887497Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=0 code="database is locked" policy-pap | [2024-04-25T10:42:12.044+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-ae8023b6-4521-455f-bfa2-c4d8e9909c4a-3, groupId=ae8023b6-4521-455f-bfa2-c4d8e9909c4a] Error while fetching metadata with correlation id 14 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} kafka | [2024-04-25 10:42:11,638] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=35, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 
1 (state.change.logger) policy-db-migrator | -------------- grafana | logger=grafana-apiserver t=2024-04-25T10:41:38.213749628Z level=info msg="Adding GroupVersion playlist.grafana.app v0alpha1 to ResourceManager" policy-pap | [2024-04-25T10:42:12.108+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 16 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} kafka | [2024-04-25 10:42:11,638] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=2, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-db-migrator | ALTER TABLE jpatoscapolicytype_targets ADD COLUMN toscaPolicyTypeName VARCHAR(120) NOT NULL, ADD COLUMN toscaPolicyTypeVersion VARCHAR(20) NOT NULL, ADD CONSTRAINT PK_JPATOSCAPOLICYTYPE_TARGETS PRIMARY KEY (toscaPolicyTypeName, toscaPolicyTypeVersion) grafana | logger=grafana-apiserver t=2024-04-25T10:41:38.214243809Z level=info msg="Adding GroupVersion featuretoggle.grafana.app v0alpha1 to ResourceManager" policy-pap | [2024-04-25T10:42:12.155+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-ae8023b6-4521-455f-bfa2-c4d8e9909c4a-3, groupId=ae8023b6-4521-455f-bfa2-c4d8e9909c4a] Error while fetching metadata with correlation id 16 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} kafka | [2024-04-25 10:42:11,676] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-3 (state.change.logger) policy-db-migrator | -------------- grafana | logger=provisioning.dashboard t=2024-04-25T10:41:38.375651622Z level=info msg="finished to provision dashboards" policy-pap | 
[2024-04-25T10:42:12.213+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 18 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} kafka | [2024-04-25 10:42:11,677] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-18 (state.change.logger) policy-db-migrator | grafana | logger=infra.usagestats t=2024-04-25T10:42:31.995866098Z level=info msg="Usage stats are ready to report" policy-pap | [2024-04-25T10:42:12.268+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-ae8023b6-4521-455f-bfa2-c4d8e9909c4a-3, groupId=ae8023b6-4521-455f-bfa2-c4d8e9909c4a] Error while fetching metadata with correlation id 18 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} kafka | [2024-04-25 10:42:11,677] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-41 (state.change.logger) policy-db-migrator | policy-pap | [2024-04-25T10:42:12.275+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-ae8023b6-4521-455f-bfa2-c4d8e9909c4a-3, groupId=ae8023b6-4521-455f-bfa2-c4d8e9909c4a] Discovered group coordinator kafka:9092 (id: 2147483646 rack: null) kafka | [2024-04-25 10:42:11,677] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-10 (state.change.logger) policy-db-migrator | > upgrade 0120-toscatrigger.sql policy-pap | [2024-04-25T10:42:12.283+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-ae8023b6-4521-455f-bfa2-c4d8e9909c4a-3, groupId=ae8023b6-4521-455f-bfa2-c4d8e9909c4a] (Re-)joining group kafka | [2024-04-25 10:42:11,677] TRACE [Broker id=1] Handling LeaderAndIsr 
request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-33 (state.change.logger) policy-db-migrator | -------------- policy-pap | [2024-04-25T10:42:12.315+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-ae8023b6-4521-455f-bfa2-c4d8e9909c4a-3, groupId=ae8023b6-4521-455f-bfa2-c4d8e9909c4a] Request joining group due to: need to re-join with the given member-id: consumer-ae8023b6-4521-455f-bfa2-c4d8e9909c4a-3-f765562a-a606-4409-8c69-863d34978d9c kafka | [2024-04-25 10:42:11,677] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-48 (state.change.logger) policy-db-migrator | DROP TABLE IF EXISTS toscatrigger policy-pap | [2024-04-25T10:42:12.316+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-ae8023b6-4521-455f-bfa2-c4d8e9909c4a-3, groupId=ae8023b6-4521-455f-bfa2-c4d8e9909c4a] Request joining group due to: rebalance failed due to 'The group member needs to have a valid member id before actually entering a consumer group.' 
(MemberIdRequiredException) kafka | [2024-04-25 10:42:11,677] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-19 (state.change.logger) policy-db-migrator | -------------- policy-pap | [2024-04-25T10:42:12.316+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-ae8023b6-4521-455f-bfa2-c4d8e9909c4a-3, groupId=ae8023b6-4521-455f-bfa2-c4d8e9909c4a] (Re-)joining group kafka | [2024-04-25 10:42:11,677] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-34 (state.change.logger) policy-db-migrator | policy-pap | [2024-04-25T10:42:12.333+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Discovered group coordinator kafka:9092 (id: 2147483646 rack: null) kafka | [2024-04-25 10:42:11,677] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-4 (state.change.logger) policy-db-migrator | policy-pap | [2024-04-25T10:42:12.336+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] (Re-)joining group kafka | [2024-04-25 10:42:11,677] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-11 (state.change.logger) policy-db-migrator | > upgrade 0130-jpatoscapolicytype_triggers.sql policy-pap | [2024-04-25T10:42:12.341+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Request joining group due to: need to re-join with the given member-id: consumer-policy-pap-4-cdfc5ded-faf1-49e7-8ba5-b4647d456a0b 
kafka | [2024-04-25 10:42:11,677] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-26 (state.change.logger)
policy-db-migrator | --------------
policy-pap | [2024-04-25T10:42:12.341+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Request joining group due to: rebalance failed due to 'The group member needs to have a valid member id before actually entering a consumer group.' (MemberIdRequiredException)
kafka | [2024-04-25 10:42:11,678] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-49 (state.change.logger)
policy-db-migrator | ALTER TABLE jpatoscapolicytype_triggers MODIFY COLUMN triggers LONGBLOB
policy-pap | [2024-04-25T10:42:12.341+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] (Re-)joining group
kafka | [2024-04-25 10:42:11,678] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-39 (state.change.logger)
policy-db-migrator | --------------
policy-pap | [2024-04-25T10:42:15.349+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-ae8023b6-4521-455f-bfa2-c4d8e9909c4a-3, groupId=ae8023b6-4521-455f-bfa2-c4d8e9909c4a] Successfully joined group with generation Generation{generationId=1, memberId='consumer-ae8023b6-4521-455f-bfa2-c4d8e9909c4a-3-f765562a-a606-4409-8c69-863d34978d9c', protocol='range'}
kafka | [2024-04-25 10:42:11,678] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-9 (state.change.logger)
policy-db-migrator |
policy-pap | [2024-04-25T10:42:15.351+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Successfully joined group with generation Generation{generationId=1, memberId='consumer-policy-pap-4-cdfc5ded-faf1-49e7-8ba5-b4647d456a0b', protocol='range'}
kafka | [2024-04-25 10:42:11,678] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-24 (state.change.logger)
policy-db-migrator |
policy-pap | [2024-04-25T10:42:15.361+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Finished assignment for group at generation 1: {consumer-policy-pap-4-cdfc5ded-faf1-49e7-8ba5-b4647d456a0b=Assignment(partitions=[policy-pdp-pap-0])}
kafka | [2024-04-25 10:42:11,678] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-31 (state.change.logger)
policy-db-migrator | > upgrade 0140-toscaparameter.sql
policy-pap | [2024-04-25T10:42:15.361+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-ae8023b6-4521-455f-bfa2-c4d8e9909c4a-3, groupId=ae8023b6-4521-455f-bfa2-c4d8e9909c4a] Finished assignment for group at generation 1: {consumer-ae8023b6-4521-455f-bfa2-c4d8e9909c4a-3-f765562a-a606-4409-8c69-863d34978d9c=Assignment(partitions=[policy-pdp-pap-0])}
kafka | [2024-04-25 10:42:11,678] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-46 (state.change.logger)
policy-db-migrator | --------------
policy-pap | [2024-04-25T10:42:15.391+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Successfully synced group in generation Generation{generationId=1, memberId='consumer-policy-pap-4-cdfc5ded-faf1-49e7-8ba5-b4647d456a0b', protocol='range'}
kafka | [2024-04-25 10:42:11,678] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-1 (state.change.logger)
policy-db-migrator | DROP TABLE IF EXISTS toscaparameter
policy-pap | [2024-04-25T10:42:15.392+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Notifying assignor about the new Assignment(partitions=[policy-pdp-pap-0])
policy-db-migrator | --------------
policy-pap | [2024-04-25T10:42:15.392+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-ae8023b6-4521-455f-bfa2-c4d8e9909c4a-3, groupId=ae8023b6-4521-455f-bfa2-c4d8e9909c4a] Successfully synced group in generation Generation{generationId=1, memberId='consumer-ae8023b6-4521-455f-bfa2-c4d8e9909c4a-3-f765562a-a606-4409-8c69-863d34978d9c', protocol='range'}
kafka | [2024-04-25 10:42:11,679] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-16 (state.change.logger)
policy-db-migrator |
kafka | [2024-04-25 10:42:11,679] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-2 (state.change.logger)
policy-pap | [2024-04-25T10:42:15.393+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-ae8023b6-4521-455f-bfa2-c4d8e9909c4a-3, groupId=ae8023b6-4521-455f-bfa2-c4d8e9909c4a] Notifying assignor about the new Assignment(partitions=[policy-pdp-pap-0])
policy-db-migrator |
kafka | [2024-04-25 10:42:11,679] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-25 (state.change.logger)
policy-pap | [2024-04-25T10:42:15.396+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-ae8023b6-4521-455f-bfa2-c4d8e9909c4a-3, groupId=ae8023b6-4521-455f-bfa2-c4d8e9909c4a] Adding newly assigned partitions: policy-pdp-pap-0
policy-db-migrator | > upgrade 0150-toscaproperty.sql
kafka | [2024-04-25 10:42:11,679] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-40 (state.change.logger)
policy-pap | [2024-04-25T10:42:15.396+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Adding newly assigned partitions: policy-pdp-pap-0
policy-db-migrator | --------------
kafka | [2024-04-25 10:42:11,679] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-47 (state.change.logger)
policy-pap | [2024-04-25T10:42:15.417+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Found no committed offset for partition policy-pdp-pap-0
policy-db-migrator | DROP TABLE IF EXISTS jpatoscaproperty_constraints
kafka | [2024-04-25 10:42:11,679] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-17 (state.change.logger)
policy-pap | [2024-04-25T10:42:15.417+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-ae8023b6-4521-455f-bfa2-c4d8e9909c4a-3, groupId=ae8023b6-4521-455f-bfa2-c4d8e9909c4a] Found no committed offset for partition policy-pdp-pap-0
policy-db-migrator | --------------
kafka | [2024-04-25 10:42:11,680] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-32 (state.change.logger)
policy-pap | [2024-04-25T10:42:15.439+00:00|INFO|SubscriptionState|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Resetting offset for partition policy-pdp-pap-0 to position FetchPosition{offset=1, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[kafka:9092 (id: 1 rack: null)], epoch=0}}.
policy-db-migrator |
kafka | [2024-04-25 10:42:11,680] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-37 (state.change.logger)
policy-pap | [2024-04-25T10:42:15.440+00:00|INFO|SubscriptionState|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-ae8023b6-4521-455f-bfa2-c4d8e9909c4a-3, groupId=ae8023b6-4521-455f-bfa2-c4d8e9909c4a] Resetting offset for partition policy-pdp-pap-0 to position FetchPosition{offset=1, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[kafka:9092 (id: 1 rack: null)], epoch=0}}.
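The ConsumerCoordinator messages above trace Kafka's group-join handshake end to end: the first JoinGroup with an empty member id is rejected (MemberIdRequiredException), the consumer re-joins with the id the coordinator handed back, joins generation 1, receives its partition assignment on SyncGroup, and finally resets its fetch position because no committed offset exists. A minimal, self-contained simulation of that handshake follows; `GroupCoordinator` here is a toy stand-in for illustration, not Kafka code, and real brokers add protocols, epochs, and timeouts on top:

```python
# Toy simulation of the Kafka group-join handshake seen in the log
# (illustrative only; class and field names are invented for this sketch).
import itertools

class GroupCoordinator:
    def __init__(self):
        self._ids = itertools.count(1)
        self.members = set()
        self.generation = 0

    def join(self, member_id=None):
        if member_id is None:
            # First join is rejected, but the coordinator hands back the
            # member id the consumer must retry with.
            return {"error": "MEMBER_ID_REQUIRED",
                    "member_id": "consumer-%d" % next(self._ids)}
        self.members.add(member_id)
        self.generation += 1
        return {"error": None, "member_id": member_id,
                "generation": self.generation}

    def sync(self, member_id, partitions):
        # SyncGroup: a joined member fetches its partition assignment.
        assert member_id in self.members
        return {"assignment": list(partitions)}

coord = GroupCoordinator()
first = coord.join(None)                 # rejected: member id required
retry = coord.join(first["member_id"])   # (Re-)joining group succeeds
assignment = coord.sync(retry["member_id"], ["policy-pdp-pap-0"])
```

This mirrors why the log shows two "Request joining group" entries per consumer before "Successfully joined group with generation Generation{generationId=1, ...}".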
policy-db-migrator | --------------
kafka | [2024-04-25 10:42:11,680] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-7 (state.change.logger)
policy-pap | [2024-04-25T10:42:19.647+00:00|INFO|[/policy/pap/v1]|http-nio-6969-exec-3] Initializing Spring DispatcherServlet 'dispatcherServlet'
policy-db-migrator | DROP TABLE IF EXISTS jpatoscaproperty_metadata
kafka | [2024-04-25 10:42:11,680] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-22 (state.change.logger)
policy-pap | [2024-04-25T10:42:19.647+00:00|INFO|DispatcherServlet|http-nio-6969-exec-3] Initializing Servlet 'dispatcherServlet'
policy-db-migrator | --------------
kafka | [2024-04-25 10:42:11,680] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-29 (state.change.logger)
policy-pap | [2024-04-25T10:42:19.650+00:00|INFO|DispatcherServlet|http-nio-6969-exec-3] Completed initialization in 3 ms
policy-db-migrator |
kafka | [2024-04-25 10:42:11,680] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-44 (state.change.logger)
policy-pap | [2024-04-25T10:42:32.719+00:00|INFO|OrderedServiceImpl|KAFKA-source-policy-heartbeat] ***** OrderedServiceImpl implementers:
policy-db-migrator | --------------
kafka | [2024-04-25 10:42:11,680] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-14 (state.change.logger)
policy-pap | []
kafka | [2024-04-25 10:42:11,681] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-23 (state.change.logger)
policy-db-migrator | DROP TABLE IF EXISTS toscaproperty
policy-pap | [2024-04-25T10:42:32.720+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat]
kafka | [2024-04-25 10:42:11,682] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-38 (state.change.logger)
policy-db-migrator | --------------
policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"0fa1495a-5b77-4869-a732-547adcbc362c","timestampMs":1714041752672,"name":"apex-48c572ef-ecee-4a67-903e-0092df74361b","pdpGroup":"defaultGroup"}
kafka | [2024-04-25 10:42:11,682] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-8 (state.change.logger)
policy-db-migrator |
policy-pap | [2024-04-25T10:42:32.721+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
kafka | [2024-04-25 10:42:11,682] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition policy-pdp-pap-0 (state.change.logger)
policy-db-migrator |
policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"0fa1495a-5b77-4869-a732-547adcbc362c","timestampMs":1714041752672,"name":"apex-48c572ef-ecee-4a67-903e-0092df74361b","pdpGroup":"defaultGroup"}
kafka | [2024-04-25 10:42:11,682] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-45 (state.change.logger)
policy-db-migrator | > upgrade 0160-jpapolicyaudit_pk.sql
policy-pap | [2024-04-25T10:42:32.731+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-pdp-pap] no listeners for autonomous message of type PdpStatus
kafka | [2024-04-25 10:42:11,682] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-15 (state.change.logger)
policy-db-migrator | --------------
policy-pap | [2024-04-25T10:42:32.816+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-48c572ef-ecee-4a67-903e-0092df74361b PdpUpdate starting
kafka | [2024-04-25 10:42:11,683] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-30 (state.change.logger)
policy-db-migrator | ALTER TABLE jpapolicyaudit DROP PRIMARY KEY
policy-pap | [2024-04-25T10:42:32.816+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-48c572ef-ecee-4a67-903e-0092df74361b PdpUpdate starting listener
kafka | [2024-04-25 10:42:11,683] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-0 (state.change.logger)
policy-db-migrator | --------------
policy-pap | [2024-04-25T10:42:32.816+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-48c572ef-ecee-4a67-903e-0092df74361b PdpUpdate starting timer
kafka | [2024-04-25 10:42:11,683] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-35 (state.change.logger)
policy-db-migrator |
policy-pap | [2024-04-25T10:42:32.817+00:00|INFO|TimerManager|KAFKA-source-policy-heartbeat] update timer registered Timer [name=6ebfae58-4293-4386-b05b-c7000ecfd79f, expireMs=1714041782817]
kafka | [2024-04-25 10:42:11,683] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-5 (state.change.logger)
policy-db-migrator | --------------
policy-pap | [2024-04-25T10:42:32.819+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-48c572ef-ecee-4a67-903e-0092df74361b PdpUpdate starting enqueue
kafka | [2024-04-25 10:42:11,683] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-20 (state.change.logger)
policy-db-migrator | ALTER TABLE jpapolicyaudit ADD CONSTRAINT PK_JPAPOLICYAUDIT PRIMARY KEY (ID)
policy-pap | [2024-04-25T10:42:32.819+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-48c572ef-ecee-4a67-903e-0092df74361b PdpUpdate started
kafka | [2024-04-25 10:42:11,683] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-27 (state.change.logger)
policy-db-migrator | --------------
policy-pap | [2024-04-25T10:42:32.819+00:00|INFO|TimerManager|Thread-9] update timer waiting 29998ms Timer [name=6ebfae58-4293-4386-b05b-c7000ecfd79f, expireMs=1714041782817]
kafka | [2024-04-25 10:42:11,683] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-42 (state.change.logger)
policy-db-migrator |
policy-pap | [2024-04-25T10:42:32.821+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap]
kafka | [2024-04-25 10:42:11,683] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-12 (state.change.logger)
policy-db-migrator |
policy-pap | {"source":"pap-af9137d4-c462-4753-8fd3-bdb6b1fa2cb4","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"messageName":"PDP_UPDATE","requestId":"6ebfae58-4293-4386-b05b-c7000ecfd79f","timestampMs":1714041752796,"name":"apex-48c572ef-ecee-4a67-903e-0092df74361b","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
kafka | [2024-04-25 10:42:11,683] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-21 (state.change.logger)
policy-db-migrator | > upgrade 0170-pdpstatistics_pk.sql
policy-pap | [2024-04-25T10:42:32.857+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
kafka | [2024-04-25 10:42:11,688] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-36 (state.change.logger)
policy-db-migrator | --------------
policy-pap | {"source":"pap-af9137d4-c462-4753-8fd3-bdb6b1fa2cb4","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"messageName":"PDP_UPDATE","requestId":"6ebfae58-4293-4386-b05b-c7000ecfd79f","timestampMs":1714041752796,"name":"apex-48c572ef-ecee-4a67-903e-0092df74361b","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
kafka | [2024-04-25 10:42:11,688] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-6 (state.change.logger)
policy-db-migrator | ALTER TABLE pdpstatistics DROP PRIMARY KEY
policy-pap | [2024-04-25T10:42:32.857+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_UPDATE
kafka | [2024-04-25 10:42:11,688] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-43 (state.change.logger)
policy-db-migrator | --------------
policy-pap | [2024-04-25T10:42:32.866+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat]
kafka | [2024-04-25 10:42:11,688] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-13 (state.change.logger)
policy-db-migrator |
policy-pap | {"source":"pap-af9137d4-c462-4753-8fd3-bdb6b1fa2cb4","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"messageName":"PDP_UPDATE","requestId":"6ebfae58-4293-4386-b05b-c7000ecfd79f","timestampMs":1714041752796,"name":"apex-48c572ef-ecee-4a67-903e-0092df74361b","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
kafka | [2024-04-25 10:42:11,689] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-28 (state.change.logger)
policy-db-migrator | --------------
policy-pap | [2024-04-25T10:42:32.866+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_UPDATE
policy-db-migrator | ALTER TABLE pdpstatistics ADD CONSTRAINT PK_PDPSTATISTICS PRIMARY KEY (ID)
kafka | [2024-04-25 10:42:11,692] INFO [ReplicaFetcherManager on broker 1] Removed fetcher for partitions HashSet(__consumer_offsets-22, __consumer_offsets-30, __consumer_offsets-25, __consumer_offsets-35, __consumer_offsets-38, __consumer_offsets-13, __consumer_offsets-8, __consumer_offsets-21, __consumer_offsets-4, __consumer_offsets-27, __consumer_offsets-7, __consumer_offsets-9, __consumer_offsets-46, __consumer_offsets-41, __consumer_offsets-33, __consumer_offsets-23, __consumer_offsets-49, __consumer_offsets-47, __consumer_offsets-16, __consumer_offsets-28, __consumer_offsets-31, __consumer_offsets-36, __consumer_offsets-42, __consumer_offsets-3, __consumer_offsets-18, __consumer_offsets-37, policy-pdp-pap-0, __consumer_offsets-15, __consumer_offsets-24, __consumer_offsets-17, __consumer_offsets-48, __consumer_offsets-19, __consumer_offsets-11, __consumer_offsets-2, __consumer_offsets-43, __consumer_offsets-6, __consumer_offsets-14, __consumer_offsets-20, __consumer_offsets-0, __consumer_offsets-44, __consumer_offsets-39, __consumer_offsets-12, __consumer_offsets-45, __consumer_offsets-1, __consumer_offsets-5, __consumer_offsets-26, __consumer_offsets-29, __consumer_offsets-34, __consumer_offsets-10, __consumer_offsets-32, __consumer_offsets-40) (kafka.server.ReplicaFetcherManager)
policy-pap | [2024-04-25T10:42:32.892+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
policy-db-migrator | --------------
kafka | [2024-04-25 10:42:11,694] INFO [Broker id=1] Stopped fetchers as part of LeaderAndIsr request correlationId 1 from controller 1 epoch 1 as part of the become-leader transition for 51 partitions (state.change.logger)
policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"6ebfae58-4293-4386-b05b-c7000ecfd79f","responseStatus":"SUCCESS","responseMessage":"Pdp update successful."},"messageName":"PDP_STATUS","requestId":"c6c6ff3e-5ca5-4702-be12-6a402ad60812","timestampMs":1714041752879,"name":"apex-48c572ef-ecee-4a67-903e-0092df74361b","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
policy-db-migrator |
kafka | [2024-04-25 10:42:11,740] INFO [LogLoader partition=__consumer_offsets-3, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
policy-pap | [2024-04-25T10:42:32.893+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-48c572ef-ecee-4a67-903e-0092df74361b PdpUpdate stopping
policy-db-migrator |
kafka | [2024-04-25 10:42:11,750] INFO Created log for partition __consumer_offsets-3 in /var/lib/kafka/data/__consumer_offsets-3 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
policy-pap | [2024-04-25T10:42:32.894+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-48c572ef-ecee-4a67-903e-0092df74361b PdpUpdate stopping enqueue
policy-db-migrator | > upgrade 0180-jpatoscanodetemplate_metadata.sql
kafka | [2024-04-25 10:42:11,753] INFO [Partition __consumer_offsets-3 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-3 (kafka.cluster.Partition)
policy-pap | [2024-04-25T10:42:32.894+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-48c572ef-ecee-4a67-903e-0092df74361b PdpUpdate stopping timer
policy-db-migrator | --------------
kafka | [2024-04-25 10:42:11,754] INFO [Partition __consumer_offsets-3 broker=1] Log loaded for partition __consumer_offsets-3 with initial high watermark 0 (kafka.cluster.Partition)
policy-pap | [2024-04-25T10:42:32.894+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer cancelled Timer [name=6ebfae58-4293-4386-b05b-c7000ecfd79f, expireMs=1714041782817]
policy-db-migrator | ALTER TABLE jpatoscanodetemplate_metadata MODIFY COLUMN METADATA LONGTEXT
kafka | [2024-04-25 10:42:11,756] INFO [Broker id=1] Leader __consumer_offsets-3 with topic id Some(hVyfsWO8T6yln1x6wUyXKg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
policy-pap | [2024-04-25T10:42:32.894+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-48c572ef-ecee-4a67-903e-0092df74361b PdpUpdate stopping listener
policy-db-migrator | --------------
kafka | [2024-04-25 10:42:11,771] INFO [LogLoader partition=__consumer_offsets-18, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
policy-pap | [2024-04-25T10:42:32.894+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-48c572ef-ecee-4a67-903e-0092df74361b PdpUpdate stopped
policy-db-migrator |
kafka | [2024-04-25 10:42:11,772] INFO Created log for partition __consumer_offsets-18 in /var/lib/kafka/data/__consumer_offsets-18 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
policy-pap | [2024-04-25T10:42:32.897+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat]
policy-db-migrator |
kafka | [2024-04-25 10:42:11,772] INFO [Partition __consumer_offsets-18 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-18 (kafka.cluster.Partition)
policy-db-migrator | > upgrade 0100-upgrade.sql
kafka | [2024-04-25 10:42:11,772] INFO [Partition __consumer_offsets-18 broker=1] Log loaded for partition __consumer_offsets-18 with initial high watermark 0 (kafka.cluster.Partition)
policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"6ebfae58-4293-4386-b05b-c7000ecfd79f","responseStatus":"SUCCESS","responseMessage":"Pdp update successful."},"messageName":"PDP_STATUS","requestId":"c6c6ff3e-5ca5-4702-be12-6a402ad60812","timestampMs":1714041752879,"name":"apex-48c572ef-ecee-4a67-903e-0092df74361b","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
policy-db-migrator | --------------
kafka | [2024-04-25 10:42:11,772] INFO [Broker id=1] Leader __consumer_offsets-18 with topic id Some(hVyfsWO8T6yln1x6wUyXKg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
policy-pap | [2024-04-25T10:42:32.898+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id 6ebfae58-4293-4386-b05b-c7000ecfd79f
policy-db-migrator | select 'upgrade to 1100 completed' as msg
kafka | [2024-04-25 10:42:11,784] INFO [LogLoader partition=__consumer_offsets-41, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
policy-pap | [2024-04-25T10:42:32.898+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat]
policy-db-migrator | --------------
kafka | [2024-04-25 10:42:11,785] INFO Created log for partition __consumer_offsets-41 in /var/lib/kafka/data/__consumer_offsets-41 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","policies":[],"messageName":"PDP_STATUS","requestId":"a220bbc8-0a2a-4e9c-95c6-992c77aee261","timestampMs":1714041752879,"name":"apex-48c572ef-ecee-4a67-903e-0092df74361b","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
policy-db-migrator |
kafka | [2024-04-25 10:42:11,785] INFO [Partition __consumer_offsets-41 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-41 (kafka.cluster.Partition)
policy-pap | [2024-04-25T10:42:32.901+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] apex-48c572ef-ecee-4a67-903e-0092df74361b PdpUpdate successful
policy-db-migrator | msg
kafka | [2024-04-25 10:42:11,785] INFO [Partition __consumer_offsets-41 broker=1] Log loaded for partition __consumer_offsets-41 with initial high watermark 0 (kafka.cluster.Partition)
policy-pap | [2024-04-25T10:42:32.901+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] apex-48c572ef-ecee-4a67-903e-0092df74361b start publishing next request
policy-db-migrator | upgrade to 1100 completed
kafka | [2024-04-25 10:42:11,785] INFO [Broker id=1] Leader __consumer_offsets-41 with topic id Some(hVyfsWO8T6yln1x6wUyXKg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
policy-pap | [2024-04-25T10:42:32.901+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-48c572ef-ecee-4a67-903e-0092df74361b PdpStateChange starting
policy-db-migrator |
kafka | [2024-04-25 10:42:11,798] INFO [LogLoader partition=__consumer_offsets-10, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
policy-pap | [2024-04-25T10:42:32.901+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-48c572ef-ecee-4a67-903e-0092df74361b PdpStateChange starting listener
policy-db-migrator | > upgrade 0100-jpapolicyaudit_renameuser.sql
kafka | [2024-04-25 10:42:11,799] INFO Created log for partition __consumer_offsets-10 in /var/lib/kafka/data/__consumer_offsets-10 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
policy-pap | [2024-04-25T10:42:32.901+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-48c572ef-ecee-4a67-903e-0092df74361b PdpStateChange starting timer
policy-db-migrator | --------------
kafka | [2024-04-25 10:42:11,799] INFO [Partition __consumer_offsets-10 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-10 (kafka.cluster.Partition)
policy-pap | [2024-04-25T10:42:32.901+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] state-change timer registered Timer [name=c4d7f70b-26e2-41cb-907b-c7984f55b821, expireMs=1714041782901]
policy-db-migrator | ALTER TABLE jpapolicyaudit RENAME COLUMN USER TO USERNAME
kafka | [2024-04-25 10:42:11,799] INFO [Partition __consumer_offsets-10 broker=1] Log loaded for partition __consumer_offsets-10 with initial high watermark 0 (kafka.cluster.Partition)
policy-pap | [2024-04-25T10:42:32.901+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-48c572ef-ecee-4a67-903e-0092df74361b PdpStateChange starting enqueue
policy-db-migrator | --------------
kafka | [2024-04-25 10:42:11,799] INFO [Broker id=1] Leader __consumer_offsets-10 with topic id Some(hVyfsWO8T6yln1x6wUyXKg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
policy-pap | [2024-04-25T10:42:32.901+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-48c572ef-ecee-4a67-903e-0092df74361b PdpStateChange started
policy-db-migrator |
kafka | [2024-04-25 10:42:11,808] INFO [LogLoader partition=__consumer_offsets-33, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
policy-pap | [2024-04-25T10:42:32.901+00:00|INFO|TimerManager|Thread-10] state-change timer waiting 30000ms Timer [name=c4d7f70b-26e2-41cb-907b-c7984f55b821, expireMs=1714041782901]
policy-db-migrator |
kafka | [2024-04-25 10:42:11,809] INFO Created log for partition __consumer_offsets-33 in /var/lib/kafka/data/__consumer_offsets-33 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
policy-pap | [2024-04-25T10:42:32.902+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap]
policy-db-migrator | > upgrade 0110-idx_tsidx1.sql
kafka | [2024-04-25 10:42:11,809] INFO [Partition __consumer_offsets-33 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-33 (kafka.cluster.Partition)
policy-pap | {"source":"pap-af9137d4-c462-4753-8fd3-bdb6b1fa2cb4","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"c4d7f70b-26e2-41cb-907b-c7984f55b821","timestampMs":1714041752797,"name":"apex-48c572ef-ecee-4a67-903e-0092df74361b","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
policy-db-migrator | --------------
kafka | [2024-04-25 10:42:11,809] INFO [Partition __consumer_offsets-33 broker=1] Log loaded for partition __consumer_offsets-33 with initial high watermark 0 (kafka.cluster.Partition)
policy-pap | [2024-04-25T10:42:32.938+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
policy-db-migrator | DROP INDEX IDX_TSIDX1 ON pdpstatistics
kafka | [2024-04-25 10:42:11,809] INFO [Broker id=1] Leader __consumer_offsets-33 with topic id Some(hVyfsWO8T6yln1x6wUyXKg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","policies":[],"messageName":"PDP_STATUS","requestId":"a220bbc8-0a2a-4e9c-95c6-992c77aee261","timestampMs":1714041752879,"name":"apex-48c572ef-ecee-4a67-903e-0092df74361b","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
policy-db-migrator | --------------
kafka | [2024-04-25 10:42:11,816] INFO [LogLoader partition=__consumer_offsets-48, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
policy-pap | [2024-04-25T10:42:32.942+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-pdp-pap] no listeners for autonomous message of type PdpStatus
policy-db-migrator |
kafka | [2024-04-25 10:42:11,817] INFO Created log for partition __consumer_offsets-48 in /var/lib/kafka/data/__consumer_offsets-48 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
policy-pap
| [2024-04-25T10:42:32.949+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-db-migrator | -------------- kafka | [2024-04-25 10:42:11,817] INFO [Partition __consumer_offsets-48 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-48 (kafka.cluster.Partition) policy-pap | {"source":"pap-af9137d4-c462-4753-8fd3-bdb6b1fa2cb4","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"c4d7f70b-26e2-41cb-907b-c7984f55b821","timestampMs":1714041752797,"name":"apex-48c572ef-ecee-4a67-903e-0092df74361b","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-db-migrator | CREATE INDEX IDXTSIDX1 ON pdpstatistics(timeStamp, name, version) kafka | [2024-04-25 10:42:11,817] INFO [Partition __consumer_offsets-48 broker=1] Log loaded for partition __consumer_offsets-48 with initial high watermark 0 (kafka.cluster.Partition) policy-pap | [2024-04-25T10:42:32.949+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATE_CHANGE policy-db-migrator | -------------- kafka | [2024-04-25 10:42:11,817] INFO [Broker id=1] Leader __consumer_offsets-48 with topic id Some(hVyfsWO8T6yln1x6wUyXKg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) policy-pap | [2024-04-25T10:42:32.957+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-db-migrator | kafka | [2024-04-25 10:42:11,828] INFO [LogLoader partition=__consumer_offsets-19, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) policy-pap | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpStateChange","policies":[],"response":{"responseTo":"c4d7f70b-26e2-41cb-907b-c7984f55b821","responseStatus":"SUCCESS","responseMessage":"State changed to active. No policies found."},"messageName":"PDP_STATUS","requestId":"4ea67bf2-9839-4197-ab46-ed6c3acb2c53","timestampMs":1714041752912,"name":"apex-48c572ef-ecee-4a67-903e-0092df74361b","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-db-migrator | kafka | [2024-04-25 10:42:11,829] INFO Created log for partition __consumer_offsets-19 in /var/lib/kafka/data/__consumer_offsets-19 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) policy-pap | [2024-04-25T10:42:32.959+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-48c572ef-ecee-4a67-903e-0092df74361b PdpStateChange stopping policy-db-migrator | > upgrade 0120-audit_sequence.sql kafka | [2024-04-25 10:42:11,829] INFO [Partition __consumer_offsets-19 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-19 (kafka.cluster.Partition) policy-pap | [2024-04-25T10:42:32.959+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-48c572ef-ecee-4a67-903e-0092df74361b PdpStateChange stopping enqueue policy-db-migrator | -------------- kafka | [2024-04-25 10:42:11,829] INFO [Partition __consumer_offsets-19 broker=1] Log loaded for partition __consumer_offsets-19 with initial high watermark 0 (kafka.cluster.Partition) policy-pap | 
[2024-04-25T10:42:32.959+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-48c572ef-ecee-4a67-903e-0092df74361b PdpStateChange stopping timer
policy-db-migrator | CREATE TABLE IF NOT EXISTS audit_sequence (SEQ_NAME VARCHAR(50) NOT NULL, SEQ_COUNT DECIMAL(38) DEFAULT NULL, PRIMARY KEY PK_SEQUENCE (SEQ_NAME))
kafka | [2024-04-25 10:42:11,829] INFO [Broker id=1] Leader __consumer_offsets-19 with topic id Some(hVyfsWO8T6yln1x6wUyXKg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
policy-pap | [2024-04-25T10:42:32.959+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] state-change timer cancelled Timer [name=c4d7f70b-26e2-41cb-907b-c7984f55b821, expireMs=1714041782901]
policy-db-migrator | --------------
kafka | [2024-04-25 10:42:11,836] INFO [LogLoader partition=__consumer_offsets-34, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
policy-pap | [2024-04-25T10:42:32.959+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-48c572ef-ecee-4a67-903e-0092df74361b PdpStateChange stopping listener
policy-db-migrator |
kafka | [2024-04-25 10:42:11,837] INFO Created log for partition __consumer_offsets-34 in /var/lib/kafka/data/__consumer_offsets-34 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
policy-pap | [2024-04-25T10:42:32.959+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-48c572ef-ecee-4a67-903e-0092df74361b PdpStateChange stopped
policy-db-migrator | --------------
kafka | [2024-04-25 10:42:11,837] INFO [Partition __consumer_offsets-34 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-34 (kafka.cluster.Partition)
policy-pap | [2024-04-25T10:42:32.960+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] apex-48c572ef-ecee-4a67-903e-0092df74361b PdpStateChange successful
policy-db-migrator | INSERT INTO audit_sequence(SEQ_NAME, SEQ_COUNT) VALUES('SEQ_GEN', (SELECT IFNULL(max(id),0) FROM jpapolicyaudit))
kafka | [2024-04-25 10:42:11,837] INFO [Partition __consumer_offsets-34 broker=1] Log loaded for partition __consumer_offsets-34 with initial high watermark 0 (kafka.cluster.Partition)
policy-pap | [2024-04-25T10:42:32.960+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] apex-48c572ef-ecee-4a67-903e-0092df74361b start publishing next request
policy-db-migrator | --------------
kafka | [2024-04-25 10:42:11,837] INFO [Broker id=1] Leader __consumer_offsets-34 with topic id Some(hVyfsWO8T6yln1x6wUyXKg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
policy-pap | [2024-04-25T10:42:32.960+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-48c572ef-ecee-4a67-903e-0092df74361b PdpUpdate starting
policy-db-migrator |
kafka | [2024-04-25 10:42:11,841] INFO [LogLoader partition=__consumer_offsets-4, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
policy-pap | [2024-04-25T10:42:32.960+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-48c572ef-ecee-4a67-903e-0092df74361b PdpUpdate starting listener
policy-db-migrator |
kafka | [2024-04-25 10:42:11,841] INFO Created log for partition __consumer_offsets-4 in /var/lib/kafka/data/__consumer_offsets-4 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
policy-pap | [2024-04-25T10:42:32.960+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-48c572ef-ecee-4a67-903e-0092df74361b PdpUpdate starting timer
policy-db-migrator | > upgrade 0130-statistics_sequence.sql
kafka | [2024-04-25 10:42:11,841] INFO [Partition
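The `INSERT INTO audit_sequence ... SELECT IFNULL(max(id),0) FROM jpapolicyaudit` statement in the migrator output seeds a JPA table-generator row (`SEQ_GEN`) with the current highest audit id, so new audit records continue numbering after the existing rows. A minimal sketch of that seeding pattern, again using in-memory SQLite (which also provides `IFNULL`) rather than the real policy database:

```python
import sqlite3

# Sketch of the audit_sequence seeding step from 0120-audit_sequence.sql:
# prime the sequence row with the current max id so generated ids continue
# after existing audit records. In-memory SQLite stand-in for illustration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE jpapolicyaudit (id INTEGER PRIMARY KEY)")
conn.executemany("INSERT INTO jpapolicyaudit (id) VALUES (?)", [(1,), (2,), (7,)])
conn.execute(
    "CREATE TABLE audit_sequence "
    "(SEQ_NAME VARCHAR(50) NOT NULL PRIMARY KEY, SEQ_COUNT DECIMAL(38) DEFAULT NULL)"
)
conn.execute(
    "INSERT INTO audit_sequence(SEQ_NAME, SEQ_COUNT) "
    "VALUES('SEQ_GEN', (SELECT IFNULL(max(id),0) FROM jpapolicyaudit))"
)
seq_count = conn.execute(
    "SELECT SEQ_COUNT FROM audit_sequence WHERE SEQ_NAME='SEQ_GEN'"
).fetchone()[0]
```

The `IFNULL(max(id),0)` guard means an empty audit table seeds the counter at 0 instead of NULL.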
__consumer_offsets-4 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-4 (kafka.cluster.Partition) policy-pap | [2024-04-25T10:42:32.960+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer registered Timer [name=5e9e39dd-7384-4924-84ba-e31746826a81, expireMs=1714041782960] policy-db-migrator | -------------- kafka | [2024-04-25 10:42:11,842] INFO [Partition __consumer_offsets-4 broker=1] Log loaded for partition __consumer_offsets-4 with initial high watermark 0 (kafka.cluster.Partition) policy-pap | [2024-04-25T10:42:32.960+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-48c572ef-ecee-4a67-903e-0092df74361b PdpUpdate starting enqueue policy-db-migrator | CREATE TABLE IF NOT EXISTS statistics_sequence (SEQ_NAME VARCHAR(50) NOT NULL, SEQ_COUNT DECIMAL(38) DEFAULT NULL, PRIMARY KEY PK_SEQUENCE (SEQ_NAME)) kafka | [2024-04-25 10:42:11,842] INFO [Broker id=1] Leader __consumer_offsets-4 with topic id Some(hVyfsWO8T6yln1x6wUyXKg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) policy-pap | [2024-04-25T10:42:32.960+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-48c572ef-ecee-4a67-903e-0092df74361b PdpUpdate started policy-db-migrator | -------------- kafka | [2024-04-25 10:42:11,849] INFO [LogLoader partition=__consumer_offsets-11, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) policy-pap | [2024-04-25T10:42:32.960+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap] policy-db-migrator | kafka | [2024-04-25 10:42:11,850] INFO Created log for partition __consumer_offsets-11 in /var/lib/kafka/data/__consumer_offsets-11 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) policy-pap | {"source":"pap-af9137d4-c462-4753-8fd3-bdb6b1fa2cb4","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"5e9e39dd-7384-4924-84ba-e31746826a81","timestampMs":1714041752933,"name":"apex-48c572ef-ecee-4a67-903e-0092df74361b","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-db-migrator | -------------- kafka | [2024-04-25 10:42:11,850] INFO [Partition __consumer_offsets-11 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-11 (kafka.cluster.Partition) policy-pap | [2024-04-25T10:42:32.966+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] policy-db-migrator | INSERT INTO statistics_sequence(SEQ_NAME, SEQ_COUNT) VALUES('SEQ_GEN', (SELECT IFNULL(max(id),0) FROM pdpstatistics)) kafka | [2024-04-25 10:42:11,850] INFO [Partition __consumer_offsets-11 broker=1] Log loaded for partition __consumer_offsets-11 with initial high watermark 0 (kafka.cluster.Partition) policy-pap | 
{"source":"pap-af9137d4-c462-4753-8fd3-bdb6b1fa2cb4","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"c4d7f70b-26e2-41cb-907b-c7984f55b821","timestampMs":1714041752797,"name":"apex-48c572ef-ecee-4a67-903e-0092df74361b","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-db-migrator | -------------- kafka | [2024-04-25 10:42:11,850] INFO [Broker id=1] Leader __consumer_offsets-11 with topic id Some(hVyfsWO8T6yln1x6wUyXKg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) policy-pap | [2024-04-25T10:42:32.966+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_STATE_CHANGE policy-db-migrator | kafka | [2024-04-25 10:42:11,857] INFO [LogLoader partition=__consumer_offsets-26, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) policy-pap | [2024-04-25T10:42:32.970+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] policy-db-migrator | -------------- kafka | [2024-04-25 10:42:11,857] INFO Created log for partition __consumer_offsets-26 in /var/lib/kafka/data/__consumer_offsets-26 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) policy-db-migrator | TRUNCATE TABLE sequence kafka | [2024-04-25 10:42:11,857] INFO [Partition __consumer_offsets-26 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-26 (kafka.cluster.Partition) policy-pap | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpStateChange","policies":[],"response":{"responseTo":"c4d7f70b-26e2-41cb-907b-c7984f55b821","responseStatus":"SUCCESS","responseMessage":"State changed to active. 
No policies found."},"messageName":"PDP_STATUS","requestId":"4ea67bf2-9839-4197-ab46-ed6c3acb2c53","timestampMs":1714041752912,"name":"apex-48c572ef-ecee-4a67-903e-0092df74361b","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-db-migrator | -------------- kafka | [2024-04-25 10:42:11,857] INFO [Partition __consumer_offsets-26 broker=1] Log loaded for partition __consumer_offsets-26 with initial high watermark 0 (kafka.cluster.Partition) policy-pap | [2024-04-25T10:42:32.971+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id c4d7f70b-26e2-41cb-907b-c7984f55b821 policy-db-migrator | kafka | [2024-04-25 10:42:11,857] INFO [Broker id=1] Leader __consumer_offsets-26 with topic id Some(hVyfsWO8T6yln1x6wUyXKg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) policy-pap | [2024-04-25T10:42:32.974+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-db-migrator | kafka | [2024-04-25 10:42:11,863] INFO [LogLoader partition=__consumer_offsets-49, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) policy-pap | {"source":"pap-af9137d4-c462-4753-8fd3-bdb6b1fa2cb4","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"5e9e39dd-7384-4924-84ba-e31746826a81","timestampMs":1714041752933,"name":"apex-48c572ef-ecee-4a67-903e-0092df74361b","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-db-migrator | > upgrade 0100-pdpstatistics.sql kafka | [2024-04-25 10:42:11,863] INFO Created log for partition __consumer_offsets-49 in /var/lib/kafka/data/__consumer_offsets-49 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) policy-pap | 
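The "state-change timer cancelled" and "no listener for request id" lines show PAP correlating an incoming PDP_STATUS with its pending request: the `response.responseTo` field of the status message must match the `requestId` of the PDP_STATE_CHANGE that was published. A minimal sketch of that correlation check, using the payload quoted above verbatim:

```python
import json

# PDP_STATUS payload as logged above; PAP matches response.responseTo against
# the requestId of the PDP_STATE_CHANGE it published earlier.
status = json.loads('''{"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY",
 "description":"Pdp status response message for PdpStateChange","policies":[],
 "response":{"responseTo":"c4d7f70b-26e2-41cb-907b-c7984f55b821",
  "responseStatus":"SUCCESS",
  "responseMessage":"State changed to active. No policies found."},
 "messageName":"PDP_STATUS",
 "requestId":"4ea67bf2-9839-4197-ab46-ed6c3acb2c53",
 "timestampMs":1714041752912,
 "name":"apex-48c572ef-ecee-4a67-903e-0092df74361b",
 "pdpGroup":"defaultGroup","pdpSubgroup":"apex"}''')

# requestId of the PDP_STATE_CHANGE message seen earlier in the log.
pending_request_id = "c4d7f70b-26e2-41cb-907b-c7984f55b821"
matched = (status["messageName"] == "PDP_STATUS"
           and status["response"]["responseTo"] == pending_request_id
           and status["response"]["responseStatus"] == "SUCCESS")
```

This is only an illustration of the correlation visible in the log, not the actual PAP dispatcher implementation.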
[2024-04-25T10:42:32.974+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_UPDATE policy-db-migrator | -------------- kafka | [2024-04-25 10:42:11,863] INFO [Partition __consumer_offsets-49 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-49 (kafka.cluster.Partition) policy-pap | [2024-04-25T10:42:32.975+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] policy-db-migrator | DROP INDEX IDXTSIDX1 ON pdpstatistics kafka | [2024-04-25 10:42:11,864] INFO [Partition __consumer_offsets-49 broker=1] Log loaded for partition __consumer_offsets-49 with initial high watermark 0 (kafka.cluster.Partition) policy-pap | {"source":"pap-af9137d4-c462-4753-8fd3-bdb6b1fa2cb4","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"5e9e39dd-7384-4924-84ba-e31746826a81","timestampMs":1714041752933,"name":"apex-48c572ef-ecee-4a67-903e-0092df74361b","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-db-migrator | -------------- kafka | [2024-04-25 10:42:11,864] INFO [Broker id=1] Leader __consumer_offsets-49 with topic id Some(hVyfsWO8T6yln1x6wUyXKg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) policy-pap | [2024-04-25T10:42:32.975+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_UPDATE policy-db-migrator | kafka | [2024-04-25 10:42:11,869] INFO [LogLoader partition=__consumer_offsets-39, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) policy-pap | [2024-04-25T10:42:32.982+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-db-migrator | -------------- kafka | [2024-04-25 10:42:11,869] INFO Created log for partition __consumer_offsets-39 in /var/lib/kafka/data/__consumer_offsets-39 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) policy-pap | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"5e9e39dd-7384-4924-84ba-e31746826a81","responseStatus":"SUCCESS","responseMessage":"Pdp already updated"},"messageName":"PDP_STATUS","requestId":"e2d900bb-8aba-4cb6-9a8b-2eab60d13d50","timestampMs":1714041752973,"name":"apex-48c572ef-ecee-4a67-903e-0092df74361b","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-db-migrator | DROP TABLE pdpstatistics kafka | [2024-04-25 10:42:11,870] INFO [Partition __consumer_offsets-39 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-39 (kafka.cluster.Partition) policy-pap | [2024-04-25T10:42:32.983+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-48c572ef-ecee-4a67-903e-0092df74361b PdpUpdate stopping policy-db-migrator | -------------- kafka | [2024-04-25 10:42:11,870] INFO [Partition __consumer_offsets-39 broker=1] Log loaded for partition __consumer_offsets-39 with initial high watermark 0 (kafka.cluster.Partition) policy-pap | [2024-04-25T10:42:32.983+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-48c572ef-ecee-4a67-903e-0092df74361b 
PdpUpdate stopping enqueue policy-db-migrator | kafka | [2024-04-25 10:42:11,870] INFO [Broker id=1] Leader __consumer_offsets-39 with topic id Some(hVyfsWO8T6yln1x6wUyXKg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) policy-pap | [2024-04-25T10:42:32.983+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-48c572ef-ecee-4a67-903e-0092df74361b PdpUpdate stopping timer policy-db-migrator | kafka | [2024-04-25 10:42:11,875] INFO [LogLoader partition=__consumer_offsets-9, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) policy-pap | [2024-04-25T10:42:32.983+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer cancelled Timer [name=5e9e39dd-7384-4924-84ba-e31746826a81, expireMs=1714041782960] policy-db-migrator | > upgrade 0110-jpapdpstatistics_enginestats.sql policy-pap | [2024-04-25T10:42:32.983+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-48c572ef-ecee-4a67-903e-0092df74361b PdpUpdate stopping listener policy-db-migrator | -------------- kafka | [2024-04-25 10:42:11,875] INFO Created log for partition __consumer_offsets-9 in /var/lib/kafka/data/__consumer_offsets-9 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) policy-pap | [2024-04-25T10:42:32.983+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-48c572ef-ecee-4a67-903e-0092df74361b PdpUpdate stopped policy-db-migrator | DROP TABLE jpapdpstatistics_enginestats kafka | [2024-04-25 10:42:11,875] INFO [Partition __consumer_offsets-9 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-9 (kafka.cluster.Partition) policy-pap | [2024-04-25T10:42:32.984+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] policy-db-migrator 
| -------------- kafka | [2024-04-25 10:42:11,875] INFO [Partition __consumer_offsets-9 broker=1] Log loaded for partition __consumer_offsets-9 with initial high watermark 0 (kafka.cluster.Partition) policy-pap | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"5e9e39dd-7384-4924-84ba-e31746826a81","responseStatus":"SUCCESS","responseMessage":"Pdp already updated"},"messageName":"PDP_STATUS","requestId":"e2d900bb-8aba-4cb6-9a8b-2eab60d13d50","timestampMs":1714041752973,"name":"apex-48c572ef-ecee-4a67-903e-0092df74361b","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-db-migrator | kafka | [2024-04-25 10:42:11,875] INFO [Broker id=1] Leader __consumer_offsets-9 with topic id Some(hVyfsWO8T6yln1x6wUyXKg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) policy-pap | [2024-04-25T10:42:32.984+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id 5e9e39dd-7384-4924-84ba-e31746826a81 policy-db-migrator | kafka | [2024-04-25 10:42:11,881] INFO [LogLoader partition=__consumer_offsets-24, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) policy-pap | [2024-04-25T10:42:32.988+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] apex-48c572ef-ecee-4a67-903e-0092df74361b PdpUpdate successful policy-db-migrator | > upgrade 0120-statistics_sequence.sql kafka | [2024-04-25 10:42:11,882] INFO Created log for partition __consumer_offsets-24 in /var/lib/kafka/data/__consumer_offsets-24 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) policy-pap | [2024-04-25T10:42:32.988+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] apex-48c572ef-ecee-4a67-903e-0092df74361b has no more requests policy-db-migrator | -------------- kafka | [2024-04-25 10:42:11,882] INFO [Partition __consumer_offsets-24 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-24 (kafka.cluster.Partition) policy-pap | [2024-04-25T10:42:40.114+00:00|WARN|NonInjectionManager|pool-2-thread-1] Falling back to injection-less client. 
policy-db-migrator | DROP TABLE statistics_sequence kafka | [2024-04-25 10:42:11,882] INFO [Partition __consumer_offsets-24 broker=1] Log loaded for partition __consumer_offsets-24 with initial high watermark 0 (kafka.cluster.Partition) policy-pap | [2024-04-25T10:42:40.175+00:00|INFO|GsonMessageBodyHandler|pool-2-thread-1] Using GSON for REST calls policy-db-migrator | -------------- kafka | [2024-04-25 10:42:11,882] INFO [Broker id=1] Leader __consumer_offsets-24 with topic id Some(hVyfsWO8T6yln1x6wUyXKg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) policy-pap | [2024-04-25T10:42:40.185+00:00|INFO|GsonMessageBodyHandler|pool-2-thread-1] Using GSON for REST calls policy-db-migrator | kafka | [2024-04-25 10:42:11,887] INFO [LogLoader partition=__consumer_offsets-31, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) policy-pap | [2024-04-25T10:42:40.190+00:00|INFO|GsonMessageBodyHandler|pool-2-thread-1] Using GSON for REST calls policy-db-migrator | policyadmin: OK: upgrade (1300) kafka | [2024-04-25 10:42:11,888] INFO Created log for partition __consumer_offsets-31 in /var/lib/kafka/data/__consumer_offsets-31 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) policy-pap | [2024-04-25T10:42:40.619+00:00|INFO|SessionData|http-nio-6969-exec-6] unknown group testGroup policy-db-migrator | name version kafka | [2024-04-25 10:42:11,888] INFO [Partition __consumer_offsets-31 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-31 (kafka.cluster.Partition) policy-pap | [2024-04-25T10:42:41.095+00:00|INFO|SessionData|http-nio-6969-exec-6] create cached group testGroup policy-db-migrator | policyadmin 1300 kafka | [2024-04-25 10:42:11,888] INFO 
[Partition __consumer_offsets-31 broker=1] Log loaded for partition __consumer_offsets-31 with initial high watermark 0 (kafka.cluster.Partition) policy-pap | [2024-04-25T10:42:41.096+00:00|INFO|SessionData|http-nio-6969-exec-6] creating DB group testGroup policy-db-migrator | ID script operation from_version to_version tag success atTime kafka | [2024-04-25 10:42:11,888] INFO [Broker id=1] Leader __consumer_offsets-31 with topic id Some(hVyfsWO8T6yln1x6wUyXKg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) policy-pap | [2024-04-25T10:42:41.632+00:00|INFO|SessionData|http-nio-6969-exec-10] cache group testGroup policy-db-migrator | 1 0100-jpapdpgroup_properties.sql upgrade 0 0800 2504241041410800u 1 2024-04-25 10:41:41 kafka | [2024-04-25 10:42:11,895] INFO [LogLoader partition=__consumer_offsets-46, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) policy-pap | [2024-04-25T10:42:41.856+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-10] Registering a deploy for policy onap.restart.tca 1.0.0 policy-db-migrator | 2 0110-jpapdpstatistics_enginestats.sql upgrade 0 0800 2504241041410800u 1 2024-04-25 10:41:41 kafka | [2024-04-25 10:42:11,896] INFO Created log for partition __consumer_offsets-46 in /var/lib/kafka/data/__consumer_offsets-46 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) policy-pap | [2024-04-25T10:42:41.971+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-10] Registering a deploy for policy operational.apex.decisionMaker 1.0.0 policy-db-migrator | 3 0120-jpapdpsubgroup_policies.sql upgrade 0 0800 2504241041410800u 1 2024-04-25 10:41:41 kafka | [2024-04-25 10:42:11,896] INFO [Partition __consumer_offsets-46 broker=1] No checkpointed highwatermark is found 
for partition __consumer_offsets-46 (kafka.cluster.Partition) policy-pap | [2024-04-25T10:42:41.972+00:00|INFO|SessionData|http-nio-6969-exec-10] update cached group testGroup policy-db-migrator | 4 0130-jpapdpsubgroup_properties.sql upgrade 0 0800 2504241041410800u 1 2024-04-25 10:41:41 kafka | [2024-04-25 10:42:11,896] INFO [Partition __consumer_offsets-46 broker=1] Log loaded for partition __consumer_offsets-46 with initial high watermark 0 (kafka.cluster.Partition) policy-pap | [2024-04-25T10:42:41.972+00:00|INFO|SessionData|http-nio-6969-exec-10] updating DB group testGroup policy-db-migrator | 5 0140-jpapdpsubgroup_supportedpolicytypes.sql upgrade 0 0800 2504241041410800u 1 2024-04-25 10:41:41 kafka | [2024-04-25 10:42:11,897] INFO [Broker id=1] Leader __consumer_offsets-46 with topic id Some(hVyfsWO8T6yln1x6wUyXKg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger)
policy-pap | [2024-04-25T10:42:41.999+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-10] sending audit records to database: [PolicyAudit(auditId=null, pdpGroup=testGroup, pdpType=pdpTypeA, policy=onap.restart.tca 1.0.0, action=DEPLOYMENT, timestamp=2024-04-25T10:42:41Z, user=policyadmin), PolicyAudit(auditId=null, pdpGroup=testGroup, pdpType=pdpTypeC, policy=operational.apex.decisionMaker 1.0.0, action=DEPLOYMENT, timestamp=2024-04-25T10:42:41Z, user=policyadmin)]
policy-db-migrator | 6 0150-jpatoscacapabilityassignment_attributes.sql upgrade 0 0800 2504241041410800u 1 2024-04-25 10:41:41
kafka | [2024-04-25 10:42:11,907] INFO [LogLoader partition=__consumer_offsets-1, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
policy-pap | [2024-04-25T10:42:42.705+00:00|INFO|SessionData|http-nio-6969-exec-5] cache group testGroup
policy-db-migrator | 7 0160-jpatoscacapabilityassignment_metadata.sql upgrade 0 0800 2504241041410800u 1 2024-04-25 10:41:41
kafka | [2024-04-25 10:42:11,908] INFO Created log for partition __consumer_offsets-1 in /var/lib/kafka/data/__consumer_offsets-1 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
policy-pap | [2024-04-25T10:42:42.707+00:00|INFO|PdpGroupDeleteProvider|http-nio-6969-exec-5] remove policy onap.restart.tca 1.0.0 from subgroup testGroup pdpTypeA count=0
policy-db-migrator | 8 0170-jpatoscacapabilityassignment_occurrences.sql upgrade 0 0800 2504241041410800u 1 2024-04-25 10:41:42
kafka | [2024-04-25 10:42:11,908] INFO [Partition __consumer_offsets-1 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-1 (kafka.cluster.Partition)
policy-pap | [2024-04-25T10:42:42.707+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-5] Registering an undeploy for policy onap.restart.tca 1.0.0
policy-db-migrator | 9 0180-jpatoscacapabilityassignment_properties.sql upgrade 0 0800 2504241041410800u 1 2024-04-25 10:41:42
kafka | [2024-04-25 10:42:11,908] INFO [Partition __consumer_offsets-1 broker=1] Log loaded for partition __consumer_offsets-1 with initial high watermark 0 (kafka.cluster.Partition)
policy-pap | [2024-04-25T10:42:42.707+00:00|INFO|SessionData|http-nio-6969-exec-5] update cached group testGroup
policy-db-migrator | 10 0190-jpatoscacapabilitytype_metadata.sql upgrade 0 0800 2504241041410800u 1 2024-04-25 10:41:42
kafka | [2024-04-25 10:42:11,908] INFO [Broker id=1] Leader __consumer_offsets-1 with topic id Some(hVyfsWO8T6yln1x6wUyXKg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
policy-pap | [2024-04-25T10:42:42.707+00:00|INFO|SessionData|http-nio-6969-exec-5] updating DB group testGroup
policy-db-migrator | 11 0200-jpatoscacapabilitytype_properties.sql upgrade 0 0800 2504241041410800u 1 2024-04-25 10:41:42
kafka | [2024-04-25 10:42:11,915] INFO [LogLoader partition=__consumer_offsets-16, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
policy-pap | [2024-04-25T10:42:42.719+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-5] sending audit records to database: [PolicyAudit(auditId=null, pdpGroup=testGroup, pdpType=pdpTypeA, policy=onap.restart.tca 1.0.0, action=UNDEPLOYMENT, timestamp=2024-04-25T10:42:42Z, user=policyadmin)]
policy-db-migrator | 12 0210-jpatoscadatatype_constraints.sql upgrade 0 0800 2504241041410800u 1 2024-04-25 10:41:42
kafka | [2024-04-25 10:42:11,915] INFO Created log for partition __consumer_offsets-16 in /var/lib/kafka/data/__consumer_offsets-16 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
policy-pap | [2024-04-25T10:42:43.071+00:00|INFO|SessionData|http-nio-6969-exec-6] cache group defaultGroup
policy-db-migrator | 13 0220-jpatoscadatatype_metadata.sql upgrade 0 0800 2504241041410800u 1 2024-04-25 10:41:42
kafka | [2024-04-25 10:42:11,915] INFO [Partition __consumer_offsets-16 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-16 (kafka.cluster.Partition)
policy-pap | [2024-04-25T10:42:43.071+00:00|INFO|SessionData|http-nio-6969-exec-6] cache group testGroup
policy-db-migrator | 14 0230-jpatoscadatatype_properties.sql upgrade 0 0800 2504241041410800u 1 2024-04-25 10:41:42
kafka | [2024-04-25 10:42:11,915] INFO [Partition __consumer_offsets-16 broker=1] Log loaded for partition __consumer_offsets-16 with initial high watermark 0 (kafka.cluster.Partition)
policy-pap | [2024-04-25T10:42:43.071+00:00|INFO|PdpGroupDeleteProvider|http-nio-6969-exec-6] remove policy operational.apex.decisionMaker 1.0.0 from subgroup testGroup pdpTypeC count=0
policy-db-migrator | 15 0240-jpatoscanodetemplate_metadata.sql upgrade 0 0800 2504241041410800u 1 2024-04-25 10:41:42
kafka | [2024-04-25 10:42:11,916] INFO [Broker id=1] Leader __consumer_offsets-16 with topic id Some(hVyfsWO8T6yln1x6wUyXKg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
policy-pap | [2024-04-25T10:42:43.072+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-6] Registering an undeploy for policy operational.apex.decisionMaker 1.0.0
policy-db-migrator | 16 0250-jpatoscanodetemplate_properties.sql upgrade 0 0800 2504241041410800u 1 2024-04-25 10:41:42
kafka | [2024-04-25 10:42:11,921] INFO [LogLoader partition=__consumer_offsets-2, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
policy-pap | [2024-04-25T10:42:43.072+00:00|INFO|SessionData|http-nio-6969-exec-6] update cached group testGroup
policy-db-migrator | 17 0260-jpatoscanodetype_metadata.sql upgrade 0 0800 2504241041410800u 1 2024-04-25 10:41:42
kafka | [2024-04-25 10:42:11,922] INFO Created log for partition __consumer_offsets-2 in /var/lib/kafka/data/__consumer_offsets-2 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
policy-pap | [2024-04-25T10:42:43.072+00:00|INFO|SessionData|http-nio-6969-exec-6] updating DB group testGroup
policy-db-migrator | 18 0270-jpatoscanodetype_properties.sql upgrade 0 0800 2504241041410800u 1 2024-04-25 10:41:42
kafka | [2024-04-25 10:42:11,922] INFO [Partition __consumer_offsets-2 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-2 (kafka.cluster.Partition)
policy-pap | [2024-04-25T10:42:43.083+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-6] sending audit records to database: [PolicyAudit(auditId=null, pdpGroup=testGroup, pdpType=pdpTypeC, policy=operational.apex.decisionMaker 1.0.0, action=UNDEPLOYMENT, timestamp=2024-04-25T10:42:43Z, user=policyadmin)]
policy-db-migrator | 19 0280-jpatoscapolicy_metadata.sql upgrade 0 0800 2504241041410800u 1 2024-04-25 10:41:42
kafka | [2024-04-25 10:42:11,922] INFO [Partition __consumer_offsets-2 broker=1] Log loaded for partition __consumer_offsets-2 with initial high watermark 0 (kafka.cluster.Partition)
policy-pap | [2024-04-25T10:43:02.817+00:00|INFO|TimerManager|Thread-9] update timer discarded (expired) Timer [name=6ebfae58-4293-4386-b05b-c7000ecfd79f, expireMs=1714041782817]
policy-pap | [2024-04-25T10:43:02.902+00:00|INFO|TimerManager|Thread-10] state-change timer discarded (expired) Timer [name=c4d7f70b-26e2-41cb-907b-c7984f55b821, expireMs=1714041782901]
kafka | [2024-04-25 10:42:11,922] INFO [Broker id=1] Leader __consumer_offsets-2 with topic id Some(hVyfsWO8T6yln1x6wUyXKg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
policy-pap | [2024-04-25T10:43:03.667+00:00|INFO|SessionData|http-nio-6969-exec-10] cache group testGroup
policy-db-migrator | 20 0290-jpatoscapolicy_properties.sql upgrade 0 0800 2504241041410800u 1 2024-04-25 10:41:42
kafka | [2024-04-25 10:42:11,928] INFO [LogLoader partition=__consumer_offsets-25, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
policy-pap | [2024-04-25T10:43:03.669+00:00|INFO|SessionData|http-nio-6969-exec-10] deleting DB group testGroup
policy-db-migrator | 21 0300-jpatoscapolicy_targets.sql upgrade 0 0800 2504241041410800u 1 2024-04-25 10:41:42
kafka | [2024-04-25 10:42:11,929] INFO Created log for partition __consumer_offsets-25 in /var/lib/kafka/data/__consumer_offsets-25 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
policy-db-migrator | 22 0310-jpatoscapolicytype_metadata.sql upgrade 0 0800 2504241041410800u 1 2024-04-25 10:41:42
kafka | [2024-04-25 10:42:11,929] INFO [Partition __consumer_offsets-25 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-25 (kafka.cluster.Partition)
policy-db-migrator | 23 0320-jpatoscapolicytype_properties.sql upgrade 0 0800 2504241041410800u 1 2024-04-25 10:41:42
kafka | [2024-04-25 10:42:11,929] INFO [Partition __consumer_offsets-25 broker=1] Log loaded for partition __consumer_offsets-25 with initial high watermark 0 (kafka.cluster.Partition)
policy-db-migrator | 24 0330-jpatoscapolicytype_targets.sql upgrade 0 0800 2504241041410800u 1 2024-04-25 10:41:42
kafka | [2024-04-25 10:42:11,929] INFO [Broker id=1] Leader __consumer_offsets-25 with topic id Some(hVyfsWO8T6yln1x6wUyXKg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
policy-db-migrator | 25 0340-jpatoscapolicytype_triggers.sql upgrade 0 0800 2504241041410800u 1 2024-04-25 10:41:42
kafka | [2024-04-25 10:42:11,936] INFO [LogLoader partition=__consumer_offsets-40, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
policy-db-migrator | 26 0350-jpatoscaproperty_constraints.sql upgrade 0 0800 2504241041410800u 1 2024-04-25 10:41:42
kafka | [2024-04-25 10:42:11,938] INFO Created log for partition __consumer_offsets-40 in /var/lib/kafka/data/__consumer_offsets-40 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
policy-db-migrator | 27 0360-jpatoscaproperty_metadata.sql upgrade 0 0800 2504241041410800u 1 2024-04-25 10:41:42
kafka | [2024-04-25 10:42:11,938] INFO [Partition __consumer_offsets-40 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-40 (kafka.cluster.Partition)
policy-db-migrator | 28 0370-jpatoscarelationshiptype_metadata.sql upgrade 0 0800 2504241041410800u 1 2024-04-25 10:41:42
kafka | [2024-04-25 10:42:11,938] INFO [Partition __consumer_offsets-40 broker=1] Log loaded for partition __consumer_offsets-40 with initial high watermark 0 (kafka.cluster.Partition)
policy-db-migrator | 29 0380-jpatoscarelationshiptype_properties.sql upgrade 0 0800 2504241041410800u 1 2024-04-25 10:41:42
kafka | [2024-04-25 10:42:11,938] INFO [Broker id=1] Leader __consumer_offsets-40 with topic id Some(hVyfsWO8T6yln1x6wUyXKg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
policy-db-migrator | 30 0390-jpatoscarequirement_metadata.sql upgrade 0 0800 2504241041410800u 1 2024-04-25 10:41:42
kafka | [2024-04-25 10:42:11,949] INFO [LogLoader partition=__consumer_offsets-47, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
policy-db-migrator | 31 0400-jpatoscarequirement_occurrences.sql upgrade 0 0800 2504241041410800u 1 2024-04-25 10:41:42
kafka | [2024-04-25 10:42:11,949] INFO Created log for partition __consumer_offsets-47 in /var/lib/kafka/data/__consumer_offsets-47 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
policy-db-migrator | 32 0410-jpatoscarequirement_properties.sql upgrade 0 0800 2504241041410800u 1 2024-04-25 10:41:42
kafka | [2024-04-25 10:42:11,949] INFO [Partition __consumer_offsets-47 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-47 (kafka.cluster.Partition)
policy-db-migrator | 33 0420-jpatoscaservicetemplate_metadata.sql upgrade 0 0800 2504241041410800u 1 2024-04-25 10:41:43
kafka | [2024-04-25 10:42:11,949] INFO [Partition __consumer_offsets-47 broker=1] Log loaded for partition __consumer_offsets-47 with initial high watermark 0 (kafka.cluster.Partition)
policy-db-migrator | 34 0430-jpatoscatopologytemplate_inputs.sql upgrade 0 0800 2504241041410800u 1 2024-04-25 10:41:43
kafka | [2024-04-25 10:42:11,950] INFO [Broker id=1] Leader __consumer_offsets-47 with topic id Some(hVyfsWO8T6yln1x6wUyXKg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
policy-db-migrator | 35 0440-pdpgroup_pdpsubgroup.sql upgrade 0 0800 2504241041410800u 1 2024-04-25 10:41:43
kafka | [2024-04-25 10:42:11,967] INFO [LogLoader partition=__consumer_offsets-17, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
policy-db-migrator | 36 0450-pdpgroup.sql upgrade 0 0800 2504241041410800u 1 2024-04-25 10:41:43
kafka | [2024-04-25 10:42:11,968] INFO Created log for partition __consumer_offsets-17 in /var/lib/kafka/data/__consumer_offsets-17 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
policy-db-migrator | 37 0460-pdppolicystatus.sql upgrade 0 0800 2504241041410800u 1 2024-04-25 10:41:43
kafka | [2024-04-25 10:42:11,968] INFO [Partition __consumer_offsets-17 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-17 (kafka.cluster.Partition)
policy-db-migrator | 38 0470-pdp.sql upgrade 0 0800 2504241041410800u 1 2024-04-25 10:41:43
kafka | [2024-04-25 10:42:11,968] INFO [Partition __consumer_offsets-17 broker=1] Log loaded for partition __consumer_offsets-17 with initial high watermark 0 (kafka.cluster.Partition)
policy-db-migrator | 39 0480-pdpstatistics.sql upgrade 0 0800 2504241041410800u 1 2024-04-25 10:41:43
kafka | [2024-04-25 10:42:11,969] INFO [Broker id=1] Leader __consumer_offsets-17 with topic id Some(hVyfsWO8T6yln1x6wUyXKg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
policy-db-migrator | 40 0490-pdpsubgroup_pdp.sql upgrade 0 0800 2504241041410800u 1 2024-04-25 10:41:43
kafka | [2024-04-25 10:42:11,977] INFO [LogLoader partition=__consumer_offsets-32, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2024-04-25 10:42:11,978] INFO Created log for partition __consumer_offsets-32 in /var/lib/kafka/data/__consumer_offsets-32 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2024-04-25 10:42:11,978] INFO [Partition __consumer_offsets-32 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-32 (kafka.cluster.Partition)
policy-db-migrator | 41 0500-pdpsubgroup.sql upgrade 0 0800 2504241041410800u 1 2024-04-25 10:41:43
kafka | [2024-04-25 10:42:11,978] INFO [Partition __consumer_offsets-32 broker=1] Log loaded for partition __consumer_offsets-32 with initial high watermark 0 (kafka.cluster.Partition)
policy-db-migrator | 42 0510-toscacapabilityassignment.sql upgrade 0 0800 2504241041410800u 1 2024-04-25 10:41:43
kafka | [2024-04-25 10:42:11,978] INFO [Broker id=1] Leader __consumer_offsets-32 with topic id Some(hVyfsWO8T6yln1x6wUyXKg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
policy-db-migrator | 43 0520-toscacapabilityassignments.sql upgrade 0 0800 2504241041410800u 1 2024-04-25 10:41:43
kafka | [2024-04-25 10:42:11,986] INFO [LogLoader partition=__consumer_offsets-37, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
policy-db-migrator | 44 0530-toscacapabilityassignments_toscacapabilityassignment.sql upgrade 0 0800 2504241041410800u 1 2024-04-25 10:41:43
kafka | [2024-04-25 10:42:11,986] INFO Created log for partition __consumer_offsets-37 in /var/lib/kafka/data/__consumer_offsets-37 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
policy-db-migrator | 45 0540-toscacapabilitytype.sql upgrade 0 0800 2504241041410800u 1 2024-04-25 10:41:43
kafka | [2024-04-25 10:42:11,986] INFO [Partition __consumer_offsets-37 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-37 (kafka.cluster.Partition)
policy-db-migrator | 46 0550-toscacapabilitytypes.sql upgrade 0 0800 2504241041410800u 1 2024-04-25 10:41:43
kafka | [2024-04-25 10:42:11,987] INFO [Partition __consumer_offsets-37 broker=1] Log loaded for partition __consumer_offsets-37 with initial high watermark 0 (kafka.cluster.Partition)
policy-db-migrator | 47 0560-toscacapabilitytypes_toscacapabilitytype.sql upgrade 0 0800 2504241041410800u 1 2024-04-25 10:41:43
kafka | [2024-04-25 10:42:11,987] INFO [Broker id=1] Leader __consumer_offsets-37 with topic id Some(hVyfsWO8T6yln1x6wUyXKg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
policy-db-migrator | 48 0570-toscadatatype.sql upgrade 0 0800 2504241041410800u 1 2024-04-25 10:41:43
kafka | [2024-04-25 10:42:11,994] INFO [LogLoader partition=__consumer_offsets-7, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
policy-db-migrator | 49 0580-toscadatatypes.sql upgrade 0 0800 2504241041410800u 1 2024-04-25 10:41:43
kafka | [2024-04-25 10:42:11,995] INFO Created log for partition __consumer_offsets-7 in /var/lib/kafka/data/__consumer_offsets-7 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
policy-db-migrator | 50 0590-toscadatatypes_toscadatatype.sql upgrade 0 0800 2504241041410800u 1 2024-04-25 10:41:43
kafka | [2024-04-25 10:42:11,995] INFO [Partition __consumer_offsets-7 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-7 (kafka.cluster.Partition)
policy-db-migrator | 51 0600-toscanodetemplate.sql upgrade 0 0800 2504241041410800u 1 2024-04-25 10:41:43
kafka | [2024-04-25 10:42:11,995] INFO [Partition __consumer_offsets-7 broker=1] Log loaded for partition __consumer_offsets-7 with initial high watermark 0 (kafka.cluster.Partition)
policy-db-migrator | 52 0610-toscanodetemplates.sql upgrade 0 0800 2504241041410800u 1 2024-04-25 10:41:43
kafka | [2024-04-25 10:42:11,996] INFO [Broker id=1] Leader __consumer_offsets-7 with topic id Some(hVyfsWO8T6yln1x6wUyXKg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
policy-db-migrator | 53 0620-toscanodetemplates_toscanodetemplate.sql upgrade 0 0800 2504241041410800u 1 2024-04-25 10:41:43
policy-db-migrator | 54 0630-toscanodetype.sql upgrade 0 0800 2504241041410800u 1 2024-04-25 10:41:43
policy-db-migrator | 55 0640-toscanodetypes.sql upgrade 0 0800 2504241041410800u 1 2024-04-25 10:41:43
kafka | [2024-04-25 10:42:12,007] INFO [LogLoader partition=__consumer_offsets-22, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
policy-db-migrator | 56 0650-toscanodetypes_toscanodetype.sql upgrade 0 0800 2504241041410800u 1 2024-04-25 10:41:43
kafka | [2024-04-25 10:42:12,008] INFO Created log for partition __consumer_offsets-22 in /var/lib/kafka/data/__consumer_offsets-22 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
policy-db-migrator | 57 0660-toscaparameter.sql upgrade 0 0800 2504241041410800u 1 2024-04-25 10:41:43
kafka | [2024-04-25 10:42:12,008] INFO [Partition __consumer_offsets-22 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-22 (kafka.cluster.Partition)
policy-db-migrator | 58 0670-toscapolicies.sql upgrade 0 0800 2504241041410800u 1 2024-04-25 10:41:43
kafka | [2024-04-25 10:42:12,008] INFO [Partition __consumer_offsets-22 broker=1] Log loaded for partition __consumer_offsets-22 with initial high watermark 0 (kafka.cluster.Partition)
policy-db-migrator | 59 0680-toscapolicies_toscapolicy.sql upgrade 0 0800 2504241041410800u 1 2024-04-25 10:41:44
kafka | [2024-04-25 10:42:12,008] INFO [Broker id=1] Leader __consumer_offsets-22 with topic id Some(hVyfsWO8T6yln1x6wUyXKg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
policy-db-migrator | 60 0690-toscapolicy.sql upgrade 0 0800 2504241041410800u 1 2024-04-25 10:41:44
kafka | [2024-04-25 10:42:12,023] INFO [LogLoader partition=__consumer_offsets-29, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
policy-db-migrator | 61 0700-toscapolicytype.sql upgrade 0 0800 2504241041410800u 1 2024-04-25 10:41:44
kafka | [2024-04-25 10:42:12,024] INFO Created log for partition __consumer_offsets-29 in /var/lib/kafka/data/__consumer_offsets-29 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
policy-db-migrator | 62 0710-toscapolicytypes.sql upgrade 0 0800 2504241041410800u 1 2024-04-25 10:41:44
kafka | [2024-04-25 10:42:12,024] INFO [Partition __consumer_offsets-29 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-29 (kafka.cluster.Partition)
policy-db-migrator | 63 0720-toscapolicytypes_toscapolicytype.sql upgrade 0 0800 2504241041410800u 1 2024-04-25 10:41:44
kafka | [2024-04-25 10:42:12,024] INFO [Partition __consumer_offsets-29 broker=1] Log loaded for partition __consumer_offsets-29 with initial high watermark 0 (kafka.cluster.Partition)
policy-db-migrator | 64 0730-toscaproperty.sql upgrade 0 0800 2504241041410800u 1 2024-04-25 10:41:44
kafka | [2024-04-25 10:42:12,024] INFO [Broker id=1] Leader __consumer_offsets-29 with topic id Some(hVyfsWO8T6yln1x6wUyXKg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
policy-db-migrator | 65 0740-toscarelationshiptype.sql upgrade 0 0800 2504241041410800u 1 2024-04-25 10:41:44
kafka | [2024-04-25 10:42:12,033] INFO [LogLoader partition=__consumer_offsets-44, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
policy-db-migrator | 66 0750-toscarelationshiptypes.sql upgrade 0 0800 2504241041410800u 1 2024-04-25 10:41:44
kafka | [2024-04-25 10:42:12,034] INFO Created log for partition __consumer_offsets-44 in /var/lib/kafka/data/__consumer_offsets-44 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
policy-db-migrator | 67 0760-toscarelationshiptypes_toscarelationshiptype.sql upgrade 0 0800 2504241041410800u 1 2024-04-25 10:41:44
kafka | [2024-04-25 10:42:12,034] INFO [Partition __consumer_offsets-44 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-44 (kafka.cluster.Partition)
policy-db-migrator | 68 0770-toscarequirement.sql upgrade 0 0800 2504241041410800u 1 2024-04-25 10:41:44
kafka | [2024-04-25 10:42:12,034] INFO [Partition __consumer_offsets-44 broker=1] Log loaded for partition __consumer_offsets-44 with initial high watermark 0 (kafka.cluster.Partition)
policy-db-migrator | 69 0780-toscarequirements.sql upgrade 0 0800 2504241041410800u 1 2024-04-25 10:41:44
kafka | [2024-04-25 10:42:12,034] INFO [Broker id=1] Leader __consumer_offsets-44 with topic id Some(hVyfsWO8T6yln1x6wUyXKg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
policy-db-migrator | 70 0790-toscarequirements_toscarequirement.sql upgrade 0 0800 2504241041410800u 1 2024-04-25 10:41:44
kafka | [2024-04-25 10:42:12,042] INFO [LogLoader partition=__consumer_offsets-14, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
policy-db-migrator | 71 0800-toscaservicetemplate.sql upgrade 0 0800 2504241041410800u 1 2024-04-25 10:41:44
kafka | [2024-04-25 10:42:12,042] INFO Created log for partition __consumer_offsets-14 in /var/lib/kafka/data/__consumer_offsets-14 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
policy-db-migrator | 72 0810-toscatopologytemplate.sql upgrade 0 0800 2504241041410800u 1 2024-04-25 10:41:44
kafka | [2024-04-25 10:42:12,042] INFO [Partition __consumer_offsets-14 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-14 (kafka.cluster.Partition)
policy-db-migrator | 73 0820-toscatrigger.sql upgrade 0 0800 2504241041410800u 1 2024-04-25 10:41:44
kafka | [2024-04-25 10:42:12,042] INFO [Partition __consumer_offsets-14 broker=1] Log loaded for partition __consumer_offsets-14 with initial high watermark 0 (kafka.cluster.Partition)
policy-db-migrator | 74 0830-FK_ToscaNodeTemplate_capabilitiesName.sql upgrade 0 0800 2504241041410800u 1 2024-04-25 10:41:44
kafka | [2024-04-25 10:42:12,042] INFO [Broker id=1] Leader __consumer_offsets-14 with topic id Some(hVyfsWO8T6yln1x6wUyXKg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
policy-db-migrator | 75 0840-FK_ToscaNodeTemplate_requirementsName.sql upgrade 0 0800 2504241041410800u 1 2024-04-25 10:41:44
kafka | [2024-04-25 10:42:12,049] INFO [LogLoader partition=__consumer_offsets-23, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
policy-db-migrator | 76 0850-FK_ToscaNodeType_requirementsName.sql upgrade 0 0800 2504241041410800u 1 2024-04-25 10:41:44
kafka | [2024-04-25 10:42:12,049] INFO Created log for partition __consumer_offsets-23 in /var/lib/kafka/data/__consumer_offsets-23 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
policy-db-migrator | 77 0860-FK_ToscaServiceTemplate_capabilityTypesName.sql upgrade 0 0800 2504241041410800u 1 2024-04-25 10:41:44
kafka | [2024-04-25 10:42:12,049] INFO [Partition __consumer_offsets-23 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-23 (kafka.cluster.Partition)
policy-db-migrator | 78 0870-FK_ToscaServiceTemplate_dataTypesName.sql upgrade 0 0800 2504241041410800u 1 2024-04-25 10:41:44
kafka | [2024-04-25 10:42:12,049] INFO [Partition __consumer_offsets-23 broker=1] Log loaded for partition __consumer_offsets-23 with initial high watermark 0 (kafka.cluster.Partition)
policy-db-migrator | 79 0880-FK_ToscaServiceTemplate_nodeTypesName.sql upgrade 0 0800 2504241041410800u 1 2024-04-25 10:41:44
kafka | [2024-04-25 10:42:12,050] INFO [Broker id=1] Leader __consumer_offsets-23 with topic id Some(hVyfsWO8T6yln1x6wUyXKg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
policy-db-migrator | 80 0890-FK_ToscaServiceTemplate_policyTypesName.sql upgrade 0 0800 2504241041410800u 1 2024-04-25 10:41:44
kafka | [2024-04-25 10:42:12,060] INFO [LogLoader partition=__consumer_offsets-38, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
policy-db-migrator | 81 0900-FK_ToscaServiceTemplate_relationshipTypesName.sql upgrade 0 0800 2504241041410800u 1 2024-04-25 10:41:44
kafka | [2024-04-25 10:42:12,061] INFO Created log for partition __consumer_offsets-38 in /var/lib/kafka/data/__consumer_offsets-38 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
policy-db-migrator | 82 0910-FK_ToscaTopologyTemplate_nodeTemplatesName.sql upgrade 0 0800 2504241041410800u 1 2024-04-25 10:41:44
kafka | [2024-04-25 10:42:12,061] INFO [Partition __consumer_offsets-38 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-38 (kafka.cluster.Partition)
policy-db-migrator | 83 0920-FK_ToscaTopologyTemplate_policyName.sql upgrade 0 0800 2504241041410800u 1 2024-04-25 10:41:45
kafka | [2024-04-25 10:42:12,061] INFO [Partition __consumer_offsets-38 broker=1] Log loaded for partition __consumer_offsets-38 with initial high watermark 0 (kafka.cluster.Partition)
policy-db-migrator | 84 0940-PdpPolicyStatus_PdpGroup.sql upgrade 0 0800 2504241041410800u 1 2024-04-25 10:41:45
kafka | [2024-04-25 10:42:12,061] INFO [Broker id=1] Leader __consumer_offsets-38 with topic id Some(hVyfsWO8T6yln1x6wUyXKg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
policy-db-migrator | 85 0950-TscaServiceTemplatetopologyTemplateParentLocalName.sql upgrade 0 0800 2504241041410800u 1 2024-04-25 10:41:45
kafka | [2024-04-25 10:42:12,069] INFO [LogLoader partition=__consumer_offsets-8, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
policy-db-migrator | 86 0960-FK_ToscaNodeTemplate_capabilitiesName.sql upgrade 0 0800 2504241041410800u 1 2024-04-25 10:41:45
kafka | [2024-04-25 10:42:12,070] INFO Created log for partition __consumer_offsets-8 in /var/lib/kafka/data/__consumer_offsets-8 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
policy-db-migrator | 87 0970-FK_ToscaNodeTemplate_requirementsName.sql upgrade 0 0800 2504241041410800u 1 2024-04-25 10:41:45
kafka | [2024-04-25 10:42:12,070] INFO [Partition __consumer_offsets-8 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-8 (kafka.cluster.Partition)
policy-db-migrator | 88 0980-FK_ToscaNodeType_requirementsName.sql upgrade 0 0800 2504241041410800u 1 2024-04-25 10:41:45
kafka | [2024-04-25 10:42:12,070] INFO [Partition __consumer_offsets-8 broker=1] Log loaded for partition __consumer_offsets-8 with initial high watermark 0 (kafka.cluster.Partition)
policy-db-migrator | 89 0990-FK_ToscaServiceTemplate_capabilityTypesName.sql upgrade 0 0800 2504241041410800u 1 2024-04-25 10:41:45
kafka | [2024-04-25 10:42:12,070] INFO [Broker id=1] Leader __consumer_offsets-8 with topic id Some(hVyfsWO8T6yln1x6wUyXKg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
policy-db-migrator | 90 1000-FK_ToscaServiceTemplate_dataTypesName.sql upgrade 0 0800 2504241041410800u 1 2024-04-25 10:41:45
policy-db-migrator | 91 1010-FK_ToscaServiceTemplate_nodeTypesName.sql upgrade 0 0800 2504241041410800u 1 2024-04-25 10:41:45
kafka | [2024-04-25 10:42:12,075] INFO [LogLoader partition=policy-pdp-pap-0, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
policy-db-migrator | 92 1020-FK_ToscaServiceTemplate_policyTypesName.sql upgrade 0 0800 2504241041410800u 1 2024-04-25 10:41:45
kafka | [2024-04-25 10:42:12,075] INFO Created log for partition policy-pdp-pap-0 in /var/lib/kafka/data/policy-pdp-pap-0 with properties {} (kafka.log.LogManager)
policy-db-migrator | 93 1030-FK_ToscaServiceTemplate_relationshipTypesName.sql upgrade 0 0800 2504241041410800u 1 2024-04-25 10:41:45
kafka | [2024-04-25 10:42:12,075] INFO [Partition policy-pdp-pap-0 broker=1] No checkpointed highwatermark is found for partition policy-pdp-pap-0 (kafka.cluster.Partition)
policy-db-migrator | 94 1040-FK_ToscaTopologyTemplate_nodeTemplatesName.sql upgrade 0 0800 2504241041410800u 1 2024-04-25 10:41:45
kafka | [2024-04-25 10:42:12,075] INFO [Partition policy-pdp-pap-0 broker=1] Log loaded for partition policy-pdp-pap-0 with initial high watermark 0 (kafka.cluster.Partition)
policy-db-migrator | 95 1050-FK_ToscaTopologyTemplate_policyName.sql upgrade 0 0800 2504241041410800u 1 2024-04-25 10:41:45
kafka | [2024-04-25 10:42:12,076] INFO [Broker id=1] Leader policy-pdp-pap-0 with topic id Some(I_pe41tISTqFeXFGby1riA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
policy-db-migrator | 96 1060-TscaServiceTemplatetopologyTemplateParentLocalName.sql upgrade 0 0800 2504241041410800u 1 2024-04-25 10:41:45
kafka | [2024-04-25 10:42:12,083] INFO [LogLoader partition=__consumer_offsets-45, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
policy-db-migrator | 97 0100-pdp.sql upgrade 0800 0900 2504241041410900u 1 2024-04-25 10:41:45
kafka | [2024-04-25 10:42:12,084] INFO Created log for partition __consumer_offsets-45 in /var/lib/kafka/data/__consumer_offsets-45 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
policy-db-migrator | 98 0110-idx_tsidx1.sql upgrade 0800 0900 2504241041410900u 1 2024-04-25 10:41:45
kafka | [2024-04-25 10:42:12,084] INFO [Partition __consumer_offsets-45 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-45 (kafka.cluster.Partition)
policy-db-migrator | 99 0120-pk_pdpstatistics.sql upgrade 0800 0900 2504241041410900u 1 2024-04-25 10:41:45
kafka | [2024-04-25 10:42:12,084] INFO [Partition __consumer_offsets-45 broker=1] Log loaded for partition __consumer_offsets-45 with initial high watermark 0 (kafka.cluster.Partition)
policy-db-migrator | 100 0130-pdpstatistics.sql upgrade 0800 0900 2504241041410900u 1 2024-04-25 10:41:46
kafka | [2024-04-25 10:42:12,084] INFO [Broker id=1] Leader __consumer_offsets-45 with topic id Some(hVyfsWO8T6yln1x6wUyXKg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
policy-db-migrator | 101 0140-pk_pdpstatistics.sql upgrade 0800 0900 2504241041410900u 1 2024-04-25 10:41:46
kafka | [2024-04-25 10:42:12,091] INFO [LogLoader partition=__consumer_offsets-15, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
policy-db-migrator | 102 0150-pdpstatistics.sql upgrade 0800 0900 2504241041410900u 1 2024-04-25 10:41:46
kafka | [2024-04-25 10:42:12,091] INFO Created log for partition __consumer_offsets-15 in /var/lib/kafka/data/__consumer_offsets-15 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
policy-db-migrator | 103 0160-jpapdpstatistics_enginestats.sql upgrade 0800 0900 2504241041410900u 1 2024-04-25 10:41:46
kafka | [2024-04-25 10:42:12,092] INFO [Partition __consumer_offsets-15 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-15 (kafka.cluster.Partition)
policy-db-migrator | 104 0170-jpapdpstatistics_enginestats.sql upgrade 0800 0900 2504241041410900u 1 2024-04-25 10:41:46
kafka | [2024-04-25 10:42:12,092] INFO [Partition __consumer_offsets-15 broker=1] Log loaded for partition __consumer_offsets-15 with initial high watermark 0 (kafka.cluster.Partition)
policy-db-migrator | 105 0180-jpapdpstatistics_enginestats.sql upgrade 0800 0900 2504241041410900u 1 2024-04-25 10:41:46
kafka | [2024-04-25 10:42:12,092] INFO [Broker id=1] Leader __consumer_offsets-15 with topic id Some(hVyfsWO8T6yln1x6wUyXKg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
policy-db-migrator | 106 0190-jpapolicyaudit.sql upgrade 0800 0900 2504241041410900u 1 2024-04-25 10:41:46
kafka | [2024-04-25 10:42:12,101] INFO [LogLoader partition=__consumer_offsets-30, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
policy-db-migrator | 107 0200-JpaPolicyAuditIndex_timestamp.sql upgrade 0800 0900 2504241041410900u 1 2024-04-25 10:41:46
kafka | [2024-04-25 10:42:12,102] INFO Created log for partition __consumer_offsets-30 in /var/lib/kafka/data/__consumer_offsets-30 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
policy-db-migrator | 108 0210-sequence.sql upgrade 0800 0900 2504241041410900u 1 2024-04-25 10:41:46
kafka | [2024-04-25 10:42:12,102] INFO [Partition __consumer_offsets-30 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-30 (kafka.cluster.Partition)
policy-db-migrator | 109 0220-sequence.sql upgrade 0800 0900 2504241041410900u 1 2024-04-25 10:41:46
kafka | [2024-04-25 10:42:12,102] INFO [Partition __consumer_offsets-30 broker=1] Log loaded for partition __consumer_offsets-30 with initial high watermark 0 (kafka.cluster.Partition)
policy-db-migrator | 110 0100-jpatoscapolicy_targets.sql upgrade 0900 1000 2504241041411000u 1 2024-04-25 10:41:46
kafka | [2024-04-25 10:42:12,102] INFO [Broker id=1] Leader __consumer_offsets-30 with topic id Some(hVyfsWO8T6yln1x6wUyXKg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
policy-db-migrator | 111 0110-jpatoscapolicytype_targets.sql upgrade 0900 1000 2504241041411000u 1 2024-04-25 10:41:46
kafka | [2024-04-25 10:42:12,114] INFO [LogLoader partition=__consumer_offsets-0, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
policy-db-migrator | 112 0120-toscatrigger.sql upgrade 0900 1000 2504241041411000u 1 2024-04-25 10:41:46
kafka | [2024-04-25 10:42:12,115] INFO Created log for partition __consumer_offsets-0 in /var/lib/kafka/data/__consumer_offsets-0 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
policy-db-migrator | 113 0130-jpatoscapolicytype_triggers.sql upgrade 0900 1000 2504241041411000u 1 2024-04-25 10:41:46
policy-db-migrator | 114 0140-toscaparameter.sql upgrade 0900 1000 2504241041411000u 1 2024-04-25 10:41:46
kafka | [2024-04-25 10:42:12,115] INFO [Partition __consumer_offsets-0 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-0 (kafka.cluster.Partition)
policy-db-migrator | 115 0150-toscaproperty.sql upgrade 0900 1000 2504241041411000u 1 2024-04-25 10:41:46
kafka | [2024-04-25 10:42:12,115] INFO [Partition __consumer_offsets-0 broker=1] Log loaded for partition __consumer_offsets-0 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2024-04-25 10:42:12,115] INFO [Broker id=1] Leader __consumer_offsets-0 with topic id Some(hVyfsWO8T6yln1x6wUyXKg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1.
(state.change.logger) policy-db-migrator | 116 0160-jpapolicyaudit_pk.sql upgrade 0900 1000 2504241041411000u 1 2024-04-25 10:41:46 kafka | [2024-04-25 10:42:12,121] INFO [LogLoader partition=__consumer_offsets-35, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) policy-db-migrator | 117 0170-pdpstatistics_pk.sql upgrade 0900 1000 2504241041411000u 1 2024-04-25 10:41:46 kafka | [2024-04-25 10:42:12,122] INFO Created log for partition __consumer_offsets-35 in /var/lib/kafka/data/__consumer_offsets-35 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) policy-db-migrator | 118 0180-jpatoscanodetemplate_metadata.sql upgrade 0900 1000 2504241041411000u 1 2024-04-25 10:41:46 kafka | [2024-04-25 10:42:12,122] INFO [Partition __consumer_offsets-35 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-35 (kafka.cluster.Partition) policy-db-migrator | 119 0100-upgrade.sql upgrade 1000 1100 2504241041411100u 1 2024-04-25 10:41:46 kafka | [2024-04-25 10:42:12,122] INFO [Partition __consumer_offsets-35 broker=1] Log loaded for partition __consumer_offsets-35 with initial high watermark 0 (kafka.cluster.Partition) policy-db-migrator | 120 0100-jpapolicyaudit_renameuser.sql upgrade 1100 1200 2504241041411200u 1 2024-04-25 10:41:46 kafka | [2024-04-25 10:42:12,122] INFO [Broker id=1] Leader __consumer_offsets-35 with topic id Some(hVyfsWO8T6yln1x6wUyXKg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) policy-db-migrator | 121 0110-idx_tsidx1.sql upgrade 1100 1200 2504241041411200u 1 2024-04-25 10:41:47 kafka | [2024-04-25 10:42:12,128] INFO [LogLoader partition=__consumer_offsets-5, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) policy-db-migrator | 122 0120-audit_sequence.sql upgrade 1100 1200 2504241041411200u 1 2024-04-25 10:41:47 kafka | [2024-04-25 10:42:12,129] INFO Created log for partition __consumer_offsets-5 in /var/lib/kafka/data/__consumer_offsets-5 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) policy-db-migrator | 123 0130-statistics_sequence.sql upgrade 1100 1200 2504241041411200u 1 2024-04-25 10:41:47 kafka | [2024-04-25 10:42:12,129] INFO [Partition __consumer_offsets-5 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-5 (kafka.cluster.Partition) policy-db-migrator | 124 0100-pdpstatistics.sql upgrade 1200 1300 2504241041411300u 1 2024-04-25 10:41:47 kafka | [2024-04-25 10:42:12,129] INFO [Partition __consumer_offsets-5 broker=1] Log loaded for partition __consumer_offsets-5 with initial high watermark 0 (kafka.cluster.Partition) policy-db-migrator | 125 0110-jpapdpstatistics_enginestats.sql upgrade 1200 1300 2504241041411300u 1 2024-04-25 10:41:47 kafka | [2024-04-25 10:42:12,129] INFO [Broker id=1] Leader __consumer_offsets-5 with topic id Some(hVyfsWO8T6yln1x6wUyXKg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) policy-db-migrator | 126 0120-statistics_sequence.sql upgrade 1200 1300 2504241041411300u 1 2024-04-25 10:41:47 policy-db-migrator | policyadmin: OK @ 1300 kafka | [2024-04-25 10:42:12,134] INFO [LogLoader partition=__consumer_offsets-20, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-04-25 10:42:12,135] INFO Created log for partition __consumer_offsets-20 in /var/lib/kafka/data/__consumer_offsets-20 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-04-25 10:42:12,135] INFO [Partition __consumer_offsets-20 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-20 (kafka.cluster.Partition) kafka | [2024-04-25 10:42:12,135] INFO [Partition __consumer_offsets-20 broker=1] Log loaded for partition __consumer_offsets-20 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-04-25 10:42:12,135] INFO [Broker id=1] Leader __consumer_offsets-20 with topic id Some(hVyfsWO8T6yln1x6wUyXKg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger)
kafka | [2024-04-25 10:42:12,143] INFO [LogLoader partition=__consumer_offsets-27, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2024-04-25 10:42:12,144] INFO Created log for partition __consumer_offsets-27 in /var/lib/kafka/data/__consumer_offsets-27 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2024-04-25 10:42:12,144] INFO [Partition __consumer_offsets-27 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-27 (kafka.cluster.Partition)
kafka | [2024-04-25 10:42:12,144] INFO [Partition __consumer_offsets-27 broker=1] Log loaded for partition __consumer_offsets-27 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2024-04-25 10:42:12,144] INFO [Broker id=1] Leader __consumer_offsets-27 with topic id Some(hVyfsWO8T6yln1x6wUyXKg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
kafka | [2024-04-25 10:42:12,158] INFO [LogLoader partition=__consumer_offsets-42, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2024-04-25 10:42:12,159] INFO Created log for partition __consumer_offsets-42 in /var/lib/kafka/data/__consumer_offsets-42 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2024-04-25 10:42:12,159] INFO [Partition __consumer_offsets-42 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-42 (kafka.cluster.Partition)
kafka | [2024-04-25 10:42:12,160] INFO [Partition __consumer_offsets-42 broker=1] Log loaded for partition __consumer_offsets-42 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2024-04-25 10:42:12,160] INFO [Broker id=1] Leader __consumer_offsets-42 with topic id Some(hVyfsWO8T6yln1x6wUyXKg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
kafka | [2024-04-25 10:42:12,167] INFO [LogLoader partition=__consumer_offsets-12, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2024-04-25 10:42:12,167] INFO Created log for partition __consumer_offsets-12 in /var/lib/kafka/data/__consumer_offsets-12 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2024-04-25 10:42:12,167] INFO [Partition __consumer_offsets-12 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-12 (kafka.cluster.Partition)
kafka | [2024-04-25 10:42:12,168] INFO [Partition __consumer_offsets-12 broker=1] Log loaded for partition __consumer_offsets-12 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2024-04-25 10:42:12,168] INFO [Broker id=1] Leader __consumer_offsets-12 with topic id Some(hVyfsWO8T6yln1x6wUyXKg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
kafka | [2024-04-25 10:42:12,175] INFO [LogLoader partition=__consumer_offsets-21, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2024-04-25 10:42:12,175] INFO Created log for partition __consumer_offsets-21 in /var/lib/kafka/data/__consumer_offsets-21 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2024-04-25 10:42:12,175] INFO [Partition __consumer_offsets-21 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-21 (kafka.cluster.Partition)
kafka | [2024-04-25 10:42:12,175] INFO [Partition __consumer_offsets-21 broker=1] Log loaded for partition __consumer_offsets-21 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2024-04-25 10:42:12,175] INFO [Broker id=1] Leader __consumer_offsets-21 with topic id Some(hVyfsWO8T6yln1x6wUyXKg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
kafka | [2024-04-25 10:42:12,184] INFO [LogLoader partition=__consumer_offsets-36, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2024-04-25 10:42:12,184] INFO Created log for partition __consumer_offsets-36 in /var/lib/kafka/data/__consumer_offsets-36 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2024-04-25 10:42:12,184] INFO [Partition __consumer_offsets-36 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-36 (kafka.cluster.Partition)
kafka | [2024-04-25 10:42:12,184] INFO [Partition __consumer_offsets-36 broker=1] Log loaded for partition __consumer_offsets-36 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2024-04-25 10:42:12,184] INFO [Broker id=1] Leader __consumer_offsets-36 with topic id Some(hVyfsWO8T6yln1x6wUyXKg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
kafka | [2024-04-25 10:42:12,191] INFO [LogLoader partition=__consumer_offsets-6, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2024-04-25 10:42:12,192] INFO Created log for partition __consumer_offsets-6 in /var/lib/kafka/data/__consumer_offsets-6 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2024-04-25 10:42:12,192] INFO [Partition __consumer_offsets-6 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-6 (kafka.cluster.Partition)
kafka | [2024-04-25 10:42:12,192] INFO [Partition __consumer_offsets-6 broker=1] Log loaded for partition __consumer_offsets-6 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2024-04-25 10:42:12,192] INFO [Broker id=1] Leader __consumer_offsets-6 with topic id Some(hVyfsWO8T6yln1x6wUyXKg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
kafka | [2024-04-25 10:42:12,201] INFO [LogLoader partition=__consumer_offsets-43, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2024-04-25 10:42:12,201] INFO Created log for partition __consumer_offsets-43 in /var/lib/kafka/data/__consumer_offsets-43 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2024-04-25 10:42:12,201] INFO [Partition __consumer_offsets-43 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-43 (kafka.cluster.Partition)
kafka | [2024-04-25 10:42:12,201] INFO [Partition __consumer_offsets-43 broker=1] Log loaded for partition __consumer_offsets-43 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2024-04-25 10:42:12,201] INFO [Broker id=1] Leader __consumer_offsets-43 with topic id Some(hVyfsWO8T6yln1x6wUyXKg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
kafka | [2024-04-25 10:42:12,208] INFO [LogLoader partition=__consumer_offsets-13, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2024-04-25 10:42:12,209] INFO Created log for partition __consumer_offsets-13 in /var/lib/kafka/data/__consumer_offsets-13 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2024-04-25 10:42:12,209] INFO [Partition __consumer_offsets-13 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-13 (kafka.cluster.Partition)
kafka | [2024-04-25 10:42:12,209] INFO [Partition __consumer_offsets-13 broker=1] Log loaded for partition __consumer_offsets-13 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2024-04-25 10:42:12,209] INFO [Broker id=1] Leader __consumer_offsets-13 with topic id Some(hVyfsWO8T6yln1x6wUyXKg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
kafka | [2024-04-25 10:42:12,215] INFO [LogLoader partition=__consumer_offsets-28, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2024-04-25 10:42:12,215] INFO Created log for partition __consumer_offsets-28 in /var/lib/kafka/data/__consumer_offsets-28 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2024-04-25 10:42:12,215] INFO [Partition __consumer_offsets-28 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-28 (kafka.cluster.Partition)
kafka | [2024-04-25 10:42:12,215] INFO [Partition __consumer_offsets-28 broker=1] Log loaded for partition __consumer_offsets-28 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2024-04-25 10:42:12,215] INFO [Broker id=1] Leader __consumer_offsets-28 with topic id Some(hVyfsWO8T6yln1x6wUyXKg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1.
(state.change.logger)
kafka | [2024-04-25 10:42:12,222] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-3 (state.change.logger)
kafka | [2024-04-25 10:42:12,222] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-18 (state.change.logger)
kafka | [2024-04-25 10:42:12,222] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-41 (state.change.logger)
kafka | [2024-04-25 10:42:12,222] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-10 (state.change.logger)
kafka | [2024-04-25 10:42:12,222] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-33 (state.change.logger)
kafka | [2024-04-25 10:42:12,222] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-48 (state.change.logger)
kafka | [2024-04-25 10:42:12,222] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-19 (state.change.logger)
kafka | [2024-04-25 10:42:12,222] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-34 (state.change.logger)
kafka | [2024-04-25 10:42:12,222] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-4 (state.change.logger)
kafka | [2024-04-25 10:42:12,222] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-11 (state.change.logger)
kafka | [2024-04-25 10:42:12,222] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-26 (state.change.logger)
kafka | [2024-04-25 10:42:12,222] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-49 (state.change.logger)
kafka | [2024-04-25 10:42:12,222] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-39 (state.change.logger)
kafka | [2024-04-25 10:42:12,222] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-9 (state.change.logger)
kafka | [2024-04-25 10:42:12,222] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-24 (state.change.logger)
kafka | [2024-04-25 10:42:12,222] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-31 (state.change.logger)
kafka | [2024-04-25 10:42:12,222] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-46 (state.change.logger)
kafka | [2024-04-25 10:42:12,222] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-1 (state.change.logger)
kafka | [2024-04-25 10:42:12,222] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-16 (state.change.logger)
kafka | [2024-04-25 10:42:12,222] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-2 (state.change.logger)
kafka | [2024-04-25 10:42:12,222] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-25 (state.change.logger)
kafka | [2024-04-25 10:42:12,222] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-40 (state.change.logger)
kafka | [2024-04-25 10:42:12,222] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-47 (state.change.logger)
kafka | [2024-04-25 10:42:12,222] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-17 (state.change.logger)
kafka | [2024-04-25 10:42:12,222] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-32 (state.change.logger)
kafka | [2024-04-25 10:42:12,222] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-37 (state.change.logger)
kafka | [2024-04-25 10:42:12,222] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-7 (state.change.logger)
kafka | [2024-04-25 10:42:12,222] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-22 (state.change.logger)
kafka | [2024-04-25 10:42:12,222] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-29 (state.change.logger)
kafka | [2024-04-25 10:42:12,222] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-44 (state.change.logger)
kafka | [2024-04-25 10:42:12,222] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-14 (state.change.logger)
kafka | [2024-04-25 10:42:12,222] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-23 (state.change.logger)
kafka | [2024-04-25 10:42:12,222] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-38 (state.change.logger)
kafka | [2024-04-25 10:42:12,222] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-8 (state.change.logger)
kafka | [2024-04-25 10:42:12,222] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition policy-pdp-pap-0 (state.change.logger)
kafka | [2024-04-25 10:42:12,222] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-45 (state.change.logger)
kafka | [2024-04-25 10:42:12,222] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-15 (state.change.logger)
kafka | [2024-04-25 10:42:12,222] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-30 (state.change.logger)
kafka | [2024-04-25 10:42:12,222] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-0 (state.change.logger)
kafka | [2024-04-25 10:42:12,222] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-35 (state.change.logger)
kafka | [2024-04-25 10:42:12,222] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-5 (state.change.logger)
kafka | [2024-04-25 10:42:12,222] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-20 (state.change.logger)
kafka | [2024-04-25 10:42:12,222] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-27 (state.change.logger)
kafka | [2024-04-25 10:42:12,222] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-42 (state.change.logger)
kafka | [2024-04-25 10:42:12,222] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-12 (state.change.logger)
kafka | [2024-04-25 10:42:12,222] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-21 (state.change.logger)
kafka | [2024-04-25 10:42:12,222] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-36 (state.change.logger)
kafka | [2024-04-25 10:42:12,222] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-6 (state.change.logger)
kafka | [2024-04-25 10:42:12,222] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-43 (state.change.logger)
kafka | [2024-04-25 10:42:12,222] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-13 (state.change.logger)
kafka | [2024-04-25 10:42:12,222] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-28 (state.change.logger)
kafka | [2024-04-25 10:42:12,228] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 3 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-04-25 10:42:12,231] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-3 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-04-25 10:42:12,236] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-3 in 5 milliseconds for epoch 0, of which 1 milliseconds was spent in the scheduler.
(kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-04-25 10:42:12,237] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 18 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-04-25 10:42:12,237] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-18 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-04-25 10:42:12,238] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-18 in 1 milliseconds for epoch 0, of which 1 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-04-25 10:42:12,238] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 41 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-04-25 10:42:12,238] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-41 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-04-25 10:42:12,239] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-41 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-04-25 10:42:12,239] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 10 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-04-25 10:42:12,239] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-10 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-04-25 10:42:12,239] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-10 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-04-25 10:42:12,239] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 33 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-04-25 10:42:12,239] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-33 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-04-25 10:42:12,239] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-33 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-04-25 10:42:12,239] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 48 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-04-25 10:42:12,239] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-48 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-04-25 10:42:12,239] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-48 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-04-25 10:42:12,239] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 19 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-04-25 10:42:12,239] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-19 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-04-25 10:42:12,239] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-19 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-04-25 10:42:12,239] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 34 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-04-25 10:42:12,239] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-34 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-04-25 10:42:12,239] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-34 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-04-25 10:42:12,239] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 4 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-04-25 10:42:12,239] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-4 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-04-25 10:42:12,239] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-4 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-04-25 10:42:12,239] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 11 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-04-25 10:42:12,239] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-11 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-04-25 10:42:12,239] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-11 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler.
(kafka.coordinator.group.GroupMetadataManager) kafka | [2024-04-25 10:42:12,240] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 26 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-04-25 10:42:12,240] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-26 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-04-25 10:42:12,240] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-26 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-04-25 10:42:12,240] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 49 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-04-25 10:42:12,240] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-49 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-04-25 10:42:12,240] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-49 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-04-25 10:42:12,240] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 39 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-04-25 10:42:12,240] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-39 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-04-25 10:42:12,240] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-39 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) kafka | [2024-04-25 10:42:12,240] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 9 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-04-25 10:42:12,240] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-9 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-04-25 10:42:12,240] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-9 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-04-25 10:42:12,240] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 24 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-04-25 10:42:12,240] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-24 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-04-25 10:42:12,240] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-24 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-04-25 10:42:12,240] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 31 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-04-25 10:42:12,240] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-31 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-04-25 10:42:12,240] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-31 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) kafka | [2024-04-25 10:42:12,240] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 46 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-04-25 10:42:12,240] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-46 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-04-25 10:42:12,240] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-46 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-04-25 10:42:12,240] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 1 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-04-25 10:42:12,240] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-1 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-04-25 10:42:12,241] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-1 in 1 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-04-25 10:42:12,241] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 16 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-04-25 10:42:12,241] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-16 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-04-25 10:42:12,241] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-16 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) kafka | [2024-04-25 10:42:12,241] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 2 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-04-25 10:42:12,241] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-2 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-04-25 10:42:12,241] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-2 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-04-25 10:42:12,241] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 25 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-04-25 10:42:12,241] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-25 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-04-25 10:42:12,241] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-25 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-04-25 10:42:12,241] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 40 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-04-25 10:42:12,241] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-40 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-04-25 10:42:12,241] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-40 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) kafka | [2024-04-25 10:42:12,241] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 47 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-04-25 10:42:12,241] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-47 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-04-25 10:42:12,241] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-47 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-04-25 10:42:12,241] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 17 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-04-25 10:42:12,241] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-17 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-04-25 10:42:12,241] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-17 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-04-25 10:42:12,241] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 32 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-04-25 10:42:12,241] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-32 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-04-25 10:42:12,241] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-32 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) kafka | [2024-04-25 10:42:12,241] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 37 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-04-25 10:42:12,241] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-37 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-04-25 10:42:12,242] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 7 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-04-25 10:42:12,242] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-37 in 1 milliseconds for epoch 0, of which 1 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-04-25 10:42:12,242] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-7 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-04-25 10:42:12,242] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-7 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-04-25 10:42:12,242] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 22 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-04-25 10:42:12,242] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-22 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-04-25 10:42:12,242] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-22 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) kafka | [2024-04-25 10:42:12,242] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 29 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-04-25 10:42:12,242] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-29 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-04-25 10:42:12,242] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-29 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-04-25 10:42:12,242] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 44 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-04-25 10:42:12,242] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-44 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-04-25 10:42:12,242] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-44 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-04-25 10:42:12,242] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 14 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-04-25 10:42:12,242] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-14 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-04-25 10:42:12,242] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-14 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) kafka | [2024-04-25 10:42:12,242] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 23 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-04-25 10:42:12,242] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-23 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-04-25 10:42:12,242] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-23 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-04-25 10:42:12,242] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 38 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-04-25 10:42:12,242] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-38 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-04-25 10:42:12,243] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-38 in 1 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-04-25 10:42:12,243] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 8 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-04-25 10:42:12,243] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-8 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-04-25 10:42:12,243] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-8 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) kafka | [2024-04-25 10:42:12,243] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 45 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-04-25 10:42:12,243] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-45 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-04-25 10:42:12,243] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-45 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-04-25 10:42:12,243] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 15 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-04-25 10:42:12,243] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-15 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-04-25 10:42:12,243] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-15 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-04-25 10:42:12,243] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 30 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-04-25 10:42:12,243] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-30 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-04-25 10:42:12,243] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-30 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) kafka | [2024-04-25 10:42:12,243] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 0 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-04-25 10:42:12,243] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-0 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-04-25 10:42:12,243] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-0 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-04-25 10:42:12,243] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 35 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-04-25 10:42:12,243] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-35 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-04-25 10:42:12,243] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-35 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-04-25 10:42:12,243] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 5 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-04-25 10:42:12,243] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-5 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-04-25 10:42:12,243] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-5 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) kafka | [2024-04-25 10:42:12,243] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 20 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-04-25 10:42:12,243] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-20 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-04-25 10:42:12,244] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-20 in 1 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-04-25 10:42:12,244] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 27 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-04-25 10:42:12,244] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-27 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-04-25 10:42:12,244] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-27 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-04-25 10:42:12,244] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 42 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-04-25 10:42:12,244] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-42 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-04-25 10:42:12,244] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-42 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) kafka | [2024-04-25 10:42:12,244] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 12 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-04-25 10:42:12,244] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-12 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-04-25 10:42:12,244] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-12 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-04-25 10:42:12,244] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 21 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-04-25 10:42:12,244] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-21 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-04-25 10:42:12,244] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-21 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-04-25 10:42:12,244] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 36 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-04-25 10:42:12,244] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-36 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-04-25 10:42:12,244] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-36 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) kafka | [2024-04-25 10:42:12,244] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 6 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-04-25 10:42:12,244] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-6 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-04-25 10:42:12,244] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-6 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-04-25 10:42:12,244] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 43 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-04-25 10:42:12,244] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-43 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-04-25 10:42:12,245] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-43 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-04-25 10:42:12,245] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 13 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-04-25 10:42:12,245] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-13 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-04-25 10:42:12,245] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-13 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) kafka | [2024-04-25 10:42:12,245] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 28 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-04-25 10:42:12,245] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-28 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-04-25 10:42:12,245] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-28 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-04-25 10:42:12,247] INFO [Broker id=1] Finished LeaderAndIsr request in 618ms correlationId 1 from controller 1 for 51 partitions (state.change.logger) kafka | [2024-04-25 10:42:12,251] TRACE [Controller id=1 epoch=1] Received response LeaderAndIsrResponseData(errorCode=0, partitionErrors=[], topics=[LeaderAndIsrTopicError(topicId=hVyfsWO8T6yln1x6wUyXKg, partitionErrors=[LeaderAndIsrPartitionError(topicName='', partitionIndex=13, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=46, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=9, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=42, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=21, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=17, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=30, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=26, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=5, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=38, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=1, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=34, errorCode=0), 
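The GroupMetadataManager entries above repeat the same three-step pattern (coordinator elected, load scheduled, load finished) once per `__consumer_offsets` partition. A small, hypothetical Python sketch (names and regex are assumptions, not part of the CSIT tooling) can condense such output into a per-partition load-time map for quicker review:

```python
import re

# Hypothetical helper: condense the repeated "Finished loading" entries
# (as in the GroupMetadataManager log lines above) into {partition: ms}.
FINISHED = re.compile(
    r"Finished loading offsets and group metadata from __consumer_offsets-(\d+) "
    r"in (\d+) milliseconds"
)

def load_times(log_text: str) -> dict:
    """Return {partition_number: load_milliseconds} for each entry found."""
    return {int(p): int(ms) for p, ms in FINISHED.findall(log_text)}

# Example against one entry copied from the log format above:
sample = (
    "INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group "
    "metadata from __consumer_offsets-3 in 5 milliseconds for epoch 0, of which "
    "1 milliseconds was spent in the scheduler."
)
print(load_times(sample))  # {3: 5}
```

Run against the full console output, this yields one entry per offsets partition, making outliers (slow partition loads) easy to spot.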
LeaderAndIsrPartitionError(topicName='', partitionIndex=16, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=45, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=12, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=41, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=24, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=20, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=49, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=0, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=29, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=25, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=8, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=37, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=4, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=33, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=15, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=48, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=11, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=44, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=23, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=19, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=32, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=28, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=7, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=40, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=3, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=36, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=47, errorCode=0), 
LeaderAndIsrPartitionError(topicName='', partitionIndex=14, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=43, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=10, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=22, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=18, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=31, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=27, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=39, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=6, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=35, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=2, errorCode=0)]), LeaderAndIsrTopicError(topicId=I_pe41tISTqFeXFGby1riA, partitionErrors=[LeaderAndIsrPartitionError(topicName='', partitionIndex=0, errorCode=0)])]) for request LEADER_AND_ISR with correlation id 1 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger) kafka | [2024-04-25 10:42:12,259] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='policy-pdp-pap', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition policy-pdp-pap-0 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-04-25 10:42:12,259] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=13, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-13 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-04-25 10:42:12,259] TRACE [Broker id=1] Cached leader info 
UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=46, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-46 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-04-25 10:42:12,259] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=9, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-9 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-04-25 10:42:12,259] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=42, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-42 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-04-25 10:42:12,259] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=21, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-21 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-04-25 10:42:12,259] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=17, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-17 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-04-25 10:42:12,259] TRACE [Broker id=1] Cached leader info 
UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=30, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-30 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-04-25 10:42:12,259] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=26, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-26 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-04-25 10:42:12,259] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=5, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-5 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-04-25 10:42:12,259] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=38, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-38 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-04-25 10:42:12,259] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=1, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-1 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-04-25 10:42:12,259] TRACE [Broker id=1] Cached leader info 
UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=34, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-34 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-04-25 10:42:12,259] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=16, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-16 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-04-25 10:42:12,259] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=45, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-45 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-04-25 10:42:12,260] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=12, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-12 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-04-25 10:42:12,260] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=41, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-41 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-04-25 10:42:12,260] TRACE [Broker id=1] Cached leader info 
UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=24, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-24 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-04-25 10:42:12,260] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=20, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-20 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-04-25 10:42:12,260] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=49, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-49 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-04-25 10:42:12,260] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-0 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-04-25 10:42:12,260] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=29, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-29 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-04-25 10:42:12,260] TRACE [Broker id=1] Cached leader info 
UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=25, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-25 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-04-25 10:42:12,260] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=8, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-8 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-04-25 10:42:12,260] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=37, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-37 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-04-25 10:42:12,260] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=4, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-4 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-04-25 10:42:12,260] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=33, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-33 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-04-25 10:42:12,260] TRACE [Broker id=1] Cached leader info 
UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=15, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-15 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-04-25 10:42:12,260] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=48, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-48 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-04-25 10:42:12,260] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=11, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-11 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-04-25 10:42:12,260] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=44, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-44 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-04-25 10:42:12,260] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=23, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-23 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-04-25 10:42:12,260] TRACE [Broker id=1] Cached leader info 
UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=19, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-19 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-04-25 10:42:12,260] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=32, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-32 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-04-25 10:42:12,260] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=28, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-28 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-04-25 10:42:12,260] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=7, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-7 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-04-25 10:42:12,260] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=40, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-40 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-04-25 10:42:12,260] TRACE [Broker id=1] Cached leader info 
UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=3, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-3 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-04-25 10:42:12,260] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=36, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-36 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-04-25 10:42:12,260] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=47, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-47 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-04-25 10:42:12,260] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=14, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-14 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-04-25 10:42:12,260] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=43, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-43 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-04-25 10:42:12,260] TRACE [Broker id=1] Cached leader info 
UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=10, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-10 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-04-25 10:42:12,260] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=22, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-22 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-04-25 10:42:12,260] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=18, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-18 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-04-25 10:42:12,260] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=31, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-31 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-04-25 10:42:12,260] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=27, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-27 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-04-25 10:42:12,260] TRACE [Broker id=1] Cached leader info 
UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=39, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-39 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-04-25 10:42:12,260] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=6, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-6 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-04-25 10:42:12,260] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=35, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-35 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-04-25 10:42:12,260] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=2, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-2 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-04-25 10:42:12,261] INFO [Broker id=1] Add 51 partitions and deleted 0 partitions from metadata cache in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-04-25 10:42:12,261] TRACE [Controller id=1 epoch=1] Received response UpdateMetadataResponseData(errorCode=0) for request UPDATE_METADATA with correlation id 2 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger) kafka | 
[2024-04-25 10:42:12,305] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group ae8023b6-4521-455f-bfa2-c4d8e9909c4a in Empty state. Created a new member id consumer-ae8023b6-4521-455f-bfa2-c4d8e9909c4a-3-f765562a-a606-4409-8c69-863d34978d9c and request the member to rejoin with this id. (kafka.coordinator.group.GroupCoordinator) kafka | [2024-04-25 10:42:12,330] INFO [GroupCoordinator 1]: Preparing to rebalance group ae8023b6-4521-455f-bfa2-c4d8e9909c4a in state PreparingRebalance with old generation 0 (__consumer_offsets-47) (reason: Adding new member consumer-ae8023b6-4521-455f-bfa2-c4d8e9909c4a-3-f765562a-a606-4409-8c69-863d34978d9c with group instance id None; client reason: rebalance failed due to MemberIdRequiredException) (kafka.coordinator.group.GroupCoordinator) kafka | [2024-04-25 10:42:12,340] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group policy-pap in Empty state. Created a new member id consumer-policy-pap-4-cdfc5ded-faf1-49e7-8ba5-b4647d456a0b and request the member to rejoin with this id. (kafka.coordinator.group.GroupCoordinator) kafka | [2024-04-25 10:42:12,343] INFO [GroupCoordinator 1]: Preparing to rebalance group policy-pap in state PreparingRebalance with old generation 0 (__consumer_offsets-24) (reason: Adding new member consumer-policy-pap-4-cdfc5ded-faf1-49e7-8ba5-b4647d456a0b with group instance id None; client reason: rebalance failed due to MemberIdRequiredException) (kafka.coordinator.group.GroupCoordinator) kafka | [2024-04-25 10:42:13,027] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group 76090dad-2cb8-4045-86c4-b86ef46522aa in Empty state. Created a new member id consumer-76090dad-2cb8-4045-86c4-b86ef46522aa-2-ea339298-3e8b-490e-80bd-394833829db7 and request the member to rejoin with this id. 
(kafka.coordinator.group.GroupCoordinator) kafka | [2024-04-25 10:42:13,030] INFO [GroupCoordinator 1]: Preparing to rebalance group 76090dad-2cb8-4045-86c4-b86ef46522aa in state PreparingRebalance with old generation 0 (__consumer_offsets-42) (reason: Adding new member consumer-76090dad-2cb8-4045-86c4-b86ef46522aa-2-ea339298-3e8b-490e-80bd-394833829db7 with group instance id None; client reason: rebalance failed due to MemberIdRequiredException) (kafka.coordinator.group.GroupCoordinator) kafka | [2024-04-25 10:42:15,345] INFO [GroupCoordinator 1]: Stabilized group ae8023b6-4521-455f-bfa2-c4d8e9909c4a generation 1 (__consumer_offsets-47) with 1 members (kafka.coordinator.group.GroupCoordinator) kafka | [2024-04-25 10:42:15,350] INFO [GroupCoordinator 1]: Stabilized group policy-pap generation 1 (__consumer_offsets-24) with 1 members (kafka.coordinator.group.GroupCoordinator) kafka | [2024-04-25 10:42:15,373] INFO [GroupCoordinator 1]: Assignment received from leader consumer-ae8023b6-4521-455f-bfa2-c4d8e9909c4a-3-f765562a-a606-4409-8c69-863d34978d9c for group ae8023b6-4521-455f-bfa2-c4d8e9909c4a for generation 1. The group has 1 members, 0 of which are static. (kafka.coordinator.group.GroupCoordinator) kafka | [2024-04-25 10:42:15,373] INFO [GroupCoordinator 1]: Assignment received from leader consumer-policy-pap-4-cdfc5ded-faf1-49e7-8ba5-b4647d456a0b for group policy-pap for generation 1. The group has 1 members, 0 of which are static. (kafka.coordinator.group.GroupCoordinator) kafka | [2024-04-25 10:42:16,036] INFO [GroupCoordinator 1]: Stabilized group 76090dad-2cb8-4045-86c4-b86ef46522aa generation 1 (__consumer_offsets-42) with 1 members (kafka.coordinator.group.GroupCoordinator) kafka | [2024-04-25 10:42:16,057] INFO [GroupCoordinator 1]: Assignment received from leader consumer-76090dad-2cb8-4045-86c4-b86ef46522aa-2-ea339298-3e8b-490e-80bd-394833829db7 for group 76090dad-2cb8-4045-86c4-b86ef46522aa for generation 1. 
The group has 1 members, 0 of which are static. (kafka.coordinator.group.GroupCoordinator) ++ echo 'Tearing down containers...' Tearing down containers... ++ docker-compose down -v --remove-orphans Stopping policy-apex-pdp ... Stopping policy-pap ... Stopping policy-api ... Stopping grafana ... Stopping kafka ... Stopping simulator ... Stopping prometheus ... Stopping zookeeper ... Stopping mariadb ... Stopping grafana ... done Stopping prometheus ... done Stopping policy-apex-pdp ... done Stopping simulator ... done Stopping policy-pap ... done Stopping mariadb ... done Stopping kafka ... done Stopping zookeeper ... done Stopping policy-api ... done Removing policy-apex-pdp ... Removing policy-pap ... Removing policy-api ... Removing policy-db-migrator ... Removing grafana ... Removing kafka ... Removing simulator ... Removing prometheus ... Removing zookeeper ... Removing mariadb ... Removing policy-api ... done Removing policy-db-migrator ... done Removing simulator ... done Removing policy-apex-pdp ... done Removing policy-pap ... done Removing grafana ... done Removing mariadb ... done Removing kafka ... done Removing prometheus ... done Removing zookeeper ... 
done Removing network compose_default ++ cd /w/workspace/policy-pap-master-project-csit-pap + load_set + _setopts=hxB ++ echo braceexpand:hashall:interactive-comments:xtrace ++ tr : ' ' + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') + set +o braceexpand + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') + set +o hashall + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') + set +o interactive-comments + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') + set +o xtrace ++ echo hxB ++ sed 's/./& /g' + for i in $(echo "$_setopts" | sed 's/./& /g') + set +h + for i in $(echo "$_setopts" | sed 's/./& /g') + set +x + rsync /w/workspace/policy-pap-master-project-csit-pap/compose/docker_compose.log /w/workspace/policy-pap-master-project-csit-pap/csit/archives/pap + [[ -n /tmp/tmp.WRjITlmLsr ]] + rsync -av /tmp/tmp.WRjITlmLsr/ /w/workspace/policy-pap-master-project-csit-pap/csit/archives/pap sending incremental file list ./ log.html output.xml report.html testplan.txt sent 918,562 bytes received 95 bytes 1,837,314.00 bytes/sec total size is 918,016 speedup is 1.00 + rm -rf /w/workspace/policy-pap-master-project-csit-pap/models + exit 0 $ ssh-agent -k unset SSH_AUTH_SOCK; unset SSH_AGENT_PID; echo Agent pid 2020 killed; [ssh-agent] Stopped. Robot results publisher started... INFO: Checking test criticality is deprecated and will be dropped in a future release! -Parsing output xml: Done! WARNING! Could not find file: **/log.html WARNING! Could not find file: **/report.html -Copying log files to build dir: Done! -Assigning results to build: Done! -Checking thresholds: Done! Done publishing Robot results. [PostBuildScript] - [INFO] Executing post build scripts. 
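The rsync summary above ("sent 918,562 bytes received 95 bytes ... speedup is 1.00") includes a derived figure worth unpacking: rsync's documented "speedup" is the total size of the files divided by the bytes actually transferred (sent plus received). A minimal sketch, using the numbers from this transfer:

```python
# Sketch of rsync's "speedup" figure: total file size divided by the
# bytes actually moved over the wire (sent + received). The inputs below
# are copied from the transfer summary in the log above.
def rsync_speedup(total_size, sent, received):
    return total_size / (sent + received)

speedup = rsync_speedup(total_size=918_016, sent=918_562, received=95)
print(f"speedup is {speedup:.2f}")  # → speedup is 1.00
```

A speedup near 1.00, as here, means the whole payload was sent with no delta-transfer savings, which is expected for a first-time copy of fresh robot result files.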
[policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins10692993561459029585.sh ---> sysstat.sh [policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins14237001119346304781.sh ---> package-listing.sh ++ facter osfamily ++ tr '[:upper:]' '[:lower:]' + OS_FAMILY=debian + workspace=/w/workspace/policy-pap-master-project-csit-pap + START_PACKAGES=/tmp/packages_start.txt + END_PACKAGES=/tmp/packages_end.txt + DIFF_PACKAGES=/tmp/packages_diff.txt + PACKAGES=/tmp/packages_start.txt + '[' /w/workspace/policy-pap-master-project-csit-pap ']' + PACKAGES=/tmp/packages_end.txt + case "${OS_FAMILY}" in + dpkg -l + grep '^ii' + '[' -f /tmp/packages_start.txt ']' + '[' -f /tmp/packages_end.txt ']' + diff /tmp/packages_start.txt /tmp/packages_end.txt + '[' /w/workspace/policy-pap-master-project-csit-pap ']' + mkdir -p /w/workspace/policy-pap-master-project-csit-pap/archives/ + cp -f /tmp/packages_diff.txt /tmp/packages_end.txt /tmp/packages_start.txt /w/workspace/policy-pap-master-project-csit-pap/archives/ [policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins16154758892979204529.sh ---> capture-instance-metadata.sh Setup pyenv: system 3.8.13 3.9.13 * 3.10.6 (set by /w/workspace/policy-pap-master-project-csit-pap/.python-version) lf-activate-venv(): INFO: Reuse venv:/tmp/venv-kcYA from file:/tmp/.os_lf_venv lf-activate-venv(): INFO: Installing: lftools lf-activate-venv(): INFO: Adding /tmp/venv-kcYA/bin to PATH INFO: Running in OpenStack, capturing instance metadata [policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins1413387671634480035.sh provisioning config files... copy managed file [jenkins-log-archives-settings] to file:/w/workspace/policy-pap-master-project-csit-pap@tmp/config13791919434602899098tmp Regular expression run condition: Expression=[^.*logs-s3.*], Label=[] Run condition [Regular expression match] preventing perform for step [Provide Configuration files] [EnvInject] - Injecting environment variables from a build step. 
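The package-listing.sh trace above filters `dpkg -l` to installed packages (lines starting with `ii`) before and after the job, then diffs the two snapshots into packages_diff.txt. The set logic can be sketched as follows; the sample snapshot lines are invented for illustration, while the real script compares /tmp/packages_start.txt and /tmp/packages_end.txt with diff(1):

```python
# Sketch of the package-diff step: compare two dpkg -l snapshots
# (installed "ii" lines only) and report what was added or removed.
def package_diff(start_lines, end_lines):
    """Return (added, removed) package lines between two snapshots."""
    start, end = set(start_lines), set(end_lines)
    return sorted(end - start), sorted(start - end)

# Hypothetical snapshot contents, for illustration only:
start = ["ii  curl  7.58.0", "ii  git  2.17.1"]
end = ["ii  curl  7.58.0", "ii  git  2.17.1", "ii  sysstat  11.6.1"]
added, removed = package_diff(start, end)
print(added, removed)  # → ['ii  sysstat  11.6.1'] []
```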
[EnvInject] - Injecting as environment variables the properties content SERVER_ID=logs [EnvInject] - Variables injected successfully. [policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins5797734003700135227.sh ---> create-netrc.sh [policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins10322314169973983744.sh ---> python-tools-install.sh Setup pyenv: system 3.8.13 3.9.13 * 3.10.6 (set by /w/workspace/policy-pap-master-project-csit-pap/.python-version) lf-activate-venv(): INFO: Reuse venv:/tmp/venv-kcYA from file:/tmp/.os_lf_venv lf-activate-venv(): INFO: Installing: lftools lf-activate-venv(): INFO: Adding /tmp/venv-kcYA/bin to PATH [policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins1265891026422047925.sh ---> sudo-logs.sh Archiving 'sudo' log.. [policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins4792687699722499014.sh ---> job-cost.sh Setup pyenv: system 3.8.13 3.9.13 * 3.10.6 (set by /w/workspace/policy-pap-master-project-csit-pap/.python-version) lf-activate-venv(): INFO: Reuse venv:/tmp/venv-kcYA from file:/tmp/.os_lf_venv lf-activate-venv(): INFO: Installing: zipp==1.1.0 python-openstackclient urllib3~=1.26.15 lf-activate-venv(): INFO: Adding /tmp/venv-kcYA/bin to PATH INFO: No Stack... INFO: Retrieving Pricing Info for: v3-standard-8 INFO: Archiving Costs [policy-pap-master-project-csit-pap] $ /bin/bash -l /tmp/jenkins6716215521264551628.sh ---> logs-deploy.sh Setup pyenv: system 3.8.13 3.9.13 * 3.10.6 (set by /w/workspace/policy-pap-master-project-csit-pap/.python-version) lf-activate-venv(): INFO: Reuse venv:/tmp/venv-kcYA from file:/tmp/.os_lf_venv lf-activate-venv(): INFO: Installing: lftools lf-activate-venv(): INFO: Adding /tmp/venv-kcYA/bin to PATH INFO: Nexus URL https://nexus.onap.org path production/vex-yul-ecomp-jenkins-1/policy-pap-master-project-csit-pap/1660 INFO: archiving workspace using pattern(s): -p **/target/surefire-reports/*-output.txt Archives upload complete. 
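The Nexus path reported by logs-deploy.sh above ("production/vex-yul-ecomp-jenkins-1/policy-pap-master-project-csit-pap/1660") is conventionally composed from the log silo, the Jenkins host, the job name, and the build number; treating those as the path components is an assumption based on Linux Foundation releng conventions, sketched here:

```python
# Sketch of how the Nexus log upload path is assumed to be composed
# (silo / jenkins hostname / job name / build number); component names
# are taken from the path printed in the log above.
def nexus_log_path(silo, jenkins_hostname, job_name, build_number):
    return "/".join([silo, jenkins_hostname, job_name, str(build_number)])

path = nexus_log_path("production", "vex-yul-ecomp-jenkins-1",
                      "policy-pap-master-project-csit-pap", 1660)
print(path)
# → production/vex-yul-ecomp-jenkins-1/policy-pap-master-project-csit-pap/1660
```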
INFO: archiving logs to Nexus

---> uname -a:
Linux prd-ubuntu1804-docker-8c-8g-25963 4.15.0-192-generic #203-Ubuntu SMP Wed Aug 10 17:40:03 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux

---> lscpu:
Architecture:        x86_64
CPU op-mode(s):      32-bit, 64-bit
Byte Order:          Little Endian
CPU(s):              8
On-line CPU(s) list: 0-7
Thread(s) per core:  1
Core(s) per socket:  1
Socket(s):           8
NUMA node(s):        1
Vendor ID:           AuthenticAMD
CPU family:          23
Model:               49
Model name:          AMD EPYC-Rome Processor
Stepping:            0
CPU MHz:             2799.998
BogoMIPS:            5599.99
Virtualization:      AMD-V
Hypervisor vendor:   KVM
Virtualization type: full
L1d cache:           32K
L1i cache:           32K
L2 cache:            512K
L3 cache:            16384K
NUMA node0 CPU(s):   0-7
Flags:               fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm rep_good nopl xtopology cpuid extd_apicid tsc_known_freq pni pclmulqdq ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm cmp_legacy svm cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw topoext perfctr_core ssbd ibrs ibpb stibp vmmcall fsgsbase tsc_adjust bmi1 avx2 smep bmi2 rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves clzero xsaveerptr arat npt nrip_save umip rdpid arch_capabilities

---> nproc:
8

---> df -h:
Filesystem      Size  Used Avail Use% Mounted on
udev             16G     0   16G   0% /dev
tmpfs           3.2G  708K  3.2G   1% /run
/dev/vda1       155G   14G  142G   9% /
tmpfs            16G     0   16G   0% /dev/shm
tmpfs           5.0M     0  5.0M   0% /run/lock
tmpfs            16G     0   16G   0% /sys/fs/cgroup
/dev/vda15      105M  4.4M  100M   5% /boot/efi
tmpfs           3.2G     0  3.2G   0% /run/user/1001

---> free -m:
              total        used        free      shared  buff/cache   available
Mem:          32167         827       24972           0        6366       30883
Swap:          1023           0        1023

---> ip addr:
1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: ens3: mtu 1458 qdisc mq state UP group default qlen 1000
    link/ether fa:16:3e:fd:9b:c7 brd ff:ff:ff:ff:ff:ff
    inet 10.30.107.12/23 brd 10.30.107.255 scope global dynamic ens3
       valid_lft 85955sec preferred_lft 85955sec
    inet6 fe80::f816:3eff:fefd:9bc7/64 scope link
       valid_lft forever preferred_lft forever
3: docker0: mtu 1500 qdisc noqueue state DOWN group default
    link/ether 02:42:42:b6:89:75 brd ff:ff:ff:ff:ff:ff
    inet 10.250.0.254/24 brd 10.250.0.255 scope global docker0
       valid_lft forever preferred_lft forever

---> sar -b -r -n DEV:
Linux 4.15.0-192-generic (prd-ubuntu1804-docker-8c-8g-25963)  04/25/24  _x86_64_  (8 CPU)

10:38:03     LINUX RESTART (8 CPU)

10:39:01        tps    rtps    wtps   bread/s    bwrtn/s
10:40:01     137.13   27.90  109.23   2102.32   35599.13
10:41:01     150.31    9.47  140.84   1674.65   55408.77
10:42:01     466.36   11.68  454.67    775.30  142878.47
10:43:01      31.51    0.65   30.86     26.92   24584.02
10:44:01      18.30    0.00   18.30      0.00   23860.69
10:45:01      65.09    1.37   63.72    108.52   12925.25
Average:     144.78    8.51  136.27    781.26   49208.70

10:39:01 kbmemfree   kbavail kbmemused  %memused kbbuffers  kbcached  kbcommit   %commit  kbactive   kbinact   kbdirty
10:40:01  29955468  31718708   2983752      9.06     80596   1984384   1443788      4.25    862488   1813100    128532
10:41:01  26575796  31675716   6363424     19.32    135732   5125172   1598380      4.70   1022592   4849424   2370428
10:42:01  23961052  30163092   8978168     27.26    157336   6147160   8503436     25.02   2679840   5693384       400
10:43:01  23456340  29664460   9482880     28.79    158812   6149368   8713384     25.64   3222724   5648244       216
10:44:01  23441964  29650936   9497256     28.83    158936   6149940   8713384     25.64   3235072   5648768       232
10:45:01  25570480  31621596   7368740     22.37    160456   6009248   1512840      4.45   1310780   5508916     28480
Average:  25493517  30749085   7445703     22.60    141978   5260879   5080869     14.95   2055583   4860306    421381

10:39:01           IFACE  rxpck/s  txpck/s    rxkB/s   txkB/s  rxcmp/s  txcmp/s  rxmcst/s  %ifutil
10:40:01              lo     1.13     1.13      0.13     0.13     0.00     0.00      0.00     0.00
10:40:01            ens3    67.39    44.98   1199.42     8.60     0.00     0.00      0.00     0.00
10:40:01         docker0     0.00     0.00      0.00     0.00     0.00     0.00      0.00     0.00
10:41:01              lo    11.73    11.73      1.15     1.15     0.00     0.00      0.00     0.00
10:41:01            ens3  1006.98   505.18  23461.34    37.83     0.00     0.00      0.00     0.00
10:41:01 br-cd0145f85572     0.00     0.00      0.00     0.00     0.00     0.00      0.00     0.00
10:41:01         docker0     0.00     0.00      0.00     0.00     0.00     0.00      0.00     0.00
10:42:01     veth3403fd8    24.10    22.30     10.52    16.08     0.00     0.00      0.00     0.00
10:42:01     vethe3aa713     1.12     1.28      0.07     0.08     0.00     0.00      0.00     0.00
10:42:01     veth0bf6feb    50.79    62.19     18.95    15.01     0.00     0.00      0.00     0.00
10:42:01     veth2839fd1     1.68     1.83      0.17     0.18     0.00     0.00      0.00     0.00
10:43:01     veth3403fd8    21.88    17.68      6.80    23.82     0.00     0.00      0.00     0.00
10:43:01     vethe3aa713    47.18    42.00     14.58    38.58     0.00     0.00      0.00     0.00
10:43:01     veth0bf6feb    47.47    58.71     59.64    17.28     0.00     0.00      0.00     0.00
10:43:01     veth2839fd1    18.66    15.34      2.20     2.29     0.00     0.00      0.00     0.00
10:44:01     veth3403fd8     0.32     0.35      0.58     0.03     0.00     0.00      0.00     0.00
10:44:01     vethe3aa713     8.65    11.45      2.26     1.42     0.00     0.00      0.00     0.00
10:44:01     veth0bf6feb     1.50     1.70      0.64     0.56     0.00     0.00      0.00     0.00
10:44:01     veth2839fd1    13.83     9.33      1.05     1.34     0.00     0.00      0.00     0.00
10:45:01              lo    35.09    35.09      6.23     6.23     0.00     0.00      0.00     0.00
10:45:01            ens3  1816.03   994.37  34282.71   151.99     0.00     0.00      0.00     0.00
10:45:01         docker0     0.00     0.00      0.00     0.00     0.00     0.00      0.00     0.00
Average:              lo     5.04     5.04      0.96     0.96     0.00     0.00      0.00     0.00
Average:            ens3   247.38   126.55   5552.21    15.10     0.00     0.00      0.00     0.00
Average:         docker0     0.00     0.00      0.00     0.00     0.00     0.00      0.00     0.00

---> sar -P ALL:
Linux 4.15.0-192-generic (prd-ubuntu1804-docker-8c-8g-25963)  04/25/24  _x86_64_  (8 CPU)

10:38:03     LINUX RESTART (8 CPU)

10:39:01  CPU   %user   %nice  %system  %iowait  %steal   %idle
10:40:01  all   10.68    0.00     0.77     1.73    0.04   86.78
10:40:01    0   12.36    0.00     0.72     0.33    0.03   86.56
10:40:01    1    8.56    0.00     0.81     0.25    0.05   90.32
10:40:01    2    6.26    0.00     0.62     1.02    0.05   92.05
10:40:01    3    1.22    0.00     0.40     0.33    0.00   98.05
10:40:01    4   15.36    0.00     0.52     0.50    0.07   83.55
10:40:01    5    2.67    0.00     0.60     0.13    0.02   96.58
10:40:01    6   16.28    0.00     1.29    10.35    0.03   72.04
10:40:01    7   22.77    0.00     1.20     0.97    0.03   75.03
10:41:01  all   13.55    0.00     4.85     2.34    0.06   79.20
10:41:01    0   10.56    0.00     3.93     0.12    0.05   85.34
10:41:01    1   14.51    0.00     4.45     4.48    0.07   76.50
10:41:01    2   11.86    0.00     4.26     3.18    0.05   80.65
10:41:01    3   11.80    0.00     4.57     0.90    0.05   82.68
10:41:01    4   20.62    0.00     6.87     0.88    0.07   71.57
10:41:01    5   17.21    0.00     5.48     3.12    0.07   74.12
10:41:01    6   11.18    0.00     4.29     4.94    0.07   79.53
10:41:01    7   10.65    0.00     4.87     1.17    0.08   83.22
10:42:01  all   18.90    0.00     4.09     6.20    0.07   70.75
10:42:01    0   17.36    0.00     3.85     1.51    0.05   77.23
10:42:01    1   13.58    0.00     4.59    30.76    0.07   51.01
10:42:01    2   16.65    0.00     4.16     4.67    0.07   74.45
10:42:01    3   24.24    0.00     4.51     3.36    0.07   67.82
10:42:01    4   21.03    0.00     3.62     1.04    0.07   74.24
10:42:01    5   14.27    0.00     3.51     3.22    0.07   78.93
10:42:01    6   25.03    0.00     4.38     3.52    0.07   67.00
10:42:01    7   19.00    0.00     4.15     1.66    0.07   75.12
10:43:01  all   15.99    0.00     1.52     0.72    0.07   81.70
10:43:01    0   19.40    0.00     2.30     0.03    0.07   78.19
10:43:01    1   10.75    0.00     1.04     0.03    0.08   88.10
10:43:01    2   17.32    0.00     1.57     1.84    0.05   79.22
10:43:01    3   19.63    0.00     1.52     0.07    0.07   78.72
10:43:01    4   18.24    0.00     1.50     0.03    0.07   80.16
10:43:01    5   12.72    0.00     1.47     0.02    0.05   85.74
10:43:01    6   16.58    0.00     1.45     0.03    0.10   81.83
10:43:01    7   13.31    0.00     1.29     3.73    0.07   81.61
10:44:01  all    0.88    0.00     0.14     0.87    0.06   98.06
10:44:01    0    0.38    0.00     0.10     0.00    0.03   99.48
10:44:01    1    1.00    0.00     0.08     0.00    0.03   98.88
10:44:01    2    0.63    0.00     0.15     6.77    0.05   92.40
10:44:01    3    0.95    0.00     0.12     0.00    0.07   98.86
10:44:01    4    0.85    0.00     0.10     0.00    0.05   99.00
10:44:01    5    0.95    0.00     0.13     0.00    0.07   98.85
10:44:01    6    0.95    0.00     0.20     0.00    0.12   98.73
10:44:01    7    1.28    0.00     0.18     0.17    0.07   98.30
10:45:01  all    4.88    0.00     0.77     0.63    0.05   93.67
10:45:01    0    2.87    0.00     0.60     0.07    0.05   96.41
10:45:01    1    5.40    0.00     0.62     0.15    0.05   93.79
10:45:01    2    3.36    0.00     0.68     3.16    0.08   92.72
10:45:01    3    3.59    0.00     0.78     0.08    0.03   95.51
10:45:01    4    2.17    0.00     0.82     0.33    0.03   96.65
10:45:01    5   16.89    0.00     1.20     0.13    0.07   81.71
10:45:01    6    3.74    0.00     0.77     0.07    0.05   95.37
10:45:01    7    1.10    0.00     0.65     0.99    0.03   97.23
Average:  all   10.79    0.00     2.01     2.08    0.06   85.06
Average:    0   10.48    0.00     1.91     0.34    0.05   87.21
Average:    1    8.94    0.00     1.92     5.89    0.06   83.19
Average:    2    9.32    0.00     1.89     3.44    0.06   85.30
Average:    3   10.20    0.00     1.97     0.79    0.05   86.99
Average:    4   13.02    0.00     2.22     0.46    0.06   84.24
Average:    5   10.77    0.00     2.06     1.10    0.06   86.02
Average:    6   12.28    0.00     2.05     3.14    0.07   82.45
Average:    7   11.34    0.00     2.05     1.45    0.06   85.10