Started by upstream project "policy-docker-master-merge-java" build number 349 originally caused by: Triggered by Gerrit: https://gerrit.onap.org/r/c/policy/docker/+/137725 Running as SYSTEM [EnvInject] - Loading node environment variables. Building remotely on prd-ubuntu1804-docker-8c-8g-25485 (ubuntu1804-docker-8c-8g) in workspace /w/workspace/policy-pap-master-project-csit-pap [ssh-agent] Looking for ssh-agent implementation... [ssh-agent] Exec ssh-agent (binary ssh-agent on a remote machine) $ ssh-agent SSH_AUTH_SOCK=/tmp/ssh-rJyZ08sy0nxh/agent.2085 SSH_AGENT_PID=2087 [ssh-agent] Started. Running ssh-add (command line suppressed) Identity added: /w/workspace/policy-pap-master-project-csit-pap@tmp/private_key_8817876947068470545.key (/w/workspace/policy-pap-master-project-csit-pap@tmp/private_key_8817876947068470545.key) [ssh-agent] Using credentials onap-jobbuiler (Gerrit user) The recommended git tool is: NONE using credential onap-jenkins-ssh Wiping out workspace first. Cloning the remote Git repository Cloning repository git://cloud.onap.org/mirror/policy/docker.git > git init /w/workspace/policy-pap-master-project-csit-pap # timeout=10 Fetching upstream changes from git://cloud.onap.org/mirror/policy/docker.git > git --version # timeout=10 > git --version # 'git version 2.17.1' using GIT_SSH to set credentials Gerrit user Verifying host key using manually-configured host key entries > git fetch --tags --progress -- git://cloud.onap.org/mirror/policy/docker.git +refs/heads/*:refs/remotes/origin/* # timeout=30 > git config remote.origin.url git://cloud.onap.org/mirror/policy/docker.git # timeout=10 > git config --add remote.origin.fetch +refs/heads/*:refs/remotes/origin/* # timeout=10 Avoid second fetch > git rev-parse refs/remotes/origin/master^{commit} # timeout=10 Checking out Revision 427f193118436b2aa7664f72fcb16ca1b25b8061 (refs/remotes/origin/master) > git config core.sparsecheckout # timeout=10 > git checkout -f 427f193118436b2aa7664f72fcb16ca1b25b8061 # timeout=30 Commit message: "Merge "Add Participant Simulator chart"" > git rev-list --no-walk deb0e121d5b4b9bd68334c2565aae21d8eed0d21 # timeout=10 provisioning config files... 
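
For reference, the checkout performed above can be reproduced locally with plain git. This is a minimal sketch using only the mirror URL, refspec, and commit hash visible in the log; the local workspace path is an arbitrary stand-in for the Jenkins workspace.

#!/usr/bin/env bash
# Sketch: reproduce the CI checkout of policy/docker at the pinned commit.
set -euo pipefail
WORKSPACE=./policy-docker                                   # stand-in for /w/workspace/policy-pap-master-project-csit-pap
REPO=git://cloud.onap.org/mirror/policy/docker.git
COMMIT=427f193118436b2aa7664f72fcb16ca1b25b8061             # revision checked out by the job
git init "$WORKSPACE"
cd "$WORKSPACE"
git fetch --tags --progress -- "$REPO" '+refs/heads/*:refs/remotes/origin/*'
git checkout -f "$COMMIT"                                   # detached HEAD, same as the CI job
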
copy managed file [npmrc] to file:/home/jenkins/.npmrc copy managed file [pipconf] to file:/home/jenkins/.config/pip/pip.conf [policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins14142152991202834662.sh ---> python-tools-install.sh Setup pyenv: * system (set by /opt/pyenv/version) * 3.8.13 (set by /opt/pyenv/version) * 3.9.13 (set by /opt/pyenv/version) * 3.10.6 (set by /opt/pyenv/version) lf-activate-venv(): INFO: Creating python3 venv at /tmp/venv-oy3i lf-activate-venv(): INFO: Save venv in file: /tmp/.os_lf_venv lf-activate-venv(): INFO: Installing: lftools lf-activate-venv(): INFO: Adding /tmp/venv-oy3i/bin to PATH Generating Requirements File Python 3.10.6 pip 24.0 from /tmp/venv-oy3i/lib/python3.10/site-packages/pip (python 3.10) appdirs==1.4.4 argcomplete==3.3.0 aspy.yaml==1.3.0 attrs==23.2.0 autopage==0.5.2 beautifulsoup4==4.12.3 boto3==1.34.90 botocore==1.34.90 bs4==0.0.2 cachetools==5.3.3 certifi==2024.2.2 cffi==1.16.0 cfgv==3.4.0 chardet==5.2.0 charset-normalizer==3.3.2 click==8.1.7 cliff==4.6.0 cmd2==2.4.3 cryptography==3.3.2 debtcollector==3.0.0 decorator==5.1.1 defusedxml==0.7.1 Deprecated==1.2.14 distlib==0.3.8 dnspython==2.6.1 docker==4.2.2 dogpile.cache==1.3.2 email_validator==2.1.1 filelock==3.13.4 future==1.0.0 gitdb==4.0.11 GitPython==3.1.43 google-auth==2.29.0 httplib2==0.22.0 identify==2.5.36 idna==3.7 importlib-resources==1.5.0 iso8601==2.1.0 Jinja2==3.1.3 jmespath==1.0.1 jsonpatch==1.33 jsonpointer==2.4 jsonschema==4.21.1 jsonschema-specifications==2023.12.1 keystoneauth1==5.6.0 kubernetes==29.0.0 lftools==0.37.10 lxml==5.2.1 MarkupSafe==2.1.5 msgpack==1.0.8 multi_key_dict==2.0.3 munch==4.0.0 netaddr==1.2.1 netifaces==0.11.0 niet==1.4.2 nodeenv==1.8.0 oauth2client==4.1.3 oauthlib==3.2.2 openstacksdk==3.1.0 os-client-config==2.1.0 os-service-types==1.7.0 osc-lib==3.0.1 oslo.config==9.4.0 oslo.context==5.5.0 oslo.i18n==6.3.0 oslo.log==5.5.1 oslo.serialization==5.4.0 oslo.utils==7.1.0 packaging==24.0 pbr==6.0.0 platformdirs==4.2.1 prettytable==3.10.0 pyasn1==0.6.0 pyasn1_modules==0.4.0 pycparser==2.22 pygerrit2==2.0.15 PyGithub==2.3.0 pyinotify==0.9.6 PyJWT==2.8.0 PyNaCl==1.5.0 pyparsing==2.4.7 pyperclip==1.8.2 pyrsistent==0.20.0 python-cinderclient==9.5.0 python-dateutil==2.9.0.post0 python-heatclient==3.5.0 python-jenkins==1.8.2 python-keystoneclient==5.4.0 python-magnumclient==4.4.0 python-novaclient==18.6.0 python-openstackclient==6.6.0 python-swiftclient==4.5.0 PyYAML==6.0.1 referencing==0.35.0 requests==2.31.0 requests-oauthlib==2.0.0 requestsexceptions==1.4.0 rfc3986==2.0.0 rpds-py==0.18.0 rsa==4.9 ruamel.yaml==0.18.6 ruamel.yaml.clib==0.2.8 s3transfer==0.10.1 simplejson==3.19.2 six==1.16.0 smmap==5.0.1 soupsieve==2.5 stevedore==5.2.0 tabulate==0.9.0 toml==0.10.2 tomlkit==0.12.4 tqdm==4.66.2 typing_extensions==4.11.0 tzdata==2024.1 urllib3==1.26.18 virtualenv==20.26.0 wcwidth==0.2.13 websocket-client==1.8.0 wrapt==1.16.0 xdg==6.0.0 xmltodict==0.13.0 yq==3.4.1 [EnvInject] - Injecting environment variables from a build step. [EnvInject] - Injecting as environment variables the properties content SET_JDK_VERSION=openjdk17 GIT_URL="git://cloud.onap.org/mirror" [EnvInject] - Variables injected successfully. 
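
The venv bootstrap above appears to come from the lf-activate-venv() helper used by Linux Foundation CI jobs. The sketch below only approximates the steps visible in the log (venv creation, lftools install, PATH update, requirements freeze); the venv path is illustrative, not the one the job used.

# Sketch: approximate the python-tools-install / lf-activate-venv steps shown above.
python3 -m venv /tmp/venv-example                 # CI used a generated path such as /tmp/venv-oy3i
source /tmp/venv-example/bin/activate
python3 -m pip install --upgrade pip
python3 -m pip install lftools                    # pulls in the long dependency list printed above
export PATH=/tmp/venv-example/bin:$PATH
python3 -m pip freeze                             # corresponds to the "Generating Requirements File" step
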
[policy-pap-master-project-csit-pap] $ /bin/sh /tmp/jenkins15207724289940858301.sh ---> update-java-alternatives.sh ---> Updating Java version ---> Ubuntu/Debian system detected update-alternatives: using /usr/lib/jvm/java-17-openjdk-amd64/bin/java to provide /usr/bin/java (java) in manual mode update-alternatives: using /usr/lib/jvm/java-17-openjdk-amd64/bin/javac to provide /usr/bin/javac (javac) in manual mode update-alternatives: using /usr/lib/jvm/java-17-openjdk-amd64 to provide /usr/lib/jvm/java-openjdk (java_sdk_openjdk) in manual mode openjdk version "17.0.4" 2022-07-19 OpenJDK Runtime Environment (build 17.0.4+8-Ubuntu-118.04) OpenJDK 64-Bit Server VM (build 17.0.4+8-Ubuntu-118.04, mixed mode, sharing) JAVA_HOME=/usr/lib/jvm/java-17-openjdk-amd64 [EnvInject] - Injecting environment variables from a build step. [EnvInject] - Injecting as environment variables the properties file path '/tmp/java.env' [EnvInject] - Variables injected successfully. [policy-pap-master-project-csit-pap] $ /bin/sh -xe /tmp/jenkins13317469723595768265.sh + /w/workspace/policy-pap-master-project-csit-pap/csit/run-project-csit.sh pap + set +u + save_set + RUN_CSIT_SAVE_SET=ehxB + RUN_CSIT_SHELLOPTS=braceexpand:errexit:hashall:interactive-comments:pipefail:xtrace + '[' 1 -eq 0 ']' + '[' -z /w/workspace/policy-pap-master-project-csit-pap ']' + export PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin + export SCRIPTS=/w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts + SCRIPTS=/w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts + export ROBOT_VARIABLES= + ROBOT_VARIABLES= + export PROJECT=pap + PROJECT=pap + cd /w/workspace/policy-pap-master-project-csit-pap + rm -rf /w/workspace/policy-pap-master-project-csit-pap/csit/archives/pap + mkdir -p /w/workspace/policy-pap-master-project-csit-pap/csit/archives/pap + source_safely /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/prepare-robot-env.sh + '[' -z /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/prepare-robot-env.sh ']' + relax_set + set +e + set +o pipefail + . /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/prepare-robot-env.sh ++ '[' -z /w/workspace/policy-pap-master-project-csit-pap ']' +++ mktemp -d ++ ROBOT_VENV=/tmp/tmp.8RCxNEnqm6 ++ echo ROBOT_VENV=/tmp/tmp.8RCxNEnqm6 +++ python3 --version ++ echo 'Python version is: Python 3.6.9' Python version is: Python 3.6.9 ++ python3 -m venv --clear /tmp/tmp.8RCxNEnqm6 ++ source /tmp/tmp.8RCxNEnqm6/bin/activate +++ deactivate nondestructive +++ '[' -n '' ']' +++ '[' -n '' ']' +++ '[' -n /bin/bash -o -n '' ']' +++ hash -r +++ '[' -n '' ']' +++ unset VIRTUAL_ENV +++ '[' '!' 
nondestructive = nondestructive ']' +++ VIRTUAL_ENV=/tmp/tmp.8RCxNEnqm6 +++ export VIRTUAL_ENV +++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin +++ PATH=/tmp/tmp.8RCxNEnqm6/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin +++ export PATH +++ '[' -n '' ']' +++ '[' -z '' ']' +++ _OLD_VIRTUAL_PS1= +++ '[' 'x(tmp.8RCxNEnqm6) ' '!=' x ']' +++ PS1='(tmp.8RCxNEnqm6) ' +++ export PS1 +++ '[' -n /bin/bash -o -n '' ']' +++ hash -r ++ set -exu ++ python3 -m pip install -qq --upgrade 'pip<=23.0' 'setuptools<=66.1.1' ++ echo 'Installing Python Requirements' Installing Python Requirements ++ python3 -m pip install -qq -r /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/pylibs.txt ++ python3 -m pip -qq freeze bcrypt==4.0.1 beautifulsoup4==4.12.3 bitarray==2.9.2 certifi==2024.2.2 cffi==1.15.1 charset-normalizer==2.0.12 cryptography==40.0.2 decorator==5.1.1 elasticsearch==7.17.9 elasticsearch-dsl==7.4.1 enum34==1.1.10 idna==3.7 importlib-resources==5.4.0 ipaddr==2.2.0 isodate==0.6.1 jmespath==0.10.0 jsonpatch==1.32 jsonpath-rw==1.4.0 jsonpointer==2.3 lxml==5.2.1 netaddr==0.8.0 netifaces==0.11.0 odltools==0.1.28 paramiko==3.4.0 pkg_resources==0.0.0 ply==3.11 pyang==2.6.0 pyangbind==0.8.1 pycparser==2.21 pyhocon==0.3.60 PyNaCl==1.5.0 pyparsing==3.1.2 python-dateutil==2.9.0.post0 regex==2023.8.8 requests==2.27.1 robotframework==6.1.1 robotframework-httplibrary==0.4.2 robotframework-pythonlibcore==3.0.0 robotframework-requests==0.9.4 robotframework-selenium2library==3.0.0 robotframework-seleniumlibrary==5.1.3 robotframework-sshlibrary==3.8.0 scapy==2.5.0 scp==0.14.5 selenium==3.141.0 six==1.16.0 soupsieve==2.3.2.post1 urllib3==1.26.18 waitress==2.0.0 WebOb==1.8.7 WebTest==3.0.0 zipp==3.6.0 ++ mkdir -p /tmp/tmp.8RCxNEnqm6/src/onap ++ rm -rf /tmp/tmp.8RCxNEnqm6/src/onap/testsuite ++ python3 -m pip install -qq --upgrade --extra-index-url=https://nexus3.onap.org/repository/PyPi.staging/simple 'robotframework-onap==0.6.0.*' --pre ++ echo 'Installing python confluent-kafka library' Installing python confluent-kafka library ++ python3 -m pip install -qq confluent-kafka ++ echo 'Uninstall docker-py and reinstall docker.' Uninstall docker-py and reinstall docker. 
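
Condensing the prepare-robot-env.sh trace above into its essential commands gives the following sketch. It is a summary of the steps shown in the log, not the script itself; the real script lives under csit/resources/scripts/ in the policy/docker repo.

# Sketch: Robot test environment preparation as traced above.
ROBOT_VENV=$(mktemp -d)
python3 -m venv --clear "$ROBOT_VENV"
source "$ROBOT_VENV/bin/activate"
python3 -m pip install -qq --upgrade 'pip<=23.0' 'setuptools<=66.1.1'
python3 -m pip install -qq -r csit/resources/scripts/pylibs.txt     # Robot Framework plus test libraries
python3 -m pip install -qq --upgrade \
  --extra-index-url=https://nexus3.onap.org/repository/PyPi.staging/simple \
  'robotframework-onap==0.6.0.*' --pre
python3 -m pip install -qq confluent-kafka
python3 -m pip uninstall -y -qq docker && python3 -m pip install -U -qq docker   # replace docker-py with docker
uname | grep -q Linux && sudo apt-get -y -qq install libxml2-utils
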
++ python3 -m pip uninstall -y -qq docker ++ python3 -m pip install -U -qq docker ++ python3 -m pip -qq freeze bcrypt==4.0.1 beautifulsoup4==4.12.3 bitarray==2.9.2 certifi==2024.2.2 cffi==1.15.1 charset-normalizer==2.0.12 confluent-kafka==2.3.0 cryptography==40.0.2 decorator==5.1.1 deepdiff==5.7.0 dnspython==2.2.1 docker==5.0.3 elasticsearch==7.17.9 elasticsearch-dsl==7.4.1 enum34==1.1.10 future==1.0.0 idna==3.7 importlib-resources==5.4.0 ipaddr==2.2.0 isodate==0.6.1 Jinja2==3.0.3 jmespath==0.10.0 jsonpatch==1.32 jsonpath-rw==1.4.0 jsonpointer==2.3 kafka-python==2.0.2 lxml==5.2.1 MarkupSafe==2.0.1 more-itertools==5.0.0 netaddr==0.8.0 netifaces==0.11.0 odltools==0.1.28 ordered-set==4.0.2 paramiko==3.4.0 pbr==6.0.0 pkg_resources==0.0.0 ply==3.11 protobuf==3.19.6 pyang==2.6.0 pyangbind==0.8.1 pycparser==2.21 pyhocon==0.3.60 PyNaCl==1.5.0 pyparsing==3.1.2 python-dateutil==2.9.0.post0 PyYAML==6.0.1 regex==2023.8.8 requests==2.27.1 robotframework==6.1.1 robotframework-httplibrary==0.4.2 robotframework-onap==0.6.0.dev105 robotframework-pythonlibcore==3.0.0 robotframework-requests==0.9.4 robotframework-selenium2library==3.0.0 robotframework-seleniumlibrary==5.1.3 robotframework-sshlibrary==3.8.0 robotlibcore-temp==1.0.2 scapy==2.5.0 scp==0.14.5 selenium==3.141.0 six==1.16.0 soupsieve==2.3.2.post1 urllib3==1.26.18 waitress==2.0.0 WebOb==1.8.7 websocket-client==1.3.1 WebTest==3.0.0 zipp==3.6.0 ++ uname ++ grep -q Linux ++ sudo apt-get -y -qq install libxml2-utils + load_set + _setopts=ehuxB ++ tr : ' ' ++ echo braceexpand:hashall:interactive-comments:nounset:xtrace + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') + set +o braceexpand + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') + set +o hashall + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') + set +o interactive-comments + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') + set +o nounset + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') + set +o xtrace ++ echo ehuxB ++ sed 's/./& /g' + for i in $(echo "$_setopts" | sed 's/./& /g') + set +e + for i in $(echo "$_setopts" | sed 's/./& /g') + set +h + for i in $(echo "$_setopts" | sed 's/./& /g') + set +u + for i in $(echo "$_setopts" | sed 's/./& /g') + set +x + source_safely /tmp/tmp.8RCxNEnqm6/bin/activate + '[' -z /tmp/tmp.8RCxNEnqm6/bin/activate ']' + relax_set + set +e + set +o pipefail + . /tmp/tmp.8RCxNEnqm6/bin/activate ++ deactivate nondestructive ++ '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin ']' ++ PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin ++ export PATH ++ unset _OLD_VIRTUAL_PATH ++ '[' -n '' ']' ++ '[' -n /bin/bash -o -n '' ']' ++ hash -r ++ '[' -n '' ']' ++ unset VIRTUAL_ENV ++ '[' '!' 
nondestructive = nondestructive ']' ++ VIRTUAL_ENV=/tmp/tmp.8RCxNEnqm6 ++ export VIRTUAL_ENV ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin ++ PATH=/tmp/tmp.8RCxNEnqm6/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin ++ export PATH ++ '[' -n '' ']' ++ '[' -z '' ']' ++ _OLD_VIRTUAL_PS1='(tmp.8RCxNEnqm6) ' ++ '[' 'x(tmp.8RCxNEnqm6) ' '!=' x ']' ++ PS1='(tmp.8RCxNEnqm6) (tmp.8RCxNEnqm6) ' ++ export PS1 ++ '[' -n /bin/bash -o -n '' ']' ++ hash -r + load_set + _setopts=hxB ++ echo braceexpand:hashall:interactive-comments:xtrace ++ tr : ' ' + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') + set +o braceexpand + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') + set +o hashall + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') + set +o interactive-comments + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') + set +o xtrace ++ echo hxB ++ sed 's/./& /g' + for i in $(echo "$_setopts" | sed 's/./& /g') + set +h + for i in $(echo "$_setopts" | sed 's/./& /g') + set +x + export TEST_PLAN_DIR=/w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests + TEST_PLAN_DIR=/w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests + export TEST_OPTIONS= + TEST_OPTIONS= ++ mktemp -d + WORKDIR=/tmp/tmp.9nztubu5q5 + cd /tmp/tmp.9nztubu5q5 + docker login -u docker -p docker nexus3.onap.org:10001 WARNING! Using --password via the CLI is insecure. Use --password-stdin. WARNING! Your password will be stored unencrypted in /home/jenkins/.docker/config.json. Configure a credential helper to remove this warning. See https://docs.docker.com/engine/reference/commandline/login/#credentials-store Login Succeeded + SETUP=/w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/setup-pap.sh + '[' -f /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/setup-pap.sh ']' + echo 'Running setup script /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/setup-pap.sh' Running setup script /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/setup-pap.sh + source_safely /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/setup-pap.sh + '[' -z /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/setup-pap.sh ']' + relax_set + set +e + set +o pipefail + . /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/setup-pap.sh ++ source /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/node-templates.sh +++ '[' -z /w/workspace/policy-pap-master-project-csit-pap ']' ++++ awk -F= '$1 == "defaultbranch" { print $2 }' /w/workspace/policy-pap-master-project-csit-pap/.gitreview +++ GERRIT_BRANCH=master +++ echo GERRIT_BRANCH=master GERRIT_BRANCH=master +++ rm -rf /w/workspace/policy-pap-master-project-csit-pap/models +++ mkdir /w/workspace/policy-pap-master-project-csit-pap/models +++ git clone -b master --single-branch https://github.com/onap/policy-models.git /w/workspace/policy-pap-master-project-csit-pap/models Cloning into '/w/workspace/policy-pap-master-project-csit-pap/models'... 
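
The node-templates.sh fragment traced above derives the branch from the repo's .gitreview file and clones policy-models for test data. A sketch of those steps, run from the workspace root:

# Sketch: branch detection and models checkout as performed by node-templates.sh above.
GERRIT_BRANCH=$(awk -F= '$1 == "defaultbranch" { print $2 }' .gitreview)
echo "GERRIT_BRANCH=${GERRIT_BRANCH}"
rm -rf models && mkdir models
git clone -b "${GERRIT_BRANCH}" --single-branch \
  https://github.com/onap/policy-models.git models
export DATA=$PWD/models/models-examples/src/main/resources/policies
export NODETEMPLATES=$PWD/models/models-examples/src/main/resources/nodetemplates
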
+++ export DATA=/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies +++ DATA=/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies +++ export NODETEMPLATES=/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/nodetemplates +++ NODETEMPLATES=/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/nodetemplates +++ sed -e 's!Measurement_vGMUX!ADifferentValue!' /w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies/vCPE.policy.monitoring.input.tosca.json +++ sed -e 's!"version": "1.0.0"!"version": "2.0.0"!' -e 's!"policy-version": 1!"policy-version": 2!' /w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies/vCPE.policy.monitoring.input.tosca.json ++ source /w/workspace/policy-pap-master-project-csit-pap/compose/start-compose.sh apex-pdp --grafana +++ '[' -z /w/workspace/policy-pap-master-project-csit-pap ']' +++ COMPOSE_FOLDER=/w/workspace/policy-pap-master-project-csit-pap/compose +++ grafana=false +++ gui=false +++ [[ 2 -gt 0 ]] +++ key=apex-pdp +++ case $key in +++ echo apex-pdp apex-pdp +++ component=apex-pdp +++ shift +++ [[ 1 -gt 0 ]] +++ key=--grafana +++ case $key in +++ grafana=true +++ shift +++ [[ 0 -gt 0 ]] +++ cd /w/workspace/policy-pap-master-project-csit-pap/compose +++ echo 'Configuring docker compose...' Configuring docker compose... +++ source export-ports.sh +++ source get-versions.sh +++ '[' -z pap ']' +++ '[' -n apex-pdp ']' +++ '[' apex-pdp == logs ']' +++ '[' true = true ']' +++ echo 'Starting apex-pdp application with Grafana' Starting apex-pdp application with Grafana +++ docker-compose up -d apex-pdp grafana Creating network "compose_default" with the default driver Pulling prometheus (nexus3.onap.org:10001/prom/prometheus:latest)... latest: Pulling from prom/prometheus Digest: sha256:4f6c47e39a9064028766e8c95890ed15690c30f00c4ba14e7ce6ae1ded0295b1 Status: Downloaded newer image for nexus3.onap.org:10001/prom/prometheus:latest Pulling grafana (nexus3.onap.org:10001/grafana/grafana:latest)... latest: Pulling from grafana/grafana Digest: sha256:7d5faae481a4c6f436c99e98af11534f7fd5e8d3e35213552dd1dd02bc393d2e Status: Downloaded newer image for nexus3.onap.org:10001/grafana/grafana:latest Pulling mariadb (nexus3.onap.org:10001/mariadb:10.10.2)... 10.10.2: Pulling from mariadb Digest: sha256:bfc25a68e113de43d0d112f5a7126df8e278579c3224e3923359e1c1d8d5ce6e Status: Downloaded newer image for nexus3.onap.org:10001/mariadb:10.10.2 Pulling simulator (nexus3.onap.org:10001/onap/policy-models-simulator:3.1.2-SNAPSHOT)... 3.1.2-SNAPSHOT: Pulling from onap/policy-models-simulator Digest: sha256:d8f1d8ae67fc0b53114a44577cb43c90a3a3281908d2f2418d7fbd203413bd6a Status: Downloaded newer image for nexus3.onap.org:10001/onap/policy-models-simulator:3.1.2-SNAPSHOT Pulling zookeeper (confluentinc/cp-zookeeper:latest)... latest: Pulling from confluentinc/cp-zookeeper Digest: sha256:4dc780642bfc5ec3a2d4901e2ff1f9ddef7f7c5c0b793e1e2911cbfb4e3a3214 Status: Downloaded newer image for confluentinc/cp-zookeeper:latest Pulling kafka (confluentinc/cp-kafka:latest)... latest: Pulling from confluentinc/cp-kafka Digest: sha256:620734d9fc0bb1f9886932e5baf33806074469f40e3fe246a3fdbb59309535fa Status: Downloaded newer image for confluentinc/cp-kafka:latest Pulling policy-db-migrator (nexus3.onap.org:10001/onap/policy-db-migrator:3.1.2-SNAPSHOT)... 
3.1.2-SNAPSHOT: Pulling from onap/policy-db-migrator Digest: sha256:59f0448c5bbe494c6652e1913320d9fe99024bcaef51f510204d55770b94ba9d Status: Downloaded newer image for nexus3.onap.org:10001/onap/policy-db-migrator:3.1.2-SNAPSHOT Pulling api (nexus3.onap.org:10001/onap/policy-api:3.1.2-SNAPSHOT)... 3.1.2-SNAPSHOT: Pulling from onap/policy-api Digest: sha256:0e8cbccfee833c5b2be68d71dd51902b884e77df24bbbac2751693f58bdc20ce Status: Downloaded newer image for nexus3.onap.org:10001/onap/policy-api:3.1.2-SNAPSHOT Pulling pap (nexus3.onap.org:10001/onap/policy-pap:3.1.2-SNAPSHOT)... 3.1.2-SNAPSHOT: Pulling from onap/policy-pap Digest: sha256:4424490684da433df5069c1f1dbbafe83fffd4c8b6a174807fb10d6443ecef06 Status: Downloaded newer image for nexus3.onap.org:10001/onap/policy-pap:3.1.2-SNAPSHOT Pulling apex-pdp (nexus3.onap.org:10001/onap/policy-apex-pdp:3.1.2-SNAPSHOT)... 3.1.2-SNAPSHOT: Pulling from onap/policy-apex-pdp Digest: sha256:75a74a87b7345e553563fbe2ececcd2285ed9500fd91489d9968ae81123b9982 Status: Downloaded newer image for nexus3.onap.org:10001/onap/policy-apex-pdp:3.1.2-SNAPSHOT Creating zookeeper ... Creating prometheus ... Creating simulator ... Creating mariadb ... Creating prometheus ... done Creating grafana ... Creating grafana ... done Creating mariadb ... done Creating policy-db-migrator ... Creating policy-db-migrator ... done Creating policy-api ... Creating simulator ... done Creating zookeeper ... done Creating kafka ... Creating policy-api ... done Creating kafka ... done Creating policy-pap ... Creating policy-pap ... done Creating policy-apex-pdp ... Creating policy-apex-pdp ... done +++ echo 'Prometheus server: http://localhost:30259' Prometheus server: http://localhost:30259 +++ echo 'Grafana server: http://localhost:30269' Grafana server: http://localhost:30269 +++ cd /w/workspace/policy-pap-master-project-csit-pap ++ sleep 10 ++ unset http_proxy https_proxy ++ bash /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/wait_for_rest.sh localhost 30003 Waiting for REST to come up on localhost port 30003... 
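
The contents of wait_for_rest.sh are not shown in this log; a readiness probe of this kind typically polls the TCP port until it accepts connections or a timeout expires. The following is a hypothetical sketch of such a loop, not the actual script.

# Hypothetical sketch of a wait_for_rest.sh-style readiness check (real script not shown in the log).
host=${1:-localhost}
port=${2:-30003}
echo "Waiting for REST to come up on ${host} port ${port}..."
for _ in $(seq 1 60); do
  if nc -z "${host}" "${port}" > /dev/null 2>&1; then    # succeeds once the PAP REST port is listening
    echo "REST is up on ${host}:${port}"
    exit 0
  fi
  sleep 5
done
echo "Timed out waiting for ${host}:${port}" >&2
exit 1
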
NAMES STATUS policy-apex-pdp Up 10 seconds policy-pap Up 11 seconds kafka Up 12 seconds policy-api Up 13 seconds grafana Up 17 seconds simulator Up 15 seconds mariadb Up 17 seconds prometheus Up 18 seconds zookeeper Up 14 seconds NAMES STATUS policy-apex-pdp Up 15 seconds policy-pap Up 16 seconds kafka Up 17 seconds policy-api Up 18 seconds grafana Up 22 seconds simulator Up 20 seconds mariadb Up 22 seconds prometheus Up 23 seconds zookeeper Up 19 seconds NAMES STATUS policy-apex-pdp Up 20 seconds policy-pap Up 21 seconds kafka Up 22 seconds policy-api Up 23 seconds grafana Up 27 seconds simulator Up 25 seconds mariadb Up 27 seconds prometheus Up 28 seconds zookeeper Up 24 seconds NAMES STATUS policy-apex-pdp Up 25 seconds policy-pap Up 26 seconds kafka Up 27 seconds policy-api Up 28 seconds grafana Up 32 seconds simulator Up 30 seconds mariadb Up 32 seconds prometheus Up 33 seconds zookeeper Up 29 seconds NAMES STATUS policy-apex-pdp Up 30 seconds policy-pap Up 31 seconds kafka Up 32 seconds policy-api Up 33 seconds grafana Up 38 seconds simulator Up 35 seconds mariadb Up 37 seconds prometheus Up 38 seconds zookeeper Up 34 seconds NAMES STATUS policy-apex-pdp Up 35 seconds policy-pap Up 36 seconds kafka Up 37 seconds policy-api Up 38 seconds grafana Up 43 seconds simulator Up 40 seconds mariadb Up 42 seconds prometheus Up 43 seconds zookeeper Up 39 seconds ++ export 'SUITES=pap-test.robot pap-slas.robot' ++ SUITES='pap-test.robot pap-slas.robot' ++ ROBOT_VARIABLES='-v POLICY_PAP_IP:localhost:30003 -v POLICY_API_IP:localhost:30002 -v PROMETHEUS_IP:localhost:30259 -v DATA:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies -v NODETEMPLATES:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/nodetemplates' + load_set + _setopts=hxB ++ echo braceexpand:hashall:interactive-comments:xtrace ++ tr : ' ' + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') + set +o braceexpand + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') + set +o hashall + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') + set +o interactive-comments + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') + set +o xtrace ++ echo hxB ++ sed 's/./& /g' + for i in $(echo "$_setopts" | sed 's/./& /g') + set +h + for i in $(echo "$_setopts" | sed 's/./& /g') + set +x + docker_stats + tee /w/workspace/policy-pap-master-project-csit-pap/csit/archives/pap/_sysinfo-1-after-setup.txt ++ uname -s + '[' Linux == Darwin ']' + sh -c 'top -bn1 | head -3' top - 08:58:54 up 4 min, 0 users, load average: 3.97, 1.77, 0.70 Tasks: 209 total, 1 running, 131 sleeping, 0 stopped, 0 zombie %Cpu(s): 12.5 us, 2.6 sy, 0.0 ni, 79.4 id, 5.4 wa, 0.0 hi, 0.1 si, 0.1 st + echo + sh -c 'free -h' total used free shared buff/cache available Mem: 31G 2.7G 22G 1.3M 6.2G 28G Swap: 1.0G 0B 1.0G + echo + docker ps --format 'table {{ .Names }}\t{{ .Status }}' NAMES STATUS policy-apex-pdp Up 35 seconds policy-pap Up 36 seconds kafka Up 37 seconds policy-api Up 38 seconds grafana Up 43 seconds simulator Up 41 seconds mariadb Up 42 seconds prometheus Up 44 seconds zookeeper Up 39 seconds + echo + docker stats --no-stream CONTAINER ID NAME CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O PIDS cf9114871e24 policy-apex-pdp 15.80% 178MiB / 31.41GiB 0.55% 9.93kB / 19.8kB 0B / 0B 49 57873205d9c3 policy-pap 5.49% 502.3MiB / 31.41GiB 1.56% 35.9kB / 38.8kB 0B / 149MB 63 84563581413a kafka 26.30% 381.6MiB / 31.41GiB 1.19% 82.7kB / 84.8kB 0B / 508kB 85 964cc166306f policy-api 0.10% 464.7MiB / 31.41GiB 1.44% 989kB / 
673kB 0B / 0B 52 b100b2b1ca2d grafana 0.04% 58MiB / 31.41GiB 0.18% 19.2kB / 3.55kB 0B / 24.9MB 19 264c388f7a92 simulator 0.07% 120.5MiB / 31.41GiB 0.37% 1.31kB / 0B 0B / 0B 76 b52d89a02784 mariadb 0.01% 102.3MiB / 31.41GiB 0.32% 933kB / 1.18MB 11MB / 68.4MB 37 d3c2b924e83b prometheus 0.48% 20.21MiB / 31.41GiB 0.06% 39.5kB / 1.95kB 131kB / 0B 13 3a03b3ec39eb zookeeper 0.10% 101.1MiB / 31.41GiB 0.31% 61.2kB / 54.6kB 0B / 381kB 60 + echo + cd /tmp/tmp.9nztubu5q5 + echo 'Reading the testplan:' Reading the testplan: + echo 'pap-test.robot pap-slas.robot' + egrep -v '(^[[:space:]]*#|^[[:space:]]*$)' + sed 's|^|/w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/|' + cat testplan.txt /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-test.robot /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-slas.robot ++ xargs + SUITES='/w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-test.robot /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-slas.robot' + echo 'ROBOT_VARIABLES=-v POLICY_PAP_IP:localhost:30003 -v POLICY_API_IP:localhost:30002 -v PROMETHEUS_IP:localhost:30259 -v DATA:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies -v NODETEMPLATES:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/nodetemplates' ROBOT_VARIABLES=-v POLICY_PAP_IP:localhost:30003 -v POLICY_API_IP:localhost:30002 -v PROMETHEUS_IP:localhost:30259 -v DATA:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies -v NODETEMPLATES:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/nodetemplates + echo 'Starting Robot test suites /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-test.robot /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-slas.robot ...' Starting Robot test suites /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-test.robot /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-slas.robot ... + relax_set + set +e + set +o pipefail + python3 -m robot.run -N pap -v WORKSPACE:/tmp -v POLICY_PAP_IP:localhost:30003 -v POLICY_API_IP:localhost:30002 -v PROMETHEUS_IP:localhost:30259 -v DATA:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies -v NODETEMPLATES:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/nodetemplates /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-test.robot /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-slas.robot ============================================================================== pap ============================================================================== pap.Pap-Test ============================================================================== LoadPolicy :: Create a policy named 'onap.restart.tca' and version... | PASS | ------------------------------------------------------------------------------ LoadPolicyWithMetadataSet :: Create a policy named 'operational.ap... | PASS | ------------------------------------------------------------------------------ LoadNodeTemplates :: Create node templates in database using speci... 
| PASS | ------------------------------------------------------------------------------ Healthcheck :: Verify policy pap health check | PASS | ------------------------------------------------------------------------------ Consolidated Healthcheck :: Verify policy consolidated health check | PASS | ------------------------------------------------------------------------------ Metrics :: Verify policy pap is exporting prometheus metrics | PASS | ------------------------------------------------------------------------------ AddPdpGroup :: Add a new PdpGroup named 'testGroup' in the policy ... | PASS | ------------------------------------------------------------------------------ QueryPdpGroupsBeforeActivation :: Verify PdpGroups before activation | PASS | ------------------------------------------------------------------------------ ActivatePdpGroup :: Change the state of PdpGroup named 'testGroup'... | PASS | ------------------------------------------------------------------------------ QueryPdpGroupsAfterActivation :: Verify PdpGroups after activation | PASS | ------------------------------------------------------------------------------ DeployPdpGroups :: Deploy policies in PdpGroups | PASS | ------------------------------------------------------------------------------ QueryPdpGroupsAfterDeploy :: Verify PdpGroups after undeploy | PASS | ------------------------------------------------------------------------------ QueryPolicyAuditAfterDeploy :: Verify policy audit record after de... | PASS | ------------------------------------------------------------------------------ QueryPolicyAuditWithMetadataSetAfterDeploy :: Verify policy audit ... | PASS | ------------------------------------------------------------------------------ UndeployPolicy :: Undeploy a policy named 'onap.restart.tca' from ... | PASS | ------------------------------------------------------------------------------ UndeployPolicyWithMetadataSet :: Undeploy a policy named 'operatio... | PASS | ------------------------------------------------------------------------------ QueryPdpGroupsAfterUndeploy :: Verify PdpGroups after undeploy | PASS | ------------------------------------------------------------------------------ QueryPolicyAuditAfterUnDeploy :: Verify policy audit record after ... | FAIL | DEPLOYMENT != UNDEPLOYMENT ------------------------------------------------------------------------------ QueryPolicyAuditWithMetadataSetAfterUnDeploy :: Verify policy audi... | PASS | ------------------------------------------------------------------------------ DeactivatePdpGroup :: Change the state of PdpGroup named 'testGrou... | PASS | ------------------------------------------------------------------------------ DeletePdpGroups :: Delete the PdpGroup named 'testGroup' from poli... | PASS | ------------------------------------------------------------------------------ QueryPdpGroupsAfterDelete :: Verify PdpGroups after delete | PASS | ------------------------------------------------------------------------------ pap.Pap-Test | FAIL | 22 tests, 21 passed, 1 failed ============================================================================== pap.Pap-Slas ============================================================================== WaitForPrometheusServer :: Wait for Prometheus server to gather al... | PASS | ------------------------------------------------------------------------------ ValidateResponseTimeForHealthcheck :: Validate component healthche... 
| PASS | ------------------------------------------------------------------------------ ValidateResponseTimeForSystemHealthcheck :: Validate if system hea... | PASS | ------------------------------------------------------------------------------ ValidateResponseTimeQueryPolicyAudit :: Validate query audits resp... | PASS | ------------------------------------------------------------------------------ ValidateResponseTimeUpdateGroup :: Validate pdps/group response time | PASS | ------------------------------------------------------------------------------ ValidatePolicyDeploymentTime :: Check if deployment of policy is u... | PASS | ------------------------------------------------------------------------------ ValidateResponseTimeDeletePolicy :: Check if undeployment of polic... | PASS | ------------------------------------------------------------------------------ ValidateResponseTimeDeleteGroup :: Validate delete group response ... | PASS | ------------------------------------------------------------------------------ pap.Pap-Slas | PASS | 8 tests, 8 passed, 0 failed ============================================================================== pap | FAIL | 30 tests, 29 passed, 1 failed ============================================================================== Output: /tmp/tmp.9nztubu5q5/output.xml Log: /tmp/tmp.9nztubu5q5/log.html Report: /tmp/tmp.9nztubu5q5/report.html + RESULT=1 + load_set + _setopts=hxB ++ tr : ' ' ++ echo braceexpand:hashall:interactive-comments:xtrace + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') + set +o braceexpand + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') + set +o hashall + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') + set +o interactive-comments + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') + set +o xtrace ++ echo hxB ++ sed 's/./& /g' + for i in $(echo "$_setopts" | sed 's/./& /g') + set +h + for i in $(echo "$_setopts" | sed 's/./& /g') + set +x + echo 'RESULT: 1' RESULT: 1 + exit 1 + on_exit + rc=1 + [[ -n /w/workspace/policy-pap-master-project-csit-pap ]] + docker ps --format 'table {{ .Names }}\t{{ .Status }}' NAMES STATUS policy-apex-pdp Up 2 minutes policy-pap Up 2 minutes kafka Up 2 minutes policy-api Up 2 minutes grafana Up 2 minutes simulator Up 2 minutes mariadb Up 2 minutes prometheus Up 2 minutes zookeeper Up 2 minutes + docker_stats ++ uname -s + '[' Linux == Darwin ']' + sh -c 'top -bn1 | head -3' top - 09:00:44 up 6 min, 0 users, load average: 1.08, 1.42, 0.70 Tasks: 197 total, 1 running, 129 sleeping, 0 stopped, 0 zombie %Cpu(s): 10.1 us, 2.0 sy, 0.0 ni, 83.4 id, 4.4 wa, 0.0 hi, 0.1 si, 0.1 st + echo + sh -c 'free -h' total used free shared buff/cache available Mem: 31G 2.7G 22G 1.3M 6.2G 28G Swap: 1.0G 0B 1.0G + echo + docker ps --format 'table {{ .Names }}\t{{ .Status }}' NAMES STATUS policy-apex-pdp Up 2 minutes policy-pap Up 2 minutes kafka Up 2 minutes policy-api Up 2 minutes grafana Up 2 minutes simulator Up 2 minutes mariadb Up 2 minutes prometheus Up 2 minutes zookeeper Up 2 minutes + echo + docker stats --no-stream CONTAINER ID NAME CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O PIDS cf9114871e24 policy-apex-pdp 0.45% 181.8MiB / 31.41GiB 0.57% 57.3kB / 92.1kB 0B / 0B 52 57873205d9c3 policy-pap 1.01% 492.3MiB / 31.41GiB 1.53% 2.47MB / 1.05MB 0B / 149MB 67 84563581413a kafka 2.21% 388.1MiB / 31.41GiB 1.21% 251kB / 225kB 0B / 606kB 85 964cc166306f policy-api 0.14% 510.9MiB / 31.41GiB 1.59% 2.45MB / 1.13MB 0B / 0B 55 b100b2b1ca2d grafana 0.04% 64.34MiB / 31.41GiB 0.20% 20kB / 4.5kB 0B / 24.9MB 19 264c388f7a92 simulator 0.09% 
120.7MiB / 31.41GiB 0.38% 1.54kB / 0B 0B / 0B 78 b52d89a02784 mariadb 0.01% 103.6MiB / 31.41GiB 0.32% 2.02MB / 4.87MB 11MB / 68.7MB 28 d3c2b924e83b prometheus 0.06% 25.5MiB / 31.41GiB 0.08% 219kB / 11.8kB 131kB / 0B 13 3a03b3ec39eb zookeeper 0.10% 100.9MiB / 31.41GiB 0.31% 64kB / 56.1kB 0B / 381kB 60 + echo + source_safely /w/workspace/policy-pap-master-project-csit-pap/compose/stop-compose.sh + '[' -z /w/workspace/policy-pap-master-project-csit-pap/compose/stop-compose.sh ']' + relax_set + set +e + set +o pipefail + . /w/workspace/policy-pap-master-project-csit-pap/compose/stop-compose.sh ++ echo 'Shut down started!' Shut down started! ++ '[' -z /w/workspace/policy-pap-master-project-csit-pap ']' ++ COMPOSE_FOLDER=/w/workspace/policy-pap-master-project-csit-pap/compose ++ cd /w/workspace/policy-pap-master-project-csit-pap/compose ++ source export-ports.sh ++ source get-versions.sh ++ echo 'Collecting logs from docker compose containers...' Collecting logs from docker compose containers... ++ docker-compose logs ++ cat docker_compose.log Attaching to policy-apex-pdp, policy-pap, kafka, policy-api, policy-db-migrator, grafana, simulator, mariadb, prometheus, zookeeper kafka | ===> User kafka | uid=1000(appuser) gid=1000(appuser) groups=1000(appuser) kafka | ===> Configuring ... kafka | Running in Zookeeper mode... kafka | ===> Running preflight checks ... kafka | ===> Check if /var/lib/kafka/data is writable ... kafka | ===> Check if Zookeeper is healthy ... kafka | [2024-04-24 08:58:21,002] INFO Client environment:zookeeper.version=3.8.3-6ad6d364c7c0bcf0de452d54ebefa3058098ab56, built on 2023-10-05 10:34 UTC (org.apache.zookeeper.ZooKeeper) kafka | [2024-04-24 08:58:21,002] INFO Client environment:host.name=84563581413a (org.apache.zookeeper.ZooKeeper) kafka | [2024-04-24 08:58:21,002] INFO Client environment:java.version=11.0.22 (org.apache.zookeeper.ZooKeeper) kafka | [2024-04-24 08:58:21,002] INFO Client environment:java.vendor=Azul Systems, Inc. 
(org.apache.zookeeper.ZooKeeper) kafka | [2024-04-24 08:58:21,002] INFO Client environment:java.home=/usr/lib/jvm/java-11-zulu-openjdk-ca (org.apache.zookeeper.ZooKeeper) kafka | [2024-04-24 08:58:21,003] INFO Client environment:java.class.path=/usr/share/java/cp-base-new/scala-logging_2.13-3.9.4.jar:/usr/share/java/cp-base-new/jackson-datatype-jdk8-2.14.2.jar:/usr/share/java/cp-base-new/kafka-storage-api-7.6.1-ccs.jar:/usr/share/java/cp-base-new/commons-validator-1.7.jar:/usr/share/java/cp-base-new/logredactor-1.0.12.jar:/usr/share/java/cp-base-new/jolokia-core-1.7.1.jar:/usr/share/java/cp-base-new/re2j-1.6.jar:/usr/share/java/cp-base-new/scala-library-2.13.10.jar:/usr/share/java/cp-base-new/commons-io-2.11.0.jar:/usr/share/java/cp-base-new/kafka-clients-7.6.1-ccs.jar:/usr/share/java/cp-base-new/utility-belt-7.6.1.jar:/usr/share/java/cp-base-new/commons-cli-1.4.jar:/usr/share/java/cp-base-new/slf4j-reload4j-1.7.36.jar:/usr/share/java/cp-base-new/kafka-server-common-7.6.1-ccs.jar:/usr/share/java/cp-base-new/kafka-group-coordinator-7.6.1-ccs.jar:/usr/share/java/cp-base-new/common-utils-7.6.1.jar:/usr/share/java/cp-base-new/jackson-annotations-2.14.2.jar:/usr/share/java/cp-base-new/json-simple-1.1.1.jar:/usr/share/java/cp-base-new/jackson-module-scala_2.13-2.14.2.jar:/usr/share/java/cp-base-new/scala-java8-compat_2.13-1.0.2.jar:/usr/share/java/cp-base-new/minimal-json-0.9.5.jar:/usr/share/java/cp-base-new/kafka-storage-7.6.1-ccs.jar:/usr/share/java/cp-base-new/zstd-jni-1.5.5-1.jar:/usr/share/java/cp-base-new/jackson-dataformat-yaml-2.14.2.jar:/usr/share/java/cp-base-new/slf4j-api-1.7.36.jar:/usr/share/java/cp-base-new/paranamer-2.8.jar:/usr/share/java/cp-base-new/commons-beanutils-1.9.4.jar:/usr/share/java/cp-base-new/jmx_prometheus_javaagent-0.18.0.jar:/usr/share/java/cp-base-new/reload4j-1.2.25.jar:/usr/share/java/cp-base-new/jackson-core-2.14.2.jar:/usr/share/java/cp-base-new/disk-usage-agent-7.6.1.jar:/usr/share/java/cp-base-new/commons-digester-2.1.jar:/usr/share/java/cp-base-new/argparse4j-0.7.0.jar:/usr/share/java/cp-base-new/audience-annotations-0.12.0.jar:/usr/share/java/cp-base-new/kafka-raft-7.6.1-ccs.jar:/usr/share/java/cp-base-new/gson-2.9.0.jar:/usr/share/java/cp-base-new/snakeyaml-2.0.jar:/usr/share/java/cp-base-new/jopt-simple-5.0.4.jar:/usr/share/java/cp-base-new/error_prone_annotations-2.10.0.jar:/usr/share/java/cp-base-new/lz4-java-1.8.0.jar:/usr/share/java/cp-base-new/logredactor-metrics-1.0.12.jar:/usr/share/java/cp-base-new/checker-qual-3.19.0.jar:/usr/share/java/cp-base-new/kafka-metadata-7.6.1-ccs.jar:/usr/share/java/cp-base-new/pcollections-4.0.1.jar:/usr/share/java/cp-base-new/commons-logging-1.2.jar:/usr/share/java/cp-base-new/scala-reflect-2.13.10.jar:/usr/share/java/cp-base-new/scala-collection-compat_2.13-2.10.0.jar:/usr/share/java/cp-base-new/metrics-core-2.2.0.jar:/usr/share/java/cp-base-new/zookeeper-jute-3.8.3.jar:/usr/share/java/cp-base-new/jackson-dataformat-csv-2.14.2.jar:/usr/share/java/cp-base-new/commons-collections-3.2.2.jar:/usr/share/java/cp-base-new/caffeine-2.9.3.jar:/usr/share/java/cp-base-new/jolokia-jvm-1.7.1.jar:/usr/share/java/cp-base-new/kafka-tools-api-7.6.1-ccs.jar:/usr/share/java/cp-base-new/zookeeper-3.8.3.jar:/usr/share/java/cp-base-new/metrics-core-4.1.12.1.jar:/usr/share/java/cp-base-new/kafka_2.13-7.6.1-ccs.jar:/usr/share/java/cp-base-new/jackson-databind-2.14.2.jar:/usr/share/java/cp-base-new/snappy-java-1.1.10.5.jar:/usr/share/java/cp-base-new/jose4j-0.9.5.jar (org.apache.zookeeper.ZooKeeper) kafka | [2024-04-24 08:58:21,003] INFO 
Client environment:java.library.path=/usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.ZooKeeper) kafka | [2024-04-24 08:58:21,003] INFO Client environment:java.io.tmpdir=/tmp (org.apache.zookeeper.ZooKeeper) kafka | [2024-04-24 08:58:21,003] INFO Client environment:java.compiler= (org.apache.zookeeper.ZooKeeper) kafka | [2024-04-24 08:58:21,003] INFO Client environment:os.name=Linux (org.apache.zookeeper.ZooKeeper) kafka | [2024-04-24 08:58:21,003] INFO Client environment:os.arch=amd64 (org.apache.zookeeper.ZooKeeper) kafka | [2024-04-24 08:58:21,003] INFO Client environment:os.version=4.15.0-192-generic (org.apache.zookeeper.ZooKeeper) kafka | [2024-04-24 08:58:21,003] INFO Client environment:user.name=appuser (org.apache.zookeeper.ZooKeeper) kafka | [2024-04-24 08:58:21,003] INFO Client environment:user.home=/home/appuser (org.apache.zookeeper.ZooKeeper) kafka | [2024-04-24 08:58:21,003] INFO Client environment:user.dir=/home/appuser (org.apache.zookeeper.ZooKeeper) kafka | [2024-04-24 08:58:21,003] INFO Client environment:os.memory.free=493MB (org.apache.zookeeper.ZooKeeper) kafka | [2024-04-24 08:58:21,003] INFO Client environment:os.memory.max=8042MB (org.apache.zookeeper.ZooKeeper) kafka | [2024-04-24 08:58:21,003] INFO Client environment:os.memory.total=504MB (org.apache.zookeeper.ZooKeeper) kafka | [2024-04-24 08:58:21,006] INFO Initiating client connection, connectString=zookeeper:2181 sessionTimeout=40000 watcher=io.confluent.admin.utils.ZookeeperConnectionWatcher@b7f23d9 (org.apache.zookeeper.ZooKeeper) kafka | [2024-04-24 08:58:21,009] INFO Setting -D jdk.tls.rejectClientInitiatedRenegotiation=true to disable client-initiated TLS renegotiation (org.apache.zookeeper.common.X509Util) kafka | [2024-04-24 08:58:21,013] INFO jute.maxbuffer value is 1048575 Bytes (org.apache.zookeeper.ClientCnxnSocket) kafka | [2024-04-24 08:58:21,020] INFO zookeeper.request.timeout value is 0. feature enabled=false (org.apache.zookeeper.ClientCnxn) kafka | [2024-04-24 08:58:21,034] INFO Opening socket connection to server zookeeper/172.17.0.5:2181. (org.apache.zookeeper.ClientCnxn) kafka | [2024-04-24 08:58:21,035] INFO SASL config status: Will not attempt to authenticate using SASL (unknown error) (org.apache.zookeeper.ClientCnxn) kafka | [2024-04-24 08:58:21,042] INFO Socket connection established, initiating session, client: /172.17.0.9:39730, server: zookeeper/172.17.0.5:2181 (org.apache.zookeeper.ClientCnxn) kafka | [2024-04-24 08:58:21,078] INFO Session establishment complete on server zookeeper/172.17.0.5:2181, session id = 0x1000003e8bf0000, negotiated timeout = 40000 (org.apache.zookeeper.ClientCnxn) kafka | [2024-04-24 08:58:21,195] INFO Session: 0x1000003e8bf0000 closed (org.apache.zookeeper.ZooKeeper) kafka | [2024-04-24 08:58:21,196] INFO EventThread shut down for session: 0x1000003e8bf0000 (org.apache.zookeeper.ClientCnxn) kafka | Using log4j config /etc/kafka/log4j.properties kafka | ===> Launching ... kafka | ===> Launching kafka ... 
kafka | [2024-04-24 08:58:21,835] INFO Registered kafka:type=kafka.Log4jController MBean (kafka.utils.Log4jControllerRegistration$) kafka | [2024-04-24 08:58:22,147] INFO Setting -D jdk.tls.rejectClientInitiatedRenegotiation=true to disable client-initiated TLS renegotiation (org.apache.zookeeper.common.X509Util) kafka | [2024-04-24 08:58:22,223] INFO Registered signal handlers for TERM, INT, HUP (org.apache.kafka.common.utils.LoggingSignalHandler) kafka | [2024-04-24 08:58:22,224] INFO starting (kafka.server.KafkaServer) kafka | [2024-04-24 08:58:22,224] INFO Connecting to zookeeper on zookeeper:2181 (kafka.server.KafkaServer) kafka | [2024-04-24 08:58:22,235] INFO [ZooKeeperClient Kafka server] Initializing a new session to zookeeper:2181. (kafka.zookeeper.ZooKeeperClient) kafka | [2024-04-24 08:58:22,239] INFO Client environment:zookeeper.version=3.8.4-9316c2a7a97e1666d8f4593f34dd6fc36ecc436c, built on 2024-02-12 22:16 UTC (org.apache.zookeeper.ZooKeeper) kafka | [2024-04-24 08:58:22,239] INFO Client environment:host.name=84563581413a (org.apache.zookeeper.ZooKeeper) kafka | [2024-04-24 08:58:22,239] INFO Client environment:java.version=11.0.22 (org.apache.zookeeper.ZooKeeper) kafka | [2024-04-24 08:58:22,239] INFO Client environment:java.vendor=Azul Systems, Inc. (org.apache.zookeeper.ZooKeeper) kafka | [2024-04-24 08:58:22,239] INFO Client environment:java.home=/usr/lib/jvm/java-11-zulu-openjdk-ca (org.apache.zookeeper.ZooKeeper) kafka | [2024-04-24 08:58:22,239] INFO Client environment:java.class.path=/usr/bin/../share/java/kafka/scala-logging_2.13-3.9.4.jar:/usr/bin/../share/java/kafka/jersey-common-2.39.1.jar:/usr/bin/../share/java/kafka/swagger-annotations-2.2.8.jar:/usr/bin/../share/java/kafka/connect-json-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/kafka-storage-api-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/commons-validator-1.7.jar:/usr/bin/../share/java/kafka/javax.servlet-api-3.1.0.jar:/usr/bin/../share/java/kafka/aopalliance-repackaged-2.6.1.jar:/usr/bin/../share/java/kafka/netty-transport-4.1.100.Final.jar:/usr/bin/../share/java/kafka/rocksdbjni-7.9.2.jar:/usr/bin/../share/java/kafka/jetty-http-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/kafka-log4j-appender-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jackson-annotations-2.13.5.jar:/usr/bin/../share/java/kafka/commons-io-2.11.0.jar:/usr/bin/../share/java/kafka/kafka-clients-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/javax.activation-api-1.2.0.jar:/usr/bin/../share/java/kafka/jetty-security-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/commons-cli-1.4.jar:/usr/bin/../share/java/kafka/jetty-server-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/slf4j-reload4j-1.7.36.jar:/usr/bin/../share/java/kafka/scala-reflect-2.13.11.jar:/usr/bin/../share/java/kafka/kafka-server-common-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/kafka-group-coordinator-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/netty-buffer-4.1.100.Final.jar:/usr/bin/../share/java/kafka/connect-runtime-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/trogdor-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/connect-api-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jakarta.ws.rs-api-2.1.6.jar:/usr/bin/../share/java/kafka/jakarta.annotation-api-1.3.5.jar:/usr/bin/../share/java/kafka/kafka-streams-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/scala-java8-compat_2.13-1.0.2.jar:/usr/bin/../share/java/kafka/javax.ws.rs-api-2.1.1.jar:/usr/bin/../share/java/kafka/zookeeper-jute-3.8.4.jar:/usr/bin/../share/java/kafka/netty-resolver-4.1.100.Final.jar:/usr/bin/../share/java/kafka/hk2-api-2.6.1
.jar:/usr/bin/../share/java/kafka/kafka-storage-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/zstd-jni-1.5.5-1.jar:/usr/bin/../share/java/kafka/jackson-dataformat-csv-2.13.5.jar:/usr/bin/../share/java/kafka/kafka.jar:/usr/bin/../share/java/kafka/scala-library-2.13.11.jar:/usr/bin/../share/java/kafka/jakarta.inject-2.6.1.jar:/usr/bin/../share/java/kafka/jakarta.xml.bind-api-2.3.3.jar:/usr/bin/../share/java/kafka/jetty-continuation-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/connect-transforms-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jose4j-0.9.4.jar:/usr/bin/../share/java/kafka/hk2-locator-2.6.1.jar:/usr/bin/../share/java/kafka/reflections-0.10.2.jar:/usr/bin/../share/java/kafka/slf4j-api-1.7.36.jar:/usr/bin/../share/java/kafka/paranamer-2.8.jar:/usr/bin/../share/java/kafka/commons-beanutils-1.9.4.jar:/usr/bin/../share/java/kafka/jaxb-api-2.3.1.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-2.39.1.jar:/usr/bin/../share/java/kafka/hk2-utils-2.6.1.jar:/usr/bin/../share/java/kafka/jackson-module-scala_2.13-2.13.5.jar:/usr/bin/../share/java/kafka/reload4j-1.2.25.jar:/usr/bin/../share/java/kafka/connect-mirror-client-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jetty-servlet-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/jackson-core-2.13.5.jar:/usr/bin/../share/java/kafka/jersey-hk2-2.39.1.jar:/usr/bin/../share/java/kafka/jackson-databind-2.13.5.jar:/usr/bin/../share/java/kafka/jetty-servlets-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/jersey-client-2.39.1.jar:/usr/bin/../share/java/kafka/osgi-resource-locator-1.0.3.jar:/usr/bin/../share/java/kafka/commons-digester-2.1.jar:/usr/bin/../share/java/kafka/netty-transport-native-epoll-4.1.100.Final.jar:/usr/bin/../share/java/kafka/argparse4j-0.7.0.jar:/usr/bin/../share/java/kafka/connect-mirror-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jackson-datatype-jdk8-2.13.5.jar:/usr/bin/../share/java/kafka/audience-annotations-0.12.0.jar:/usr/bin/../share/java/kafka/jackson-module-jaxb-annotations-2.13.5.jar:/usr/bin/../share/java/kafka/kafka-raft-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/javax.annotation-api-1.3.2.jar:/usr/bin/../share/java/kafka/maven-artifact-3.8.8.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-json-provider-2.13.5.jar:/usr/bin/../share/java/kafka/jakarta.validation-api-2.0.2.jar:/usr/bin/../share/java/kafka/zookeeper-3.8.4.jar:/usr/bin/../share/java/kafka/jersey-server-2.39.1.jar:/usr/bin/../share/java/kafka/commons-lang3-3.8.1.jar:/usr/bin/../share/java/kafka/jopt-simple-5.0.4.jar:/usr/bin/../share/java/kafka/error_prone_annotations-2.10.0.jar:/usr/bin/../share/java/kafka/lz4-java-1.8.0.jar:/usr/bin/../share/java/kafka/jetty-util-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/kafka-tools-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jakarta.activation-api-1.2.2.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-core-2.39.1.jar:/usr/bin/../share/java/kafka/checker-qual-3.19.0.jar:/usr/bin/../share/java/kafka/kafka-metadata-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/pcollections-4.0.1.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-base-2.13.5.jar:/usr/bin/../share/java/kafka/commons-logging-1.2.jar:/usr/bin/../share/java/kafka/jsr305-3.0.2.jar:/usr/bin/../share/java/kafka/jetty-io-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/netty-codec-4.1.100.Final.jar:/usr/bin/../share/java/kafka/netty-transport-native-unix-common-4.1.100.Final.jar:/usr/bin/../share/java/kafka/scala-collection-compat_2.13-2.10.0.jar:/usr/bin/../share/java/kafka/metrics-core-2.2.0.jar:/usr/bin/../share/java/kafka/kafka-streams-test-uti
ls-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/kafka-streams-examples-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/netty-handler-4.1.100.Final.jar:/usr/bin/../share/java/kafka/commons-collections-3.2.2.jar:/usr/bin/../share/java/kafka/javassist-3.29.2-GA.jar:/usr/bin/../share/java/kafka/caffeine-2.9.3.jar:/usr/bin/../share/java/kafka/plexus-utils-3.3.1.jar:/usr/bin/../share/java/kafka/kafka-tools-api-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/activation-1.1.1.jar:/usr/bin/../share/java/kafka/netty-common-4.1.100.Final.jar:/usr/bin/../share/java/kafka/kafka-streams-scala_2.13-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/metrics-core-4.1.12.1.jar:/usr/bin/../share/java/kafka/jline-3.25.1.jar:/usr/bin/../share/java/kafka/jetty-client-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/kafka_2.13-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/connect-basic-auth-extension-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jetty-util-ajax-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/snappy-java-1.1.10.5.jar:/usr/bin/../share/java/kafka/kafka-shell-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/netty-transport-classes-epoll-4.1.100.Final.jar:/usr/bin/../share/java/confluent-telemetry/* (org.apache.zookeeper.ZooKeeper) kafka | [2024-04-24 08:58:22,239] INFO Client environment:java.library.path=/usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.ZooKeeper) kafka | [2024-04-24 08:58:22,239] INFO Client environment:java.io.tmpdir=/tmp (org.apache.zookeeper.ZooKeeper) kafka | [2024-04-24 08:58:22,239] INFO Client environment:java.compiler= (org.apache.zookeeper.ZooKeeper) kafka | [2024-04-24 08:58:22,239] INFO Client environment:os.name=Linux (org.apache.zookeeper.ZooKeeper) kafka | [2024-04-24 08:58:22,239] INFO Client environment:os.arch=amd64 (org.apache.zookeeper.ZooKeeper) kafka | [2024-04-24 08:58:22,239] INFO Client environment:os.version=4.15.0-192-generic (org.apache.zookeeper.ZooKeeper) kafka | [2024-04-24 08:58:22,239] INFO Client environment:user.name=appuser (org.apache.zookeeper.ZooKeeper) kafka | [2024-04-24 08:58:22,239] INFO Client environment:user.home=/home/appuser (org.apache.zookeeper.ZooKeeper) kafka | [2024-04-24 08:58:22,239] INFO Client environment:user.dir=/home/appuser (org.apache.zookeeper.ZooKeeper) kafka | [2024-04-24 08:58:22,239] INFO Client environment:os.memory.free=1008MB (org.apache.zookeeper.ZooKeeper) kafka | [2024-04-24 08:58:22,239] INFO Client environment:os.memory.max=1024MB (org.apache.zookeeper.ZooKeeper) kafka | [2024-04-24 08:58:22,239] INFO Client environment:os.memory.total=1024MB (org.apache.zookeeper.ZooKeeper) kafka | [2024-04-24 08:58:22,241] INFO Initiating client connection, connectString=zookeeper:2181 sessionTimeout=18000 watcher=kafka.zookeeper.ZooKeeperClient$ZooKeeperClientWatcher$@66746f57 (org.apache.zookeeper.ZooKeeper) kafka | [2024-04-24 08:58:22,244] INFO jute.maxbuffer value is 4194304 Bytes (org.apache.zookeeper.ClientCnxnSocket) kafka | [2024-04-24 08:58:22,249] INFO zookeeper.request.timeout value is 0. feature enabled=false (org.apache.zookeeper.ClientCnxn) kafka | [2024-04-24 08:58:22,251] INFO [ZooKeeperClient Kafka server] Waiting until connected. (kafka.zookeeper.ZooKeeperClient) kafka | [2024-04-24 08:58:22,255] INFO Opening socket connection to server zookeeper/172.17.0.5:2181. 
(org.apache.zookeeper.ClientCnxn) kafka | [2024-04-24 08:58:22,263] INFO Socket connection established, initiating session, client: /172.17.0.9:39732, server: zookeeper/172.17.0.5:2181 (org.apache.zookeeper.ClientCnxn) kafka | [2024-04-24 08:58:22,270] INFO Session establishment complete on server zookeeper/172.17.0.5:2181, session id = 0x1000003e8bf0001, negotiated timeout = 18000 (org.apache.zookeeper.ClientCnxn) kafka | [2024-04-24 08:58:22,274] INFO [ZooKeeperClient Kafka server] Connected. (kafka.zookeeper.ZooKeeperClient) kafka | [2024-04-24 08:58:22,527] INFO Cluster ID = FWpz7Mn1RFGDoEChXT3QPg (kafka.server.KafkaServer) kafka | [2024-04-24 08:58:22,529] WARN No meta.properties file under dir /var/lib/kafka/data/meta.properties (kafka.server.BrokerMetadataCheckpoint) kafka | [2024-04-24 08:58:22,582] INFO KafkaConfig values: kafka | advertised.listeners = PLAINTEXT://kafka:9092,PLAINTEXT_HOST://localhost:29092 kafka | alter.config.policy.class.name = null kafka | alter.log.dirs.replication.quota.window.num = 11 kafka | alter.log.dirs.replication.quota.window.size.seconds = 1 kafka | authorizer.class.name = kafka | auto.create.topics.enable = true kafka | auto.include.jmx.reporter = true kafka | auto.leader.rebalance.enable = true kafka | background.threads = 10 kafka | broker.heartbeat.interval.ms = 2000 kafka | broker.id = 1 kafka | broker.id.generation.enable = true kafka | broker.rack = null kafka | broker.session.timeout.ms = 9000 kafka | client.quota.callback.class = null kafka | compression.type = producer kafka | connection.failed.authentication.delay.ms = 100 kafka | connections.max.idle.ms = 600000 kafka | connections.max.reauth.ms = 0 kafka | control.plane.listener.name = null kafka | controlled.shutdown.enable = true kafka | controlled.shutdown.max.retries = 3 kafka | controlled.shutdown.retry.backoff.ms = 5000 kafka | controller.listener.names = null kafka | controller.quorum.append.linger.ms = 25 kafka | controller.quorum.election.backoff.max.ms = 1000 kafka | controller.quorum.election.timeout.ms = 1000 kafka | controller.quorum.fetch.timeout.ms = 2000 kafka | controller.quorum.request.timeout.ms = 2000 kafka | controller.quorum.retry.backoff.ms = 20 kafka | controller.quorum.voters = [] kafka | controller.quota.window.num = 11 kafka | controller.quota.window.size.seconds = 1 kafka | controller.socket.timeout.ms = 30000 kafka | create.topic.policy.class.name = null kafka | default.replication.factor = 1 kafka | delegation.token.expiry.check.interval.ms = 3600000 kafka | delegation.token.expiry.time.ms = 86400000 kafka | delegation.token.master.key = null kafka | delegation.token.max.lifetime.ms = 604800000 kafka | delegation.token.secret.key = null kafka | delete.records.purgatory.purge.interval.requests = 1 kafka | delete.topic.enable = true kafka | early.start.listeners = null kafka | fetch.max.bytes = 57671680 kafka | fetch.purgatory.purge.interval.requests = 1000 kafka | group.consumer.assignors = [org.apache.kafka.coordinator.group.assignor.RangeAssignor] kafka | group.consumer.heartbeat.interval.ms = 5000 kafka | group.consumer.max.heartbeat.interval.ms = 15000 kafka | group.consumer.max.session.timeout.ms = 60000 kafka | group.consumer.max.size = 2147483647 kafka | group.consumer.min.heartbeat.interval.ms = 5000 kafka | group.consumer.min.session.timeout.ms = 45000 kafka | group.consumer.session.timeout.ms = 45000 kafka | group.coordinator.new.enable = false kafka | group.coordinator.threads = 1 kafka | group.initial.rebalance.delay.ms = 3000 kafka | 
group.max.session.timeout.ms = 1800000 kafka | group.max.size = 2147483647 kafka | group.min.session.timeout.ms = 6000 kafka | initial.broker.registration.timeout.ms = 60000 kafka | inter.broker.listener.name = PLAINTEXT kafka | inter.broker.protocol.version = 3.6-IV2 kafka | kafka.metrics.polling.interval.secs = 10 kafka | kafka.metrics.reporters = [] kafka | leader.imbalance.check.interval.seconds = 300 kafka | leader.imbalance.per.broker.percentage = 10 kafka | listener.security.protocol.map = PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT kafka | listeners = PLAINTEXT://0.0.0.0:9092,PLAINTEXT_HOST://0.0.0.0:29092 kafka | log.cleaner.backoff.ms = 15000 kafka | log.cleaner.dedupe.buffer.size = 134217728 kafka | log.cleaner.delete.retention.ms = 86400000 kafka | log.cleaner.enable = true kafka | log.cleaner.io.buffer.load.factor = 0.9 kafka | log.cleaner.io.buffer.size = 524288 kafka | log.cleaner.io.max.bytes.per.second = 1.7976931348623157E308 kafka | log.cleaner.max.compaction.lag.ms = 9223372036854775807 kafka | log.cleaner.min.cleanable.ratio = 0.5 kafka | log.cleaner.min.compaction.lag.ms = 0 kafka | log.cleaner.threads = 1 kafka | log.cleanup.policy = [delete] kafka | log.dir = /tmp/kafka-logs kafka | log.dirs = /var/lib/kafka/data kafka | log.flush.interval.messages = 9223372036854775807 kafka | log.flush.interval.ms = null kafka | log.flush.offset.checkpoint.interval.ms = 60000 kafka | log.flush.scheduler.interval.ms = 9223372036854775807 kafka | log.flush.start.offset.checkpoint.interval.ms = 60000 kafka | log.index.interval.bytes = 4096 kafka | log.index.size.max.bytes = 10485760 kafka | log.local.retention.bytes = -2 kafka | log.local.retention.ms = -2 kafka | log.message.downconversion.enable = true kafka | log.message.format.version = 3.0-IV1 kafka | log.message.timestamp.after.max.ms = 9223372036854775807 kafka | log.message.timestamp.before.max.ms = 9223372036854775807 kafka | log.message.timestamp.difference.max.ms = 9223372036854775807 kafka | log.message.timestamp.type = CreateTime kafka | log.preallocate = false kafka | log.retention.bytes = -1 kafka | log.retention.check.interval.ms = 300000 kafka | log.retention.hours = 168 kafka | log.retention.minutes = null kafka | log.retention.ms = null kafka | log.roll.hours = 168 kafka | log.roll.jitter.hours = 0 kafka | log.roll.jitter.ms = null kafka | log.roll.ms = null kafka | log.segment.bytes = 1073741824 kafka | log.segment.delete.delay.ms = 60000 kafka | max.connection.creation.rate = 2147483647 kafka | max.connections = 2147483647 kafka | max.connections.per.ip = 2147483647 kafka | max.connections.per.ip.overrides = kafka | max.incremental.fetch.session.cache.slots = 1000 kafka | message.max.bytes = 1048588 kafka | metadata.log.dir = null kafka | metadata.log.max.record.bytes.between.snapshots = 20971520 kafka | metadata.log.max.snapshot.interval.ms = 3600000 kafka | metadata.log.segment.bytes = 1073741824 kafka | metadata.log.segment.min.bytes = 8388608 kafka | metadata.log.segment.ms = 604800000 kafka | metadata.max.idle.interval.ms = 500 kafka | metadata.max.retention.bytes = 104857600 kafka | metadata.max.retention.ms = 604800000 kafka | metric.reporters = [] kafka | metrics.num.samples = 2 kafka | metrics.recording.level = INFO kafka | metrics.sample.window.ms = 30000 kafka | min.insync.replicas = 1 kafka | node.id = 1 kafka | num.io.threads = 8 kafka | num.network.threads = 3 kafka | num.partitions = 1 kafka | num.recovery.threads.per.data.dir = 1 kafka | num.replica.alter.log.dirs.threads = null kafka | 
num.replica.fetchers = 1 kafka | offset.metadata.max.bytes = 4096 kafka | offsets.commit.required.acks = -1 kafka | offsets.commit.timeout.ms = 5000 kafka | offsets.load.buffer.size = 5242880 kafka | offsets.retention.check.interval.ms = 600000 kafka | offsets.retention.minutes = 10080 kafka | offsets.topic.compression.codec = 0 kafka | offsets.topic.num.partitions = 50 kafka | offsets.topic.replication.factor = 1 kafka | offsets.topic.segment.bytes = 104857600 kafka | password.encoder.cipher.algorithm = AES/CBC/PKCS5Padding kafka | password.encoder.iterations = 4096 kafka | password.encoder.key.length = 128 kafka | password.encoder.keyfactory.algorithm = null kafka | password.encoder.old.secret = null kafka | password.encoder.secret = null kafka | principal.builder.class = class org.apache.kafka.common.security.authenticator.DefaultKafkaPrincipalBuilder kafka | process.roles = [] kafka | producer.id.expiration.check.interval.ms = 600000 kafka | producer.id.expiration.ms = 86400000 kafka | producer.purgatory.purge.interval.requests = 1000 kafka | queued.max.request.bytes = -1 kafka | queued.max.requests = 500 kafka | quota.window.num = 11 kafka | quota.window.size.seconds = 1 kafka | remote.log.index.file.cache.total.size.bytes = 1073741824 kafka | remote.log.manager.task.interval.ms = 30000 kafka | remote.log.manager.task.retry.backoff.max.ms = 30000 kafka | remote.log.manager.task.retry.backoff.ms = 500 kafka | remote.log.manager.task.retry.jitter = 0.2 kafka | remote.log.manager.thread.pool.size = 10 kafka | remote.log.metadata.custom.metadata.max.bytes = 128 kafka | remote.log.metadata.manager.class.name = org.apache.kafka.server.log.remote.metadata.storage.TopicBasedRemoteLogMetadataManager kafka | remote.log.metadata.manager.class.path = null kafka | remote.log.metadata.manager.impl.prefix = rlmm.config. kafka | remote.log.metadata.manager.listener.name = null kafka | remote.log.reader.max.pending.tasks = 100 kafka | remote.log.reader.threads = 10 kafka | remote.log.storage.manager.class.name = null kafka | remote.log.storage.manager.class.path = null kafka | remote.log.storage.manager.impl.prefix = rsm.config. 
kafka | remote.log.storage.system.enable = false kafka | replica.fetch.backoff.ms = 1000 kafka | replica.fetch.max.bytes = 1048576 kafka | replica.fetch.min.bytes = 1 kafka | replica.fetch.response.max.bytes = 10485760 kafka | replica.fetch.wait.max.ms = 500 kafka | replica.high.watermark.checkpoint.interval.ms = 5000 kafka | replica.lag.time.max.ms = 30000 kafka | replica.selector.class = null kafka | replica.socket.receive.buffer.bytes = 65536 kafka | replica.socket.timeout.ms = 30000 kafka | replication.quota.window.num = 11 kafka | replication.quota.window.size.seconds = 1 kafka | request.timeout.ms = 30000 kafka | reserved.broker.max.id = 1000 kafka | sasl.client.callback.handler.class = null kafka | sasl.enabled.mechanisms = [GSSAPI] kafka | sasl.jaas.config = null kafka | sasl.kerberos.kinit.cmd = /usr/bin/kinit kafka | sasl.kerberos.min.time.before.relogin = 60000 kafka | sasl.kerberos.principal.to.local.rules = [DEFAULT] kafka | sasl.kerberos.service.name = null kafka | sasl.kerberos.ticket.renew.jitter = 0.05 kafka | sasl.kerberos.ticket.renew.window.factor = 0.8 kafka | sasl.login.callback.handler.class = null kafka | sasl.login.class = null kafka | sasl.login.connect.timeout.ms = null kafka | sasl.login.read.timeout.ms = null kafka | sasl.login.refresh.buffer.seconds = 300 kafka | sasl.login.refresh.min.period.seconds = 60 kafka | sasl.login.refresh.window.factor = 0.8 kafka | sasl.login.refresh.window.jitter = 0.05 kafka | sasl.login.retry.backoff.max.ms = 10000 kafka | sasl.login.retry.backoff.ms = 100 kafka | sasl.mechanism.controller.protocol = GSSAPI kafka | sasl.mechanism.inter.broker.protocol = GSSAPI kafka | sasl.oauthbearer.clock.skew.seconds = 30 kafka | sasl.oauthbearer.expected.audience = null kafka | sasl.oauthbearer.expected.issuer = null kafka | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 kafka | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 kafka | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 kafka | sasl.oauthbearer.jwks.endpoint.url = null kafka | sasl.oauthbearer.scope.claim.name = scope kafka | sasl.oauthbearer.sub.claim.name = sub kafka | sasl.oauthbearer.token.endpoint.url = null kafka | sasl.server.callback.handler.class = null kafka | sasl.server.max.receive.size = 524288 kafka | security.inter.broker.protocol = PLAINTEXT kafka | security.providers = null kafka | server.max.startup.time.ms = 9223372036854775807 kafka | socket.connection.setup.timeout.max.ms = 30000 kafka | socket.connection.setup.timeout.ms = 10000 kafka | socket.listen.backlog.size = 50 kafka | socket.receive.buffer.bytes = 102400 kafka | socket.request.max.bytes = 104857600 kafka | socket.send.buffer.bytes = 102400 kafka | ssl.cipher.suites = [] kafka | ssl.client.auth = none kafka | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] kafka | ssl.endpoint.identification.algorithm = https kafka | ssl.engine.factory.class = null kafka | ssl.key.password = null kafka | ssl.keymanager.algorithm = SunX509 kafka | ssl.keystore.certificate.chain = null policy-apex-pdp | Waiting for mariadb port 3306... policy-apex-pdp | mariadb (172.17.0.4:3306) open policy-apex-pdp | Waiting for kafka port 9092... policy-apex-pdp | kafka (172.17.0.9:9092) open policy-apex-pdp | Waiting for pap port 6969... 
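The KafkaConfig dump above shows the broker advertising PLAINTEXT://kafka:9092 inside the compose network and PLAINTEXT_HOST://localhost:29092 for the host, with no SASL/TLS in play (security.inter.broker.protocol = PLAINTEXT). A quick sketch of verifying that a client can reach the broker and read the cluster id logged earlier (FWpz7Mn1RFGDoEChXT3QPg), assuming kafka-clients on the classpath; again illustrative, not part of the test scripts:

    import java.util.Properties;
    import org.apache.kafka.clients.admin.AdminClient;
    import org.apache.kafka.clients.admin.AdminClientConfig;

    public class BrokerCheckSketch {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            // Inside the compose network use the advertised listener "kafka:9092";
            // from the host, the PLAINTEXT_HOST listener "localhost:29092" would be used instead.
            props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");
            try (AdminClient admin = AdminClient.create(props)) {
                String clusterId = admin.describeCluster().clusterId().get();
                System.out.println("Cluster ID = " + clusterId);  // the broker logged FWpz7Mn1RFGDoEChXT3QPg
            }
        }
    }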
policy-apex-pdp | pap (172.17.0.10:6969) open policy-apex-pdp | apexApps.sh: running application 'onappf' with command 'java -Dlogback.configurationFile=/opt/app/policy/apex-pdp/etc/logback.xml -cp /opt/app/policy/apex-pdp/etc:/opt/app/policy/apex-pdp/etc/hazelcast:/opt/app/policy/apex-pdp/etc/infinispan:/opt/app/policy/apex-pdp/lib/* -Djavax.net.ssl.keyStore=/opt/app/policy/apex-pdp/etc/ssl/policy-keystore -Djavax.net.ssl.keyStorePassword=Pol1cy_0nap -Djavax.net.ssl.trustStore=/opt/app/policy/apex-pdp/etc/ssl/policy-truststore -Djavax.net.ssl.trustStorePassword=Pol1cy_0nap -Dlogback.configurationFile=/opt/app/policy/apex-pdp/etc/logback.xml -Dhazelcast.config=/opt/app/policy/apex-pdp/etc/hazelcast.xml -Dhazelcast.mancenter.enabled=false org.onap.policy.apex.services.onappf.ApexStarterMain -c /opt/app/policy/apex-pdp/etc/onappf/config/OnapPfConfig.json' policy-apex-pdp | [2024-04-24T08:58:51.903+00:00|INFO|ApexStarterMain|main] In ApexStarter with parameters [-c, /opt/app/policy/apex-pdp/etc/onappf/config/OnapPfConfig.json] policy-apex-pdp | [2024-04-24T08:58:52.077+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: policy-apex-pdp | allow.auto.create.topics = true policy-apex-pdp | auto.commit.interval.ms = 5000 policy-apex-pdp | auto.include.jmx.reporter = true policy-apex-pdp | auto.offset.reset = latest policy-apex-pdp | bootstrap.servers = [kafka:9092] policy-apex-pdp | check.crcs = true policy-apex-pdp | client.dns.lookup = use_all_dns_ips policy-apex-pdp | client.id = consumer-6c14929a-34c8-48a0-adf2-d542a07b4ce8-1 policy-apex-pdp | client.rack = policy-apex-pdp | connections.max.idle.ms = 540000 policy-apex-pdp | default.api.timeout.ms = 60000 policy-apex-pdp | enable.auto.commit = true policy-apex-pdp | exclude.internal.topics = true policy-apex-pdp | fetch.max.bytes = 52428800 policy-apex-pdp | fetch.max.wait.ms = 500 policy-apex-pdp | fetch.min.bytes = 1 policy-apex-pdp | group.id = 6c14929a-34c8-48a0-adf2-d542a07b4ce8 policy-apex-pdp | group.instance.id = null policy-apex-pdp | heartbeat.interval.ms = 3000 policy-apex-pdp | interceptor.classes = [] policy-apex-pdp | internal.leave.group.on.close = true policy-apex-pdp | internal.throw.on.fetch.stable.offset.unsupported = false policy-apex-pdp | isolation.level = read_uncommitted policy-apex-pdp | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-apex-pdp | max.partition.fetch.bytes = 1048576 policy-apex-pdp | max.poll.interval.ms = 300000 policy-apex-pdp | max.poll.records = 500 policy-apex-pdp | metadata.max.age.ms = 300000 policy-apex-pdp | metric.reporters = [] policy-apex-pdp | metrics.num.samples = 2 policy-apex-pdp | metrics.recording.level = INFO grafana | logger=settings t=2024-04-24T08:58:11.625991891Z level=info msg="Starting Grafana" version=10.4.2 commit=701c851be7a930e04fbc6ebb1cd4254da80edd4c branch=v10.4.x compiled=2024-04-24T08:58:11Z policy-apex-pdp | metrics.sample.window.ms = 30000 policy-db-migrator | Waiting for mariadb port 3306... kafka | ssl.keystore.key = null grafana | logger=settings t=2024-04-24T08:58:11.626843575Z level=info msg="Config loaded from" file=/usr/share/grafana/conf/defaults.ini mariadb | 2024-04-24 08:58:11+00:00 [Note] [Entrypoint]: Entrypoint script for MariaDB Server 1:10.10.2+maria~ubu2204 started. policy-pap | Waiting for mariadb port 3306... 
policy-apex-pdp | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] policy-db-migrator | nc: connect to mariadb (172.17.0.4) port 3306 (tcp) failed: Connection refused policy-api | Waiting for mariadb port 3306... policy-api | mariadb (172.17.0.4:3306) open prometheus | ts=2024-04-24T08:58:10.729Z caller=main.go:573 level=info msg="No time or size retention was set so using the default time retention" duration=15d simulator | Policy simulator config file: /opt/app/policy/simulators/etc/mounted/simParameters.json grafana | logger=settings t=2024-04-24T08:58:11.626871776Z level=info msg="Config loaded from" file=/etc/grafana/grafana.ini mariadb | 2024-04-24 08:58:11+00:00 [Note] [Entrypoint]: Switching to dedicated user 'mysql' policy-pap | mariadb (172.17.0.4:3306) open policy-apex-pdp | receive.buffer.bytes = 65536 policy-db-migrator | nc: connect to mariadb (172.17.0.4) port 3306 (tcp) failed: Connection refused kafka | ssl.keystore.location = null policy-api | Waiting for policy-db-migrator port 6824... zookeeper | ===> User prometheus | ts=2024-04-24T08:58:10.729Z caller=main.go:617 level=info msg="Starting Prometheus Server" mode=server version="(version=2.51.2, branch=HEAD, revision=b4c0ab52c3e9b940ab803581ddae9b3d9a452337)" simulator | overriding logback.xml grafana | logger=settings t=2024-04-24T08:58:11.626881596Z level=info msg="Config overridden from command line" arg="default.paths.data=/var/lib/grafana" mariadb | 2024-04-24 08:58:11+00:00 [Note] [Entrypoint]: Entrypoint script for MariaDB Server 1:10.10.2+maria~ubu2204 started. policy-pap | Waiting for kafka port 9092... policy-apex-pdp | reconnect.backoff.max.ms = 1000 policy-db-migrator | nc: connect to mariadb (172.17.0.4) port 3306 (tcp) failed: Connection refused kafka | ssl.keystore.password = null policy-api | policy-db-migrator (172.17.0.7:6824) open zookeeper | uid=1000(appuser) gid=1000(appuser) groups=1000(appuser) prometheus | ts=2024-04-24T08:58:10.729Z caller=main.go:622 level=info build_context="(go=go1.22.2, platform=linux/amd64, user=root@b63f02a423d9, date=20240410-14:05:54, tags=netgo,builtinassets,stringlabels)" simulator | 2024-04-24 08:58:14,232 INFO replacing 'HOST_NAME' with simulator in /opt/app/policy/simulators/etc/mounted/simParameters.json grafana | logger=settings t=2024-04-24T08:58:11.626938707Z level=info msg="Config overridden from command line" arg="default.paths.logs=/var/log/grafana" mariadb | 2024-04-24 08:58:12+00:00 [Note] [Entrypoint]: Initializing database files policy-pap | kafka (172.17.0.9:9092) open policy-apex-pdp | reconnect.backoff.ms = 50 policy-db-migrator | nc: connect to mariadb (172.17.0.4) port 3306 (tcp) failed: Connection refused kafka | ssl.keystore.type = JKS zookeeper | ===> Configuring ... prometheus | ts=2024-04-24T08:58:10.729Z caller=main.go:623 level=info host_details="(Linux 4.15.0-192-generic #203-Ubuntu SMP Wed Aug 10 17:40:03 UTC 2022 x86_64 prometheus (none))" simulator | 2024-04-24 08:58:14,326 INFO org.onap.policy.models.simulators starting grafana | logger=settings t=2024-04-24T08:58:11.626949827Z level=info msg="Config overridden from command line" arg="default.paths.plugins=/var/lib/grafana/plugins" mariadb | 2024-04-24 8:58:12 0 [Warning] mariadbd: io_uring_queue_init() failed with ENOSYS: check seccomp filters, and the kernel version (newer than 5.1 required) policy-pap | Waiting for api port 6969... 
policy-apex-pdp | request.timeout.ms = 30000 policy-db-migrator | nc: connect to mariadb (172.17.0.4) port 3306 (tcp) failed: Connection refused kafka | ssl.principal.mapping.rules = DEFAULT zookeeper | ===> Running preflight checks ... prometheus | ts=2024-04-24T08:58:10.729Z caller=main.go:624 level=info fd_limits="(soft=1048576, hard=1048576)" simulator | 2024-04-24 08:58:14,326 INFO org.onap.policy.models.simulators starting CDS gRPC Server Properties grafana | logger=settings t=2024-04-24T08:58:11.626968477Z level=info msg="Config overridden from command line" arg="default.paths.provisioning=/etc/grafana/provisioning" mariadb | 2024-04-24 8:58:12 0 [Warning] InnoDB: liburing disabled: falling back to innodb_use_native_aio=OFF policy-pap | api (172.17.0.8:6969) open policy-apex-pdp | retry.backoff.ms = 100 policy-db-migrator | nc: connect to mariadb (172.17.0.4) port 3306 (tcp) failed: Connection refused kafka | ssl.protocol = TLSv1.3 zookeeper | ===> Check if /var/lib/zookeeper/data is writable ... prometheus | ts=2024-04-24T08:58:10.729Z caller=main.go:625 level=info vm_limits="(soft=unlimited, hard=unlimited)" simulator | 2024-04-24 08:58:14,546 INFO org.onap.policy.models.simulators starting org.onap.policy.simulators.AaiSimulatorJaxRs_RESOURCE_LOCATION grafana | logger=settings t=2024-04-24T08:58:11.627035868Z level=info msg="Config overridden from command line" arg="default.log.mode=console" mariadb | 2024-04-24 8:58:12 0 [Warning] 'default-authentication-plugin' is MySQL 5.6 / 5.7 compatible option. To be implemented in later versions. policy-pap | Policy pap config file: /opt/app/policy/pap/etc/papParameters.yaml policy-apex-pdp | sasl.client.callback.handler.class = null policy-db-migrator | nc: connect to mariadb (172.17.0.4) port 3306 (tcp) failed: Connection refused kafka | ssl.provider = null policy-api | Policy api config file: /opt/app/policy/api/etc/apiParameters.yaml zookeeper | ===> Check if /var/lib/zookeeper/log is writable ... prometheus | ts=2024-04-24T08:58:10.731Z caller=web.go:568 level=info component=web msg="Start listening for connections" address=0.0.0.0:9090 simulator | 2024-04-24 08:58:14,548 INFO org.onap.policy.models.simulators starting A&AI simulator grafana | logger=settings t=2024-04-24T08:58:11.627095029Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_DATA=/var/lib/grafana" mariadb | policy-pap | PDP group configuration file: /opt/app/policy/pap/etc/mounted/groups.json policy-apex-pdp | sasl.jaas.config = null policy-db-migrator | Connection to mariadb (172.17.0.4) 3306 port [tcp/mysql] succeeded! kafka | ssl.secure.random.implementation = null policy-api | zookeeper | ===> Launching ... prometheus | ts=2024-04-24T08:58:10.732Z caller=main.go:1129 level=info msg="Starting TSDB ..." 
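The ConsumerConfig block that policy-apex-pdp prints above (bootstrap.servers = [kafka:9092], group.id = 6c14929a-34c8-48a0-adf2-d542a07b4ce8, auto.offset.reset = latest, enable.auto.commit = true, key.deserializer = StringDeserializer) corresponds to a plain KafkaConsumer. A minimal sketch under those settings; the topic name and the value deserializer are assumptions, not values taken from this log:

    import java.time.Duration;
    import java.util.List;
    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerConfig;
    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.common.serialization.StringDeserializer;

    public class PdpConsumerSketch {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");                          // from the log
            props.put(ConsumerConfig.GROUP_ID_CONFIG, "6c14929a-34c8-48a0-adf2-d542a07b4ce8");         // from the log
            props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "latest");                              // from the log
            props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, "true");                               // from the log
            props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
            props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName()); // assumption
            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                consumer.subscribe(List.of("policy-pdp-pap"));  // assumed topic name, not shown in this log
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
                for (ConsumerRecord<String, String> record : records) {
                    System.out.println(record.value());
                }
            }
        }
    }

Run from another container on the same compose network, this would join the consumer group shown in the PDP log and read from the latest offsets.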
simulator | 2024-04-24 08:58:14,680 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-2e61d218==org.glassfish.jersey.servlet.ServletContainer@60d118f1{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=A&AI simulator, host=0.0.0.0, port=6666, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@64a8c844{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@3f6db3fb{/,null,STOPPED}, connector=A&AI simulator@6f152006{HTTP/1.1, (http/1.1)}{0.0.0.0:6666}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-2e61d218==org.glassfish.jersey.servlet.ServletContainer@60d118f1{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: WAITED-START grafana | logger=settings t=2024-04-24T08:58:11.627113939Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_LOGS=/var/log/grafana" mariadb | policy-pap | policy-apex-pdp | sasl.kerberos.kinit.cmd = /usr/bin/kinit policy-db-migrator | 321 blocks kafka | ssl.trustmanager.algorithm = PKIX policy-api | . ____ _ __ _ _ zookeeper | ===> Launching zookeeper ... prometheus | ts=2024-04-24T08:58:10.737Z caller=tls_config.go:313 level=info component=web msg="Listening on" address=[::]:9090 simulator | 2024-04-24 08:58:14,695 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-2e61d218==org.glassfish.jersey.servlet.ServletContainer@60d118f1{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=A&AI simulator, host=0.0.0.0, port=6666, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@64a8c844{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@3f6db3fb{/,null,STOPPED}, connector=A&AI simulator@6f152006{HTTP/1.1, (http/1.1)}{0.0.0.0:6666}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-2e61d218==org.glassfish.jersey.servlet.ServletContainer@60d118f1{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING grafana | logger=settings t=2024-04-24T08:58:11.627121409Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_PLUGINS=/var/lib/grafana/plugins" mariadb | PLEASE REMEMBER TO SET A PASSWORD FOR THE MariaDB root USER ! policy-pap | . ____ _ __ _ _ policy-apex-pdp | sasl.kerberos.min.time.before.relogin = 60000 policy-db-migrator | Preparing upgrade release version: 0800 kafka | ssl.truststore.certificates = null policy-api | /\\ / ___'_ __ _ _(_)_ __ __ _ \ \ \ \ zookeeper | [2024-04-24 08:58:18,606] INFO Reading configuration from: /etc/kafka/zookeeper.properties (org.apache.zookeeper.server.quorum.QuorumPeerConfig) prometheus | ts=2024-04-24T08:58:10.737Z caller=tls_config.go:316 level=info component=web msg="TLS is disabled." 
http2=false address=[::]:9090 simulator | 2024-04-24 08:58:14,697 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-2e61d218==org.glassfish.jersey.servlet.ServletContainer@60d118f1{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=A&AI simulator, host=0.0.0.0, port=6666, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@64a8c844{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@3f6db3fb{/,null,STOPPED}, connector=A&AI simulator@6f152006{HTTP/1.1, (http/1.1)}{0.0.0.0:6666}, jettyThread=Thread[A&AI simulator-6666,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-2e61d218==org.glassfish.jersey.servlet.ServletContainer@60d118f1{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING grafana | logger=settings t=2024-04-24T08:58:11.627181871Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_PROVISIONING=/etc/grafana/provisioning" mariadb | To do so, start the server, then issue the following command: policy-pap | /\\ / ___'_ __ _ _(_)_ __ __ _ \ \ \ \ policy-apex-pdp | sasl.kerberos.service.name = null policy-db-migrator | Preparing upgrade release version: 0900 kafka | ssl.truststore.location = null policy-api | ( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \ zookeeper | [2024-04-24 08:58:18,612] INFO clientPortAddress is 0.0.0.0:2181 (org.apache.zookeeper.server.quorum.QuorumPeerConfig) prometheus | ts=2024-04-24T08:58:10.738Z caller=head.go:616 level=info component=tsdb msg="Replaying on-disk memory mappable chunks if any" simulator | 2024-04-24 08:58:14,704 INFO jetty-11.0.20; built: 2024-01-29T21:04:22.394Z; git: 922f8dc188f7011e60d0361de585fd4ac4d63064; jvm 17.0.10+7-alpine-r0 grafana | logger=settings t=2024-04-24T08:58:11.627237722Z level=info msg=Target target=[all] mariadb | policy-pap | ( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \ policy-apex-pdp | sasl.kerberos.ticket.renew.jitter = 0.05 policy-db-migrator | Preparing upgrade release version: 1000 kafka | ssl.truststore.password = null policy-api | \\/ ___)| |_)| | | | | || (_| | ) ) ) ) zookeeper | [2024-04-24 08:58:18,612] INFO secureClientPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig) prometheus | ts=2024-04-24T08:58:10.739Z caller=head.go:698 level=info component=tsdb msg="On-disk memory mappable chunks replay completed" duration=25.66µs simulator | 2024-04-24 08:58:14,756 INFO Session workerName=node0 grafana | logger=settings t=2024-04-24T08:58:11.627277093Z level=info msg="Path Home" path=/usr/share/grafana mariadb | '/usr/bin/mysql_secure_installation' policy-pap | \\/ ___)| |_)| | | | | || (_| | ) ) ) ) policy-apex-pdp | sasl.kerberos.ticket.renew.window.factor = 0.8 policy-db-migrator | Preparing upgrade release version: 1100 kafka | ssl.truststore.type = JKS policy-api | ' |____| .__|_| |_|_| |_\__, | / / / / zookeeper | [2024-04-24 08:58:18,612] INFO observerMasterPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig) prometheus | ts=2024-04-24T08:58:10.739Z caller=head.go:706 level=info component=tsdb msg="Replaying WAL, this may take a while" simulator | 2024-04-24 08:58:15,403 INFO Using GSON for REST calls grafana | logger=settings t=2024-04-24T08:58:11.627333954Z level=info msg="Path Data" path=/var/lib/grafana mariadb | policy-pap | ' |____| .__|_| |_|_| |_\__, | / / / / policy-apex-pdp | sasl.login.callback.handler.class = null policy-db-migrator | Preparing upgrade 
release version: 1200 kafka | transaction.abort.timed.out.transaction.cleanup.interval.ms = 10000 policy-api | =========|_|==============|___/=/_/_/_/ zookeeper | [2024-04-24 08:58:18,612] INFO metricsProvider.className is org.apache.zookeeper.metrics.impl.DefaultMetricsProvider (org.apache.zookeeper.server.quorum.QuorumPeerConfig) prometheus | ts=2024-04-24T08:58:10.739Z caller=head.go:778 level=info component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0 simulator | 2024-04-24 08:58:15,492 INFO Started o.e.j.s.ServletContextHandler@3f6db3fb{/,null,AVAILABLE} grafana | logger=settings t=2024-04-24T08:58:11.627418985Z level=info msg="Path Logs" path=/var/log/grafana mariadb | which will also give you the option of removing the test policy-pap | =========|_|==============|___/=/_/_/_/ policy-apex-pdp | sasl.login.class = null policy-db-migrator | Preparing upgrade release version: 1300 kafka | transaction.max.timeout.ms = 900000 policy-api | :: Spring Boot :: (v3.1.10) zookeeper | [2024-04-24 08:58:18,614] INFO autopurge.snapRetainCount set to 3 (org.apache.zookeeper.server.DatadirCleanupManager) prometheus | ts=2024-04-24T08:58:10.739Z caller=head.go:815 level=info component=tsdb msg="WAL replay completed" checkpoint_replay_duration=46.321µs wal_replay_duration=482.4µs wbl_replay_duration=430ns total_replay_duration=686.503µs simulator | 2024-04-24 08:58:15,501 INFO Started A&AI simulator@6f152006{HTTP/1.1, (http/1.1)}{0.0.0.0:6666} grafana | logger=settings t=2024-04-24T08:58:11.627467576Z level=info msg="Path Plugins" path=/var/lib/grafana/plugins mariadb | databases and anonymous user created by default. This is policy-pap | :: Spring Boot :: (v3.1.10) policy-apex-pdp | sasl.login.connect.timeout.ms = null policy-db-migrator | Done kafka | transaction.partition.verification.enable = true policy-api | zookeeper | [2024-04-24 08:58:18,614] INFO autopurge.purgeInterval set to 0 (org.apache.zookeeper.server.DatadirCleanupManager) prometheus | ts=2024-04-24T08:58:10.742Z caller=main.go:1150 level=info fs_type=EXT4_SUPER_MAGIC simulator | 2024-04-24 08:58:15,511 INFO Started Server@64a8c844{STARTING}[11.0.20,sto=0] @1759ms grafana | logger=settings t=2024-04-24T08:58:11.627494546Z level=info msg="Path Provisioning" path=/etc/grafana/provisioning mariadb | strongly recommended for production servers. policy-pap | policy-apex-pdp | sasl.login.read.timeout.ms = null policy-db-migrator | name version kafka | transaction.remove.expired.transaction.cleanup.interval.ms = 3600000 policy-api | [2024-04-24T08:58:28.310+00:00|INFO|Version|background-preinit] HV000001: Hibernate Validator 8.0.1.Final zookeeper | [2024-04-24 08:58:18,614] INFO Purge task is not scheduled. 
(org.apache.zookeeper.server.DatadirCleanupManager) prometheus | ts=2024-04-24T08:58:10.742Z caller=main.go:1153 level=info msg="TSDB started" simulator | 2024-04-24 08:58:15,511 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-2e61d218==org.glassfish.jersey.servlet.ServletContainer@60d118f1{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}}, swaggerId=null, toString()=JettyServletServer(name=A&AI simulator, host=0.0.0.0, port=6666, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@64a8c844{STARTED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@3f6db3fb{/,null,AVAILABLE}, connector=A&AI simulator@6f152006{HTTP/1.1, (http/1.1)}{0.0.0.0:6666}, jettyThread=Thread[A&AI simulator-6666,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-2e61d218==org.glassfish.jersey.servlet.ServletContainer@60d118f1{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}})]: pending time is 4186 ms. grafana | logger=settings t=2024-04-24T08:58:11.627504316Z level=info msg="App mode production" mariadb | policy-pap | [2024-04-24T08:58:41.675+00:00|INFO|Version|background-preinit] HV000001: Hibernate Validator 8.0.1.Final policy-apex-pdp | sasl.login.refresh.buffer.seconds = 300 policy-db-migrator | policyadmin 0 kafka | transaction.state.log.load.buffer.size = 5242880 policy-api | [2024-04-24T08:58:28.365+00:00|INFO|PolicyApiApplication|main] Starting PolicyApiApplication using Java 17.0.10 with PID 20 (/app/api.jar started by policy in /opt/app/policy/api/bin) zookeeper | [2024-04-24 08:58:18,614] WARN Either no config or no quorum defined in config, running in standalone mode (org.apache.zookeeper.server.quorum.QuorumPeerMain) prometheus | ts=2024-04-24T08:58:10.742Z caller=main.go:1335 level=info msg="Loading configuration file" filename=/etc/prometheus/prometheus.yml simulator | 2024-04-24 08:58:15,515 INFO org.onap.policy.models.simulators starting SDNC simulator grafana | logger=sqlstore t=2024-04-24T08:58:11.628757818Z level=info msg="Connecting to DB" dbtype=sqlite3 mariadb | See the MariaDB Knowledgebase at https://mariadb.com/kb policy-pap | [2024-04-24T08:58:41.732+00:00|INFO|PolicyPapApplication|main] Starting PolicyPapApplication using Java 17.0.10 with PID 33 (/app/pap.jar started by policy in /opt/app/policy/pap/bin) policy-apex-pdp | sasl.login.refresh.min.period.seconds = 60 policy-db-migrator | policyadmin: upgrade available: 0 -> 1300 kafka | transaction.state.log.min.isr = 2 policy-api | [2024-04-24T08:58:28.366+00:00|INFO|PolicyApiApplication|main] No active profile set, falling back to 1 default profile: "default" zookeeper | [2024-04-24 08:58:18,616] INFO Log4j 1.2 jmx support not found; jmx disabled. 
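The simulator lines above show JettyJerseyServer bringing up an embedded Jetty 11 server per simulator (A&AI on 0.0.0.0:6666 here, with the SDNC, SO and VFC simulators following on 6668, 6669 and 6670) and mounting a Jersey org.glassfish.jersey.servlet.ServletContainer at /*. A bare-bones sketch of that wiring, assuming Jetty 11 and Jersey 3 on the classpath; the empty ResourceConfig is a placeholder for the simulator's JAX-RS resources:

    import org.eclipse.jetty.server.Server;
    import org.eclipse.jetty.server.ServerConnector;
    import org.eclipse.jetty.servlet.ServletContextHandler;
    import org.eclipse.jetty.servlet.ServletHolder;
    import org.glassfish.jersey.server.ResourceConfig;
    import org.glassfish.jersey.servlet.ServletContainer;

    public class SimulatorServerSketch {
        public static void main(String[] args) throws Exception {
            Server server = new Server();
            ServerConnector connector = new ServerConnector(server);
            connector.setHost("0.0.0.0");
            connector.setPort(6666);                       // A&AI simulator port from the log
            server.addConnector(connector);

            ServletContextHandler context = new ServletContextHandler();
            context.setContextPath("/");
            ResourceConfig config = new ResourceConfig();  // hypothetical: register simulator JAX-RS resources here
            context.addServlet(new ServletHolder(new ServletContainer(config)), "/*");
            server.setHandler(context);

            server.start();                                // corresponds to "Started A&AI simulator@...{0.0.0.0:6666}"
            server.join();
        }
    }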
(org.apache.zookeeper.jmx.ManagedUtil) prometheus | ts=2024-04-24T08:58:10.743Z caller=main.go:1372 level=info msg="Completed loading of configuration file" filename=/etc/prometheus/prometheus.yml totalDuration=1.04463ms db_storage=1.54µs remote_storage=2.79µs web_handler=550ns query_engine=1.12µs scrape=290.075µs scrape_sd=127.273µs notify=27.141µs notify_sd=12.14µs rules=2.53µs tracing=4.9µs simulator | 2024-04-24 08:58:15,517 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-bf1ec20==org.glassfish.jersey.servlet.ServletContainer@636d8afb{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SDNC simulator, host=0.0.0.0, port=6668, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@70efb718{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@b70da4c{/,null,STOPPED}, connector=SDNC simulator@c5ee75e{HTTP/1.1, (http/1.1)}{0.0.0.0:6668}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-bf1ec20==org.glassfish.jersey.servlet.ServletContainer@636d8afb{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: WAITED-START grafana | logger=sqlstore t=2024-04-24T08:58:11.628842659Z level=info msg="Creating SQLite database file" path=/var/lib/grafana/grafana.db mariadb | policy-pap | [2024-04-24T08:58:41.733+00:00|INFO|PolicyPapApplication|main] No active profile set, falling back to 1 default profile: "default" policy-apex-pdp | sasl.login.refresh.window.factor = 0.8 policy-db-migrator | upgrade: 0 -> 1300 kafka | transaction.state.log.num.partitions = 50 policy-api | [2024-04-24T08:58:30.264+00:00|INFO|RepositoryConfigurationDelegate|main] Bootstrapping Spring Data JPA repositories in DEFAULT mode. zookeeper | [2024-04-24 08:58:18,616] INFO Reading configuration from: /etc/kafka/zookeeper.properties (org.apache.zookeeper.server.quorum.QuorumPeerConfig) prometheus | ts=2024-04-24T08:58:10.743Z caller=main.go:1114 level=info msg="Server is ready to receive web requests." simulator | 2024-04-24 08:58:15,517 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-bf1ec20==org.glassfish.jersey.servlet.ServletContainer@636d8afb{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SDNC simulator, host=0.0.0.0, port=6668, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@70efb718{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@b70da4c{/,null,STOPPED}, connector=SDNC simulator@c5ee75e{HTTP/1.1, (http/1.1)}{0.0.0.0:6668}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-bf1ec20==org.glassfish.jersey.servlet.ServletContainer@636d8afb{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING grafana | logger=migrator t=2024-04-24T08:58:11.631354072Z level=info msg="Starting DB migrations" mariadb | Please report any problems at https://mariadb.org/jira policy-pap | [2024-04-24T08:58:43.621+00:00|INFO|RepositoryConfigurationDelegate|main] Bootstrapping Spring Data JPA repositories in DEFAULT mode. policy-apex-pdp | sasl.login.refresh.window.jitter = 0.05 policy-db-migrator | kafka | transaction.state.log.replication.factor = 3 policy-api | [2024-04-24T08:58:30.338+00:00|INFO|RepositoryConfigurationDelegate|main] Finished Spring Data repository scanning in 66 ms. Found 6 JPA repository interfaces. 
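Prometheus above reports listening on [::]:9090, replaying the TSDB WAL and loading /etc/prometheus/prometheus.yml before printing "Server is ready to receive web requests." A quick readiness probe against its standard /-/ready endpoint, sketched with java.net.http; the hostname assumes the compose service is reachable as "prometheus", which is not stated in this log:

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    public class PrometheusReadySketch {
        public static void main(String[] args) throws Exception {
            HttpClient client = HttpClient.newHttpClient();
            HttpRequest request = HttpRequest.newBuilder()
                    .uri(URI.create("http://prometheus:9090/-/ready"))  // assumed in-network hostname
                    .GET()
                    .build();
            HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());
            // Prometheus answers 200 with a short text body once the config load and TSDB startup are done.
            System.out.println(response.statusCode() + " " + response.body().trim());
        }
    }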
zookeeper | [2024-04-24 08:58:18,616] INFO clientPortAddress is 0.0.0.0:2181 (org.apache.zookeeper.server.quorum.QuorumPeerConfig) prometheus | ts=2024-04-24T08:58:10.743Z caller=manager.go:163 level=info component="rule manager" msg="Starting rule manager..." simulator | 2024-04-24 08:58:15,522 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-bf1ec20==org.glassfish.jersey.servlet.ServletContainer@636d8afb{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SDNC simulator, host=0.0.0.0, port=6668, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@70efb718{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@b70da4c{/,null,STOPPED}, connector=SDNC simulator@c5ee75e{HTTP/1.1, (http/1.1)}{0.0.0.0:6668}, jettyThread=Thread[SDNC simulator-6668,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-bf1ec20==org.glassfish.jersey.servlet.ServletContainer@636d8afb{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING grafana | logger=migrator t=2024-04-24T08:58:11.633260824Z level=info msg="Executing migration" id="create migration_log table" policy-pap | [2024-04-24T08:58:43.708+00:00|INFO|RepositoryConfigurationDelegate|main] Finished Spring Data repository scanning in 78 ms. Found 7 JPA repository interfaces. policy-apex-pdp | sasl.login.retry.backoff.max.ms = 10000 policy-db-migrator | > upgrade 0100-jpapdpgroup_properties.sql kafka | transaction.state.log.segment.bytes = 104857600 policy-api | [2024-04-24T08:58:30.735+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.api.main.exception.ServiceExceptionHandler zookeeper | [2024-04-24 08:58:18,616] INFO secureClientPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig) simulator | 2024-04-24 08:58:15,524 INFO jetty-11.0.20; built: 2024-01-29T21:04:22.394Z; git: 922f8dc188f7011e60d0361de585fd4ac4d63064; jvm 17.0.10+7-alpine-r0 grafana | logger=migrator t=2024-04-24T08:58:11.634443364Z level=info msg="Migration successfully executed" id="create migration_log table" duration=1.18311ms mariadb | policy-pap | [2024-04-24T08:58:44.134+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.exception.ServiceExceptionHandler policy-apex-pdp | sasl.login.retry.backoff.ms = 100 policy-db-migrator | -------------- kafka | transactional.id.expiration.ms = 604800000 policy-api | [2024-04-24T08:58:30.735+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.api.main.exception.ServiceExceptionHandler zookeeper | [2024-04-24 08:58:18,616] INFO observerMasterPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig) simulator | 2024-04-24 08:58:15,547 INFO Session workerName=node0 grafana | logger=migrator t=2024-04-24T08:58:11.638823848Z level=info msg="Executing migration" id="create user table" mariadb | The latest information about MariaDB is available at https://mariadb.org/. 
policy-pap | [2024-04-24T08:58:44.134+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.exception.ServiceExceptionHandler policy-apex-pdp | sasl.mechanism = GSSAPI policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpgroup_properties (name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, PROPERTIES VARCHAR(255) DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) DEFAULT NULL) kafka | unclean.leader.election.enable = false policy-api | [2024-04-24T08:58:31.395+00:00|INFO|TomcatWebServer|main] Tomcat initialized with port(s): 6969 (http) zookeeper | [2024-04-24 08:58:18,616] INFO metricsProvider.className is org.apache.zookeeper.metrics.impl.DefaultMetricsProvider (org.apache.zookeeper.server.quorum.QuorumPeerConfig) simulator | 2024-04-24 08:58:15,604 INFO Using GSON for REST calls grafana | logger=migrator t=2024-04-24T08:58:11.639454469Z level=info msg="Migration successfully executed" id="create user table" duration=632.441µs mariadb | policy-pap | [2024-04-24T08:58:44.774+00:00|INFO|TomcatWebServer|main] Tomcat initialized with port(s): 6969 (http) policy-apex-pdp | sasl.oauthbearer.clock.skew.seconds = 30 policy-db-migrator | -------------- kafka | unstable.api.versions.enable = false policy-api | [2024-04-24T08:58:31.406+00:00|INFO|Http11NioProtocol|main] Initializing ProtocolHandler ["http-nio-6969"] zookeeper | [2024-04-24 08:58:18,616] INFO Starting server (org.apache.zookeeper.server.ZooKeeperServerMain) simulator | 2024-04-24 08:58:15,615 INFO Started o.e.j.s.ServletContextHandler@b70da4c{/,null,AVAILABLE} grafana | logger=migrator t=2024-04-24T08:58:11.645353769Z level=info msg="Executing migration" id="add unique index user.login" mariadb | Consider joining MariaDB's strong and vibrant community: policy-pap | [2024-04-24T08:58:44.784+00:00|INFO|Http11NioProtocol|main] Initializing ProtocolHandler ["http-nio-6969"] policy-apex-pdp | sasl.oauthbearer.expected.audience = null policy-db-migrator | kafka | zookeeper.clientCnxnSocket = null policy-api | [2024-04-24T08:58:31.408+00:00|INFO|StandardService|main] Starting service [Tomcat] zookeeper | [2024-04-24 08:58:18,631] INFO ServerMetrics initialized with provider org.apache.zookeeper.metrics.impl.DefaultMetricsProvider@3246fb96 (org.apache.zookeeper.server.ServerMetrics) simulator | 2024-04-24 08:58:15,618 INFO Started SDNC simulator@c5ee75e{HTTP/1.1, (http/1.1)}{0.0.0.0:6668} grafana | logger=migrator t=2024-04-24T08:58:11.646159592Z level=info msg="Migration successfully executed" id="add unique index user.login" duration=808.844µs mariadb | https://mariadb.org/get-involved/ policy-pap | [2024-04-24T08:58:44.786+00:00|INFO|StandardService|main] Starting service [Tomcat] policy-apex-pdp | sasl.oauthbearer.expected.issuer = null policy-db-migrator | kafka | zookeeper.connect = zookeeper:2181 policy-api | [2024-04-24T08:58:31.408+00:00|INFO|StandardEngine|main] Starting Servlet engine: [Apache Tomcat/10.1.19] zookeeper | [2024-04-24 08:58:18,635] INFO ACL digest algorithm is: SHA1 (org.apache.zookeeper.server.auth.DigestAuthenticationProvider) simulator | 2024-04-24 08:58:15,618 INFO Started Server@70efb718{STARTING}[11.0.20,sto=0] @1866ms grafana | logger=migrator t=2024-04-24T08:58:11.65016414Z level=info msg="Executing migration" id="add unique index user.email" mariadb | policy-pap | [2024-04-24T08:58:44.786+00:00|INFO|StandardEngine|main] 
Starting Servlet engine: [Apache Tomcat/10.1.19] policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 policy-db-migrator | > upgrade 0110-jpapdpstatistics_enginestats.sql kafka | zookeeper.connection.timeout.ms = null policy-api | [2024-04-24T08:58:31.495+00:00|INFO|[/policy/api/v1]|main] Initializing Spring embedded WebApplicationContext zookeeper | [2024-04-24 08:58:18,635] INFO zookeeper.DigestAuthenticationProvider.enabled = true (org.apache.zookeeper.server.auth.DigestAuthenticationProvider) grafana | logger=migrator t=2024-04-24T08:58:11.651019855Z level=info msg="Migration successfully executed" id="add unique index user.email" duration=854.036µs simulator | 2024-04-24 08:58:15,618 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-bf1ec20==org.glassfish.jersey.servlet.ServletContainer@636d8afb{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}}, swaggerId=null, toString()=JettyServletServer(name=SDNC simulator, host=0.0.0.0, port=6668, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@70efb718{STARTED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@b70da4c{/,null,AVAILABLE}, connector=SDNC simulator@c5ee75e{HTTP/1.1, (http/1.1)}{0.0.0.0:6668}, jettyThread=Thread[SDNC simulator-6668,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-bf1ec20==org.glassfish.jersey.servlet.ServletContainer@636d8afb{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}})]: pending time is 4900 ms. mariadb | 2024-04-24 08:58:14+00:00 [Note] [Entrypoint]: Database files initialized policy-pap | [2024-04-24T08:58:44.880+00:00|INFO|[/policy/pap/v1]|main] Initializing Spring embedded WebApplicationContext policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 policy-db-migrator | -------------- kafka | zookeeper.max.in.flight.requests = 10 policy-api | [2024-04-24T08:58:31.495+00:00|INFO|ServletWebServerApplicationContext|main] Root WebApplicationContext: initialization completed in 3062 ms zookeeper | [2024-04-24 08:58:18,637] INFO zookeeper.snapshot.trust.empty : false (org.apache.zookeeper.server.persistence.FileTxnSnapLog) grafana | logger=migrator t=2024-04-24T08:58:11.654987171Z level=info msg="Executing migration" id="drop index UQE_user_login - v1" simulator | 2024-04-24 08:58:15,619 INFO org.onap.policy.models.simulators starting SO simulator mariadb | 2024-04-24 08:58:14+00:00 [Note] [Entrypoint]: Starting temporary server policy-pap | [2024-04-24T08:58:44.880+00:00|INFO|ServletWebServerApplicationContext|main] Root WebApplicationContext: initialization completed in 3080 ms policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpstatistics_enginestats (AVERAGEEXECUTIONTIME DOUBLE DEFAULT NULL, ENGINEID VARCHAR(255) DEFAULT NULL, ENGINETIMESTAMP BIGINT DEFAULT NULL, ENGINEWORKERSTATE INT DEFAULT NULL, EVENTCOUNT BIGINT DEFAULT NULL, LASTENTERTIME BIGINT DEFAULT NULL, LASTEXECUTIONTIME BIGINT DEFAULT NULL, LASTSTART BIGINT DEFAULT NULL, UPTIME BIGINT DEFAULT NULL, timeStamp datetime DEFAULT NULL, name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL) kafka | zookeeper.metadata.migration.enable = false policy-api | [2024-04-24T08:58:31.890+00:00|INFO|LogHelper|main] HHH000204: Processing PersistenceUnitInfo [name: default] zookeeper | [2024-04-24 08:58:18,649] INFO (org.apache.zookeeper.server.ZooKeeperServer) grafana | logger=migrator t=2024-04-24T08:58:11.656031659Z 
level=info msg="Migration successfully executed" id="drop index UQE_user_login - v1" duration=1.043698ms simulator | 2024-04-24 08:58:15,623 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-4e70a728==org.glassfish.jersey.servlet.ServletContainer@3238e994{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SO simulator, host=0.0.0.0, port=6669, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@b7838a9{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@696f0212{/,null,STOPPED}, connector=SO simulator@4e858e0a{HTTP/1.1, (http/1.1)}{0.0.0.0:6669}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-4e70a728==org.glassfish.jersey.servlet.ServletContainer@3238e994{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: WAITED-START mariadb | 2024-04-24 08:58:14+00:00 [Note] [Entrypoint]: Waiting for server startup policy-pap | [2024-04-24T08:58:45.319+00:00|INFO|LogHelper|main] HHH000204: Processing PersistenceUnitInfo [name: default] policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.url = null policy-db-migrator | -------------- kafka | zookeeper.metadata.migration.min.batch.size = 200 policy-api | [2024-04-24T08:58:31.967+00:00|INFO|Version|main] HHH000412: Hibernate ORM core version 6.3.2.Final zookeeper | [2024-04-24 08:58:18,649] INFO ______ _ (org.apache.zookeeper.server.ZooKeeperServer) grafana | logger=migrator t=2024-04-24T08:58:11.65959456Z level=info msg="Executing migration" id="drop index UQE_user_email - v1" simulator | 2024-04-24 08:58:15,623 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-4e70a728==org.glassfish.jersey.servlet.ServletContainer@3238e994{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SO simulator, host=0.0.0.0, port=6669, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@b7838a9{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@696f0212{/,null,STOPPED}, connector=SO simulator@4e858e0a{HTTP/1.1, (http/1.1)}{0.0.0.0:6669}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-4e70a728==org.glassfish.jersey.servlet.ServletContainer@3238e994{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING mariadb | 2024-04-24 8:58:14 0 [Note] mariadbd (server 10.10.2-MariaDB-1:10.10.2+maria~ubu2204) starting as process 96 ... 
policy-pap | [2024-04-24T08:58:45.376+00:00|INFO|Version|main] HHH000412: Hibernate ORM core version 5.6.15.Final policy-apex-pdp | sasl.oauthbearer.scope.claim.name = scope policy-db-migrator | kafka | zookeeper.session.timeout.ms = 18000 policy-api | [2024-04-24T08:58:32.011+00:00|INFO|RegionFactoryInitiator|main] HHH000026: Second-level cache disabled zookeeper | [2024-04-24 08:58:18,649] INFO |___ / | | (org.apache.zookeeper.server.ZooKeeperServer) grafana | logger=migrator t=2024-04-24T08:58:11.660281241Z level=info msg="Migration successfully executed" id="drop index UQE_user_email - v1" duration=686.171µs simulator | 2024-04-24 08:58:15,625 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-4e70a728==org.glassfish.jersey.servlet.ServletContainer@3238e994{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SO simulator, host=0.0.0.0, port=6669, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@b7838a9{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@696f0212{/,null,STOPPED}, connector=SO simulator@4e858e0a{HTTP/1.1, (http/1.1)}{0.0.0.0:6669}, jettyThread=Thread[SO simulator-6669,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-4e70a728==org.glassfish.jersey.servlet.ServletContainer@3238e994{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING mariadb | 2024-04-24 8:58:14 0 [Note] InnoDB: Compressed tables use zlib 1.2.11 policy-pap | [2024-04-24T08:58:45.796+00:00|INFO|HikariDataSource|main] HikariPool-1 - Starting... policy-apex-pdp | sasl.oauthbearer.sub.claim.name = sub policy-db-migrator | kafka | zookeeper.set.acl = false zookeeper | [2024-04-24 08:58:18,649] INFO / / ___ ___ | | __ ___ ___ _ __ ___ _ __ (org.apache.zookeeper.server.ZooKeeperServer) simulator | 2024-04-24 08:58:15,626 INFO jetty-11.0.20; built: 2024-01-29T21:04:22.394Z; git: 922f8dc188f7011e60d0361de585fd4ac4d63064; jvm 17.0.10+7-alpine-r0 mariadb | 2024-04-24 8:58:14 0 [Note] InnoDB: Number of transaction pools: 1 policy-pap | [2024-04-24T08:58:45.901+00:00|INFO|HikariPool|main] HikariPool-1 - Added connection org.mariadb.jdbc.Connection@4ee5b2d9 policy-apex-pdp | sasl.oauthbearer.token.endpoint.url = null policy-db-migrator | > upgrade 0120-jpapdpsubgroup_policies.sql kafka | zookeeper.ssl.cipher.suites = null grafana | logger=migrator t=2024-04-24T08:58:11.66553615Z level=info msg="Executing migration" id="Rename table user to user_v1 - v1" zookeeper | [2024-04-24 08:58:18,649] INFO / / / _ \ / _ \ | |/ / / _ \ / _ \ | '_ \ / _ \ | '__| (org.apache.zookeeper.server.ZooKeeperServer) simulator | 2024-04-24 08:58:15,660 INFO Session workerName=node0 mariadb | 2024-04-24 8:58:14 0 [Note] InnoDB: Using crc32 + pclmulqdq instructions policy-pap | [2024-04-24T08:58:45.903+00:00|INFO|HikariDataSource|main] HikariPool-1 - Start completed. 
policy-apex-pdp | security.protocol = PLAINTEXT policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-24T08:58:11.667923311Z level=info msg="Migration successfully executed" id="Rename table user to user_v1 - v1" duration=2.388011ms grafana | logger=migrator t=2024-04-24T08:58:11.671044343Z level=info msg="Executing migration" id="create user table v2" zookeeper | [2024-04-24 08:58:18,649] INFO / /__ | (_) | | (_) | | < | __/ | __/ | |_) | | __/ | | (org.apache.zookeeper.server.ZooKeeperServer) simulator | 2024-04-24 08:58:15,733 INFO Using GSON for REST calls mariadb | 2024-04-24 8:58:14 0 [Note] mariadbd: O_TMPFILE is not supported on /tmp (disabling future attempts) policy-pap | [2024-04-24T08:58:45.934+00:00|INFO|Dialect|main] HHH000400: Using dialect: org.hibernate.dialect.MariaDB106Dialect policy-apex-pdp | security.providers = null policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpsubgroup_policies (name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, parentLocalName VARCHAR(120) DEFAULT NULL, localName VARCHAR(120) DEFAULT NULL, parentKeyVersion VARCHAR(15) DEFAULT NULL, parentKeyName VARCHAR(120) DEFAULT NULL) grafana | logger=migrator t=2024-04-24T08:58:11.671916508Z level=info msg="Migration successfully executed" id="create user table v2" duration=871.655µs grafana | logger=migrator t=2024-04-24T08:58:11.675094542Z level=info msg="Executing migration" id="create index UQE_user_login - v2" zookeeper | [2024-04-24 08:58:18,649] INFO /_____| \___/ \___/ |_|\_\ \___| \___| | .__/ \___| |_| (org.apache.zookeeper.server.ZooKeeperServer) simulator | 2024-04-24 08:58:15,748 INFO Started o.e.j.s.ServletContextHandler@696f0212{/,null,AVAILABLE} mariadb | 2024-04-24 8:58:14 0 [Warning] mariadbd: io_uring_queue_init() failed with ENOSYS: check seccomp filters, and the kernel version (newer than 5.1 required) policy-pap | [2024-04-24T08:58:47.521+00:00|INFO|JtaPlatformInitiator|main] HHH000490: Using JtaPlatform implementation: [org.hibernate.engine.transaction.jta.platform.internal.NoJtaPlatform] policy-apex-pdp | send.buffer.bytes = 131072 policy-db-migrator | -------------- kafka | zookeeper.ssl.client.enable = false grafana | logger=migrator t=2024-04-24T08:58:11.676253222Z level=info msg="Migration successfully executed" id="create index UQE_user_login - v2" duration=1.15614ms policy-api | [2024-04-24T08:58:32.290+00:00|INFO|SpringPersistenceUnitInfo|main] No LoadTimeWeaver setup: ignoring JPA class transformer zookeeper | [2024-04-24 08:58:18,649] INFO | | (org.apache.zookeeper.server.ZooKeeperServer) simulator | 2024-04-24 08:58:15,749 INFO Started SO simulator@4e858e0a{HTTP/1.1, (http/1.1)}{0.0.0.0:6669} mariadb | 2024-04-24 8:58:14 0 [Warning] InnoDB: liburing disabled: falling back to innodb_use_native_aio=OFF policy-pap | [2024-04-24T08:58:47.531+00:00|INFO|LocalContainerEntityManagerFactoryBean|main] Initialized JPA EntityManagerFactory for persistence unit 'default' policy-apex-pdp | session.timeout.ms = 45000 policy-db-migrator | kafka | zookeeper.ssl.crl.enable = false grafana | logger=migrator t=2024-04-24T08:58:11.681934798Z level=info msg="Executing migration" id="create index UQE_user_email - v2" policy-api | [2024-04-24T08:58:32.320+00:00|INFO|HikariDataSource|main] HikariPool-1 - Starting... 
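policy-pap's startup above goes through HikariCP ("HikariPool-1 - Added connection org.mariadb.jdbc.Connection@...", "Start completed") and Hibernate with the MariaDB106Dialect before the JPA EntityManagerFactory is initialized. A minimal HikariCP sketch against the same mariadb container; the schema name follows the db-migrator output, while the credentials are placeholders rather than values from this log:

    import com.zaxxer.hikari.HikariConfig;
    import com.zaxxer.hikari.HikariDataSource;
    import java.sql.Connection;

    public class PapDataSourceSketch {
        public static void main(String[] args) throws Exception {
            HikariConfig config = new HikariConfig();
            config.setJdbcUrl("jdbc:mariadb://mariadb:3306/policyadmin");  // host/port from the log, schema assumed
            config.setUsername("policy_user");                            // placeholder credentials
            config.setPassword("CHANGE_ME");                              // placeholder credentials
            config.setDriverClassName("org.mariadb.jdbc.Driver");         // the driver the pool log names
            try (HikariDataSource dataSource = new HikariDataSource(config);
                 Connection connection = dataSource.getConnection()) {
                // Mirrors "HikariPool-1 - Added connection org.mariadb.jdbc.Connection@..."
                System.out.println("Connected via " + connection.getMetaData().getDriverName());
            }
        }
    }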
zookeeper | [2024-04-24 08:58:18,649] INFO |_| (org.apache.zookeeper.server.ZooKeeperServer) simulator | 2024-04-24 08:58:15,749 INFO Started Server@b7838a9{STARTING}[11.0.20,sto=0] @1997ms mariadb | 2024-04-24 8:58:14 0 [Note] InnoDB: Initializing buffer pool, total size = 128.000MiB, chunk size = 2.000MiB policy-pap | [2024-04-24T08:58:48.070+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.repository.PdpGroupRepository policy-apex-pdp | socket.connection.setup.timeout.max.ms = 30000 policy-db-migrator | kafka | zookeeper.ssl.enabled.protocols = null grafana | logger=migrator t=2024-04-24T08:58:11.683879231Z level=info msg="Migration successfully executed" id="create index UQE_user_email - v2" duration=1.944803ms policy-api | [2024-04-24T08:58:32.409+00:00|INFO|HikariPool|main] HikariPool-1 - Added connection org.mariadb.jdbc.Connection@1f0b3cfe zookeeper | [2024-04-24 08:58:18,649] INFO (org.apache.zookeeper.server.ZooKeeperServer) simulator | 2024-04-24 08:58:15,749 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-4e70a728==org.glassfish.jersey.servlet.ServletContainer@3238e994{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}}, swaggerId=null, toString()=JettyServletServer(name=SO simulator, host=0.0.0.0, port=6669, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@b7838a9{STARTED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@696f0212{/,null,AVAILABLE}, connector=SO simulator@4e858e0a{HTTP/1.1, (http/1.1)}{0.0.0.0:6669}, jettyThread=Thread[SO simulator-6669,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-4e70a728==org.glassfish.jersey.servlet.ServletContainer@3238e994{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}})]: pending time is 4876 ms. policy-pap | [2024-04-24T08:58:48.508+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.repository.PolicyStatusRepository policy-apex-pdp | socket.connection.setup.timeout.ms = 10000 policy-db-migrator | > upgrade 0130-jpapdpsubgroup_properties.sql kafka | zookeeper.ssl.endpoint.identification.algorithm = HTTPS grafana | logger=migrator t=2024-04-24T08:58:11.687288888Z level=info msg="Executing migration" id="copy data_source v1 to v2" policy-api | [2024-04-24T08:58:32.410+00:00|INFO|HikariDataSource|main] HikariPool-1 - Start completed. zookeeper | [2024-04-24 08:58:18,651] INFO Server environment:zookeeper.version=3.8.4-9316c2a7a97e1666d8f4593f34dd6fc36ecc436c, built on 2024-02-12 22:16 UTC (org.apache.zookeeper.server.ZooKeeperServer) simulator | 2024-04-24 08:58:15,750 INFO org.onap.policy.models.simulators starting VFC simulator mariadb | 2024-04-24 8:58:14 0 [Note] InnoDB: Completed initialization of buffer pool policy-pap | [2024-04-24T08:58:48.629+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. 
Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.repository.PolicyAuditRepository policy-apex-pdp | ssl.cipher.suites = null policy-db-migrator | -------------- kafka | zookeeper.ssl.keystore.location = null grafana | logger=migrator t=2024-04-24T08:58:11.687622324Z level=info msg="Migration successfully executed" id="copy data_source v1 to v2" duration=333.136µs policy-api | [2024-04-24T08:58:34.497+00:00|INFO|JtaPlatformInitiator|main] HHH000489: No JTA platform available (set 'hibernate.transaction.jta.platform' to enable JTA platform integration) zookeeper | [2024-04-24 08:58:18,651] INFO Server environment:host.name=3a03b3ec39eb (org.apache.zookeeper.server.ZooKeeperServer) simulator | 2024-04-24 08:58:15,753 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-47a86fbb==org.glassfish.jersey.servlet.ServletContainer@f8e3b478{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=VFC simulator, host=0.0.0.0, port=6670, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@f478a81{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@19553973{/,null,STOPPED}, connector=VFC simulator@1c3146bc{HTTP/1.1, (http/1.1)}{0.0.0.0:6670}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-47a86fbb==org.glassfish.jersey.servlet.ServletContainer@f8e3b478{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: WAITED-START mariadb | 2024-04-24 8:58:14 0 [Note] InnoDB: File system buffers for log disabled (block size=512 bytes) policy-pap | [2024-04-24T08:58:48.894+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: policy-apex-pdp | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpsubgroup_properties (parentLocalName VARCHAR(120) DEFAULT NULL, localName VARCHAR(120) DEFAULT NULL, parentKeyVersion VARCHAR(15) DEFAULT NULL, parentKeyName VARCHAR(120) DEFAULT NULL, PROPERTIES VARCHAR(255) DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) DEFAULT NULL) kafka | zookeeper.ssl.keystore.password = null grafana | logger=migrator t=2024-04-24T08:58:11.689885172Z level=info msg="Executing migration" id="Drop old table user_v1" policy-api | [2024-04-24T08:58:34.500+00:00|INFO|LocalContainerEntityManagerFactoryBean|main] Initialized JPA EntityManagerFactory for persistence unit 'default' zookeeper | [2024-04-24 08:58:18,651] INFO Server environment:java.version=11.0.22 (org.apache.zookeeper.server.ZooKeeperServer) simulator | 2024-04-24 08:58:15,753 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-47a86fbb==org.glassfish.jersey.servlet.ServletContainer@f8e3b478{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=VFC simulator, host=0.0.0.0, port=6670, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@f478a81{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@19553973{/,null,STOPPED}, connector=VFC simulator@1c3146bc{HTTP/1.1, (http/1.1)}{0.0.0.0:6670}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-47a86fbb==org.glassfish.jersey.servlet.ServletContainer@f8e3b478{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING mariadb | 2024-04-24 8:58:14 0 [Note] InnoDB: 128 rollback segments are active. 
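The db-migrator output above waits for mariadb with nc, reports "policyadmin: upgrade available: 0 -> 1300", and then walks through the upgrade scripts (0100-jpapdpgroup_properties.sql, 0110-jpapdpstatistics_enginestats.sql, 0120-jpapdpsubgroup_policies.sql, 0130-jpapdpsubgroup_properties.sql), each wrapping a CREATE TABLE IF NOT EXISTS statement. A JDBC sketch that runs one of those statements exactly as logged; the connection details are the same placeholders used in the HikariCP sketch above:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    public class MigrationStepSketch {
        public static void main(String[] args) throws Exception {
            String url = "jdbc:mariadb://mariadb:3306/policyadmin";  // host/port from the log, schema assumed
            try (Connection connection = DriverManager.getConnection(url, "policy_user", "CHANGE_ME"); // placeholders
                 Statement statement = connection.createStatement()) {
                // Statement copied from the 0100-jpapdpgroup_properties.sql step in the log.
                statement.execute("CREATE TABLE IF NOT EXISTS jpapdpgroup_properties ("
                        + "name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, "
                        + "PROPERTIES VARCHAR(255) DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) DEFAULT NULL)");
                System.out.println("0100-jpapdpgroup_properties.sql applied");
            }
        }
    }

Because the scripts use IF NOT EXISTS, re-running a step like this is idempotent, which matches the migrator's 0 -> 1300 upgrade bookkeeping.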
policy-pap | allow.auto.create.topics = true policy-apex-pdp | ssl.endpoint.identification.algorithm = https policy-db-migrator | -------------- kafka | zookeeper.ssl.keystore.type = null grafana | logger=migrator t=2024-04-24T08:58:11.690436732Z level=info msg="Migration successfully executed" id="Drop old table user_v1" duration=551.41µs policy-api | [2024-04-24T08:58:35.559+00:00|WARN|ApiDatabaseInitializer|main] Detected multi-versioned type: policytypes/onap.policies.monitoring.tcagen2.v2.yaml zookeeper | [2024-04-24 08:58:18,651] INFO Server environment:java.vendor=Azul Systems, Inc. (org.apache.zookeeper.server.ZooKeeperServer) simulator | 2024-04-24 08:58:15,754 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-47a86fbb==org.glassfish.jersey.servlet.ServletContainer@f8e3b478{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=VFC simulator, host=0.0.0.0, port=6670, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@f478a81{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@19553973{/,null,STOPPED}, connector=VFC simulator@1c3146bc{HTTP/1.1, (http/1.1)}{0.0.0.0:6670}, jettyThread=Thread[VFC simulator-6670,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-47a86fbb==org.glassfish.jersey.servlet.ServletContainer@f8e3b478{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING mariadb | 2024-04-24 8:58:14 0 [Note] InnoDB: Setting file './ibtmp1' size to 12.000MiB. Physically writing the file full; Please wait ... policy-pap | auto.commit.interval.ms = 5000 policy-apex-pdp | ssl.engine.factory.class = null policy-db-migrator | kafka | zookeeper.ssl.ocsp.enable = false grafana | logger=migrator t=2024-04-24T08:58:11.695478357Z level=info msg="Executing migration" id="Add column help_flags1 to user table" policy-api | [2024-04-24T08:58:36.348+00:00|INFO|ApiDatabaseInitializer|main] Multi-versioned Service Template [onap.policies.Monitoring, onap.policies.monitoring.tcagen2] zookeeper | [2024-04-24 08:58:18,651] INFO Server environment:java.home=/usr/lib/jvm/java-11-zulu-openjdk-ca (org.apache.zookeeper.server.ZooKeeperServer) simulator | 2024-04-24 08:58:15,755 INFO jetty-11.0.20; built: 2024-01-29T21:04:22.394Z; git: 922f8dc188f7011e60d0361de585fd4ac4d63064; jvm 17.0.10+7-alpine-r0 mariadb | 2024-04-24 8:58:14 0 [Note] InnoDB: File './ibtmp1' size is now 12.000MiB. policy-pap | auto.include.jmx.reporter = true policy-apex-pdp | ssl.key.password = null policy-db-migrator | kafka | zookeeper.ssl.protocol = TLSv1.2 grafana | logger=migrator t=2024-04-24T08:58:11.696602416Z level=info msg="Migration successfully executed" id="Add column help_flags1 to user table" duration=1.122499ms policy-api | [2024-04-24T08:58:37.404+00:00|WARN|JpaBaseConfiguration$JpaWebConfiguration|main] spring.jpa.open-in-view is enabled by default. Therefore, database queries may be performed during view rendering. 
Explicitly configure spring.jpa.open-in-view to disable this warning simulator | 2024-04-24 08:58:15,767 INFO Session workerName=node0 zookeeper | [2024-04-24 08:58:18,651] INFO Server environment:java.class.path=/usr/bin/../share/java/kafka/scala-logging_2.13-3.9.4.jar:/usr/bin/../share/java/kafka/jersey-common-2.39.1.jar:/usr/bin/../share/java/kafka/swagger-annotations-2.2.8.jar:/usr/bin/../share/java/kafka/connect-json-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/kafka-storage-api-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/commons-validator-1.7.jar:/usr/bin/../share/java/kafka/javax.servlet-api-3.1.0.jar:/usr/bin/../share/java/kafka/aopalliance-repackaged-2.6.1.jar:/usr/bin/../share/java/kafka/netty-transport-4.1.100.Final.jar:/usr/bin/../share/java/kafka/rocksdbjni-7.9.2.jar:/usr/bin/../share/java/kafka/jetty-http-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/kafka-log4j-appender-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jackson-annotations-2.13.5.jar:/usr/bin/../share/java/kafka/commons-io-2.11.0.jar:/usr/bin/../share/java/kafka/kafka-clients-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/javax.activation-api-1.2.0.jar:/usr/bin/../share/java/kafka/jetty-security-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/commons-cli-1.4.jar:/usr/bin/../share/java/kafka/jetty-server-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/slf4j-reload4j-1.7.36.jar:/usr/bin/../share/java/kafka/scala-reflect-2.13.11.jar:/usr/bin/../share/java/kafka/kafka-server-common-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/kafka-group-coordinator-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/netty-buffer-4.1.100.Final.jar:/usr/bin/../share/java/kafka/connect-runtime-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/trogdor-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/connect-api-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jakarta.ws.rs-api-2.1.6.jar:/usr/bin/../share/java/kafka/jakarta.annotation-api-1.3.5.jar:/usr/bin/../share/java/kafka/kafka-streams-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/scala-java8-compat_2.13-1.0.2.jar:/usr/bin/../share/java/kafka/javax.ws.rs-api-2.1.1.jar:/usr/bin/../share/java/kafka/zookeeper-jute-3.8.4.jar:/usr/bin/../share/java/kafka/netty-resolver-4.1.100.Final.jar:/usr/bin/../share/java/kafka/hk2-api-2.6.1.jar:/usr/bin/../share/java/kafka/kafka-storage-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/zstd-jni-1.5.5-1.jar:/usr/bin/../share/java/kafka/jackson-dataformat-csv-2.13.5.jar:/usr/bin/../share/java/kafka/kafka.jar:/usr/bin/../share/java/kafka/scala-library-2.13.11.jar:/usr/bin/../share/java/kafka/jakarta.inject-2.6.1.jar:/usr/bin/../share/java/kafka/jakarta.xml.bind-api-2.3.3.jar:/usr/bin/../share/java/kafka/jetty-continuation-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/connect-transforms-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jose4j-0.9.4.jar:/usr/bin/../share/java/kafka/hk2-locator-2.6.1.jar:/usr/bin/../share/java/kafka/reflections-0.10.2.jar:/usr/bin/../share/java/kafka/slf4j-api-1.7.36.jar:/usr/bin/../share/java/kafka/paranamer-2.8.jar:/usr/bin/../share/java/kafka/commons-beanutils-1.9.4.jar:/usr/bin/../share/java/kafka/jaxb-api-2.3.1.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-2.39.1.jar:/usr/bin/../share/java/kafka/hk2-utils-2.6.1.jar:/usr/bin/../share/java/kafka/jackson-module-scala_2.13-2.13.5.jar:/usr/bin/../share/java/kafka/reload4j-1.2.25.jar:/usr/bin/../share/java/kafka/connect-mirror-client-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jetty-servlet-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/jackson-core-2.13.5.jar:/usr/bin/../share/java/kafka/jersey-hk2-2.39
.1.jar:/usr/bin/../share/java/kafka/jackson-databind-2.13.5.jar:/usr/bin/../share/java/kafka/jetty-servlets-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/jersey-client-2.39.1.jar:/usr/bin/../share/java/kafka/osgi-resource-locator-1.0.3.jar:/usr/bin/../share/java/kafka/commons-digester-2.1.jar:/usr/bin/../share/java/kafka/netty-transport-native-epoll-4.1.100.Final.jar:/usr/bin/../share/java/kafka/argparse4j-0.7.0.jar:/usr/bin/../share/java/kafka/connect-mirror-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jackson-datatype-jdk8-2.13.5.jar:/usr/bin/../share/java/kafka/audience-annotations-0.12.0.jar:/usr/bin/../share/java/kafka/jackson-module-jaxb-annotations-2.13.5.jar:/usr/bin/../share/java/kafka/kafka-raft-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/javax.annotation-api-1.3.2.jar:/usr/bin/../share/java/kafka/maven-artifact-3.8.8.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-json-provider-2.13.5.jar:/usr/bin/../share/java/kafka/jakarta.validation-api-2.0.2.jar:/usr/bin/../share/java/kafka/zookeeper-3.8.4.jar:/usr/bin/../share/java/kafka/jersey-server-2.39.1.jar:/usr/bin/../share/java/kafka/commons-lang3-3.8.1.jar:/usr/bin/../share/java/kafka/jopt-simple-5.0.4.jar:/usr/bin/../share/java/kafka/error_prone_annotations-2.10.0.jar:/usr/bin/../share/java/kafka/lz4-java-1.8.0.jar:/usr/bin/../share/java/kafka/jetty-util-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/kafka-tools-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jakarta.activation-api-1.2.2.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-core-2.39.1.jar:/usr/bin/../share/java/kafka/checker-qual-3.19.0.jar:/usr/bin/../share/java/kafka/kafka-metadata-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/pcollections-4.0.1.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-base-2.13.5.jar:/usr/bin/../share/java/kafka/commons-logging-1.2.jar:/usr/bin/../share/java/kafka/jsr305-3.0.2.jar:/usr/bin/../share/java/kafka/jetty-io-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/netty-codec-4.1.100.Final.jar:/usr/bin/../share/java/kafka/netty-transport-native-unix-common-4.1.100.Final.jar:/usr/bin/../share/java/kafka/scala-collection-compat_2.13-2.10.0.jar:/usr/bin/../share/java/kafka/metrics-core-2.2.0.jar:/usr/bin/../share/java/kafka/kafka-streams-test-utils-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/kafka-streams-examples-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/netty-handler-4.1.100.Final.jar:/usr/bin/../share/java/kafka/commons-collections-3.2.2.jar:/usr/bin/../share/java/kafka/javassist-3.29.2-GA.jar:/usr/bin/../share/java/kafka/caffeine-2.9.3.jar:/usr/bin/../share/java/kafka/plexus-utils-3.3.1.jar:/usr/bin/../share/java/kafka/kafka-tools-api-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/activation-1.1.1.jar:/usr/bin/../share/java/kafka/netty-common-4.1.100.Final.jar:/usr/bin/../share/java/kafka/kafka-streams-scala_2.13-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/metrics-core-4.1.12.1.jar:/usr/bin/../share/java/kafka/jline-3.25.1.jar:/usr/bin/../share/java/kafka/jetty-client-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/kafka_2.13-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/connect-basic-auth-extension-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jetty-util-ajax-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/snappy-java-1.1.10.5.jar:/usr/bin/../share/java/kafka/kafka-shell-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/netty-transport-classes-epoll-4.1.100.Final.jar:/usr/bin/../share/java/confluent-telemetry/* (org.apache.zookeeper.server.ZooKeeperServer) mariadb | 2024-04-24 8:58:14 0 [Note] InnoDB: log sequence number 46590; transaction id 14 
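The policy-api JpaBaseConfiguration WARN above notes that spring.jpa.open-in-view defaults to true and asks for it to be configured explicitly. In a Spring Boot service this is normally a one-line property (for example in application.yaml); a hypothetical Java-only sketch, assuming a standard Spring Boot entry point (PolicyApiApplication itself is not shown in this log), would be:

import java.util.Map;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;

@SpringBootApplication
public class OpenInViewSketch {
    public static void main(String[] args) {
        SpringApplication app = new SpringApplication(OpenInViewSketch.class);
        // Explicitly disable open-in-view so the WARN is silenced and JPA sessions
        // are not held open for the duration of view rendering.
        app.setDefaultProperties(Map.of("spring.jpa.open-in-view", "false"));
        app.run(args);
    }
}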
policy-pap | auto.offset.reset = latest policy-apex-pdp | ssl.keymanager.algorithm = SunX509 policy-db-migrator | > upgrade 0140-jpapdpsubgroup_supportedpolicytypes.sql kafka | zookeeper.ssl.truststore.location = null grafana | logger=migrator t=2024-04-24T08:58:11.699694668Z level=info msg="Executing migration" id="Update user table charset" policy-api | [2024-04-24T08:58:37.652+00:00|INFO|DefaultSecurityFilterChain|main] Will secure any request with [org.springframework.security.web.session.DisableEncodeUrlFilter@58a0b88e, org.springframework.security.web.context.request.async.WebAsyncManagerIntegrationFilter@1b404a21, org.springframework.security.web.context.SecurityContextHolderFilter@3c6c7782, org.springframework.security.web.header.HeaderWriterFilter@1cdb4bd3, org.springframework.security.web.authentication.logout.LogoutFilter@452d71e5, org.springframework.security.web.authentication.www.BasicAuthenticationFilter@66a2bc61, org.springframework.security.web.savedrequest.RequestCacheAwareFilter@739e76e6, org.springframework.security.web.servletapi.SecurityContextHolderAwareRequestFilter@27153ba2, org.springframework.security.web.authentication.AnonymousAuthenticationFilter@280aa1bd, org.springframework.security.web.access.ExceptionTranslationFilter@782b12c9, org.springframework.security.web.access.intercept.AuthorizationFilter@23639e5] simulator | 2024-04-24 08:58:15,831 INFO Using GSON for REST calls zookeeper | [2024-04-24 08:58:18,651] INFO Server environment:java.library.path=/usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.server.ZooKeeperServer) mariadb | 2024-04-24 8:58:14 0 [Note] Plugin 'FEEDBACK' is disabled. policy-pap | bootstrap.servers = [kafka:9092] policy-apex-pdp | ssl.keystore.certificate.chain = null policy-db-migrator | -------------- kafka | zookeeper.ssl.truststore.password = null grafana | logger=migrator t=2024-04-24T08:58:11.699724669Z level=info msg="Migration successfully executed" id="Update user table charset" duration=27.75µs policy-api | [2024-04-24T08:58:38.506+00:00|INFO|EndpointLinksResolver|main] Exposing 3 endpoint(s) beneath base path '' simulator | 2024-04-24 08:58:15,840 INFO Started o.e.j.s.ServletContextHandler@19553973{/,null,AVAILABLE} zookeeper | [2024-04-24 08:58:18,651] INFO Server environment:java.io.tmpdir=/tmp (org.apache.zookeeper.server.ZooKeeperServer) mariadb | 2024-04-24 8:58:14 0 [Warning] 'default-authentication-plugin' is MySQL 5.6 / 5.7 compatible option. To be implemented in later versions. policy-pap | check.crcs = true policy-apex-pdp | ssl.keystore.key = null policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpsubgroup_supportedpolicytypes (name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, parentLocalName VARCHAR(120) DEFAULT NULL, localName VARCHAR(120) DEFAULT NULL, parentKeyVersion VARCHAR(15) DEFAULT NULL, parentKeyName VARCHAR(120) DEFAULT NULL) kafka | zookeeper.ssl.truststore.type = null grafana | logger=migrator t=2024-04-24T08:58:11.702219291Z level=info msg="Executing migration" id="Add last_seen_at column to user" simulator | 2024-04-24 08:58:15,841 INFO Started VFC simulator@1c3146bc{HTTP/1.1, (http/1.1)}{0.0.0.0:6670} zookeeper | [2024-04-24 08:58:18,651] INFO Server environment:java.compiler= (org.apache.zookeeper.server.ZooKeeperServer) mariadb | 2024-04-24 8:58:14 0 [Warning] 'user' entry 'root@mariadb' ignored in --skip-name-resolve mode. 
policy-pap | client.dns.lookup = use_all_dns_ips policy-apex-pdp | ssl.keystore.location = null policy-db-migrator | -------------- kafka | (kafka.server.KafkaConfig) grafana | logger=migrator t=2024-04-24T08:58:11.70334063Z level=info msg="Migration successfully executed" id="Add last_seen_at column to user" duration=1.123769ms simulator | 2024-04-24 08:58:15,841 INFO Started Server@f478a81{STARTING}[11.0.20,sto=0] @2089ms simulator | 2024-04-24 08:58:15,841 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-47a86fbb==org.glassfish.jersey.servlet.ServletContainer@f8e3b478{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}}, swaggerId=null, toString()=JettyServletServer(name=VFC simulator, host=0.0.0.0, port=6670, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@f478a81{STARTED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@19553973{/,null,AVAILABLE}, connector=VFC simulator@1c3146bc{HTTP/1.1, (http/1.1)}{0.0.0.0:6670}, jettyThread=Thread[VFC simulator-6670,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-47a86fbb==org.glassfish.jersey.servlet.ServletContainer@f8e3b478{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}})]: pending time is 4913 ms. mariadb | 2024-04-24 8:58:14 0 [Warning] 'proxies_priv' entry '@% root@mariadb' ignored in --skip-name-resolve mode. policy-pap | client.id = consumer-c2598a93-7b5f-4e4e-b23a-b864ffd9a18a-1 policy-apex-pdp | ssl.keystore.password = null policy-db-migrator | kafka | [2024-04-24 08:58:22,609] INFO [ThrottledChannelReaper-Fetch]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper) grafana | logger=migrator t=2024-04-24T08:58:11.706529674Z level=info msg="Executing migration" id="Add missing user data" zookeeper | [2024-04-24 08:58:18,651] INFO Server environment:os.name=Linux (org.apache.zookeeper.server.ZooKeeperServer) mariadb | 2024-04-24 8:58:14 0 [Note] mariadbd: ready for connections. 
policy-pap | client.rack = policy-apex-pdp | ssl.keystore.type = JKS policy-db-migrator | kafka | [2024-04-24 08:58:22,609] INFO [ThrottledChannelReaper-Produce]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper) grafana | logger=migrator t=2024-04-24T08:58:11.706860619Z level=info msg="Migration successfully executed" id="Add missing user data" duration=330.765µs policy-api | [2024-04-24T08:58:38.603+00:00|INFO|Http11NioProtocol|main] Starting ProtocolHandler ["http-nio-6969"] simulator | 2024-04-24 08:58:15,842 INFO org.onap.policy.models.simulators started zookeeper | [2024-04-24 08:58:18,652] INFO Server environment:os.arch=amd64 (org.apache.zookeeper.server.ZooKeeperServer) mariadb | Version: '10.10.2-MariaDB-1:10.10.2+maria~ubu2204' socket: '/run/mysqld/mysqld.sock' port: 0 mariadb.org binary distribution policy-pap | connections.max.idle.ms = 540000 policy-apex-pdp | ssl.protocol = TLSv1.3 policy-db-migrator | > upgrade 0150-jpatoscacapabilityassignment_attributes.sql kafka | [2024-04-24 08:58:22,614] INFO [ThrottledChannelReaper-Request]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper) grafana | logger=migrator t=2024-04-24T08:58:11.735703738Z level=info msg="Executing migration" id="Add is_disabled column to user" policy-api | [2024-04-24T08:58:38.624+00:00|INFO|TomcatWebServer|main] Tomcat started on port(s): 6969 (http) with context path '/policy/api/v1' zookeeper | [2024-04-24 08:58:18,652] INFO Server environment:os.version=4.15.0-192-generic (org.apache.zookeeper.server.ZooKeeperServer) mariadb | 2024-04-24 08:58:15+00:00 [Note] [Entrypoint]: Temporary server started. policy-pap | default.api.timeout.ms = 60000 policy-apex-pdp | ssl.provider = null policy-db-migrator | -------------- kafka | [2024-04-24 08:58:22,617] INFO [ThrottledChannelReaper-ControllerMutation]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper) policy-api | [2024-04-24T08:58:38.642+00:00|INFO|PolicyApiApplication|main] Started PolicyApiApplication in 10.993 seconds (process running for 11.577) grafana | logger=migrator t=2024-04-24T08:58:11.737297535Z level=info msg="Migration successfully executed" id="Add is_disabled column to user" duration=1.593237ms zookeeper | [2024-04-24 08:58:18,652] INFO Server environment:user.name=appuser (org.apache.zookeeper.server.ZooKeeperServer) mariadb | 2024-04-24 08:58:17+00:00 [Note] [Entrypoint]: Creating user policy_user policy-pap | enable.auto.commit = true policy-apex-pdp | ssl.secure.random.implementation = null policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilityassignment_attributes (name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, ATTRIBUTES LONGTEXT DEFAULT NULL, ATTRIBUTES_KEY VARCHAR(255) DEFAULT NULL) kafka | [2024-04-24 08:58:22,648] INFO Loading logs from log dirs ArraySeq(/var/lib/kafka/data) (kafka.log.LogManager) policy-api | [2024-04-24T08:58:39.932+00:00|INFO|[/policy/api/v1]|http-nio-6969-exec-2] Initializing Spring DispatcherServlet 'dispatcherServlet' grafana | logger=migrator t=2024-04-24T08:58:11.740488758Z level=info msg="Executing migration" id="Add index user.login/user.email" zookeeper | [2024-04-24 08:58:18,652] INFO Server environment:user.home=/home/appuser (org.apache.zookeeper.server.ZooKeeperServer) mariadb | 2024-04-24 08:58:17+00:00 [Note] [Entrypoint]: Securing system users (equivalent to running mysql_secure_installation) policy-pap | exclude.internal.topics = true policy-apex-pdp | ssl.trustmanager.algorithm = PKIX policy-db-migrator | -------------- 
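The policy-db-migrator output above walks through numbered upgrade scripts (0130, 0140, 0150, ...) whose DDL it echoes between the '--------------' markers, while policy-api's HikariPool-1 lines show the services reaching MariaDB through org.mariadb.jdbc. A minimal sketch of replaying one of those logged DDL statements through a Hikari pool; the JDBC URL, port, and credentials below are assumptions for illustration, not values taken from this log:

import com.zaxxer.hikari.HikariConfig;
import com.zaxxer.hikari.HikariDataSource;
import java.sql.Connection;
import java.sql.Statement;

public class MigrationStepSketch {
    public static void main(String[] args) throws Exception {
        HikariConfig cfg = new HikariConfig();
        cfg.setJdbcUrl("jdbc:mariadb://mariadb:3306/policyadmin"); // hypothetical host/port/schema
        cfg.setUsername("policy_user");                            // account created by the db entrypoint later in this log
        cfg.setPassword(System.getenv("MYSQL_PASSWORD"));          // assumed to be supplied via the environment
        try (HikariDataSource ds = new HikariDataSource(cfg);
             Connection conn = ds.getConnection();
             Statement stmt = conn.createStatement()) {
            // DDL copied from the 0150-jpatoscacapabilityassignment_attributes.sql step logged above
            stmt.execute("CREATE TABLE IF NOT EXISTS jpatoscacapabilityassignment_attributes ("
                + "name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, "
                + "ATTRIBUTES LONGTEXT DEFAULT NULL, ATTRIBUTES_KEY VARCHAR(255) DEFAULT NULL)");
        }
    }
}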
kafka | [2024-04-24 08:58:22,655] INFO No logs found to be loaded in /var/lib/kafka/data (kafka.log.LogManager) policy-api | [2024-04-24T08:58:39.933+00:00|INFO|DispatcherServlet|http-nio-6969-exec-2] Initializing Servlet 'dispatcherServlet' grafana | logger=migrator t=2024-04-24T08:58:11.741381624Z level=info msg="Migration successfully executed" id="Add index user.login/user.email" duration=893.046µs zookeeper | [2024-04-24 08:58:18,652] INFO Server environment:user.dir=/home/appuser (org.apache.zookeeper.server.ZooKeeperServer) mariadb | policy-pap | fetch.max.bytes = 52428800 policy-apex-pdp | ssl.truststore.certificates = null policy-db-migrator | kafka | [2024-04-24 08:58:22,665] INFO Loaded 0 logs in 17ms (kafka.log.LogManager) policy-api | [2024-04-24T08:58:39.934+00:00|INFO|DispatcherServlet|http-nio-6969-exec-2] Completed initialization in 1 ms grafana | logger=migrator t=2024-04-24T08:58:11.744664039Z level=info msg="Executing migration" id="Add is_service_account column to user" zookeeper | [2024-04-24 08:58:18,652] INFO Server environment:os.memory.free=491MB (org.apache.zookeeper.server.ZooKeeperServer) mariadb | policy-pap | fetch.max.wait.ms = 500 policy-apex-pdp | ssl.truststore.location = null policy-db-migrator | kafka | [2024-04-24 08:58:22,666] INFO Starting log cleanup with a period of 300000 ms. (kafka.log.LogManager) policy-api | [2024-04-24T08:58:58.038+00:00|INFO|OrderedServiceImpl|http-nio-6969-exec-3] ***** OrderedServiceImpl implementers: grafana | logger=migrator t=2024-04-24T08:58:11.74591859Z level=info msg="Migration successfully executed" id="Add is_service_account column to user" duration=1.253991ms zookeeper | [2024-04-24 08:58:18,652] INFO Server environment:os.memory.max=512MB (org.apache.zookeeper.server.ZooKeeperServer) mariadb | 2024-04-24 08:58:17+00:00 [Warn] [Entrypoint]: /usr/local/bin/docker-entrypoint.sh: ignoring /docker-entrypoint-initdb.d/db.conf policy-pap | fetch.min.bytes = 1 policy-apex-pdp | ssl.truststore.password = null policy-db-migrator | > upgrade 0160-jpatoscacapabilityassignment_metadata.sql kafka | [2024-04-24 08:58:22,667] INFO Starting log flusher with a default period of 9223372036854775807 ms. 
(kafka.log.LogManager) policy-api | [] grafana | logger=migrator t=2024-04-24T08:58:11.748992533Z level=info msg="Executing migration" id="Update is_service_account column to nullable" zookeeper | [2024-04-24 08:58:18,652] INFO Server environment:os.memory.total=512MB (org.apache.zookeeper.server.ZooKeeperServer) mariadb | 2024-04-24 08:58:17+00:00 [Note] [Entrypoint]: /usr/local/bin/docker-entrypoint.sh: running /docker-entrypoint-initdb.d/db.sh policy-pap | group.id = c2598a93-7b5f-4e4e-b23a-b864ffd9a18a policy-apex-pdp | ssl.truststore.type = JKS policy-db-migrator | -------------- kafka | [2024-04-24 08:58:22,677] INFO Starting the log cleaner (kafka.log.LogCleaner) grafana | logger=migrator t=2024-04-24T08:58:11.758023385Z level=info msg="Migration successfully executed" id="Update is_service_account column to nullable" duration=9.030462ms zookeeper | [2024-04-24 08:58:18,652] INFO zookeeper.enableEagerACLCheck = false (org.apache.zookeeper.server.ZooKeeperServer) policy-pap | group.instance.id = null policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilityassignment_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) kafka | [2024-04-24 08:58:22,721] INFO [kafka-log-cleaner-thread-0]: Starting (kafka.log.LogCleaner$CleanerThread) grafana | logger=migrator t=2024-04-24T08:58:11.762813136Z level=info msg="Executing migration" id="Add uid column to user" zookeeper | [2024-04-24 08:58:18,652] INFO zookeeper.digest.enabled = true (org.apache.zookeeper.server.ZooKeeperServer) policy-apex-pdp | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer mariadb | #!/bin/bash -xv policy-pap | heartbeat.interval.ms = 3000 policy-db-migrator | -------------- kafka | [2024-04-24 08:58:22,735] INFO [feature-zk-node-event-process-thread]: Starting (kafka.server.FinalizedFeatureChangeListener$ChangeNotificationProcessorThread) grafana | logger=migrator t=2024-04-24T08:58:11.764260371Z level=info msg="Migration successfully executed" id="Add uid column to user" duration=1.446445ms zookeeper | [2024-04-24 08:58:18,652] INFO zookeeper.closeSessionTxn.enabled = true (org.apache.zookeeper.server.ZooKeeperServer) policy-apex-pdp | mariadb | # Copyright 2019,2021 AT&T Intellectual Property. All rights reserved mariadb | # Modifications Copyright (c) 2022 Nordix Foundation. 
policy-db-migrator | kafka | [2024-04-24 08:58:22,747] INFO Feature ZK node at path: /feature does not exist (kafka.server.FinalizedFeatureChangeListener) grafana | logger=migrator t=2024-04-24T08:58:11.767525426Z level=info msg="Executing migration" id="Update uid column values for users" zookeeper | [2024-04-24 08:58:18,652] INFO zookeeper.flushDelay = 0 ms (org.apache.zookeeper.server.ZooKeeperServer) policy-apex-pdp | [2024-04-24T08:58:52.284+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 policy-pap | interceptor.classes = [] mariadb | # policy-db-migrator | kafka | [2024-04-24 08:58:22,783] INFO [zk-broker-1-to-controller-forwarding-channel-manager]: Starting (kafka.server.BrokerToControllerRequestThread) grafana | logger=migrator t=2024-04-24T08:58:11.76777478Z level=info msg="Migration successfully executed" id="Update uid column values for users" duration=249.144µs zookeeper | [2024-04-24 08:58:18,652] INFO zookeeper.maxWriteQueuePollTime = 0 ms (org.apache.zookeeper.server.ZooKeeperServer) policy-apex-pdp | [2024-04-24T08:58:52.285+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 policy-pap | internal.leave.group.on.close = true mariadb | # Licensed under the Apache License, Version 2.0 (the "License"); policy-db-migrator | > upgrade 0170-jpatoscacapabilityassignment_occurrences.sql kafka | [2024-04-24 08:58:23,089] INFO Updated connection-accept-rate max connection creation rate to 2147483647 (kafka.network.ConnectionQuotas) grafana | logger=migrator t=2024-04-24T08:58:11.770148151Z level=info msg="Executing migration" id="Add unique index user_uid" zookeeper | [2024-04-24 08:58:18,652] INFO zookeeper.maxBatchSize=1000 (org.apache.zookeeper.server.ZooKeeperServer) policy-apex-pdp | [2024-04-24T08:58:52.285+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1713949132283 policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false mariadb | # you may not use this file except in compliance with the License. 
policy-db-migrator | -------------- kafka | [2024-04-24 08:58:23,109] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1] Created data-plane acceptor and processors for endpoint : ListenerName(PLAINTEXT) (kafka.network.SocketServer) grafana | logger=migrator t=2024-04-24T08:58:11.770905944Z level=info msg="Migration successfully executed" id="Add unique index user_uid" duration=757.513µs zookeeper | [2024-04-24 08:58:18,652] INFO zookeeper.intBufferStartingSizeBytes = 1024 (org.apache.zookeeper.server.ZooKeeperServer) policy-apex-pdp | [2024-04-24T08:58:52.287+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-6c14929a-34c8-48a0-adf2-d542a07b4ce8-1, groupId=6c14929a-34c8-48a0-adf2-d542a07b4ce8] Subscribed to topic(s): policy-pdp-pap policy-pap | isolation.level = read_uncommitted mariadb | # You may obtain a copy of the License at policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilityassignment_occurrences (name VARCHAR(120) NULL, version VARCHAR(20) NULL, OCCURRENCES INT DEFAULT NULL) kafka | [2024-04-24 08:58:23,109] INFO Updated connection-accept-rate max connection creation rate to 2147483647 (kafka.network.ConnectionQuotas) grafana | logger=migrator t=2024-04-24T08:58:11.774170689Z level=info msg="Executing migration" id="update login field with orgid to allow for multiple service accounts with same name across orgs" zookeeper | [2024-04-24 08:58:18,653] INFO Weighed connection throttling is disabled (org.apache.zookeeper.server.BlueThrottle) policy-apex-pdp | [2024-04-24T08:58:52.300+00:00|INFO|ServiceManager|main] service manager starting policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer mariadb | # policy-db-migrator | -------------- kafka | [2024-04-24 08:58:23,114] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1] Created data-plane acceptor and processors for endpoint : ListenerName(PLAINTEXT_HOST) (kafka.network.SocketServer) grafana | logger=migrator t=2024-04-24T08:58:11.774515735Z level=info msg="Migration successfully executed" id="update login field with orgid to allow for multiple service accounts with same name across orgs" duration=347.557µs zookeeper | [2024-04-24 08:58:18,654] INFO minSessionTimeout set to 4000 ms (org.apache.zookeeper.server.ZooKeeperServer) policy-apex-pdp | [2024-04-24T08:58:52.301+00:00|INFO|ServiceManager|main] service manager starting topics policy-pap | max.partition.fetch.bytes = 1048576 mariadb | # http://www.apache.org/licenses/LICENSE-2.0 policy-db-migrator | grafana | logger=migrator t=2024-04-24T08:58:11.779282765Z level=info msg="Executing migration" id="create temp user table v1-7" kafka | [2024-04-24 08:58:23,118] INFO [zk-broker-1-to-controller-alter-partition-channel-manager]: Starting (kafka.server.BrokerToControllerRequestThread) zookeeper | [2024-04-24 08:58:18,654] INFO maxSessionTimeout set to 40000 ms (org.apache.zookeeper.server.ZooKeeperServer) policy-apex-pdp | [2024-04-24T08:58:52.303+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=6c14929a-34c8-48a0-adf2-d542a07b4ce8, consumerInstance=policy-apex-pdp, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, 
effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: starting policy-pap | max.poll.interval.ms = 300000 mariadb | # policy-db-migrator | grafana | logger=migrator t=2024-04-24T08:58:11.780108629Z level=info msg="Migration successfully executed" id="create temp user table v1-7" duration=825.084µs kafka | [2024-04-24 08:58:23,141] INFO [ExpirationReaper-1-Produce]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) zookeeper | [2024-04-24 08:58:18,655] INFO getData response cache size is initialized with value 400. (org.apache.zookeeper.server.ResponseCache) policy-apex-pdp | [2024-04-24T08:58:52.322+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: policy-pap | max.poll.records = 500 mariadb | # Unless required by applicable law or agreed to in writing, software policy-db-migrator | > upgrade 0180-jpatoscacapabilityassignment_properties.sql grafana | logger=migrator t=2024-04-24T08:58:11.783267512Z level=info msg="Executing migration" id="create index IDX_temp_user_email - v1-7" kafka | [2024-04-24 08:58:23,142] INFO [ExpirationReaper-1-Fetch]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) zookeeper | [2024-04-24 08:58:18,655] INFO getChildren response cache size is initialized with value 400. (org.apache.zookeeper.server.ResponseCache) policy-apex-pdp | allow.auto.create.topics = true policy-pap | metadata.max.age.ms = 300000 mariadb | # distributed under the License is distributed on an "AS IS" BASIS, policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-24T08:58:11.784013436Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_email - v1-7" duration=744.174µs kafka | [2024-04-24 08:58:23,145] INFO [ExpirationReaper-1-DeleteRecords]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) zookeeper | [2024-04-24 08:58:18,656] INFO zookeeper.pathStats.slotCapacity = 60 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) policy-apex-pdp | auto.commit.interval.ms = 5000 policy-pap | metric.reporters = [] mariadb | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilityassignment_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGTEXT NULL, PROPERTIES_KEY VARCHAR(255) NULL) grafana | logger=migrator t=2024-04-24T08:58:11.787202629Z level=info msg="Executing migration" id="create index IDX_temp_user_org_id - v1-7" kafka | [2024-04-24 08:58:23,147] INFO [ExpirationReaper-1-ElectLeader]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) zookeeper | [2024-04-24 08:58:18,656] INFO zookeeper.pathStats.slotDuration = 15 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) policy-apex-pdp | auto.include.jmx.reporter = true policy-pap | metrics.num.samples = 2 mariadb | # See the License for the specific language governing permissions and policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-24T08:58:11.788044013Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_org_id - v1-7" duration=841.574µs kafka | [2024-04-24 08:58:23,148] INFO [ExpirationReaper-1-RemoteFetch]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) zookeeper | [2024-04-24 08:58:18,656] INFO zookeeper.pathStats.maxDepth = 6 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) policy-apex-pdp | auto.offset.reset = latest policy-pap | metrics.recording.level = INFO mariadb | # limitations under the License. policy-db-migrator | grafana | logger=migrator t=2024-04-24T08:58:11.793184601Z level=info msg="Executing migration" id="create index IDX_temp_user_code - v1-7" kafka | [2024-04-24 08:58:23,161] INFO [LogDirFailureHandler]: Starting (kafka.server.ReplicaManager$LogDirFailureHandler) zookeeper | [2024-04-24 08:58:18,656] INFO zookeeper.pathStats.initialDelay = 5 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) policy-apex-pdp | bootstrap.servers = [kafka:9092] policy-pap | metrics.sample.window.ms = 30000 mariadb | policy-db-migrator | grafana | logger=migrator t=2024-04-24T08:58:11.793952714Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_code - v1-7" duration=767.783µs kafka | [2024-04-24 08:58:23,162] INFO [AddPartitionsToTxnSenderThread-1]: Starting (kafka.server.AddPartitionsToTxnManager) policy-apex-pdp | check.crcs = true policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] zookeeper | [2024-04-24 08:58:18,657] INFO zookeeper.pathStats.delay = 5 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) mariadb | for db in migration pooling policyadmin operationshistory clampacm policyclamp policy-db-migrator | > upgrade 0190-jpatoscacapabilitytype_metadata.sql grafana | logger=migrator t=2024-04-24T08:58:11.796912174Z level=info msg="Executing migration" id="create index IDX_temp_user_status - v1-7" kafka | [2024-04-24 08:58:23,181] INFO Creating /brokers/ids/1 (is it secure? 
false) (kafka.zk.KafkaZkClient) policy-apex-pdp | client.dns.lookup = use_all_dns_ips policy-pap | receive.buffer.bytes = 65536 zookeeper | [2024-04-24 08:58:18,657] INFO zookeeper.pathStats.enabled = false (org.apache.zookeeper.server.util.RequestPathMetricsCollector) mariadb | do policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-24T08:58:11.797683637Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_status - v1-7" duration=771.343µs kafka | [2024-04-24 08:58:23,206] INFO Stat of the created znode at /brokers/ids/1 is: 27,27,1713949103193,1713949103193,1,0,0,72057610827661313,258,0,27 policy-apex-pdp | client.id = consumer-6c14929a-34c8-48a0-adf2-d542a07b4ce8-2 policy-pap | reconnect.backoff.max.ms = 1000 zookeeper | [2024-04-24 08:58:18,659] INFO The max bytes for all large requests are set to 104857600 (org.apache.zookeeper.server.ZooKeeperServer) mariadb | mysql -uroot -p"${MYSQL_ROOT_PASSWORD}" --execute "CREATE DATABASE IF NOT EXISTS ${db};" policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilitytype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) grafana | logger=migrator t=2024-04-24T08:58:11.801732625Z level=info msg="Executing migration" id="Update temp_user table charset" kafka | (kafka.zk.KafkaZkClient) policy-apex-pdp | client.rack = policy-pap | reconnect.backoff.ms = 50 zookeeper | [2024-04-24 08:58:18,659] INFO The large request threshold is set to -1 (org.apache.zookeeper.server.ZooKeeperServer) mariadb | mysql -uroot -p"${MYSQL_ROOT_PASSWORD}" --execute "GRANT ALL PRIVILEGES ON \`${db}\`.* TO '${MYSQL_USER}'@'%' ;" policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-24T08:58:11.801759006Z level=info msg="Migration successfully executed" id="Update temp_user table charset" duration=27.471µs kafka | [2024-04-24 08:58:23,207] INFO Registered broker 1 at path /brokers/ids/1 with addresses: PLAINTEXT://kafka:9092,PLAINTEXT_HOST://localhost:29092, czxid (broker epoch): 27 (kafka.zk.KafkaZkClient) zookeeper | [2024-04-24 08:58:18,660] INFO zookeeper.enforce.auth.enabled = false (org.apache.zookeeper.server.AuthenticationHelper) policy-apex-pdp | connections.max.idle.ms = 540000 mariadb | done policy-db-migrator | grafana | logger=migrator t=2024-04-24T08:58:11.806645918Z level=info msg="Executing migration" id="drop index IDX_temp_user_email - v1" kafka | [2024-04-24 08:58:23,258] INFO [ControllerEventThread controllerId=1] Starting (kafka.controller.ControllerEventManager$ControllerEventThread) policy-pap | request.timeout.ms = 30000 zookeeper | [2024-04-24 08:58:18,660] INFO zookeeper.enforce.auth.schemes = [] (org.apache.zookeeper.server.AuthenticationHelper) policy-apex-pdp | default.api.timeout.ms = 60000 mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp policy-db-migrator | grafana | logger=migrator t=2024-04-24T08:58:11.807782398Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_email - v1" duration=1.13719ms kafka | [2024-04-24 08:58:23,264] INFO [ExpirationReaper-1-topic]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) policy-pap | retry.backoff.ms = 100 zookeeper | [2024-04-24 08:58:18,660] INFO Created server with tickTime 2000 ms minSessionTimeout 4000 ms maxSessionTimeout 40000 ms clientPortListenBacklog -1 datadir /var/lib/zookeeper/log/version-2 snapdir /var/lib/zookeeper/data/version-2 
(org.apache.zookeeper.server.ZooKeeperServer) policy-apex-pdp | enable.auto.commit = true mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS migration;' policy-db-migrator | > upgrade 0200-jpatoscacapabilitytype_properties.sql grafana | logger=migrator t=2024-04-24T08:58:11.811128514Z level=info msg="Executing migration" id="drop index IDX_temp_user_org_id - v1" kafka | [2024-04-24 08:58:23,270] INFO [ExpirationReaper-1-Heartbeat]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) policy-pap | sasl.client.callback.handler.class = null zookeeper | [2024-04-24 08:58:18,681] INFO Logging initialized @586ms to org.eclipse.jetty.util.log.Slf4jLog (org.eclipse.jetty.util.log) policy-apex-pdp | exclude.internal.topics = true mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `migration`.* TO '\''policy_user'\''@'\''%'\'' ;' policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-24T08:58:11.812322484Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_org_id - v1" duration=1.19445ms kafka | [2024-04-24 08:58:23,271] INFO [ExpirationReaper-1-Rebalance]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) policy-pap | sasl.jaas.config = null zookeeper | [2024-04-24 08:58:18,785] WARN o.e.j.s.ServletContextHandler@311bf055{/,null,STOPPED} contextPath ends with /* (org.eclipse.jetty.server.handler.ContextHandler) policy-apex-pdp | fetch.max.bytes = 52428800 mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilitytype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL) grafana | logger=migrator t=2024-04-24T08:58:11.815874025Z level=info msg="Executing migration" id="drop index IDX_temp_user_code - v1" kafka | [2024-04-24 08:58:23,279] INFO Successfully created /controller_epoch with initial epoch 0 (kafka.zk.KafkaZkClient) policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit zookeeper | [2024-04-24 08:58:18,785] WARN Empty contextPath (org.eclipse.jetty.server.handler.ContextHandler) policy-apex-pdp | fetch.max.wait.ms = 500 mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS pooling;' policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-24T08:58:11.81737906Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_code - v1" duration=1.504886ms kafka | [2024-04-24 08:58:23,284] INFO [GroupCoordinator 1]: Starting up. (kafka.coordinator.group.GroupCoordinator) policy-pap | sasl.kerberos.min.time.before.relogin = 60000 zookeeper | [2024-04-24 08:58:18,813] INFO jetty-9.4.54.v20240208; built: 2024-02-08T19:42:39.027Z; git: cef3fbd6d736a21e7d541a5db490381d95a2047d; jvm 11.0.22+7-LTS (org.eclipse.jetty.server.Server) policy-apex-pdp | fetch.min.bytes = 1 mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `pooling`.* TO '\''policy_user'\''@'\''%'\'' ;' policy-db-migrator | grafana | logger=migrator t=2024-04-24T08:58:11.823326451Z level=info msg="Executing migration" id="drop index IDX_temp_user_status - v1" kafka | [2024-04-24 08:58:23,288] INFO [Controller id=1] 1 successfully elected as the controller. 
Epoch incremented to 1 and epoch zk version is now 1 (kafka.controller.KafkaController) policy-pap | sasl.kerberos.service.name = null zookeeper | [2024-04-24 08:58:18,849] INFO DefaultSessionIdManager workerName=node0 (org.eclipse.jetty.server.session) policy-apex-pdp | group.id = 6c14929a-34c8-48a0-adf2-d542a07b4ce8 mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp policy-db-migrator | grafana | logger=migrator t=2024-04-24T08:58:11.824597643Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_status - v1" duration=1.275032ms kafka | [2024-04-24 08:58:23,289] INFO [GroupCoordinator 1]: Startup complete. (kafka.coordinator.group.GroupCoordinator) policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 zookeeper | [2024-04-24 08:58:18,849] INFO No SessionScavenger set, using defaults (org.eclipse.jetty.server.session) policy-apex-pdp | group.instance.id = null mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS policyadmin;' policy-db-migrator | > upgrade 0210-jpatoscadatatype_constraints.sql grafana | logger=migrator t=2024-04-24T08:58:11.827854697Z level=info msg="Executing migration" id="Rename table temp_user to temp_user_tmp_qwerty - v1" kafka | [2024-04-24 08:58:23,292] INFO [Controller id=1] Creating FeatureZNode at path: /feature with contents: FeatureZNode(2,Enabled,Map()) (kafka.controller.KafkaController) policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 zookeeper | [2024-04-24 08:58:18,850] INFO node0 Scavenging every 600000ms (org.eclipse.jetty.server.session) policy-apex-pdp | heartbeat.interval.ms = 3000 mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `policyadmin`.* TO '\''policy_user'\''@'\''%'\'' ;' policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-24T08:58:11.833021235Z level=info msg="Migration successfully executed" id="Rename table temp_user to temp_user_tmp_qwerty - v1" duration=5.166158ms kafka | [2024-04-24 08:58:23,295] INFO Feature ZK node created at path: /feature (kafka.server.FinalizedFeatureChangeListener) policy-pap | sasl.login.callback.handler.class = null zookeeper | [2024-04-24 08:58:18,856] WARN ServletContext@o.e.j.s.ServletContextHandler@311bf055{/,null,STARTING} has uncovered http methods for path: /* (org.eclipse.jetty.security.SecurityHandler) policy-apex-pdp | interceptor.classes = [] mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscadatatype_constraints (name VARCHAR(120) NULL, version VARCHAR(20) NULL, CONSTRAINTS VARCHAR(255) NULL) grafana | logger=migrator t=2024-04-24T08:58:11.836369681Z level=info msg="Executing migration" id="create temp_user v2" kafka | [2024-04-24 08:58:23,306] INFO [TransactionCoordinator id=1] Starting up. 
(kafka.coordinator.transaction.TransactionCoordinator) policy-pap | sasl.login.class = null zookeeper | [2024-04-24 08:58:18,866] INFO Started o.e.j.s.ServletContextHandler@311bf055{/,null,AVAILABLE} (org.eclipse.jetty.server.handler.ContextHandler) policy-apex-pdp | internal.leave.group.on.close = true mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS operationshistory;' policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-24T08:58:11.837503261Z level=info msg="Migration successfully executed" id="create temp_user v2" duration=1.13275ms kafka | [2024-04-24 08:58:23,309] INFO [TxnMarkerSenderThread-1]: Starting (kafka.coordinator.transaction.TransactionMarkerChannelManager) policy-pap | sasl.login.connect.timeout.ms = null zookeeper | [2024-04-24 08:58:18,877] INFO Started ServerConnector@6f53b8a{HTTP/1.1, (http/1.1)}{0.0.0.0:8080} (org.eclipse.jetty.server.AbstractConnector) policy-apex-pdp | internal.throw.on.fetch.stable.offset.unsupported = false mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `operationshistory`.* TO '\''policy_user'\''@'\''%'\'' ;' policy-db-migrator | grafana | logger=migrator t=2024-04-24T08:58:11.842861462Z level=info msg="Executing migration" id="create index IDX_temp_user_email - v2" kafka | [2024-04-24 08:58:23,309] INFO [TransactionCoordinator id=1] Startup complete. (kafka.coordinator.transaction.TransactionCoordinator) policy-pap | sasl.login.read.timeout.ms = null zookeeper | [2024-04-24 08:58:18,878] INFO Started @783ms (org.eclipse.jetty.server.Server) policy-apex-pdp | isolation.level = read_uncommitted mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp policy-db-migrator | grafana | logger=migrator t=2024-04-24T08:58:11.843617684Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_email - v2" duration=757.442µs kafka | [2024-04-24 08:58:23,320] INFO [MetadataCache brokerId=1] Updated cache from existing None to latest Features(version=3.6-IV2, finalizedFeatures={}, finalizedFeaturesEpoch=0). 
(kafka.server.metadata.ZkMetadataCache) policy-pap | sasl.login.refresh.buffer.seconds = 300 zookeeper | [2024-04-24 08:58:18,878] INFO Started AdminServer on address 0.0.0.0, port 8080 and command URL /commands (org.apache.zookeeper.server.admin.JettyAdminServer) policy-apex-pdp | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS clampacm;' policy-db-migrator | > upgrade 0220-jpatoscadatatype_metadata.sql grafana | logger=migrator t=2024-04-24T08:58:11.846449822Z level=info msg="Executing migration" id="create index IDX_temp_user_org_id - v2" kafka | [2024-04-24 08:58:23,320] INFO [Controller id=1] Registering handlers (kafka.controller.KafkaController) policy-pap | sasl.login.refresh.min.period.seconds = 60 zookeeper | [2024-04-24 08:58:18,881] INFO Using org.apache.zookeeper.server.NIOServerCnxnFactory as server connection factory (org.apache.zookeeper.server.ServerCnxnFactory) policy-apex-pdp | max.partition.fetch.bytes = 1048576 mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `clampacm`.* TO '\''policy_user'\''@'\''%'\'' ;' policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-24T08:58:11.847167464Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_org_id - v2" duration=716.662µs kafka | [2024-04-24 08:58:23,326] INFO [Controller id=1] Deleting log dir event notifications (kafka.controller.KafkaController) policy-pap | sasl.login.refresh.window.factor = 0.8 zookeeper | [2024-04-24 08:58:18,882] WARN maxCnxns is not configured, using default value 0. (org.apache.zookeeper.server.ServerCnxnFactory) policy-apex-pdp | max.poll.interval.ms = 300000 mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscadatatype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) grafana | logger=migrator t=2024-04-24T08:58:11.850019893Z level=info msg="Executing migration" id="create index IDX_temp_user_code - v2" kafka | [2024-04-24 08:58:23,330] INFO [Controller id=1] Deleting isr change notifications (kafka.controller.KafkaController) zookeeper | [2024-04-24 08:58:18,884] INFO Configuring NIO connection handler with 10s sessionless connection timeout, 2 selector thread(s), 16 worker threads, and 64 kB direct buffers. 
(org.apache.zookeeper.server.NIOServerCnxnFactory) policy-apex-pdp | max.poll.records = 500 mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS policyclamp;' policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-24T08:58:11.850723625Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_code - v2" duration=703.912µs kafka | [2024-04-24 08:58:23,333] INFO [Controller id=1] Initializing controller context (kafka.controller.KafkaController) policy-pap | sasl.login.refresh.window.jitter = 0.05 zookeeper | [2024-04-24 08:58:18,885] INFO binding to port 0.0.0.0/0.0.0.0:2181 (org.apache.zookeeper.server.NIOServerCnxnFactory) policy-apex-pdp | metadata.max.age.ms = 300000 mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `policyclamp`.* TO '\''policy_user'\''@'\''%'\'' ;' policy-db-migrator | grafana | logger=migrator t=2024-04-24T08:58:11.856987291Z level=info msg="Executing migration" id="create index IDX_temp_user_status - v2" kafka | [2024-04-24 08:58:23,340] INFO [ExpirationReaper-1-AlterAcls]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) policy-pap | sasl.login.retry.backoff.max.ms = 10000 zookeeper | [2024-04-24 08:58:18,899] INFO Using org.apache.zookeeper.server.watch.WatchManager as watch manager (org.apache.zookeeper.server.watch.WatchManagerFactory) policy-apex-pdp | metric.reporters = [] mariadb | policy-db-migrator | grafana | logger=migrator t=2024-04-24T08:58:11.857810684Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_status - v2" duration=822.333µs kafka | [2024-04-24 08:58:23,352] INFO [Controller id=1] Initialized broker epochs cache: HashMap(1 -> 27) (kafka.controller.KafkaController) policy-pap | sasl.login.retry.backoff.ms = 100 zookeeper | [2024-04-24 08:58:18,899] INFO Using org.apache.zookeeper.server.watch.WatchManager as watch manager (org.apache.zookeeper.server.watch.WatchManagerFactory) policy-apex-pdp | metrics.num.samples = 2 policy-db-migrator | > upgrade 0230-jpatoscadatatype_properties.sql mariadb | mysql -uroot -p"${MYSQL_ROOT_PASSWORD}" --execute "FLUSH PRIVILEGES;" grafana | logger=migrator t=2024-04-24T08:58:11.869711076Z level=info msg="Executing migration" id="copy temp_user v1 to v2" kafka | [2024-04-24 08:58:23,357] DEBUG [Controller id=1] Register BrokerModifications handler for Set(1) (kafka.controller.KafkaController) policy-pap | sasl.mechanism = GSSAPI zookeeper | [2024-04-24 08:58:18,901] INFO zookeeper.snapshotSizeFactor = 0.33 (org.apache.zookeeper.server.ZKDatabase) policy-apex-pdp | metrics.recording.level = INFO policy-db-migrator | -------------- mariadb | + mysql -uroot -psecret --execute 'FLUSH PRIVILEGES;' grafana | logger=migrator t=2024-04-24T08:58:11.870970167Z level=info msg="Migration successfully executed" id="copy temp_user v1 to v2" duration=1.258701ms kafka | [2024-04-24 08:58:23,362] DEBUG [Channel manager on controller 1]: Controller 1 trying to connect to broker 1 (kafka.controller.ControllerChannelManager) policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 zookeeper | [2024-04-24 08:58:18,901] INFO zookeeper.commitLogCount=500 (org.apache.zookeeper.server.ZKDatabase) policy-apex-pdp | metrics.sample.window.ms = 30000 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscadatatype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL) mariadb | mysql -u"${MYSQL_USER}" -p"${MYSQL_PASSWORD}" -f policyclamp 
< /tmp/policy-clamp-create-tables.sql grafana | logger=migrator t=2024-04-24T08:58:11.874355485Z level=info msg="Executing migration" id="drop temp_user_tmp_qwerty" kafka | [2024-04-24 08:58:23,364] INFO [/config/changes-event-process-thread]: Starting (kafka.common.ZkNodeChangeNotificationListener$ChangeEventProcessThread) policy-pap | sasl.oauthbearer.expected.audience = null zookeeper | [2024-04-24 08:58:18,906] INFO zookeeper.snapshot.compression.method = CHECKED (org.apache.zookeeper.server.persistence.SnapStream) policy-apex-pdp | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] policy-db-migrator | -------------- mariadb | + mysql -upolicy_user -ppolicy_user -f policyclamp grafana | logger=migrator t=2024-04-24T08:58:11.875317251Z level=info msg="Migration successfully executed" id="drop temp_user_tmp_qwerty" duration=961.146µs kafka | [2024-04-24 08:58:23,404] INFO [RequestSendThread controllerId=1] Starting (kafka.controller.RequestSendThread) zookeeper | [2024-04-24 08:58:18,906] INFO Snapshotting: 0x0 to /var/lib/zookeeper/data/version-2/snapshot.0 (org.apache.zookeeper.server.persistence.FileTxnSnapLog) policy-apex-pdp | receive.buffer.bytes = 65536 policy-pap | sasl.oauthbearer.expected.issuer = null policy-db-migrator | mariadb | kafka | [2024-04-24 08:58:23,406] INFO [Controller id=1] Currently active brokers in the cluster: Set(1) (kafka.controller.KafkaController) zookeeper | [2024-04-24 08:58:18,909] INFO Snapshot loaded in 9 ms, highest zxid is 0x0, digest is 1371985504 (org.apache.zookeeper.server.ZKDatabase) grafana | logger=migrator t=2024-04-24T08:58:11.879472231Z level=info msg="Executing migration" id="Set created for temp users that will otherwise prematurely expire" policy-apex-pdp | reconnect.backoff.max.ms = 1000 policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 policy-db-migrator | mariadb | 2024-04-24 08:58:18+00:00 [Note] [Entrypoint]: Stopping temporary server kafka | [2024-04-24 08:58:23,406] INFO [Controller id=1] Currently shutting brokers in the cluster: HashSet() (kafka.controller.KafkaController) zookeeper | [2024-04-24 08:58:18,910] INFO Snapshotting: 0x0 to /var/lib/zookeeper/data/version-2/snapshot.0 (org.apache.zookeeper.server.persistence.FileTxnSnapLog) grafana | logger=migrator t=2024-04-24T08:58:11.880154143Z level=info msg="Migration successfully executed" id="Set created for temp users that will otherwise prematurely expire" duration=681.902µs policy-apex-pdp | reconnect.backoff.ms = 50 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 policy-db-migrator | > upgrade 0240-jpatoscanodetemplate_metadata.sql mariadb | 2024-04-24 8:58:18 0 [Note] mariadbd (initiated by: unknown): Normal shutdown kafka | [2024-04-24 08:58:23,406] INFO [Controller id=1] Current list of topics in the cluster: HashSet() (kafka.controller.KafkaController) zookeeper | [2024-04-24 08:58:18,911] INFO Snapshot taken in 1 ms (org.apache.zookeeper.server.ZooKeeperServer) grafana | logger=migrator t=2024-04-24T08:58:11.883397018Z level=info msg="Executing migration" id="create star table" policy-apex-pdp | request.timeout.ms = 30000 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 policy-db-migrator | -------------- mariadb | 2024-04-24 8:58:18 0 [Note] InnoDB: FTS optimize thread exiting. 
kafka | [2024-04-24 08:58:23,407] INFO [Controller id=1] Fetching topic deletions in progress (kafka.controller.KafkaController) zookeeper | [2024-04-24 08:58:18,920] INFO PrepRequestProcessor (sid:0) started, reconfigEnabled=false (org.apache.zookeeper.server.PrepRequestProcessor) grafana | logger=migrator t=2024-04-24T08:58:11.88411827Z level=info msg="Migration successfully executed" id="create star table" duration=720.672µs policy-apex-pdp | retry.backoff.ms = 100 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscanodetemplate_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) mariadb | 2024-04-24 8:58:18 0 [Note] InnoDB: Starting shutdown... kafka | [2024-04-24 08:58:23,411] INFO [Controller id=1] List of topics to be deleted: (kafka.controller.KafkaController) zookeeper | [2024-04-24 08:58:18,921] INFO zookeeper.request_throttler.shutdownTimeout = 10000 ms (org.apache.zookeeper.server.RequestThrottler) grafana | logger=migrator t=2024-04-24T08:58:11.889513451Z level=info msg="Executing migration" id="add unique index star.user_id_dashboard_id" policy-apex-pdp | sasl.client.callback.handler.class = null policy-pap | sasl.oauthbearer.scope.claim.name = scope policy-db-migrator | -------------- mariadb | 2024-04-24 8:58:18 0 [Note] InnoDB: Dumping buffer pool(s) to /var/lib/mysql/ib_buffer_pool kafka | [2024-04-24 08:58:23,412] INFO [Controller id=1] List of topics ineligible for deletion: (kafka.controller.KafkaController) zookeeper | [2024-04-24 08:58:18,935] INFO Using checkIntervalMs=60000 maxPerMinute=10000 maxNeverUsedIntervalMs=0 (org.apache.zookeeper.server.ContainerManager) grafana | logger=migrator t=2024-04-24T08:58:11.890327824Z level=info msg="Migration successfully executed" id="add unique index star.user_id_dashboard_id" duration=819.013µs policy-apex-pdp | sasl.jaas.config = null policy-pap | sasl.oauthbearer.sub.claim.name = sub policy-db-migrator | mariadb | 2024-04-24 8:58:18 0 [Note] InnoDB: Buffer pool(s) dump completed at 240424 8:58:18 kafka | [2024-04-24 08:58:23,412] INFO [Controller id=1] Initializing topic deletion manager (kafka.controller.KafkaController) zookeeper | [2024-04-24 08:58:18,936] INFO ZooKeeper audit is disabled. 
(org.apache.zookeeper.audit.ZKAuditProvider) grafana | logger=migrator t=2024-04-24T08:58:11.89475801Z level=info msg="Executing migration" id="create org table v1" policy-apex-pdp | sasl.kerberos.kinit.cmd = /usr/bin/kinit policy-pap | sasl.oauthbearer.token.endpoint.url = null policy-db-migrator | mariadb | 2024-04-24 8:58:18 0 [Note] InnoDB: Removed temporary tablespace data file: "./ibtmp1" kafka | [2024-04-24 08:58:23,412] INFO [Topic Deletion Manager 1] Initializing manager with initial deletions: Set(), initial ineligible deletions: HashSet() (kafka.controller.TopicDeletionManager) zookeeper | [2024-04-24 08:58:21,055] INFO Creating new log file: log.1 (org.apache.zookeeper.server.persistence.FileTxnLog) grafana | logger=migrator t=2024-04-24T08:58:11.895464532Z level=info msg="Migration successfully executed" id="create org table v1" duration=699.662µs policy-apex-pdp | sasl.kerberos.min.time.before.relogin = 60000 policy-pap | security.protocol = PLAINTEXT policy-db-migrator | > upgrade 0250-jpatoscanodetemplate_properties.sql mariadb | 2024-04-24 8:58:18 0 [Note] InnoDB: Shutdown completed; log sequence number 328945; transaction id 298 kafka | [2024-04-24 08:58:23,414] INFO [Controller id=1] Sending update metadata request (kafka.controller.KafkaController) grafana | logger=migrator t=2024-04-24T08:58:11.898430482Z level=info msg="Executing migration" id="create index UQE_org_name - v1" policy-apex-pdp | sasl.kerberos.service.name = null policy-pap | security.providers = null policy-db-migrator | -------------- mariadb | 2024-04-24 8:58:18 0 [Note] mariadbd: Shutdown complete kafka | [2024-04-24 08:58:23,417] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet(1) for 0 partitions (state.change.logger) grafana | logger=migrator t=2024-04-24T08:58:11.899875726Z level=info msg="Migration successfully executed" id="create index UQE_org_name - v1" duration=1.445074ms policy-apex-pdp | sasl.kerberos.ticket.renew.jitter = 0.05 policy-pap | send.buffer.bytes = 131072 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscanodetemplate_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGTEXT NULL, PROPERTIES_KEY VARCHAR(255) NULL) mariadb | kafka | [2024-04-24 08:58:23,421] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1] Enabling request processing. (kafka.network.SocketServer) grafana | logger=migrator t=2024-04-24T08:58:11.903506368Z level=info msg="Executing migration" id="create org_user table v1" policy-apex-pdp | sasl.kerberos.ticket.renew.window.factor = 0.8 policy-pap | session.timeout.ms = 45000 policy-db-migrator | -------------- mariadb | 2024-04-24 08:58:18+00:00 [Note] [Entrypoint]: Temporary server stopped kafka | [2024-04-24 08:58:23,424] INFO Awaiting socket connections on 0.0.0.0:29092. 
(kafka.network.DataPlaneAcceptor) grafana | logger=migrator t=2024-04-24T08:58:11.904488085Z level=info msg="Migration successfully executed" id="create org_user table v1" duration=983.348µs policy-apex-pdp | sasl.login.callback.handler.class = null policy-pap | socket.connection.setup.timeout.max.ms = 30000 policy-db-migrator | mariadb | kafka | [2024-04-24 08:58:23,427] INFO [ReplicaStateMachine controllerId=1] Initializing replica state (kafka.controller.ZkReplicaStateMachine) grafana | logger=migrator t=2024-04-24T08:58:11.909596341Z level=info msg="Executing migration" id="create index IDX_org_user_org_id - v1" policy-apex-pdp | sasl.login.class = null policy-pap | socket.connection.setup.timeout.ms = 10000 policy-db-migrator | mariadb | 2024-04-24 08:58:18+00:00 [Note] [Entrypoint]: MariaDB init process done. Ready for start up. kafka | [2024-04-24 08:58:23,428] INFO Awaiting socket connections on 0.0.0.0:9092. (kafka.network.DataPlaneAcceptor) grafana | logger=migrator t=2024-04-24T08:58:11.910313793Z level=info msg="Migration successfully executed" id="create index IDX_org_user_org_id - v1" duration=717.652µs grafana | logger=migrator t=2024-04-24T08:58:11.913287443Z level=info msg="Executing migration" id="create index UQE_org_user_org_id_user_id - v1" policy-apex-pdp | sasl.login.connect.timeout.ms = null policy-db-migrator | > upgrade 0260-jpatoscanodetype_metadata.sql mariadb | kafka | [2024-04-24 08:58:23,428] INFO [ReplicaStateMachine controllerId=1] Triggering online replica state changes (kafka.controller.ZkReplicaStateMachine) grafana | logger=migrator t=2024-04-24T08:58:11.913945084Z level=info msg="Migration successfully executed" id="create index UQE_org_user_org_id_user_id - v1" duration=657.091µs policy-apex-pdp | sasl.login.read.timeout.ms = null policy-db-migrator | -------------- kafka | [2024-04-24 08:58:23,431] INFO [ReplicaStateMachine controllerId=1] Triggering offline replica state changes (kafka.controller.ZkReplicaStateMachine) grafana | logger=migrator t=2024-04-24T08:58:11.919065691Z level=info msg="Executing migration" id="create index IDX_org_user_user_id - v1" policy-apex-pdp | sasl.login.refresh.buffer.seconds = 300 mariadb | 2024-04-24 8:58:18 0 [Note] mariadbd (server 10.10.2-MariaDB-1:10.10.2+maria~ubu2204) starting as process 1 ... 
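The mariadb entrypoint above reports the temporary server being stopped and "MariaDB init process done. Ready for start up.", after which mariadbd restarts as process 1. A minimal sketch of the kind of readiness probe a dependent test step could run against that container follows; the hostname and the retry budget are assumptions, while the policyclamp schema and policy_user credentials come from the entrypoint output in this log.

```java
// Minimal readiness loop for the MariaDB container described above.
// Host and retry/backoff values are assumptions; adjust to the compose setup.
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;

public class WaitForMariaDb {
    public static void main(String[] args) throws InterruptedException {
        String url = "jdbc:mariadb://localhost:3306/policyclamp"; // schema loaded by the entrypoint
        for (int attempt = 1; attempt <= 30; attempt++) {
            try (Connection conn = DriverManager.getConnection(url, "policy_user", "policy_user")) {
                System.out.println("MariaDB is ready (attempt " + attempt + ")");
                return;
            } catch (SQLException e) {
                System.out.println("Not ready yet: " + e.getMessage());
                Thread.sleep(2000); // retry until "ready for connections" appears
            }
        }
        throw new IllegalStateException("MariaDB did not become ready in time");
    }
}
```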
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscanodetype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) kafka | [2024-04-24 08:58:23,432] DEBUG [ReplicaStateMachine controllerId=1] Started replica state machine with initial state -> HashMap() (kafka.controller.ZkReplicaStateMachine) grafana | logger=migrator t=2024-04-24T08:58:11.920470414Z level=info msg="Migration successfully executed" id="create index IDX_org_user_user_id - v1" duration=1.408763ms grafana | logger=migrator t=2024-04-24T08:58:11.92376167Z level=info msg="Executing migration" id="Update org table charset" policy-apex-pdp | sasl.login.refresh.min.period.seconds = 60 mariadb | 2024-04-24 8:58:18 0 [Note] InnoDB: Compressed tables use zlib 1.2.11 policy-db-migrator | -------------- kafka | [2024-04-24 08:58:23,433] INFO [PartitionStateMachine controllerId=1] Initializing partition state (kafka.controller.ZkPartitionStateMachine) policy-pap | ssl.cipher.suites = null grafana | logger=migrator t=2024-04-24T08:58:11.923806111Z level=info msg="Migration successfully executed" id="Update org table charset" duration=45.511µs policy-apex-pdp | sasl.login.refresh.window.factor = 0.8 mariadb | 2024-04-24 8:58:18 0 [Note] InnoDB: Number of transaction pools: 1 policy-db-migrator | kafka | [2024-04-24 08:58:23,434] INFO [PartitionStateMachine controllerId=1] Triggering online partition state changes (kafka.controller.ZkPartitionStateMachine) policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] grafana | logger=migrator t=2024-04-24T08:58:11.934678966Z level=info msg="Executing migration" id="Update org_user table charset" policy-apex-pdp | sasl.login.refresh.window.jitter = 0.05 mariadb | 2024-04-24 8:58:18 0 [Note] InnoDB: Using crc32 + pclmulqdq instructions policy-db-migrator | policy-pap | ssl.endpoint.identification.algorithm = https grafana | logger=migrator t=2024-04-24T08:58:11.934711846Z level=info msg="Migration successfully executed" id="Update org_user table charset" duration=27.28µs kafka | [2024-04-24 08:58:23,438] DEBUG [PartitionStateMachine controllerId=1] Started partition state machine with initial state -> HashMap() (kafka.controller.ZkPartitionStateMachine) policy-apex-pdp | sasl.login.retry.backoff.max.ms = 10000 mariadb | 2024-04-24 8:58:18 0 [Note] mariadbd: O_TMPFILE is not supported on /tmp (disabling future attempts) policy-db-migrator | > upgrade 0270-jpatoscanodetype_properties.sql policy-pap | ssl.engine.factory.class = null grafana | logger=migrator t=2024-04-24T08:58:11.939558878Z level=info msg="Executing migration" id="Migrate all Read Only Viewers to Viewers" kafka | [2024-04-24 08:58:23,438] INFO [Controller id=1] Ready to serve as the new controller with epoch 1 (kafka.controller.KafkaController) policy-apex-pdp | sasl.login.retry.backoff.ms = 100 mariadb | 2024-04-24 8:58:18 0 [Warning] mariadbd: io_uring_queue_init() failed with ENOSYS: check seccomp filters, and the kernel version (newer than 5.1 required) policy-db-migrator | -------------- policy-pap | ssl.key.password = null grafana | logger=migrator t=2024-04-24T08:58:11.939729641Z level=info msg="Migration successfully executed" id="Migrate all Read Only Viewers to Viewers" duration=171.033µs kafka | [2024-04-24 08:58:23,439] INFO Kafka version: 7.6.1-ccs (org.apache.kafka.common.utils.AppInfoParser) policy-apex-pdp | sasl.mechanism = GSSAPI mariadb | 2024-04-24 8:58:18 0 [Warning] InnoDB: liburing disabled: falling back to innodb_use_native_aio=OFF 
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscanodetype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL) policy-pap | ssl.keymanager.algorithm = SunX509 grafana | logger=migrator t=2024-04-24T08:58:11.94380457Z level=info msg="Executing migration" id="create dashboard table" kafka | [2024-04-24 08:58:23,439] INFO Kafka commitId: 11e81ad2a49db00b1d2b8c731409cd09e563de67 (org.apache.kafka.common.utils.AppInfoParser) policy-apex-pdp | sasl.oauthbearer.clock.skew.seconds = 30 mariadb | 2024-04-24 8:58:18 0 [Note] InnoDB: Initializing buffer pool, total size = 128.000MiB, chunk size = 2.000MiB policy-db-migrator | -------------- policy-pap | ssl.keystore.certificate.chain = null grafana | logger=migrator t=2024-04-24T08:58:11.944974009Z level=info msg="Migration successfully executed" id="create dashboard table" duration=1.168709ms kafka | [2024-04-24 08:58:23,439] INFO Kafka startTimeMs: 1713949103434 (org.apache.kafka.common.utils.AppInfoParser) policy-apex-pdp | sasl.oauthbearer.expected.audience = null mariadb | 2024-04-24 8:58:18 0 [Note] InnoDB: Completed initialization of buffer pool policy-db-migrator | policy-pap | ssl.keystore.key = null grafana | logger=migrator t=2024-04-24T08:58:11.94790509Z level=info msg="Executing migration" id="add index dashboard.account_id" kafka | [2024-04-24 08:58:23,440] INFO [RequestSendThread controllerId=1] Controller 1 connected to kafka:9092 (id: 1 rack: null) for sending state change requests (kafka.controller.RequestSendThread) policy-apex-pdp | sasl.oauthbearer.expected.issuer = null mariadb | 2024-04-24 8:58:18 0 [Note] InnoDB: File system buffers for log disabled (block size=512 bytes) policy-pap | ssl.keystore.location = null policy-db-migrator | grafana | logger=migrator t=2024-04-24T08:58:11.949185Z level=info msg="Migration successfully executed" id="add index dashboard.account_id" duration=1.27889ms kafka | [2024-04-24 08:58:23,441] INFO [KafkaServer id=1] started (kafka.server.KafkaServer) policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 mariadb | 2024-04-24 8:58:18 0 [Note] InnoDB: 128 rollback segments are active. policy-pap | ssl.keystore.password = null policy-db-migrator | > upgrade 0280-jpatoscapolicy_metadata.sql grafana | logger=migrator t=2024-04-24T08:58:11.952327794Z level=info msg="Executing migration" id="add unique index dashboard_account_id_slug" kafka | [2024-04-24 08:58:23,445] INFO [Controller id=1] Partitions undergoing preferred replica election: (kafka.controller.KafkaController) policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 mariadb | 2024-04-24 8:58:18 0 [Note] InnoDB: Setting file './ibtmp1' size to 12.000MiB. Physically writing the file full; Please wait ... policy-pap | ssl.keystore.type = JKS policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-24T08:58:11.953116207Z level=info msg="Migration successfully executed" id="add unique index dashboard_account_id_slug" duration=788.113µs kafka | [2024-04-24 08:58:23,446] INFO [Controller id=1] Partitions that completed preferred replica election: (kafka.controller.KafkaController) policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 mariadb | 2024-04-24 8:58:18 0 [Note] InnoDB: File './ibtmp1' size is now 12.000MiB. 
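At this point the kafka container has logged "[KafkaServer id=1] started" (Kafka 7.6.1-ccs) with listeners on 9092 and 29092. A minimal AdminClient sketch to verify the broker is reachable is given below; kafka:9092 is the address the components in this log use inside the compose network, and resolving it (or using the 29092 listener instead) from outside that network is an assumption about this particular setup.

```java
// Minimal check that the broker reported as "[KafkaServer id=1] started" is reachable.
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.DescribeClusterResult;

public class BrokerCheck {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");
        try (AdminClient admin = AdminClient.create(props)) {
            DescribeClusterResult cluster = admin.describeCluster();
            System.out.println("Cluster id: " + cluster.clusterId().get());
            System.out.println("Nodes: " + cluster.nodes().get()); // expect the single broker id=1
        }
    }
}
```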
policy-pap | ssl.protocol = TLSv1.3 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicy_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) grafana | logger=migrator t=2024-04-24T08:58:11.956000176Z level=info msg="Executing migration" id="create dashboard_tag table" kafka | [2024-04-24 08:58:23,446] INFO [Controller id=1] Skipping preferred replica election for partitions due to topic deletion: (kafka.controller.KafkaController) policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.url = null mariadb | 2024-04-24 8:58:18 0 [Note] InnoDB: log sequence number 328945; transaction id 299 policy-pap | ssl.provider = null policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-24T08:58:11.956602796Z level=info msg="Migration successfully executed" id="create dashboard_tag table" duration=602.66µs kafka | [2024-04-24 08:58:23,446] INFO [Controller id=1] Resuming preferred replica election for partitions: (kafka.controller.KafkaController) policy-apex-pdp | sasl.oauthbearer.scope.claim.name = scope mariadb | 2024-04-24 8:58:18 0 [Note] Plugin 'FEEDBACK' is disabled. policy-pap | ssl.secure.random.implementation = null policy-db-migrator | grafana | logger=migrator t=2024-04-24T08:58:11.962952414Z level=info msg="Executing migration" id="add unique index dashboard_tag.dasboard_id_term" kafka | [2024-04-24 08:58:23,447] INFO [Controller id=1] Starting replica leader election (PREFERRED) for partitions triggered by ZkTriggered (kafka.controller.KafkaController) policy-apex-pdp | sasl.oauthbearer.sub.claim.name = sub mariadb | 2024-04-24 8:58:18 0 [Note] InnoDB: Loading buffer pool(s) from /var/lib/mysql/ib_buffer_pool policy-pap | ssl.trustmanager.algorithm = PKIX policy-db-migrator | grafana | logger=migrator t=2024-04-24T08:58:11.964571061Z level=info msg="Migration successfully executed" id="add unique index dashboard_tag.dasboard_id_term" duration=1.617687ms kafka | [2024-04-24 08:58:23,489] TRACE [Controller id=1 epoch=1] Received response UpdateMetadataResponseData(errorCode=0) for request UPDATE_METADATA with correlation id 0 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger) policy-apex-pdp | sasl.oauthbearer.token.endpoint.url = null mariadb | 2024-04-24 8:58:18 0 [Warning] 'default-authentication-plugin' is MySQL 5.6 / 5.7 compatible option. To be implemented in later versions. policy-pap | ssl.truststore.certificates = null policy-db-migrator | > upgrade 0290-jpatoscapolicy_properties.sql grafana | logger=migrator t=2024-04-24T08:58:11.969927372Z level=info msg="Executing migration" id="drop index UQE_dashboard_tag_dashboard_id_term - v1" kafka | [2024-04-24 08:58:23,546] INFO [zk-broker-1-to-controller-forwarding-channel-manager]: Recorded new controller, from now on will use node kafka:9092 (id: 1 rack: null) (kafka.server.BrokerToControllerRequestThread) policy-apex-pdp | security.protocol = PLAINTEXT mariadb | 2024-04-24 8:58:18 0 [Warning] You need to use --log-bin to make --expire-logs-days or --binlog-expire-logs-seconds work. 
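Interleaved through this section is the ConsumerConfig dump for the policy-pap consumers (bootstrap.servers = [kafka:9092], group.id = policy-pap, auto.offset.reset = latest, session.timeout.ms = 45000, PLAINTEXT security, String key/value deserializers, subscribed to policy-pdp-pap). The following is a plain-Java sketch that mirrors those dumped values; it is an illustration, not the actual PAP code, and only reads a single poll of messages.

```java
// Plain-Java equivalent of the policy-pap ConsumerConfig values dumped in this log.
import java.time.Duration;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class PdpPapListener {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "policy-pap");
        props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "latest");
        props.put(ConsumerConfig.SESSION_TIMEOUT_MS_CONFIG, 45000);
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("policy-pdp-pap"));
            ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(5));
            for (ConsumerRecord<String, String> record : records) {
                System.out.printf("offset=%d value=%s%n", record.offset(), record.value());
            }
        }
    }
}
```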
policy-pap | ssl.truststore.location = null policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-24T08:58:11.97096133Z level=info msg="Migration successfully executed" id="drop index UQE_dashboard_tag_dashboard_id_term - v1" duration=1.033848ms kafka | [2024-04-24 08:58:23,546] INFO [zk-broker-1-to-controller-alter-partition-channel-manager]: Recorded new controller, from now on will use node kafka:9092 (id: 1 rack: null) (kafka.server.BrokerToControllerRequestThread) policy-apex-pdp | security.providers = null mariadb | 2024-04-24 8:58:18 0 [Note] Server socket created on IP: '0.0.0.0'. policy-pap | ssl.truststore.password = null policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicy_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGTEXT NULL, PROPERTIES_KEY VARCHAR(255) NULL) grafana | logger=migrator t=2024-04-24T08:58:11.975933364Z level=info msg="Executing migration" id="Rename table dashboard to dashboard_v1 - v1" kafka | [2024-04-24 08:58:23,578] INFO [Controller id=1] Starting the controller scheduler (kafka.controller.KafkaController) mariadb | 2024-04-24 8:58:18 0 [Note] Server socket created on IP: '::'. policy-pap | ssl.truststore.type = JKS policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-24T08:58:11.982065017Z level=info msg="Migration successfully executed" id="Rename table dashboard to dashboard_v1 - v1" duration=6.132273ms policy-apex-pdp | send.buffer.bytes = 131072 mariadb | 2024-04-24 8:58:18 0 [Note] mariadbd: ready for connections. policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-db-migrator | grafana | logger=migrator t=2024-04-24T08:58:11.987392238Z level=info msg="Executing migration" id="create dashboard v2" policy-apex-pdp | session.timeout.ms = 45000 policy-apex-pdp | socket.connection.setup.timeout.max.ms = 30000 policy-db-migrator | grafana | logger=migrator t=2024-04-24T08:58:11.988242542Z level=info msg="Migration successfully executed" id="create dashboard v2" duration=850.535µs kafka | [2024-04-24 08:58:28,580] INFO [Controller id=1] Processing automatic preferred replica leader election (kafka.controller.KafkaController) policy-apex-pdp | socket.connection.setup.timeout.ms = 10000 grafana | logger=migrator t=2024-04-24T08:58:11.990700234Z level=info msg="Executing migration" id="create index IDX_dashboard_org_id - v2" kafka | [2024-04-24 08:58:28,580] TRACE [Controller id=1] Checking need to trigger auto leader balancing (kafka.controller.KafkaController) mariadb | Version: '10.10.2-MariaDB-1:10.10.2+maria~ubu2204' socket: '/run/mysqld/mysqld.sock' port: 3306 mariadb.org binary distribution policy-pap | policy-apex-pdp | ssl.cipher.suites = null policy-db-migrator | > upgrade 0300-jpatoscapolicy_targets.sql grafana | logger=migrator t=2024-04-24T08:58:11.991393546Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_org_id - v2" duration=694.111µs kafka | [2024-04-24 08:58:51,135] INFO Creating topic __consumer_offsets with configuration {compression.type=producer, cleanup.policy=compact, segment.bytes=104857600} and initial partition assignment HashMap(0 -> ArrayBuffer(1), 1 -> ArrayBuffer(1), 2 -> ArrayBuffer(1), 3 -> ArrayBuffer(1), 4 -> ArrayBuffer(1), 5 -> ArrayBuffer(1), 6 -> ArrayBuffer(1), 7 -> ArrayBuffer(1), 8 -> ArrayBuffer(1), 9 -> ArrayBuffer(1), 10 -> ArrayBuffer(1), 11 -> ArrayBuffer(1), 12 -> ArrayBuffer(1), 13 -> ArrayBuffer(1), 14 -> ArrayBuffer(1), 15 -> ArrayBuffer(1), 16 -> 
ArrayBuffer(1), 17 -> ArrayBuffer(1), 18 -> ArrayBuffer(1), 19 -> ArrayBuffer(1), 20 -> ArrayBuffer(1), 21 -> ArrayBuffer(1), 22 -> ArrayBuffer(1), 23 -> ArrayBuffer(1), 24 -> ArrayBuffer(1), 25 -> ArrayBuffer(1), 26 -> ArrayBuffer(1), 27 -> ArrayBuffer(1), 28 -> ArrayBuffer(1), 29 -> ArrayBuffer(1), 30 -> ArrayBuffer(1), 31 -> ArrayBuffer(1), 32 -> ArrayBuffer(1), 33 -> ArrayBuffer(1), 34 -> ArrayBuffer(1), 35 -> ArrayBuffer(1), 36 -> ArrayBuffer(1), 37 -> ArrayBuffer(1), 38 -> ArrayBuffer(1), 39 -> ArrayBuffer(1), 40 -> ArrayBuffer(1), 41 -> ArrayBuffer(1), 42 -> ArrayBuffer(1), 43 -> ArrayBuffer(1), 44 -> ArrayBuffer(1), 45 -> ArrayBuffer(1), 46 -> ArrayBuffer(1), 47 -> ArrayBuffer(1), 48 -> ArrayBuffer(1), 49 -> ArrayBuffer(1)) (kafka.zk.AdminZkClient) mariadb | 2024-04-24 8:58:18 0 [Note] InnoDB: Buffer pool(s) load completed at 240424 8:58:18 policy-pap | [2024-04-24T08:58:49.057+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 policy-apex-pdp | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-24T08:58:11.9963784Z level=info msg="Executing migration" id="create index UQE_dashboard_org_id_slug - v2" kafka | [2024-04-24 08:58:51,135] INFO Creating topic policy-pdp-pap with configuration {} and initial partition assignment HashMap(0 -> ArrayBuffer(1)) (kafka.zk.AdminZkClient) mariadb | 2024-04-24 8:58:19 3 [Warning] Aborted connection 3 to db: 'unconnected' user: 'unauthenticated' host: '172.17.0.8' (This connection closed normally without authentication) policy-pap | [2024-04-24T08:58:49.057+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 policy-pap | [2024-04-24T08:58:49.057+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1713949129055 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicy_targets (name VARCHAR(120) NULL, version VARCHAR(20) NULL) grafana | logger=migrator t=2024-04-24T08:58:11.997238324Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_org_id_slug - v2" duration=860.424µs kafka | [2024-04-24 08:58:51,139] DEBUG [Controller id=1] There is no producerId block yet (Zk path version 0), creating the first block (kafka.controller.KafkaController) mariadb | 2024-04-24 8:58:19 4 [Warning] Aborted connection 4 to db: 'unconnected' user: 'unauthenticated' host: '172.17.0.11' (This connection closed normally without authentication) policy-pap | [2024-04-24T08:58:49.059+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-c2598a93-7b5f-4e4e-b23a-b864ffd9a18a-1, groupId=c2598a93-7b5f-4e4e-b23a-b864ffd9a18a] Subscribed to topic(s): policy-pdp-pap policy-pap | [2024-04-24T08:58:49.060+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-24T08:58:12.004176152Z level=info msg="Executing migration" id="copy dashboard v1 to v2" kafka | [2024-04-24 08:58:51,142] INFO [Controller id=1] Acquired new producerId block ProducerIdsBlock(assignedBrokerId=1, firstProducerId=0, size=1000) by writing to Zk with path version 1 (kafka.controller.KafkaController) mariadb | 2024-04-24 8:58:19 5 [Warning] Aborted connection 5 to db: 'unconnected' user: 'unauthenticated' host: '172.17.0.10' (This connection closed normally without authentication) policy-pap | allow.auto.create.topics = true policy-pap | auto.commit.interval.ms = 5000 policy-db-migrator | grafana | logger=migrator t=2024-04-24T08:58:12.004513298Z level=info msg="Migration successfully executed" id="copy dashboard v1 to v2" 
duration=338.396µs mariadb | 2024-04-24 8:58:19 6 [Warning] Aborted connection 6 to db: 'unconnected' user: 'unauthenticated' host: '172.17.0.7' (This connection closed normally without authentication) kafka | [2024-04-24 08:58:51,169] INFO [Controller id=1] New topics: [Set(policy-pdp-pap, __consumer_offsets)], deleted topics: [HashSet()], new partition replica assignment [Set(TopicIdReplicaAssignment(policy-pdp-pap,Some(UfYjnzzkRPeYang4gRgPIg),Map(policy-pdp-pap-0 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=))), TopicIdReplicaAssignment(__consumer_offsets,Some(3d7pexomSuav55xzl5U12w),HashMap(__consumer_offsets-22 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-30 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-25 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-35 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-37 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-38 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-13 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-8 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-21 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-4 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-27 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-7 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-9 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-46 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-41 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-33 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-23 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-49 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-47 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-16 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-28 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-31 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-36 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-42 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-3 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-18 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-15 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-24 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-17 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-48 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-19 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-11 -> ReplicaAssignment(replicas=1, addingReplicas=, 
removingReplicas=), __consumer_offsets-2 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-43 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-6 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-14 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-20 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-0 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-44 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-39 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-12 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-45 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-1 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-5 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-26 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-29 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-34 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-10 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-32 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-40 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=))))] (kafka.controller.KafkaController) policy-pap | auto.include.jmx.reporter = true policy-pap | auto.offset.reset = latest policy-db-migrator | grafana | logger=migrator t=2024-04-24T08:58:12.009204439Z level=info msg="Executing migration" id="drop table dashboard_v1" kafka | [2024-04-24 08:58:51,170] INFO [Controller id=1] New partition creation callback for __consumer_offsets-22,__consumer_offsets-30,__consumer_offsets-25,__consumer_offsets-35,__consumer_offsets-38,__consumer_offsets-13,__consumer_offsets-8,__consumer_offsets-21,__consumer_offsets-4,__consumer_offsets-27,__consumer_offsets-7,__consumer_offsets-9,__consumer_offsets-46,__consumer_offsets-41,__consumer_offsets-33,__consumer_offsets-23,__consumer_offsets-49,__consumer_offsets-47,__consumer_offsets-16,__consumer_offsets-28,__consumer_offsets-31,__consumer_offsets-36,__consumer_offsets-42,__consumer_offsets-3,__consumer_offsets-18,__consumer_offsets-37,policy-pdp-pap-0,__consumer_offsets-15,__consumer_offsets-24,__consumer_offsets-17,__consumer_offsets-48,__consumer_offsets-19,__consumer_offsets-11,__consumer_offsets-2,__consumer_offsets-43,__consumer_offsets-6,__consumer_offsets-14,__consumer_offsets-20,__consumer_offsets-0,__consumer_offsets-44,__consumer_offsets-39,__consumer_offsets-12,__consumer_offsets-45,__consumer_offsets-1,__consumer_offsets-5,__consumer_offsets-26,__consumer_offsets-29,__consumer_offsets-34,__consumer_offsets-10,__consumer_offsets-32,__consumer_offsets-40 (kafka.controller.KafkaController) policy-pap | bootstrap.servers = [kafka:9092] policy-pap | check.crcs = true policy-db-migrator | > upgrade 0310-jpatoscapolicytype_metadata.sql grafana | logger=migrator t=2024-04-24T08:58:12.010040016Z level=info msg="Migration successfully executed" id="drop table dashboard_v1" duration=835.477µs kafka | [2024-04-24 08:58:51,172] INFO [Controller id=1 epoch=1] Changed 
partition __consumer_offsets-22 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-pap | client.dns.lookup = use_all_dns_ips policy-pap | client.id = consumer-policy-pap-2 policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-24T08:58:12.015211385Z level=info msg="Executing migration" id="alter dashboard.data to mediumtext v1" kafka | [2024-04-24 08:58:51,172] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-30 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-pap | client.rack = policy-apex-pdp | ssl.endpoint.identification.algorithm = https policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicytype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) grafana | logger=migrator t=2024-04-24T08:58:12.015309897Z level=info msg="Migration successfully executed" id="alter dashboard.data to mediumtext v1" duration=99.031µs kafka | [2024-04-24 08:58:51,172] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-25 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-pap | connections.max.idle.ms = 540000 policy-apex-pdp | ssl.engine.factory.class = null policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-24T08:58:12.019025648Z level=info msg="Executing migration" id="Add column updated_by in dashboard - v2" kafka | [2024-04-24 08:58:51,172] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-35 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-pap | default.api.timeout.ms = 60000 policy-apex-pdp | ssl.key.password = null policy-db-migrator | grafana | logger=migrator t=2024-04-24T08:58:12.021999254Z level=info msg="Migration successfully executed" id="Add column updated_by in dashboard - v2" duration=2.973826ms kafka | [2024-04-24 08:58:51,172] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-38 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-pap | enable.auto.commit = true policy-apex-pdp | ssl.keymanager.algorithm = SunX509 policy-db-migrator | grafana | logger=migrator t=2024-04-24T08:58:12.025603634Z level=info msg="Executing migration" id="Add column created_by in dashboard - v2" kafka | [2024-04-24 08:58:51,172] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-13 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-pap | exclude.internal.topics = true policy-apex-pdp | ssl.keystore.certificate.chain = null policy-db-migrator | > upgrade 0320-jpatoscapolicytype_properties.sql kafka | [2024-04-24 08:58:51,172] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-8 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) grafana | logger=migrator t=2024-04-24T08:58:12.027442938Z level=info msg="Migration successfully executed" id="Add column created_by in dashboard - v2" duration=1.839604ms policy-pap | fetch.max.bytes = 52428800 policy-apex-pdp | ssl.keystore.key = null policy-db-migrator | -------------- kafka | [2024-04-24 08:58:51,172] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-21 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) grafana | logger=migrator 
t=2024-04-24T08:58:12.030610159Z level=info msg="Executing migration" id="Add column gnetId in dashboard" policy-pap | fetch.max.wait.ms = 500 policy-apex-pdp | ssl.keystore.location = null policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicytype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL) kafka | [2024-04-24 08:58:51,172] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-4 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) grafana | logger=migrator t=2024-04-24T08:58:12.032423563Z level=info msg="Migration successfully executed" id="Add column gnetId in dashboard" duration=1.813414ms policy-pap | fetch.min.bytes = 1 policy-apex-pdp | ssl.keystore.password = null policy-db-migrator | -------------- kafka | [2024-04-24 08:58:51,172] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-27 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) grafana | logger=migrator t=2024-04-24T08:58:12.037418659Z level=info msg="Executing migration" id="Add index for gnetId in dashboard" policy-pap | group.id = policy-pap policy-apex-pdp | ssl.keystore.type = JKS policy-db-migrator | kafka | [2024-04-24 08:58:51,172] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-7 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) grafana | logger=migrator t=2024-04-24T08:58:12.038206694Z level=info msg="Migration successfully executed" id="Add index for gnetId in dashboard" duration=788.275µs policy-pap | group.instance.id = null policy-apex-pdp | ssl.protocol = TLSv1.3 policy-db-migrator | kafka | [2024-04-24 08:58:51,172] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-9 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) grafana | logger=migrator t=2024-04-24T08:58:12.041457735Z level=info msg="Executing migration" id="Add column plugin_id in dashboard" policy-pap | heartbeat.interval.ms = 3000 policy-apex-pdp | ssl.provider = null policy-db-migrator | > upgrade 0330-jpatoscapolicytype_targets.sql kafka | [2024-04-24 08:58:51,172] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-46 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) grafana | logger=migrator t=2024-04-24T08:58:12.044135667Z level=info msg="Migration successfully executed" id="Add column plugin_id in dashboard" duration=2.674212ms policy-pap | interceptor.classes = [] policy-apex-pdp | ssl.secure.random.implementation = null policy-db-migrator | -------------- kafka | [2024-04-24 08:58:51,172] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-41 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) grafana | logger=migrator t=2024-04-24T08:58:12.050975777Z level=info msg="Executing migration" id="Add index for plugin_id in dashboard" policy-pap | internal.leave.group.on.close = true policy-apex-pdp | ssl.trustmanager.algorithm = PKIX policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicytype_targets (name VARCHAR(120) NULL, version VARCHAR(20) NULL) kafka | [2024-04-24 08:58:51,172] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-33 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) grafana | logger=migrator 
t=2024-04-24T08:58:12.051990546Z level=info msg="Migration successfully executed" id="Add index for plugin_id in dashboard" duration=1.016059ms policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false policy-apex-pdp | ssl.truststore.certificates = null policy-db-migrator | -------------- kafka | [2024-04-24 08:58:51,172] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-23 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) grafana | logger=migrator t=2024-04-24T08:58:12.055068186Z level=info msg="Executing migration" id="Add index for dashboard_id in dashboard_tag" policy-pap | isolation.level = read_uncommitted policy-apex-pdp | ssl.truststore.location = null policy-db-migrator | kafka | [2024-04-24 08:58:51,173] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-49 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) grafana | logger=migrator t=2024-04-24T08:58:12.05631856Z level=info msg="Migration successfully executed" id="Add index for dashboard_id in dashboard_tag" duration=1.251214ms policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-apex-pdp | ssl.truststore.password = null policy-db-migrator | kafka | [2024-04-24 08:58:51,173] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-47 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) grafana | logger=migrator t=2024-04-24T08:58:12.06420606Z level=info msg="Executing migration" id="Update dashboard table charset" policy-pap | max.partition.fetch.bytes = 1048576 policy-apex-pdp | ssl.truststore.type = JKS policy-db-migrator | > upgrade 0340-jpatoscapolicytype_triggers.sql kafka | [2024-04-24 08:58:51,173] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-16 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) grafana | logger=migrator t=2024-04-24T08:58:12.064286681Z level=info msg="Migration successfully executed" id="Update dashboard table charset" duration=80.221µs policy-pap | max.poll.interval.ms = 300000 policy-apex-pdp | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-db-migrator | -------------- kafka | [2024-04-24 08:58:51,173] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-28 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) grafana | logger=migrator t=2024-04-24T08:58:12.068008402Z level=info msg="Executing migration" id="Update dashboard_tag table charset" policy-pap | max.poll.records = 500 policy-apex-pdp | policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicytype_triggers (name VARCHAR(120) NULL, version VARCHAR(20) NULL, TRIGGERS VARCHAR(255) NULL) kafka | [2024-04-24 08:58:51,173] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-31 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) grafana | logger=migrator t=2024-04-24T08:58:12.068058923Z level=info msg="Migration successfully executed" id="Update dashboard_tag table charset" duration=48.781µs policy-pap | metadata.max.age.ms = 300000 policy-apex-pdp | [2024-04-24T08:58:52.331+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 policy-db-migrator | -------------- kafka | [2024-04-24 08:58:51,173] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-36 state from 
NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) grafana | logger=migrator t=2024-04-24T08:58:12.071493848Z level=info msg="Executing migration" id="Add column folder_id in dashboard" policy-pap | metric.reporters = [] policy-apex-pdp | [2024-04-24T08:58:52.331+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 policy-db-migrator | kafka | [2024-04-24 08:58:51,173] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-42 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) grafana | logger=migrator t=2024-04-24T08:58:12.075022486Z level=info msg="Migration successfully executed" id="Add column folder_id in dashboard" duration=3.525337ms policy-pap | metrics.num.samples = 2 policy-apex-pdp | [2024-04-24T08:58:52.331+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1713949132331 policy-db-migrator | kafka | [2024-04-24 08:58:51,173] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-3 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) grafana | logger=migrator t=2024-04-24T08:58:12.081549701Z level=info msg="Executing migration" id="Add column isFolder in dashboard" policy-pap | metrics.recording.level = INFO policy-apex-pdp | [2024-04-24T08:58:52.332+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-6c14929a-34c8-48a0-adf2-d542a07b4ce8-2, groupId=6c14929a-34c8-48a0-adf2-d542a07b4ce8] Subscribed to topic(s): policy-pdp-pap policy-db-migrator | > upgrade 0350-jpatoscaproperty_constraints.sql kafka | [2024-04-24 08:58:51,173] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-18 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) grafana | logger=migrator t=2024-04-24T08:58:12.083799613Z level=info msg="Migration successfully executed" id="Add column isFolder in dashboard" duration=2.251562ms grafana | logger=migrator t=2024-04-24T08:58:12.089687735Z level=info msg="Executing migration" id="Add column has_acl in dashboard" policy-apex-pdp | [2024-04-24T08:58:52.332+00:00|INFO|InlineBusTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=6f6498b9-feed-4855-a99d-511b9662bd01, alive=false, publisher=null]]: starting policy-db-migrator | -------------- kafka | [2024-04-24 08:58:51,173] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-37 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-pap | metrics.sample.window.ms = 30000 grafana | logger=migrator t=2024-04-24T08:58:12.092191544Z level=info msg="Migration successfully executed" id="Add column has_acl in dashboard" duration=2.503718ms policy-apex-pdp | [2024-04-24T08:58:52.345+00:00|INFO|ProducerConfig|main] ProducerConfig values: policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscaproperty_constraints (parentLocalName VARCHAR(120) NULL, localName VARCHAR(120) NULL, parentKeyVersion VARCHAR(15) NULL, parentKeyName VARCHAR(120) NULL, CONSTRAINTS VARCHAR(255) NULL) kafka | [2024-04-24 08:58:51,173] INFO [Controller id=1 epoch=1] Changed partition policy-pdp-pap-0 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] grafana | logger=migrator t=2024-04-24T08:58:12.131273659Z 
level=info msg="Executing migration" id="Add column uid in dashboard" policy-apex-pdp | acks = -1 policy-db-migrator | -------------- kafka | [2024-04-24 08:58:51,173] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-15 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-pap | receive.buffer.bytes = 65536 grafana | logger=migrator t=2024-04-24T08:58:12.137211302Z level=info msg="Migration successfully executed" id="Add column uid in dashboard" duration=5.983694ms policy-apex-pdp | auto.include.jmx.reporter = true policy-db-migrator | kafka | [2024-04-24 08:58:51,173] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-24 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-pap | reconnect.backoff.max.ms = 1000 grafana | logger=migrator t=2024-04-24T08:58:12.141001194Z level=info msg="Executing migration" id="Update uid column values in dashboard" policy-apex-pdp | batch.size = 16384 policy-db-migrator | kafka | [2024-04-24 08:58:51,173] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-17 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-pap | reconnect.backoff.ms = 50 grafana | logger=migrator t=2024-04-24T08:58:12.14127771Z level=info msg="Migration successfully executed" id="Update uid column values in dashboard" duration=277.156µs policy-apex-pdp | bootstrap.servers = [kafka:9092] policy-db-migrator | > upgrade 0360-jpatoscaproperty_metadata.sql kafka | [2024-04-24 08:58:51,173] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-48 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-pap | request.timeout.ms = 30000 grafana | logger=migrator t=2024-04-24T08:58:12.145644973Z level=info msg="Executing migration" id="Add unique index dashboard_org_id_uid" policy-apex-pdp | buffer.memory = 33554432 policy-db-migrator | -------------- kafka | [2024-04-24 08:58:51,173] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-19 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-pap | retry.backoff.ms = 100 grafana | logger=migrator t=2024-04-24T08:58:12.146502999Z level=info msg="Migration successfully executed" id="Add unique index dashboard_org_id_uid" duration=857.046µs policy-apex-pdp | client.dns.lookup = use_all_dns_ips policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscaproperty_metadata (parentLocalName VARCHAR(120) NULL, localName VARCHAR(120) NULL, parentKeyVersion VARCHAR(15) NULL, parentKeyName VARCHAR(120) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) kafka | [2024-04-24 08:58:51,173] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-11 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-pap | sasl.client.callback.handler.class = null grafana | logger=migrator t=2024-04-24T08:58:12.151696278Z level=info msg="Executing migration" id="Remove unique index org_id_slug" policy-apex-pdp | client.id = producer-1 policy-db-migrator | -------------- kafka | [2024-04-24 08:58:51,173] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-2 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-pap | sasl.jaas.config = null grafana | logger=migrator t=2024-04-24T08:58:12.152382561Z level=info msg="Migration 
successfully executed" id="Remove unique index org_id_slug" duration=690.303µs policy-apex-pdp | compression.type = none policy-db-migrator | kafka | [2024-04-24 08:58:51,173] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-43 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit grafana | logger=migrator t=2024-04-24T08:58:12.155774796Z level=info msg="Executing migration" id="Update dashboard title length" policy-apex-pdp | connections.max.idle.ms = 540000 policy-db-migrator | kafka | [2024-04-24 08:58:51,173] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-6 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-pap | sasl.kerberos.min.time.before.relogin = 60000 grafana | logger=migrator t=2024-04-24T08:58:12.155807057Z level=info msg="Migration successfully executed" id="Update dashboard title length" duration=33.021µs policy-apex-pdp | delivery.timeout.ms = 120000 policy-db-migrator | > upgrade 0370-jpatoscarelationshiptype_metadata.sql policy-db-migrator | -------------- policy-pap | sasl.kerberos.service.name = null grafana | logger=migrator t=2024-04-24T08:58:12.163037584Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_title_folder_id" policy-apex-pdp | enable.idempotence = true policy-apex-pdp | interceptor.classes = [] policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarelationshiptype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) grafana | logger=migrator t=2024-04-24T08:58:12.165112224Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_title_folder_id" duration=2.07862ms policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 policy-apex-pdp | key.serializer = class org.apache.kafka.common.serialization.StringSerializer policy-db-migrator | -------------- kafka | [2024-04-24 08:58:51,173] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-14 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) grafana | logger=migrator t=2024-04-24T08:58:12.16858704Z level=info msg="Executing migration" id="create dashboard_provisioning" policy-apex-pdp | linger.ms = 0 policy-db-migrator | kafka | [2024-04-24 08:58:51,173] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-20 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) grafana | logger=migrator t=2024-04-24T08:58:12.169299114Z level=info msg="Migration successfully executed" id="create dashboard_provisioning" duration=712.224µs policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 policy-apex-pdp | max.block.ms = 60000 policy-db-migrator | kafka | [2024-04-24 08:58:51,173] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-0 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) grafana | logger=migrator t=2024-04-24T08:58:12.172805941Z level=info msg="Executing migration" id="Rename table dashboard_provisioning to dashboard_provisioning_tmp_qwerty - v1" policy-pap | sasl.login.callback.handler.class = null policy-apex-pdp | max.in.flight.requests.per.connection = 5 policy-db-migrator | > upgrade 0380-jpatoscarelationshiptype_properties.sql kafka | [2024-04-24 08:58:51,173] INFO [Controller id=1 epoch=1] Changed partition 
__consumer_offsets-44 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) grafana | logger=migrator t=2024-04-24T08:58:12.178110611Z level=info msg="Migration successfully executed" id="Rename table dashboard_provisioning to dashboard_provisioning_tmp_qwerty - v1" duration=5.30252ms policy-pap | sasl.login.class = null policy-apex-pdp | max.request.size = 1048576 policy-db-migrator | -------------- kafka | [2024-04-24 08:58:51,173] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-39 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) grafana | logger=migrator t=2024-04-24T08:58:12.182681869Z level=info msg="Executing migration" id="create dashboard_provisioning v2" policy-pap | sasl.login.connect.timeout.ms = null policy-apex-pdp | metadata.max.age.ms = 300000 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarelationshiptype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL) kafka | [2024-04-24 08:58:51,173] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-12 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) grafana | logger=migrator t=2024-04-24T08:58:12.183396333Z level=info msg="Migration successfully executed" id="create dashboard_provisioning v2" duration=714.784µs policy-pap | sasl.login.read.timeout.ms = null policy-apex-pdp | metadata.max.idle.ms = 300000 policy-db-migrator | -------------- kafka | [2024-04-24 08:58:51,173] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-45 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) grafana | logger=migrator t=2024-04-24T08:58:12.187538102Z level=info msg="Executing migration" id="create index IDX_dashboard_provisioning_dashboard_id - v2" policy-pap | sasl.login.refresh.buffer.seconds = 300 policy-apex-pdp | metric.reporters = [] policy-db-migrator | kafka | [2024-04-24 08:58:51,173] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-1 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) grafana | logger=migrator t=2024-04-24T08:58:12.188322747Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_provisioning_dashboard_id - v2" duration=782.714µs policy-pap | sasl.login.refresh.min.period.seconds = 60 policy-apex-pdp | metrics.num.samples = 2 policy-db-migrator | kafka | [2024-04-24 08:58:51,174] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-5 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) grafana | logger=migrator t=2024-04-24T08:58:12.192363834Z level=info msg="Executing migration" id="create index IDX_dashboard_provisioning_dashboard_id_name - v2" policy-pap | sasl.login.refresh.window.factor = 0.8 policy-apex-pdp | metrics.recording.level = INFO policy-db-migrator | > upgrade 0390-jpatoscarequirement_metadata.sql kafka | [2024-04-24 08:58:51,174] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-26 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) grafana | logger=migrator t=2024-04-24T08:58:12.193370343Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_provisioning_dashboard_id_name - v2" duration=1.006069ms policy-pap | sasl.login.refresh.window.jitter = 0.05 policy-apex-pdp 
| metrics.sample.window.ms = 30000 policy-db-migrator | -------------- kafka | [2024-04-24 08:58:51,174] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-29 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) grafana | logger=migrator t=2024-04-24T08:58:12.19897894Z level=info msg="Executing migration" id="copy dashboard_provisioning v1 to v2" policy-pap | sasl.login.retry.backoff.max.ms = 10000 policy-apex-pdp | partitioner.adaptive.partitioning.enable = true policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarequirement_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) kafka | [2024-04-24 08:58:51,174] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-34 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) grafana | logger=migrator t=2024-04-24T08:58:12.199299906Z level=info msg="Migration successfully executed" id="copy dashboard_provisioning v1 to v2" duration=320.876µs policy-pap | sasl.login.retry.backoff.ms = 100 policy-apex-pdp | partitioner.availability.timeout.ms = 0 policy-db-migrator | -------------- kafka | [2024-04-24 08:58:51,174] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-10 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) grafana | logger=migrator t=2024-04-24T08:58:12.20318413Z level=info msg="Executing migration" id="drop dashboard_provisioning_tmp_qwerty" policy-pap | sasl.mechanism = GSSAPI policy-apex-pdp | partitioner.class = null policy-db-migrator | kafka | [2024-04-24 08:58:51,174] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-32 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) grafana | logger=migrator t=2024-04-24T08:58:12.203671869Z level=info msg="Migration successfully executed" id="drop dashboard_provisioning_tmp_qwerty" duration=490.709µs policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 policy-apex-pdp | partitioner.ignore.keys = false policy-db-migrator | kafka | [2024-04-24 08:58:51,174] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-40 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) grafana | logger=migrator t=2024-04-24T08:58:12.207245237Z level=info msg="Executing migration" id="Add check_sum column" policy-pap | sasl.oauthbearer.expected.audience = null policy-apex-pdp | receive.buffer.bytes = 32768 policy-db-migrator | > upgrade 0400-jpatoscarequirement_occurrences.sql kafka | [2024-04-24 08:58:51,174] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger) grafana | logger=migrator t=2024-04-24T08:58:12.210690693Z level=info msg="Migration successfully executed" id="Add check_sum column" duration=3.447296ms policy-pap | sasl.oauthbearer.expected.issuer = null policy-apex-pdp | reconnect.backoff.max.ms = 1000 policy-db-migrator | -------------- kafka | [2024-04-24 08:58:51,178] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-32 from NonExistentReplica to NewReplica (state.change.logger) grafana | logger=migrator t=2024-04-24T08:58:12.214810272Z level=info msg="Executing migration" id="Add index for dashboard_title" policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 policy-apex-pdp | reconnect.backoff.ms = 50 policy-db-migrator 
| CREATE TABLE IF NOT EXISTS jpatoscarequirement_occurrences (name VARCHAR(120) NULL, version VARCHAR(20) NULL, OCCURRENCES INT DEFAULT NULL) kafka | [2024-04-24 08:58:51,178] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-5 from NonExistentReplica to NewReplica (state.change.logger) grafana | logger=migrator t=2024-04-24T08:58:12.215562086Z level=info msg="Migration successfully executed" id="Add index for dashboard_title" duration=752.784µs policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 policy-apex-pdp | request.timeout.ms = 30000 policy-db-migrator | -------------- kafka | [2024-04-24 08:58:51,179] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-44 from NonExistentReplica to NewReplica (state.change.logger) grafana | logger=migrator t=2024-04-24T08:58:12.220874438Z level=info msg="Executing migration" id="delete tags for deleted dashboards" policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 policy-apex-pdp | retries = 2147483647 policy-db-migrator | kafka | [2024-04-24 08:58:51,179] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-48 from NonExistentReplica to NewReplica (state.change.logger) grafana | logger=migrator t=2024-04-24T08:58:12.221102992Z level=info msg="Migration successfully executed" id="delete tags for deleted dashboards" duration=228.724µs policy-pap | sasl.oauthbearer.jwks.endpoint.url = null policy-apex-pdp | retry.backoff.ms = 100 policy-db-migrator | kafka | [2024-04-24 08:58:51,179] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-46 from NonExistentReplica to NewReplica (state.change.logger) grafana | logger=migrator t=2024-04-24T08:58:12.224573248Z level=info msg="Executing migration" id="delete stars for deleted dashboards" policy-pap | sasl.oauthbearer.scope.claim.name = scope policy-apex-pdp | sasl.client.callback.handler.class = null policy-db-migrator | > upgrade 0410-jpatoscarequirement_properties.sql kafka | [2024-04-24 08:58:51,179] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-20 from NonExistentReplica to NewReplica (state.change.logger) grafana | logger=migrator t=2024-04-24T08:58:12.224814962Z level=info msg="Migration successfully executed" id="delete stars for deleted dashboards" duration=242.904µs policy-pap | sasl.oauthbearer.sub.claim.name = sub policy-apex-pdp | sasl.jaas.config = null policy-db-migrator | -------------- kafka | [2024-04-24 08:58:51,179] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition policy-pdp-pap-0 from NonExistentReplica to NewReplica (state.change.logger) grafana | logger=migrator t=2024-04-24T08:58:12.228437051Z level=info msg="Executing migration" id="Add index for dashboard_is_folder" policy-pap | sasl.oauthbearer.token.endpoint.url = null policy-apex-pdp | sasl.kerberos.kinit.cmd = /usr/bin/kinit policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarequirement_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGTEXT NULL, PROPERTIES_KEY VARCHAR(255) NULL) kafka | [2024-04-24 08:58:51,179] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-43 from NonExistentReplica to NewReplica (state.change.logger) grafana | logger=migrator t=2024-04-24T08:58:12.229189766Z level=info msg="Migration successfully executed" id="Add index for dashboard_is_folder" duration=752.715µs 
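The grafana migrator entries above follow a fixed pattern: an "Executing migration" line carrying an id, then a "Migration successfully executed" line with the same id and a duration. A minimal sketch for pulling those durations out of a saved copy of this console output, assuming Python 3 and a placeholder file name (the log is not actually written to "console.log" by the job itself):

    import re
    import sys

    # Matches grafana migrator completion lines such as:
    #   grafana | logger=migrator t=... msg="Migration successfully executed" id="Add check_sum column" duration=3.447296ms
    PATTERN = re.compile(
        r'msg="Migration successfully executed" id="(?P<id>[^"]+)" duration=(?P<value>[\d.]+)(?P<unit>µs|ms|s)'
    )

    # Rough unit conversion to milliseconds so durations can be sorted together.
    UNIT_TO_MS = {"µs": 0.001, "ms": 1.0, "s": 1000.0}

    def slowest_migrations(path, top=10):
        rows = []
        with open(path, encoding="utf-8") as handle:
            for line in handle:
                match = PATTERN.search(line)
                if match:
                    ms = float(match.group("value")) * UNIT_TO_MS[match.group("unit")]
                    rows.append((ms, match.group("id")))
        return sorted(rows, reverse=True)[:top]

    if __name__ == "__main__":
        # "console.log" is a placeholder path for a saved copy of this build output.
        for ms, migration_id in slowest_migrations(sys.argv[1] if len(sys.argv) > 1 else "console.log"):
            print(f"{ms:10.3f} ms  {migration_id}")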
policy-pap | security.protocol = PLAINTEXT policy-apex-pdp | sasl.kerberos.min.time.before.relogin = 60000 policy-db-migrator | -------------- kafka | [2024-04-24 08:58:51,179] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-24 from NonExistentReplica to NewReplica (state.change.logger) grafana | logger=migrator t=2024-04-24T08:58:12.233388616Z level=info msg="Executing migration" id="Add isPublic for dashboard" policy-pap | security.providers = null policy-apex-pdp | sasl.kerberos.service.name = null policy-db-migrator | kafka | [2024-04-24 08:58:51,179] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-6 from NonExistentReplica to NewReplica (state.change.logger) grafana | logger=migrator t=2024-04-24T08:58:12.236633538Z level=info msg="Migration successfully executed" id="Add isPublic for dashboard" duration=3.243072ms policy-pap | send.buffer.bytes = 131072 policy-apex-pdp | sasl.kerberos.ticket.renew.jitter = 0.05 policy-db-migrator | kafka | [2024-04-24 08:58:51,179] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-18 from NonExistentReplica to NewReplica (state.change.logger) grafana | logger=migrator t=2024-04-24T08:58:12.243152002Z level=info msg="Executing migration" id="create data_source table" policy-pap | session.timeout.ms = 45000 policy-apex-pdp | sasl.kerberos.ticket.renew.window.factor = 0.8 policy-db-migrator | > upgrade 0420-jpatoscaservicetemplate_metadata.sql kafka | [2024-04-24 08:58:51,179] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-21 from NonExistentReplica to NewReplica (state.change.logger) grafana | logger=migrator t=2024-04-24T08:58:12.244018969Z level=info msg="Migration successfully executed" id="create data_source table" duration=867.847µs policy-pap | socket.connection.setup.timeout.max.ms = 30000 policy-apex-pdp | sasl.login.callback.handler.class = null policy-db-migrator | -------------- kafka | [2024-04-24 08:58:51,179] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-1 from NonExistentReplica to NewReplica (state.change.logger) grafana | logger=migrator t=2024-04-24T08:58:12.254455678Z level=info msg="Executing migration" id="add index data_source.account_id" policy-pap | socket.connection.setup.timeout.ms = 10000 policy-apex-pdp | sasl.login.class = null policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscaservicetemplate_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) kafka | [2024-04-24 08:58:51,179] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-14 from NonExistentReplica to NewReplica (state.change.logger) grafana | logger=migrator t=2024-04-24T08:58:12.25563959Z level=info msg="Migration successfully executed" id="add index data_source.account_id" duration=1.182972ms policy-pap | ssl.cipher.suites = null policy-apex-pdp | sasl.login.connect.timeout.ms = null policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-24T08:58:12.261853239Z level=info msg="Executing migration" id="add unique index data_source.account_id_name" kafka | [2024-04-24 08:58:51,179] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-34 from NonExistentReplica to NewReplica (state.change.logger) policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] policy-apex-pdp | 
sasl.login.read.timeout.ms = null policy-db-migrator | grafana | logger=migrator t=2024-04-24T08:58:12.262730335Z level=info msg="Migration successfully executed" id="add unique index data_source.account_id_name" duration=876.046µs kafka | [2024-04-24 08:58:51,179] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-16 from NonExistentReplica to NewReplica (state.change.logger) policy-pap | ssl.endpoint.identification.algorithm = https policy-apex-pdp | sasl.login.refresh.buffer.seconds = 300 policy-db-migrator | grafana | logger=migrator t=2024-04-24T08:58:12.266307343Z level=info msg="Executing migration" id="drop index IDX_data_source_account_id - v1" kafka | [2024-04-24 08:58:51,179] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-29 from NonExistentReplica to NewReplica (state.change.logger) policy-pap | ssl.engine.factory.class = null policy-apex-pdp | sasl.login.refresh.min.period.seconds = 60 policy-db-migrator | > upgrade 0430-jpatoscatopologytemplate_inputs.sql grafana | logger=migrator t=2024-04-24T08:58:12.267090929Z level=info msg="Migration successfully executed" id="drop index IDX_data_source_account_id - v1" duration=783.746µs kafka | [2024-04-24 08:58:51,179] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-11 from NonExistentReplica to NewReplica (state.change.logger) policy-pap | ssl.key.password = null policy-apex-pdp | sasl.login.refresh.window.factor = 0.8 policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-24T08:58:12.272126385Z level=info msg="Executing migration" id="drop index UQE_data_source_account_id_name - v1" kafka | [2024-04-24 08:58:51,179] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-0 from NonExistentReplica to NewReplica (state.change.logger) policy-pap | ssl.keymanager.algorithm = SunX509 policy-apex-pdp | sasl.login.refresh.window.jitter = 0.05 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscatopologytemplate_inputs (parentLocalName VARCHAR(120) NULL, localName VARCHAR(120) NULL, parentKeyVersion VARCHAR(15) NULL, parentKeyName VARCHAR(120) NULL, INPUTS LONGBLOB DEFAULT NULL, INPUTS_KEY VARCHAR(255) NULL) grafana | logger=migrator t=2024-04-24T08:58:12.27291761Z level=info msg="Migration successfully executed" id="drop index UQE_data_source_account_id_name - v1" duration=794.745µs kafka | [2024-04-24 08:58:51,179] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-22 from NonExistentReplica to NewReplica (state.change.logger) policy-pap | ssl.keystore.certificate.chain = null policy-apex-pdp | sasl.login.retry.backoff.max.ms = 10000 policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-24T08:58:12.278772331Z level=info msg="Executing migration" id="Rename table data_source to data_source_v1 - v1" kafka | [2024-04-24 08:58:51,179] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-47 from NonExistentReplica to NewReplica (state.change.logger) policy-pap | ssl.keystore.key = null policy-apex-pdp | sasl.login.retry.backoff.ms = 100 policy-db-migrator | grafana | logger=migrator t=2024-04-24T08:58:12.287836174Z level=info msg="Migration successfully executed" id="Rename table data_source to data_source_v1 - v1" duration=9.064803ms kafka | [2024-04-24 08:58:51,179] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-36 
from NonExistentReplica to NewReplica (state.change.logger) policy-pap | ssl.keystore.location = null policy-apex-pdp | sasl.mechanism = GSSAPI policy-db-migrator | grafana | logger=migrator t=2024-04-24T08:58:12.293585024Z level=info msg="Executing migration" id="create data_source table v2" kafka | [2024-04-24 08:58:51,179] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-28 from NonExistentReplica to NewReplica (state.change.logger) policy-pap | ssl.keystore.password = null policy-apex-pdp | sasl.oauthbearer.clock.skew.seconds = 30 policy-db-migrator | > upgrade 0440-pdpgroup_pdpsubgroup.sql grafana | logger=migrator t=2024-04-24T08:58:12.295158354Z level=info msg="Migration successfully executed" id="create data_source table v2" duration=1.57362ms kafka | [2024-04-24 08:58:51,179] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-42 from NonExistentReplica to NewReplica (state.change.logger) policy-pap | ssl.keystore.type = JKS policy-apex-pdp | sasl.oauthbearer.expected.audience = null policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-24T08:58:12.299303434Z level=info msg="Executing migration" id="create index IDX_data_source_org_id - v2" kafka | [2024-04-24 08:58:51,179] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-9 from NonExistentReplica to NewReplica (state.change.logger) policy-pap | ssl.protocol = TLSv1.3 policy-apex-pdp | sasl.oauthbearer.expected.issuer = null policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpgroup_pdpsubgroup (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_PDPGROUP_PDPSUBGROUP (name, version, parentLocalName, localName, parentKeyVersion, parentKeyName)) grafana | logger=migrator t=2024-04-24T08:58:12.30015533Z level=info msg="Migration successfully executed" id="create index IDX_data_source_org_id - v2" duration=853.327µs kafka | [2024-04-24 08:58:51,179] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-37 from NonExistentReplica to NewReplica (state.change.logger) policy-pap | ssl.provider = null policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-24T08:58:12.306032071Z level=info msg="Executing migration" id="create index UQE_data_source_org_id_name - v2" kafka | [2024-04-24 08:58:51,179] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-13 from NonExistentReplica to NewReplica (state.change.logger) policy-pap | ssl.secure.random.implementation = null policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 policy-db-migrator | grafana | logger=migrator t=2024-04-24T08:58:12.30696869Z level=info msg="Migration successfully executed" id="create index UQE_data_source_org_id_name - v2" duration=936.399µs kafka | [2024-04-24 08:58:51,179] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-30 from NonExistentReplica to NewReplica (state.change.logger) policy-pap | ssl.trustmanager.algorithm = PKIX policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 policy-db-migrator | grafana | logger=migrator t=2024-04-24T08:58:12.31115555Z level=info msg="Executing migration" id="Drop old table 
data_source_v1 #2" kafka | [2024-04-24 08:58:51,179] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-35 from NonExistentReplica to NewReplica (state.change.logger) policy-pap | ssl.truststore.certificates = null policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.url = null policy-db-migrator | > upgrade 0450-pdpgroup.sql grafana | logger=migrator t=2024-04-24T08:58:12.31175833Z level=info msg="Migration successfully executed" id="Drop old table data_source_v1 #2" duration=600.31µs kafka | [2024-04-24 08:58:51,180] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-39 from NonExistentReplica to NewReplica (state.change.logger) policy-pap | ssl.truststore.location = null policy-apex-pdp | sasl.oauthbearer.scope.claim.name = scope policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-24T08:58:12.317201494Z level=info msg="Executing migration" id="Add column with_credentials" kafka | [2024-04-24 08:58:51,180] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-12 from NonExistentReplica to NewReplica (state.change.logger) policy-pap | ssl.truststore.password = null policy-apex-pdp | sasl.oauthbearer.sub.claim.name = sub policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpgroup (`DESCRIPTION` VARCHAR(255) NULL, PDPGROUPSTATE INT DEFAULT NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_PDPGROUP (name, version)) grafana | logger=migrator t=2024-04-24T08:58:12.321524737Z level=info msg="Migration successfully executed" id="Add column with_credentials" duration=4.322092ms kafka | [2024-04-24 08:58:51,180] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-27 from NonExistentReplica to NewReplica (state.change.logger) policy-pap | ssl.truststore.type = JKS policy-apex-pdp | sasl.oauthbearer.token.endpoint.url = null policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-24T08:58:12.326046533Z level=info msg="Executing migration" id="Add secure json data column" kafka | [2024-04-24 08:58:51,180] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-45 from NonExistentReplica to NewReplica (state.change.logger) policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-db-migrator | policy-apex-pdp | security.protocol = PLAINTEXT grafana | logger=migrator t=2024-04-24T08:58:12.328432558Z level=info msg="Migration successfully executed" id="Add secure json data column" duration=2.385705ms kafka | [2024-04-24 08:58:51,180] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-19 from NonExistentReplica to NewReplica (state.change.logger) policy-pap | policy-db-migrator | policy-apex-pdp | security.providers = null grafana | logger=migrator t=2024-04-24T08:58:12.331932385Z level=info msg="Executing migration" id="Update data_source table charset" kafka | [2024-04-24 08:58:51,180] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-49 from NonExistentReplica to NewReplica (state.change.logger) policy-pap | [2024-04-24T08:58:49.066+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 policy-db-migrator | > upgrade 0460-pdppolicystatus.sql policy-apex-pdp | send.buffer.bytes = 131072 grafana | logger=migrator t=2024-04-24T08:58:12.331991826Z level=info msg="Migration successfully executed" id="Update data_source table charset" 
duration=59.301µs kafka | [2024-04-24 08:58:51,180] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-40 from NonExistentReplica to NewReplica (state.change.logger) policy-pap | [2024-04-24T08:58:49.066+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 policy-db-migrator | -------------- policy-apex-pdp | socket.connection.setup.timeout.max.ms = 30000 grafana | logger=migrator t=2024-04-24T08:58:12.336121465Z level=info msg="Executing migration" id="Update initial version to 1" kafka | [2024-04-24 08:58:51,180] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-41 from NonExistentReplica to NewReplica (state.change.logger) policy-pap | [2024-04-24T08:58:49.066+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1713949129066 policy-db-migrator | CREATE TABLE IF NOT EXISTS pdppolicystatus (DEPLOY BOOLEAN DEFAULT 0, PDPGROUP VARCHAR(255) DEFAULT NULL, PDPTYPE VARCHAR(255) DEFAULT NULL, STATE INT DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, PRIMARY KEY PK_PDPPOLICYSTATUS (parentLocalName, localName, parentKeyVersion, parentKeyName)) policy-apex-pdp | socket.connection.setup.timeout.ms = 10000 grafana | logger=migrator t=2024-04-24T08:58:12.336328619Z level=info msg="Migration successfully executed" id="Update initial version to 1" duration=206.774µs kafka | [2024-04-24 08:58:51,180] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-38 from NonExistentReplica to NewReplica (state.change.logger) policy-pap | [2024-04-24T08:58:49.066+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-policy-pap-2, groupId=policy-pap] Subscribed to topic(s): policy-pdp-pap policy-db-migrator | -------------- policy-apex-pdp | ssl.cipher.suites = null grafana | logger=migrator t=2024-04-24T08:58:12.34209997Z level=info msg="Executing migration" id="Add read_only data column" kafka | [2024-04-24 08:58:51,180] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-8 from NonExistentReplica to NewReplica (state.change.logger) policy-pap | [2024-04-24T08:58:49.385+00:00|INFO|PapDatabaseInitializer|main] Created initial pdpGroup in DB - PdpGroups(groups=[PdpGroup(name=defaultGroup, description=The default group that registers all supported policy types and pdps., pdpGroupState=ACTIVE, properties=null, pdpSubgroups=[PdpSubGroup(pdpType=apex, supportedPolicyTypes=[onap.policies.controlloop.operational.common.Apex 1.0.0, onap.policies.native.Apex 1.0.0], policies=[], currentInstanceCount=0, desiredInstanceCount=1, properties=null, pdpInstances=null)])]) from /opt/app/policy/pap/etc/mounted/groups.json policy-db-migrator | policy-apex-pdp | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] grafana | logger=migrator t=2024-04-24T08:58:12.346271559Z level=info msg="Migration successfully executed" id="Add read_only data column" duration=4.168839ms kafka | [2024-04-24 08:58:51,180] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-7 from NonExistentReplica to NewReplica (state.change.logger) policy-db-migrator | policy-apex-pdp | ssl.endpoint.identification.algorithm = https policy-pap | [2024-04-24T08:58:49.525+00:00|WARN|JpaBaseConfiguration$JpaWebConfiguration|main] spring.jpa.open-in-view is enabled by default. 
Therefore, database queries may be performed during view rendering. Explicitly configure spring.jpa.open-in-view to disable this warning grafana | logger=migrator t=2024-04-24T08:58:12.350924717Z level=info msg="Executing migration" id="Migrate logging ds to loki ds" kafka | [2024-04-24 08:58:51,180] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-33 from NonExistentReplica to NewReplica (state.change.logger) policy-db-migrator | > upgrade 0470-pdp.sql policy-apex-pdp | ssl.engine.factory.class = null policy-pap | [2024-04-24T08:58:49.770+00:00|INFO|DefaultSecurityFilterChain|main] Will secure any request with [org.springframework.security.web.session.DisableEncodeUrlFilter@8bde368, org.springframework.security.web.context.request.async.WebAsyncManagerIntegrationFilter@5065bdac, org.springframework.security.web.context.SecurityContextHolderFilter@6fc6f68f, org.springframework.security.web.header.HeaderWriterFilter@60b4d934, org.springframework.security.web.authentication.logout.LogoutFilter@441016d6, org.springframework.security.web.authentication.www.BasicAuthenticationFilter@3be369fc, org.springframework.security.web.savedrequest.RequestCacheAwareFilter@30437e9c, org.springframework.security.web.servletapi.SecurityContextHolderAwareRequestFilter@762f8ff6, org.springframework.security.web.authentication.AnonymousAuthenticationFilter@2e9dcdd3, org.springframework.security.web.access.ExceptionTranslationFilter@2435c6ae, org.springframework.security.web.access.intercept.AuthorizationFilter@4e26040f] grafana | logger=migrator t=2024-04-24T08:58:12.351284545Z level=info msg="Migration successfully executed" id="Migrate logging ds to loki ds" duration=359.658µs kafka | [2024-04-24 08:58:51,180] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-25 from NonExistentReplica to NewReplica (state.change.logger) policy-db-migrator | -------------- policy-apex-pdp | ssl.key.password = null policy-pap | [2024-04-24T08:58:50.497+00:00|INFO|EndpointLinksResolver|main] Exposing 3 endpoint(s) beneath base path '' grafana | logger=migrator t=2024-04-24T08:58:12.355871142Z level=info msg="Executing migration" id="Update json_data with nulls" kafka | [2024-04-24 08:58:51,180] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-31 from NonExistentReplica to NewReplica (state.change.logger) policy-db-migrator | CREATE TABLE IF NOT EXISTS pdp (HEALTHY INT DEFAULT NULL, MESSAGE VARCHAR(255) DEFAULT NULL, PDPSTATE INT DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_PDP (parentLocalName, localName, parentKeyVersion, parentKeyName)) policy-apex-pdp | ssl.keymanager.algorithm = SunX509 policy-pap | [2024-04-24T08:58:50.590+00:00|INFO|Http11NioProtocol|main] Starting ProtocolHandler ["http-nio-6969"] grafana | logger=migrator t=2024-04-24T08:58:12.356125407Z level=info msg="Migration successfully executed" id="Update json_data with nulls" duration=252.305µs kafka | [2024-04-24 08:58:51,180] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-23 from NonExistentReplica to NewReplica (state.change.logger) policy-db-migrator | -------------- policy-apex-pdp | ssl.keystore.certificate.chain = null policy-pap | [2024-04-24T08:58:50.607+00:00|INFO|TomcatWebServer|main] Tomcat started on port(s): 6969 (http) with context path 
'/policy/pap/v1' grafana | logger=migrator t=2024-04-24T08:58:12.360135273Z level=info msg="Executing migration" id="Add uid column" kafka | [2024-04-24 08:58:51,180] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-10 from NonExistentReplica to NewReplica (state.change.logger) policy-db-migrator | policy-apex-pdp | ssl.keystore.key = null policy-pap | [2024-04-24T08:58:50.623+00:00|INFO|ServiceManager|main] Policy PAP starting grafana | logger=migrator t=2024-04-24T08:58:12.362556549Z level=info msg="Migration successfully executed" id="Add uid column" duration=2.419966ms kafka | [2024-04-24 08:58:51,180] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-2 from NonExistentReplica to NewReplica (state.change.logger) policy-db-migrator | policy-apex-pdp | ssl.keystore.location = null policy-pap | [2024-04-24T08:58:50.623+00:00|INFO|ServiceManager|main] Policy PAP starting Meter Registry grafana | logger=migrator t=2024-04-24T08:58:12.36729837Z level=info msg="Executing migration" id="Update uid value" kafka | [2024-04-24 08:58:51,180] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-17 from NonExistentReplica to NewReplica (state.change.logger) policy-db-migrator | > upgrade 0480-pdpstatistics.sql policy-apex-pdp | ssl.keystore.password = null policy-pap | [2024-04-24T08:58:50.624+00:00|INFO|ServiceManager|main] Policy PAP starting PAP parameters grafana | logger=migrator t=2024-04-24T08:58:12.367580775Z level=info msg="Migration successfully executed" id="Update uid value" duration=281.905µs kafka | [2024-04-24 08:58:51,180] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-4 from NonExistentReplica to NewReplica (state.change.logger) policy-db-migrator | -------------- policy-apex-pdp | ssl.keystore.type = JKS policy-pap | [2024-04-24T08:58:50.624+00:00|INFO|ServiceManager|main] Policy PAP starting Pdp Heartbeat Listener grafana | logger=migrator t=2024-04-24T08:58:12.370123504Z level=info msg="Executing migration" id="Add unique index datasource_org_id_uid" kafka | [2024-04-24 08:58:51,180] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-15 from NonExistentReplica to NewReplica (state.change.logger) policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpstatistics (PDPGROUPNAME VARCHAR(120) NULL, PDPSUBGROUPNAME VARCHAR(120) NULL, POLICYDEPLOYCOUNT BIGINT DEFAULT NULL, POLICYDEPLOYFAILCOUNT BIGINT DEFAULT NULL, POLICYDEPLOYSUCCESSCOUNT BIGINT DEFAULT NULL, POLICYEXECUTEDCOUNT BIGINT DEFAULT NULL, POLICYEXECUTEDFAILCOUNT BIGINT DEFAULT NULL, POLICYEXECUTEDSUCCESSCOUNT BIGINT DEFAULT NULL, timeStamp datetime NOT NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_PDPSTATISTICS (timeStamp, name, version)) policy-apex-pdp | ssl.protocol = TLSv1.3 policy-pap | [2024-04-24T08:58:50.624+00:00|INFO|ServiceManager|main] Policy PAP starting Response Request ID Dispatcher grafana | logger=migrator t=2024-04-24T08:58:12.371103192Z level=info msg="Migration successfully executed" id="Add unique index datasource_org_id_uid" duration=979.938µs kafka | [2024-04-24 08:58:51,180] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-26 from NonExistentReplica to NewReplica (state.change.logger) policy-db-migrator | -------------- policy-apex-pdp | ssl.provider = null policy-pap | [2024-04-24T08:58:50.625+00:00|INFO|ServiceManager|main] 
Policy PAP starting Heartbeat Request ID Dispatcher grafana | logger=migrator t=2024-04-24T08:58:12.375912184Z level=info msg="Executing migration" id="add unique index datasource_org_id_is_default" kafka | [2024-04-24 08:58:51,180] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-3 from NonExistentReplica to NewReplica (state.change.logger) policy-db-migrator | policy-apex-pdp | ssl.secure.random.implementation = null policy-pap | [2024-04-24T08:58:50.625+00:00|INFO|ServiceManager|main] Policy PAP starting Response Message Dispatcher grafana | logger=migrator t=2024-04-24T08:58:12.377457803Z level=info msg="Migration successfully executed" id="add unique index datasource_org_id_is_default" duration=1.545399ms kafka | [2024-04-24 08:58:51,180] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger) policy-db-migrator | policy-apex-pdp | ssl.trustmanager.algorithm = PKIX policy-pap | [2024-04-24T08:58:50.627+00:00|INFO|TopicBase|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=c2598a93-7b5f-4e4e-b23a-b864ffd9a18a, consumerInstance=policy-pap, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: registering org.onap.policy.common.endpoints.listeners.MessageTypeDispatcher@716eae1 grafana | logger=migrator t=2024-04-24T08:58:12.38307253Z level=info msg="Executing migration" id="create api_key table" kafka | [2024-04-24 08:58:51,319] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-22 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-db-migrator | > upgrade 0490-pdpsubgroup_pdp.sql policy-apex-pdp | ssl.truststore.certificates = null policy-pap | [2024-04-24T08:58:50.638+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=c2598a93-7b5f-4e4e-b23a-b864ffd9a18a, consumerInstance=policy-pap, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting grafana | logger=migrator t=2024-04-24T08:58:12.383927547Z level=info msg="Migration successfully executed" id="create api_key table" duration=852.326µs kafka | [2024-04-24 08:58:51,319] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-30 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-db-migrator | -------------- policy-apex-pdp | ssl.truststore.location = null policy-pap | 
[2024-04-24T08:58:50.639+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: policy-pap | allow.auto.create.topics = true grafana | logger=migrator t=2024-04-24T08:58:12.388781259Z level=info msg="Executing migration" id="add index api_key.account_id" policy-apex-pdp | ssl.truststore.password = null policy-pap | auto.commit.interval.ms = 5000 policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpsubgroup_pdp (pdpParentKeyName VARCHAR(120) NOT NULL, pdpParentKeyVersion VARCHAR(15) NOT NULL, pdpParentLocalName VARCHAR(120) NOT NULL, pdpLocalName VARCHAR(120) NOT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_PDPSUBGROUP_PDP (pdpParentKeyName, pdpParentKeyVersion, pdpParentLocalName, pdpLocalName, parentLocalName, localName, parentKeyVersion, parentKeyName)) kafka | [2024-04-24 08:58:51,319] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-25 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) grafana | logger=migrator t=2024-04-24T08:58:12.389626296Z level=info msg="Migration successfully executed" id="add index api_key.account_id" duration=844.777µs policy-apex-pdp | ssl.truststore.type = JKS policy-pap | auto.include.jmx.reporter = true policy-db-migrator | -------------- kafka | [2024-04-24 08:58:51,319] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-35 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) grafana | logger=migrator t=2024-04-24T08:58:12.393093082Z level=info msg="Executing migration" id="add index api_key.key" policy-apex-pdp | transaction.timeout.ms = 60000 policy-pap | auto.offset.reset = latest policy-db-migrator | kafka | [2024-04-24 08:58:51,319] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-38 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) grafana | logger=migrator t=2024-04-24T08:58:12.393917808Z level=info msg="Migration successfully executed" id="add index api_key.key" duration=824.676µs policy-apex-pdp | transactional.id = null policy-pap | bootstrap.servers = [kafka:9092] policy-db-migrator | kafka | [2024-04-24 08:58:51,319] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-13 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) grafana | logger=migrator t=2024-04-24T08:58:12.399680237Z level=info msg="Executing migration" id="add index api_key.account_id_name" policy-apex-pdp | value.serializer = class org.apache.kafka.common.serialization.StringSerializer policy-pap | check.crcs = true policy-db-migrator | > upgrade 0500-pdpsubgroup.sql kafka | [2024-04-24 08:58:51,320] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-8 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), 
leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) grafana | logger=migrator t=2024-04-24T08:58:12.401168496Z level=info msg="Migration successfully executed" id="add index api_key.account_id_name" duration=1.486709ms policy-apex-pdp | policy-pap | client.dns.lookup = use_all_dns_ips policy-db-migrator | -------------- kafka | [2024-04-24 08:58:51,320] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-21 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) grafana | logger=migrator t=2024-04-24T08:58:12.40558511Z level=info msg="Executing migration" id="drop index IDX_api_key_account_id - v1" policy-apex-pdp | [2024-04-24T08:58:52.355+00:00|INFO|KafkaProducer|main] [Producer clientId=producer-1] Instantiated an idempotent producer. policy-pap | client.id = consumer-c2598a93-7b5f-4e4e-b23a-b864ffd9a18a-3 policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpsubgroup (CURRENTINSTANCECOUNT INT DEFAULT NULL, DESIREDINSTANCECOUNT INT DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_PDPSUBGROUP (parentLocalName, localName, parentKeyVersion, parentKeyName)) kafka | [2024-04-24 08:58:51,320] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-4 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) grafana | logger=migrator t=2024-04-24T08:58:12.407052479Z level=info msg="Migration successfully executed" id="drop index IDX_api_key_account_id - v1" duration=1.465507ms policy-apex-pdp | [2024-04-24T08:58:52.376+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 policy-pap | client.rack = policy-db-migrator | -------------- kafka | [2024-04-24 08:58:51,320] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-27 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) grafana | logger=migrator t=2024-04-24T08:58:12.410165357Z level=info msg="Executing migration" id="drop index UQE_api_key_key - v1" policy-apex-pdp | [2024-04-24T08:58:52.376+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 policy-pap | connections.max.idle.ms = 540000 policy-db-migrator | kafka | [2024-04-24 08:58:51,320] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-7 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) grafana | logger=migrator t=2024-04-24T08:58:12.411159446Z level=info msg="Migration successfully executed" id="drop index UQE_api_key_key - v1" duration=994.149µs policy-apex-pdp | [2024-04-24T08:58:52.377+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1713949132376 policy-pap | default.api.timeout.ms = 60000 policy-db-migrator | kafka | [2024-04-24 08:58:51,320] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-9 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, 
isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) grafana | logger=migrator t=2024-04-24T08:58:12.417511507Z level=info msg="Executing migration" id="drop index UQE_api_key_account_id_name - v1" policy-apex-pdp | [2024-04-24T08:58:52.377+00:00|INFO|InlineKafkaTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=6f6498b9-feed-4855-a99d-511b9662bd01, alive=false, publisher=KafkaPublisherWrapper []]]: KAFKA SINK created policy-pap | enable.auto.commit = true policy-db-migrator | > upgrade 0510-toscacapabilityassignment.sql kafka | [2024-04-24 08:58:51,321] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-46 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) grafana | logger=migrator t=2024-04-24T08:58:12.418892073Z level=info msg="Migration successfully executed" id="drop index UQE_api_key_account_id_name - v1" duration=1.380086ms policy-apex-pdp | [2024-04-24T08:58:52.377+00:00|INFO|ServiceManager|main] service manager starting set alive policy-pap | exclude.internal.topics = true policy-db-migrator | -------------- kafka | [2024-04-24 08:58:51,321] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-41 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) grafana | logger=migrator t=2024-04-24T08:58:12.423746996Z level=info msg="Executing migration" id="Rename table api_key to api_key_v1 - v1" policy-apex-pdp | [2024-04-24T08:58:52.378+00:00|INFO|ServiceManager|main] service manager starting register pdp status context object policy-pap | fetch.max.bytes = 52428800 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilityassignment (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, type_name VARCHAR(255) NULL, type_version VARCHAR(255) NULL, PRIMARY KEY PK_TOSCACAPABILITYASSIGNMENT(name, version)) kafka | [2024-04-24 08:58:51,321] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-33 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) grafana | logger=migrator t=2024-04-24T08:58:12.430708769Z level=info msg="Migration successfully executed" id="Rename table api_key to api_key_v1 - v1" duration=6.961943ms policy-apex-pdp | [2024-04-24T08:58:52.379+00:00|INFO|ServiceManager|main] service manager starting topic sinks policy-pap | fetch.max.wait.ms = 500 policy-db-migrator | -------------- kafka | [2024-04-24 08:58:51,321] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-23 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) grafana | logger=migrator t=2024-04-24T08:58:12.434852638Z level=info msg="Executing migration" id="create api_key table v2" policy-apex-pdp | 
[2024-04-24T08:58:52.380+00:00|INFO|ServiceManager|main] service manager starting Pdp Status publisher policy-pap | fetch.min.bytes = 1 policy-db-migrator | kafka | [2024-04-24 08:58:51,321] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-49 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) grafana | logger=migrator t=2024-04-24T08:58:12.435754886Z level=info msg="Migration successfully executed" id="create api_key table v2" duration=902.977µs policy-apex-pdp | [2024-04-24T08:58:52.384+00:00|INFO|ServiceManager|main] service manager starting Register pdp update listener policy-pap | group.id = c2598a93-7b5f-4e4e-b23a-b864ffd9a18a policy-db-migrator | kafka | [2024-04-24 08:58:51,321] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-47 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) grafana | logger=migrator t=2024-04-24T08:58:12.441101657Z level=info msg="Executing migration" id="create index IDX_api_key_org_id - v2" policy-apex-pdp | [2024-04-24T08:58:52.384+00:00|INFO|ServiceManager|main] service manager starting Register pdp state change request dispatcher policy-pap | group.instance.id = null policy-db-migrator | > upgrade 0520-toscacapabilityassignments.sql kafka | [2024-04-24 08:58:51,321] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-16 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) grafana | logger=migrator t=2024-04-24T08:58:12.441915823Z level=info msg="Migration successfully executed" id="create index IDX_api_key_org_id - v2" duration=812.876µs policy-apex-pdp | [2024-04-24T08:58:52.384+00:00|INFO|ServiceManager|main] service manager starting Message Dispatcher policy-pap | heartbeat.interval.ms = 3000 policy-db-migrator | -------------- kafka | [2024-04-24 08:58:51,322] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-28 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) grafana | logger=migrator t=2024-04-24T08:58:12.444812058Z level=info msg="Executing migration" id="create index UQE_api_key_key - v2" policy-apex-pdp | [2024-04-24T08:58:52.385+00:00|INFO|TopicBase|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=6c14929a-34c8-48a0-adf2-d542a07b4ce8, consumerInstance=policy-apex-pdp, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-pdp-pap,5,main], topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: registering org.onap.policy.common.endpoints.listeners.MessageTypeDispatcher@607fbe09 policy-pap | interceptor.classes = [] 
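The ConsumerConfig dump above shows how policy-pap subscribes to the policy-pdp-pap topic: bootstrap server kafka:9092, a generated consumer group id, auto.offset.reset=latest and String deserializers over PLAINTEXT. A minimal sketch of an equivalent standalone consumer, assuming the third-party kafka-python package and that kafka:9092 is reachable (that host name only resolves inside the CSIT compose network, so treat it as a placeholder elsewhere):

    import json
    from kafka import KafkaConsumer  # third-party package: kafka-python

    # Mirrors the key settings from the policy-pap ConsumerConfig dump above.
    consumer = KafkaConsumer(
        "policy-pdp-pap",
        bootstrap_servers=["kafka:9092"],      # bootstrap.servers
        group_id="log-reader-example",         # placeholder; policy-pap uses a generated UUID-style group id
        auto_offset_reset="latest",            # auto.offset.reset = latest
        enable_auto_commit=True,               # enable.auto.commit = true
        security_protocol="PLAINTEXT",         # security.protocol = PLAINTEXT
        value_deserializer=lambda raw: raw.decode("utf-8"),  # StringDeserializer equivalent
    )

    for record in consumer:
        # PAP and apex-pdp exchange JSON payloads (e.g. PDP_STATUS) on this topic.
        message = json.loads(record.value)
        print(record.topic, record.partition, message.get("messageName"))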
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilityassignments (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCACAPABILITYASSIGNMENTS (name, version)) kafka | [2024-04-24 08:58:51,322] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-31 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) grafana | logger=migrator t=2024-04-24T08:58:12.445595134Z level=info msg="Migration successfully executed" id="create index UQE_api_key_key - v2" duration=782.825µs policy-apex-pdp | [2024-04-24T08:58:52.385+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=6c14929a-34c8-48a0-adf2-d542a07b4ce8, consumerInstance=policy-apex-pdp, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-pdp-pap,5,main], topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: register: start not attempted policy-pap | internal.leave.group.on.close = true policy-db-migrator | -------------- kafka | [2024-04-24 08:58:51,322] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-36 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) grafana | logger=migrator t=2024-04-24T08:58:12.450591219Z level=info msg="Executing migration" id="create index UQE_api_key_org_id_name - v2" policy-apex-pdp | [2024-04-24T08:58:52.385+00:00|INFO|ServiceManager|main] service manager starting Create REST server policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false policy-db-migrator | kafka | [2024-04-24 08:58:51,322] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-42 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) grafana | logger=migrator t=2024-04-24T08:58:12.451359533Z level=info msg="Migration successfully executed" id="create index UQE_api_key_org_id_name - v2" duration=767.904µs policy-apex-pdp | [2024-04-24T08:58:52.410+00:00|INFO|OrderedServiceImpl|Timer-0] ***** OrderedServiceImpl implementers: policy-pap | isolation.level = read_uncommitted policy-db-migrator | kafka | [2024-04-24 08:58:51,322] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-3 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) grafana | logger=migrator t=2024-04-24T08:58:12.454607455Z level=info msg="Executing migration" id="copy api_key v1 to v2" policy-apex-pdp | [] policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-db-migrator | > upgrade 0530-toscacapabilityassignments_toscacapabilityassignment.sql 
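The CREATE TABLE statements that policy-db-migrator logs above (pdpgroup, pdpstatistics, pdppolicystatus, the toscacapability* tables, and so on) are applied to the policy MariaDB instance. A minimal sketch for checking that the migration actually produced those tables, assuming the third-party PyMySQL driver and placeholder connection details (host, database name and credentials are not shown in this part of the log):

    import pymysql  # third-party package: PyMySQL

    # Placeholder connection details -- substitute the values used by the CSIT compose setup.
    connection = pymysql.connect(
        host="localhost",
        user="policy_user",
        password="CHANGE_ME",
        database="policyadmin",
    )

    try:
        with connection.cursor() as cursor:
            # List the PDP-related tables created by the 04xx/05xx upgrade scripts above.
            cursor.execute("SHOW TABLES LIKE 'pdp%'")
            for (table_name,) in cursor.fetchall():
                print(table_name)

            # Inspect the pdpstatistics layout created by upgrade 0480-pdpstatistics.sql.
            cursor.execute("DESCRIBE pdpstatistics")
            for column in cursor.fetchall():
                print(column)
    finally:
        connection.close()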
kafka | [2024-04-24 08:58:51,322] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-18 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) grafana | logger=migrator t=2024-04-24T08:58:12.455109514Z level=info msg="Migration successfully executed" id="copy api_key v1 to v2" duration=502.23µs policy-apex-pdp | [2024-04-24T08:58:52.413+00:00|INFO|network|Timer-0] [OUT|KAFKA|policy-pdp-pap] policy-pap | max.partition.fetch.bytes = 1048576 policy-db-migrator | -------------- kafka | [2024-04-24 08:58:51,322] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-37 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) grafana | logger=migrator t=2024-04-24T08:58:12.459811754Z level=info msg="Executing migration" id="Drop old table api_key_v1" policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"c332eed1-33b5-4f1c-8b3b-05ac50842ecd","timestampMs":1713949132390,"name":"apex-f41e458d-42a5-4aba-9a0c-ce74df53e6f4","pdpGroup":"defaultGroup"} policy-pap | max.poll.interval.ms = 300000 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilityassignments_toscacapabilityassignment (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCACAPABILITYASSIGNMENTS_TOSCACAPABILITYASSIGNMENT (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) kafka | [2024-04-24 08:58:51,323] INFO [Controller id=1 epoch=1] Changed partition policy-pdp-pap-0 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) grafana | logger=migrator t=2024-04-24T08:58:12.46064783Z level=info msg="Migration successfully executed" id="Drop old table api_key_v1" duration=835.606µs policy-apex-pdp | [2024-04-24T08:58:52.624+00:00|INFO|ServiceManager|main] service manager starting Rest Server policy-pap | max.poll.records = 500 policy-db-migrator | -------------- kafka | [2024-04-24 08:58:51,323] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-15 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) grafana | logger=migrator t=2024-04-24T08:58:12.465067264Z level=info msg="Executing migration" id="Update api_key table charset" policy-apex-pdp | [2024-04-24T08:58:52.624+00:00|INFO|ServiceManager|main] service manager starting policy-pap | metadata.max.age.ms = 300000 policy-db-migrator | kafka | [2024-04-24 08:58:51,323] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-24 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) 
(state.change.logger) grafana | logger=migrator t=2024-04-24T08:58:12.465095745Z level=info msg="Migration successfully executed" id="Update api_key table charset" duration=29.001µs policy-apex-pdp | [2024-04-24T08:58:52.624+00:00|INFO|ServiceManager|main] service manager starting REST RestServerParameters policy-pap | metric.reporters = [] policy-db-migrator | kafka | [2024-04-24 08:58:51,323] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-17 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) grafana | logger=migrator t=2024-04-24T08:58:12.469643422Z level=info msg="Executing migration" id="Add expires to api_key table" policy-apex-pdp | [2024-04-24T08:58:52.624+00:00|INFO|JettyServletServer|main] JettyJerseyServer [Jerseyservlets={/metrics=io.prometheus.client.servlet.jakarta.exporter.MetricsServlet-21694e53==io.prometheus.client.servlet.jakarta.exporter.MetricsServlet@2326051b{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:null,STOPPED}, /*=org.glassfish.jersey.servlet.ServletContainer-46074492==org.glassfish.jersey.servlet.ServletContainer@705041b0{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=swagger-6969, toString()=JettyServletServer(name=RestServerParameters, host=0.0.0.0, port=6969, sniHostCheck=false, user=policyadmin, password=zb!XztG34, contextPath=/, jettyServer=Server@5aabbb29{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@72c927f1{/,null,STOPPED}, connector=RestServerParameters@53ab0286{HTTP/1.1, (http/1.1)}{0.0.0.0:6969}, jettyThread=null, servlets={/metrics=io.prometheus.client.servlet.jakarta.exporter.MetricsServlet-21694e53==io.prometheus.client.servlet.jakarta.exporter.MetricsServlet@2326051b{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:null,STOPPED}, /*=org.glassfish.jersey.servlet.ServletContainer-46074492==org.glassfish.jersey.servlet.ServletContainer@705041b0{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING policy-pap | metrics.num.samples = 2 policy-db-migrator | > upgrade 0540-toscacapabilitytype.sql kafka | [2024-04-24 08:58:51,323] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-48 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) grafana | logger=migrator t=2024-04-24T08:58:12.47323171Z level=info msg="Migration successfully executed" id="Add expires to api_key table" duration=3.586048ms policy-apex-pdp | [2024-04-24T08:58:52.634+00:00|INFO|ServiceManager|main] service manager started policy-pap | metrics.recording.level = INFO policy-db-migrator | -------------- kafka | [2024-04-24 08:58:51,323] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-19 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) grafana | logger=migrator t=2024-04-24T08:58:12.476573764Z level=info msg="Executing migration" id="Add service account foreign key" policy-apex-pdp | [2024-04-24T08:58:52.634+00:00|INFO|ServiceManager|main] service manager started policy-pap | metrics.sample.window.ms = 30000 policy-db-migrator | CREATE TABLE 
IF NOT EXISTS toscacapabilitytype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCACAPABILITYTYPE (name, version)) kafka | [2024-04-24 08:58:51,323] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-11 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) grafana | logger=migrator t=2024-04-24T08:58:12.480510779Z level=info msg="Migration successfully executed" id="Add service account foreign key" duration=3.935874ms policy-apex-pdp | [2024-04-24T08:58:52.634+00:00|INFO|ApexStarterMain|main] Started policy-apex-pdp service successfully. policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] policy-db-migrator | -------------- kafka | [2024-04-24 08:58:51,323] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-2 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) grafana | logger=migrator t=2024-04-24T08:58:12.513881525Z level=info msg="Executing migration" id="set service account foreign key to nil if 0" policy-pap | receive.buffer.bytes = 65536 policy-apex-pdp | [2024-04-24T08:58:52.635+00:00|INFO|JettyServletServer|RestServerParameters-6969] JettyJerseyServer [Jerseyservlets={/metrics=io.prometheus.client.servlet.jakarta.exporter.MetricsServlet-21694e53==io.prometheus.client.servlet.jakarta.exporter.MetricsServlet@2326051b{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:null,STOPPED}, /*=org.glassfish.jersey.servlet.ServletContainer-46074492==org.glassfish.jersey.servlet.ServletContainer@705041b0{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=swagger-6969, toString()=JettyServletServer(name=RestServerParameters, host=0.0.0.0, port=6969, sniHostCheck=false, user=policyadmin, password=zb!XztG34, contextPath=/, jettyServer=Server@5aabbb29{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@72c927f1{/,null,STOPPED}, connector=RestServerParameters@53ab0286{HTTP/1.1, (http/1.1)}{0.0.0.0:6969}, jettyThread=Thread[RestServerParameters-6969,5,main], servlets={/metrics=io.prometheus.client.servlet.jakarta.exporter.MetricsServlet-21694e53==io.prometheus.client.servlet.jakarta.exporter.MetricsServlet@2326051b{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:null,STOPPED}, /*=org.glassfish.jersey.servlet.ServletContainer-46074492==org.glassfish.jersey.servlet.ServletContainer@705041b0{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING policy-db-migrator | kafka | [2024-04-24 08:58:51,324] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-43 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) grafana | logger=migrator t=2024-04-24T08:58:12.514185902Z level=info msg="Migration successfully executed" id="set service account foreign key to nil if 0" duration=308.747µs policy-pap | reconnect.backoff.max.ms = 1000 
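The PDP_STATUS heartbeat that apex-pdp publishes on policy-pdp-pap (logged a little earlier as the [OUT|KAFKA|policy-pdp-pap] entry with "pdpType":"apex" and "state":"PASSIVE") is plain JSON, so the fields PAP reacts to are easy to pull out. A minimal sketch, assuming the payload has been captured as a string; the sample below is copied from that log entry:

    import json
    from datetime import datetime, timezone

    # Sample payload copied from the apex-pdp [OUT|KAFKA|policy-pdp-pap] log entry above.
    raw = (
        '{"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY",'
        '"description":"Pdp Heartbeat","messageName":"PDP_STATUS",'
        '"requestId":"c332eed1-33b5-4f1c-8b3b-05ac50842ecd",'
        '"timestampMs":1713949132390,'
        '"name":"apex-f41e458d-42a5-4aba-9a0c-ce74df53e6f4",'
        '"pdpGroup":"defaultGroup"}'
    )

    status = json.loads(raw)

    # timestampMs is epoch milliseconds; render it as UTC for comparison with the log timestamps.
    sent_at = datetime.fromtimestamp(status["timestampMs"] / 1000, tz=timezone.utc)

    print(f'{status["name"]} ({status["pdpType"]}) in group {status["pdpGroup"]}: '
          f'{status["state"]}/{status["healthy"]} at {sent_at.isoformat()}')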
policy-apex-pdp | [2024-04-24T08:58:52.800+00:00|INFO|Metadata|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-6c14929a-34c8-48a0-adf2-d542a07b4ce8-2, groupId=6c14929a-34c8-48a0-adf2-d542a07b4ce8] Cluster ID: FWpz7Mn1RFGDoEChXT3QPg policy-db-migrator | kafka | [2024-04-24 08:58:51,324] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-6 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) grafana | logger=migrator t=2024-04-24T08:58:12.517468644Z level=info msg="Executing migration" id="Add last_used_at to api_key table" policy-pap | reconnect.backoff.ms = 50 policy-apex-pdp | [2024-04-24T08:58:52.800+00:00|INFO|Metadata|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Cluster ID: FWpz7Mn1RFGDoEChXT3QPg policy-db-migrator | > upgrade 0550-toscacapabilitytypes.sql kafka | [2024-04-24 08:58:51,324] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-14 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) grafana | logger=migrator t=2024-04-24T08:58:12.521408979Z level=info msg="Migration successfully executed" id="Add last_used_at to api_key table" duration=3.940146ms policy-pap | request.timeout.ms = 30000 policy-apex-pdp | [2024-04-24T08:58:52.802+00:00|INFO|TransactionManager|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] ProducerId set to 2 with epoch 0 policy-db-migrator | -------------- kafka | [2024-04-24 08:58:51,324] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-20 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) grafana | logger=migrator t=2024-04-24T08:58:12.528113106Z level=info msg="Executing migration" id="Add is_revoked column to api_key table" policy-pap | retry.backoff.ms = 100 policy-apex-pdp | [2024-04-24T08:58:52.810+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-6c14929a-34c8-48a0-adf2-d542a07b4ce8-2, groupId=6c14929a-34c8-48a0-adf2-d542a07b4ce8] Discovered group coordinator kafka:9092 (id: 2147483646 rack: null) policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilitytypes (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCACAPABILITYTYPES (name, version)) kafka | [2024-04-24 08:58:51,324] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-0 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) grafana | logger=migrator t=2024-04-24T08:58:12.530608724Z level=info msg="Migration successfully executed" id="Add is_revoked column to api_key table" duration=2.494938ms policy-pap | sasl.client.callback.handler.class = null policy-apex-pdp | [2024-04-24T08:58:52.819+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-6c14929a-34c8-48a0-adf2-d542a07b4ce8-2, groupId=6c14929a-34c8-48a0-adf2-d542a07b4ce8] (Re-)joining group policy-db-migrator | -------------- kafka | 
[2024-04-24 08:58:51,324] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-44 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-pap | sasl.jaas.config = null policy-apex-pdp | [2024-04-24T08:58:52.831+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-6c14929a-34c8-48a0-adf2-d542a07b4ce8-2, groupId=6c14929a-34c8-48a0-adf2-d542a07b4ce8] Request joining group due to: need to re-join with the given member-id: consumer-6c14929a-34c8-48a0-adf2-d542a07b4ce8-2-0953ac9a-4503-441d-8d7e-d642725f8ea2 policy-db-migrator | grafana | logger=migrator t=2024-04-24T08:58:12.535335985Z level=info msg="Executing migration" id="create dashboard_snapshot table v4" kafka | [2024-04-24 08:58:51,324] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-39 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-apex-pdp | [2024-04-24T08:58:52.831+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-6c14929a-34c8-48a0-adf2-d542a07b4ce8-2, groupId=6c14929a-34c8-48a0-adf2-d542a07b4ce8] Request joining group due to: rebalance failed due to 'The group member needs to have a valid member id before actually entering a consumer group.' (MemberIdRequiredException) policy-db-migrator | grafana | logger=migrator t=2024-04-24T08:58:12.536057349Z level=info msg="Migration successfully executed" id="create dashboard_snapshot table v4" duration=719.953µs kafka | [2024-04-24 08:58:51,324] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-12 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-apex-pdp | [2024-04-24T08:58:52.831+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-6c14929a-34c8-48a0-adf2-d542a07b4ce8-2, groupId=6c14929a-34c8-48a0-adf2-d542a07b4ce8] (Re-)joining group policy-db-migrator | > upgrade 0560-toscacapabilitytypes_toscacapabilitytype.sql grafana | logger=migrator t=2024-04-24T08:58:12.541647975Z level=info msg="Executing migration" id="drop table dashboard_snapshot_v4 #1" policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit kafka | [2024-04-24 08:58:51,324] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-45 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-apex-pdp | [2024-04-24T08:58:53.250+00:00|INFO|YamlMessageBodyHandler|RestServerParameters-6969] Accepting YAML for REST calls policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-24T08:58:12.542523052Z level=info msg="Migration successfully executed" id="drop table dashboard_snapshot_v4 #1" duration=874.457µs policy-pap | sasl.kerberos.min.time.before.relogin = 60000 kafka | [2024-04-24 08:58:51,324] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-1 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, 
isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-apex-pdp | [2024-04-24T08:58:53.250+00:00|INFO|GsonMessageBodyHandler|RestServerParameters-6969] Using GSON for REST calls policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilitytypes_toscacapabilitytype (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCACAPABILITYTYPES_TOSCACAPABILITYTYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) grafana | logger=migrator t=2024-04-24T08:58:12.547271742Z level=info msg="Executing migration" id="create dashboard_snapshot table v5 #2" policy-pap | sasl.kerberos.service.name = null kafka | [2024-04-24 08:58:51,324] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-5 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-apex-pdp | [2024-04-24T08:58:55.837+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-6c14929a-34c8-48a0-adf2-d542a07b4ce8-2, groupId=6c14929a-34c8-48a0-adf2-d542a07b4ce8] Successfully joined group with generation Generation{generationId=1, memberId='consumer-6c14929a-34c8-48a0-adf2-d542a07b4ce8-2-0953ac9a-4503-441d-8d7e-d642725f8ea2', protocol='range'} policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-24T08:58:12.548642049Z level=info msg="Migration successfully executed" id="create dashboard_snapshot table v5 #2" duration=1.369846ms policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 kafka | [2024-04-24 08:58:51,325] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-26 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-apex-pdp | [2024-04-24T08:58:55.847+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-6c14929a-34c8-48a0-adf2-d542a07b4ce8-2, groupId=6c14929a-34c8-48a0-adf2-d542a07b4ce8] Finished assignment for group at generation 1: {consumer-6c14929a-34c8-48a0-adf2-d542a07b4ce8-2-0953ac9a-4503-441d-8d7e-d642725f8ea2=Assignment(partitions=[policy-pdp-pap-0])} policy-db-migrator | grafana | logger=migrator t=2024-04-24T08:58:12.554266346Z level=info msg="Executing migration" id="create index UQE_dashboard_snapshot_key - v5" policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 kafka | [2024-04-24 08:58:51,325] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-29 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-apex-pdp | [2024-04-24T08:58:55.855+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-6c14929a-34c8-48a0-adf2-d542a07b4ce8-2, groupId=6c14929a-34c8-48a0-adf2-d542a07b4ce8] Successfully synced group in generation Generation{generationId=1, 
memberId='consumer-6c14929a-34c8-48a0-adf2-d542a07b4ce8-2-0953ac9a-4503-441d-8d7e-d642725f8ea2', protocol='range'} policy-db-migrator | grafana | logger=migrator t=2024-04-24T08:58:12.55499041Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_snapshot_key - v5" duration=726.574µs policy-pap | sasl.login.callback.handler.class = null kafka | [2024-04-24 08:58:51,326] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-34 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-apex-pdp | [2024-04-24T08:58:55.855+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-6c14929a-34c8-48a0-adf2-d542a07b4ce8-2, groupId=6c14929a-34c8-48a0-adf2-d542a07b4ce8] Notifying assignor about the new Assignment(partitions=[policy-pdp-pap-0]) policy-db-migrator | > upgrade 0570-toscadatatype.sql grafana | logger=migrator t=2024-04-24T08:58:12.55759946Z level=info msg="Executing migration" id="create index UQE_dashboard_snapshot_delete_key - v5" policy-pap | sasl.login.class = null kafka | [2024-04-24 08:58:51,326] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-10 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-apex-pdp | [2024-04-24T08:58:55.857+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-6c14929a-34c8-48a0-adf2-d542a07b4ce8-2, groupId=6c14929a-34c8-48a0-adf2-d542a07b4ce8] Adding newly assigned partitions: policy-pdp-pap-0 policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-24T08:58:12.5587086Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_snapshot_delete_key - v5" duration=1.10443ms policy-pap | sasl.login.connect.timeout.ms = null kafka | [2024-04-24 08:58:51,326] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-32 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-apex-pdp | [2024-04-24T08:58:55.865+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-6c14929a-34c8-48a0-adf2-d542a07b4ce8-2, groupId=6c14929a-34c8-48a0-adf2-d542a07b4ce8] Found no committed offset for partition policy-pdp-pap-0 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscadatatype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCADATATYPE (name, version)) grafana | logger=migrator t=2024-04-24T08:58:12.562673106Z level=info msg="Executing migration" id="create index IDX_dashboard_snapshot_user_id - v5" policy-pap | sasl.login.read.timeout.ms = null kafka | [2024-04-24 08:58:51,326] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-40 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-apex-pdp | 
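The policy-pdp-pap traffic that follows is a heartbeat/update handshake: apex-pdp publishes PDP_STATUS messages, pap answers with PDP_UPDATE and PDP_STATE_CHANGE, apex-pdp acknowledges with further PDP_STATUS responses, and it discards the copies of its own PDP_STATUS messages that come back on the topic. As an aid to reading those JSON payloads, here is a small illustrative model; it is not part of the log. The field names are copied from the logged messages, while the class name and the use of Gson for parsing are assumptions (the log only shows Gson being registered for REST handling).

// Illustrative sketch of the PDP_STATUS heartbeat payload; field names taken from the log.
import com.google.gson.Gson;

public class PdpStatusSketch {
    String pdpType;       // "apex"
    String state;         // "PASSIVE" before the PDP_STATE_CHANGE, "ACTIVE" afterwards
    String healthy;       // "HEALTHY"
    String description;   // "Pdp Heartbeat" or a response description
    String messageName;   // "PDP_STATUS"
    String requestId;
    long timestampMs;
    String name;          // the PDP instance name seen in the log
    String pdpGroup;      // "defaultGroup"

    public static void main(String[] args) {
        // Abbreviated payload with only a few of the logged fields, for illustration.
        String json = "{\"pdpType\":\"apex\",\"state\":\"PASSIVE\",\"healthy\":\"HEALTHY\","
                + "\"messageName\":\"PDP_STATUS\",\"pdpGroup\":\"defaultGroup\"}";
        PdpStatusSketch status = new Gson().fromJson(json, PdpStatusSketch.class);
        System.out.println(status.messageName + " from group " + status.pdpGroup);
    }
}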
[2024-04-24T08:58:55.874+00:00|INFO|SubscriptionState|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-6c14929a-34c8-48a0-adf2-d542a07b4ce8-2, groupId=6c14929a-34c8-48a0-adf2-d542a07b4ce8] Resetting offset for partition policy-pdp-pap-0 to position FetchPosition{offset=1, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[kafka:9092 (id: 1 rack: null)], epoch=0}}. policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-24T08:58:12.564368388Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_snapshot_user_id - v5" duration=1.699642ms policy-pap | sasl.login.refresh.buffer.seconds = 300 policy-apex-pdp | [2024-04-24T08:58:56.153+00:00|INFO|RequestLog|qtp1863100050-33] 172.17.0.2 - policyadmin [24/Apr/2024:08:58:56 +0000] "GET /metrics HTTP/1.1" 200 10649 "-" "Prometheus/2.51.2" kafka | [2024-04-24 08:58:51,334] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=13, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-13 (state.change.logger) policy-db-migrator | grafana | logger=migrator t=2024-04-24T08:58:12.571187478Z level=info msg="Executing migration" id="alter dashboard_snapshot to mediumtext v2" policy-pap | sasl.login.refresh.min.period.seconds = 60 policy-apex-pdp | [2024-04-24T08:59:12.385+00:00|INFO|network|Timer-0] [OUT|KAFKA|policy-pdp-pap] kafka | [2024-04-24 08:58:51,334] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=46, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-46 (state.change.logger) policy-db-migrator | grafana | logger=migrator t=2024-04-24T08:58:12.57128533Z level=info msg="Migration successfully executed" id="alter dashboard_snapshot to mediumtext v2" duration=98.422µs policy-pap | sasl.login.refresh.window.factor = 0.8 policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"4e120101-1cee-4165-9b1d-d46c107a0c1e","timestampMs":1713949152385,"name":"apex-f41e458d-42a5-4aba-9a0c-ce74df53e6f4","pdpGroup":"defaultGroup"} kafka | [2024-04-24 08:58:51,334] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=9, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-9 (state.change.logger) policy-db-migrator | > upgrade 0580-toscadatatypes.sql grafana | logger=migrator t=2024-04-24T08:58:12.575161984Z level=info msg="Executing migration" id="Update dashboard_snapshot table charset" policy-pap | sasl.login.refresh.window.jitter = 0.05 policy-apex-pdp | [2024-04-24T08:59:12.405+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] kafka | [2024-04-24 08:58:51,334] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=42, controllerEpoch=1, leader=1, 
leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-42 (state.change.logger) policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-24T08:58:12.575203915Z level=info msg="Migration successfully executed" id="Update dashboard_snapshot table charset" duration=43.991µs policy-pap | sasl.login.retry.backoff.max.ms = 10000 policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"4e120101-1cee-4165-9b1d-d46c107a0c1e","timestampMs":1713949152385,"name":"apex-f41e458d-42a5-4aba-9a0c-ce74df53e6f4","pdpGroup":"defaultGroup"} kafka | [2024-04-24 08:58:51,334] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=21, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-21 (state.change.logger) policy-db-migrator | CREATE TABLE IF NOT EXISTS toscadatatypes (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCADATATYPES (name, version)) policy-pap | sasl.login.retry.backoff.ms = 100 grafana | logger=migrator t=2024-04-24T08:58:12.579271522Z level=info msg="Executing migration" id="Add column external_delete_url to dashboard_snapshots table" kafka | [2024-04-24 08:58:51,335] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=17, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-17 (state.change.logger) policy-pap | sasl.mechanism = GSSAPI grafana | logger=migrator t=2024-04-24T08:58:12.582970753Z level=info msg="Migration successfully executed" id="Add column external_delete_url to dashboard_snapshots table" duration=3.700251ms policy-apex-pdp | [2024-04-24T08:59:12.408+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS policy-db-migrator | -------------- kafka | [2024-04-24 08:58:51,335] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=30, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-30 (state.change.logger) policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 grafana | logger=migrator t=2024-04-24T08:58:12.585765446Z level=info msg="Executing migration" id="Add encrypted dashboard json column" policy-apex-pdp | [2024-04-24T08:59:12.535+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-db-migrator | kafka | [2024-04-24 08:58:51,335] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=26, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-26 (state.change.logger) policy-pap | 
sasl.oauthbearer.expected.audience = null grafana | logger=migrator t=2024-04-24T08:58:12.589454307Z level=info msg="Migration successfully executed" id="Add encrypted dashboard json column" duration=3.687901ms policy-apex-pdp | {"source":"pap-43e719fa-ff69-4964-bc31-d2528becc332","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"messageName":"PDP_UPDATE","requestId":"77293ae2-da7e-415d-9361-5e79c680736b","timestampMs":1713949152480,"name":"apex-f41e458d-42a5-4aba-9a0c-ce74df53e6f4","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-db-migrator | kafka | [2024-04-24 08:58:51,335] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=5, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-5 (state.change.logger) policy-pap | sasl.oauthbearer.expected.issuer = null grafana | logger=migrator t=2024-04-24T08:58:12.595740186Z level=info msg="Executing migration" id="Change dashboard_encrypted column to MEDIUMBLOB" policy-apex-pdp | [2024-04-24T08:59:12.545+00:00|INFO|network|Timer-1] [OUT|KAFKA|policy-pdp-pap] policy-db-migrator | > upgrade 0590-toscadatatypes_toscadatatype.sql kafka | [2024-04-24 08:58:51,335] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=38, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-38 (state.change.logger) policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 grafana | logger=migrator t=2024-04-24T08:58:12.595804038Z level=info msg="Migration successfully executed" id="Change dashboard_encrypted column to MEDIUMBLOB" duration=64.542µs policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"a213e342-fe55-4aeb-87b1-3b23ade78ea0","timestampMs":1713949152545,"name":"apex-f41e458d-42a5-4aba-9a0c-ce74df53e6f4","pdpGroup":"defaultGroup"} policy-db-migrator | -------------- kafka | [2024-04-24 08:58:51,336] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=1, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-1 (state.change.logger) policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 grafana | logger=migrator t=2024-04-24T08:58:12.60012433Z level=info msg="Executing migration" id="create quota table v1" policy-apex-pdp | [2024-04-24T08:59:12.545+00:00|WARN|Registry|KAFKA-source-policy-pdp-pap] replacing previously registered: object:pdp/status/publisher policy-db-migrator | CREATE TABLE IF NOT EXISTS toscadatatypes_toscadatatype (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCADATATYPES_TOSCADATATYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, 
conceptContainerVersion)) kafka | [2024-04-24 08:58:51,336] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=34, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-34 (state.change.logger) policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 grafana | logger=migrator t=2024-04-24T08:58:12.601250602Z level=info msg="Migration successfully executed" id="create quota table v1" duration=1.128202ms policy-apex-pdp | [2024-04-24T08:59:12.547+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [OUT|KAFKA|policy-pdp-pap] policy-db-migrator | -------------- kafka | [2024-04-24 08:58:51,336] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=16, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-16 (state.change.logger) policy-pap | sasl.oauthbearer.jwks.endpoint.url = null grafana | logger=migrator t=2024-04-24T08:58:12.605522343Z level=info msg="Executing migration" id="create index UQE_quota_org_id_user_id_target - v1" policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"77293ae2-da7e-415d-9361-5e79c680736b","responseStatus":"SUCCESS","responseMessage":"Pdp update successful."},"messageName":"PDP_STATUS","requestId":"7432afad-c26c-427a-97c6-ce2c56947811","timestampMs":1713949152547,"name":"apex-f41e458d-42a5-4aba-9a0c-ce74df53e6f4","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-db-migrator | kafka | [2024-04-24 08:58:51,336] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=45, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-45 (state.change.logger) policy-pap | sasl.oauthbearer.scope.claim.name = scope grafana | logger=migrator t=2024-04-24T08:58:12.6074185Z level=info msg="Migration successfully executed" id="create index UQE_quota_org_id_user_id_target - v1" duration=1.894476ms policy-apex-pdp | [2024-04-24T08:59:12.558+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-db-migrator | kafka | [2024-04-24 08:58:51,337] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=12, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-12 (state.change.logger) policy-pap | sasl.oauthbearer.sub.claim.name = sub grafana | logger=migrator t=2024-04-24T08:58:12.613669599Z level=info msg="Executing migration" id="Update quota table charset" policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp 
Heartbeat","messageName":"PDP_STATUS","requestId":"a213e342-fe55-4aeb-87b1-3b23ade78ea0","timestampMs":1713949152545,"name":"apex-f41e458d-42a5-4aba-9a0c-ce74df53e6f4","pdpGroup":"defaultGroup"} policy-db-migrator | > upgrade 0600-toscanodetemplate.sql kafka | [2024-04-24 08:58:51,337] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=41, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-41 (state.change.logger) policy-pap | sasl.oauthbearer.token.endpoint.url = null grafana | logger=migrator t=2024-04-24T08:58:12.6137318Z level=info msg="Migration successfully executed" id="Update quota table charset" duration=68.251µs policy-apex-pdp | [2024-04-24T08:59:12.558+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS policy-db-migrator | -------------- kafka | [2024-04-24 08:58:51,337] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=24, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-24 (state.change.logger) policy-pap | security.protocol = PLAINTEXT grafana | logger=migrator t=2024-04-24T08:58:12.620439027Z level=info msg="Executing migration" id="create plugin_setting table" policy-apex-pdp | [2024-04-24T08:59:12.563+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetemplate (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, type_name VARCHAR(255) NULL, type_version VARCHAR(255) NULL, capabilitiesName VARCHAR(120) NULL, capabilitiesVersion VARCHAR(20) NULL, requirementsVersion VARCHAR(20) NULL, requirementsName VARCHAR(120) NULL, PRIMARY KEY PK_TOSCANODETEMPLATE (name, version)) kafka | [2024-04-24 08:58:51,337] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=20, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-20 (state.change.logger) policy-pap | security.providers = null grafana | logger=migrator t=2024-04-24T08:58:12.621758443Z level=info msg="Migration successfully executed" id="create plugin_setting table" duration=1.317135ms policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"77293ae2-da7e-415d-9361-5e79c680736b","responseStatus":"SUCCESS","responseMessage":"Pdp update successful."},"messageName":"PDP_STATUS","requestId":"7432afad-c26c-427a-97c6-ce2c56947811","timestampMs":1713949152547,"name":"apex-f41e458d-42a5-4aba-9a0c-ce74df53e6f4","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-db-migrator | -------------- kafka | [2024-04-24 08:58:51,337] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request 
LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=49, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-49 (state.change.logger) policy-pap | send.buffer.bytes = 131072 grafana | logger=migrator t=2024-04-24T08:58:12.625127337Z level=info msg="Executing migration" id="create index UQE_plugin_setting_org_id_plugin_id - v1" policy-apex-pdp | [2024-04-24T08:59:12.563+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS policy-db-migrator | policy-pap | session.timeout.ms = 45000 kafka | [2024-04-24 08:58:51,337] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-0 (state.change.logger) grafana | logger=migrator t=2024-04-24T08:58:12.626411082Z level=info msg="Migration successfully executed" id="create index UQE_plugin_setting_org_id_plugin_id - v1" duration=1.283685ms policy-apex-pdp | [2024-04-24T08:59:12.573+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-db-migrator | policy-pap | socket.connection.setup.timeout.max.ms = 30000 kafka | [2024-04-24 08:58:51,337] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=29, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-29 (state.change.logger) grafana | logger=migrator t=2024-04-24T08:58:12.629663053Z level=info msg="Executing migration" id="Add column plugin_version to plugin_settings" policy-apex-pdp | {"source":"pap-43e719fa-ff69-4964-bc31-d2528becc332","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"c5968f1a-b7af-452f-bf63-1bacb67aef0f","timestampMs":1713949152481,"name":"apex-f41e458d-42a5-4aba-9a0c-ce74df53e6f4","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-db-migrator | > upgrade 0610-toscanodetemplates.sql policy-pap | socket.connection.setup.timeout.ms = 10000 kafka | [2024-04-24 08:58:51,337] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=25, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-25 (state.change.logger) grafana | logger=migrator t=2024-04-24T08:58:12.63420832Z level=info msg="Migration successfully executed" id="Add column plugin_version to plugin_settings" duration=4.545027ms policy-apex-pdp | [2024-04-24T08:59:12.576+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [OUT|KAFKA|policy-pdp-pap] policy-db-migrator | -------------- policy-pap | ssl.cipher.suites = null kafka | [2024-04-24 08:58:51,338] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=8, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], 
addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-8 (state.change.logger) grafana | logger=migrator t=2024-04-24T08:58:12.639596383Z level=info msg="Executing migration" id="Update plugin_setting table charset" policy-apex-pdp | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpStateChange","policies":[],"response":{"responseTo":"c5968f1a-b7af-452f-bf63-1bacb67aef0f","responseStatus":"SUCCESS","responseMessage":"State changed to active. No policies found."},"messageName":"PDP_STATUS","requestId":"fe7846fa-1b6b-47d0-a2a9-3907eb9b0f7a","timestampMs":1713949152576,"name":"apex-f41e458d-42a5-4aba-9a0c-ce74df53e6f4","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetemplates (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCANODETEMPLATES (name, version)) policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] kafka | [2024-04-24 08:58:51,338] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=37, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-37 (state.change.logger) grafana | logger=migrator t=2024-04-24T08:58:12.639619994Z level=info msg="Migration successfully executed" id="Update plugin_setting table charset" duration=24.311µs policy-apex-pdp | [2024-04-24T08:59:12.584+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-db-migrator | -------------- policy-pap | ssl.endpoint.identification.algorithm = https kafka | [2024-04-24 08:58:51,338] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=4, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-4 (state.change.logger) grafana | logger=migrator t=2024-04-24T08:58:12.641786965Z level=info msg="Executing migration" id="create session table" policy-apex-pdp | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpStateChange","policies":[],"response":{"responseTo":"c5968f1a-b7af-452f-bf63-1bacb67aef0f","responseStatus":"SUCCESS","responseMessage":"State changed to active. 
No policies found."},"messageName":"PDP_STATUS","requestId":"fe7846fa-1b6b-47d0-a2a9-3907eb9b0f7a","timestampMs":1713949152576,"name":"apex-f41e458d-42a5-4aba-9a0c-ce74df53e6f4","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-db-migrator | policy-pap | ssl.engine.factory.class = null kafka | [2024-04-24 08:58:51,338] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=33, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-33 (state.change.logger) grafana | logger=migrator t=2024-04-24T08:58:12.642719432Z level=info msg="Migration successfully executed" id="create session table" duration=933.767µs policy-apex-pdp | [2024-04-24T08:59:12.584+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS policy-db-migrator | policy-pap | ssl.key.password = null kafka | [2024-04-24 08:58:51,338] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=15, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-15 (state.change.logger) grafana | logger=migrator t=2024-04-24T08:58:12.677629348Z level=info msg="Executing migration" id="Drop old table playlist table" policy-apex-pdp | [2024-04-24T08:59:12.669+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-db-migrator | > upgrade 0620-toscanodetemplates_toscanodetemplate.sql policy-pap | ssl.keymanager.algorithm = SunX509 kafka | [2024-04-24 08:58:51,338] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=48, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-48 (state.change.logger) grafana | logger=migrator t=2024-04-24T08:58:12.677910193Z level=info msg="Migration successfully executed" id="Drop old table playlist table" duration=280.165µs policy-apex-pdp | {"source":"pap-43e719fa-ff69-4964-bc31-d2528becc332","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"e1bfc2a1-b68d-4b0d-960e-f7897689b4f6","timestampMs":1713949152597,"name":"apex-f41e458d-42a5-4aba-9a0c-ce74df53e6f4","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-24T08:58:12.682884708Z level=info msg="Executing migration" id="Drop old table playlist_item table" policy-apex-pdp | [2024-04-24T08:59:12.670+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [OUT|KAFKA|policy-pdp-pap] policy-apex-pdp | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"e1bfc2a1-b68d-4b0d-960e-f7897689b4f6","responseStatus":"SUCCESS","responseMessage":"Pdp already 
updated"},"messageName":"PDP_STATUS","requestId":"cd1194fc-c9f5-401f-9ec0-7e330c6971e2","timestampMs":1713949152670,"name":"apex-f41e458d-42a5-4aba-9a0c-ce74df53e6f4","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} kafka | [2024-04-24 08:58:51,338] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=11, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-11 (state.change.logger) grafana | logger=migrator t=2024-04-24T08:58:12.683119273Z level=info msg="Migration successfully executed" id="Drop old table playlist_item table" duration=234.355µs policy-pap | ssl.keystore.certificate.chain = null policy-apex-pdp | [2024-04-24T08:59:12.681+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] kafka | [2024-04-24 08:58:51,338] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=44, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-44 (state.change.logger) grafana | logger=migrator t=2024-04-24T08:58:12.688054327Z level=info msg="Executing migration" id="create playlist table v2" policy-pap | ssl.keystore.key = null policy-apex-pdp | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"e1bfc2a1-b68d-4b0d-960e-f7897689b4f6","responseStatus":"SUCCESS","responseMessage":"Pdp already updated"},"messageName":"PDP_STATUS","requestId":"cd1194fc-c9f5-401f-9ec0-7e330c6971e2","timestampMs":1713949152670,"name":"apex-f41e458d-42a5-4aba-9a0c-ce74df53e6f4","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-apex-pdp | [2024-04-24T08:59:12.682+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS grafana | logger=migrator t=2024-04-24T08:58:12.6892225Z level=info msg="Migration successfully executed" id="create playlist table v2" duration=1.164783ms policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetemplates_toscanodetemplate (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCANODETEMPLATES_TOSCANODETEMPLATE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) policy-pap | ssl.keystore.location = null kafka | [2024-04-24 08:58:51,339] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=23, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-23 (state.change.logger) policy-apex-pdp | [2024-04-24T08:59:56.081+00:00|INFO|RequestLog|qtp1863100050-28] 172.17.0.2 - policyadmin [24/Apr/2024:08:59:56 +0000] "GET /metrics HTTP/1.1" 200 10652 "-" "Prometheus/2.51.2" grafana | logger=migrator t=2024-04-24T08:58:12.693226685Z level=info msg="Executing migration" 
id="create playlist item table v2" policy-db-migrator | -------------- policy-pap | ssl.keystore.password = null kafka | [2024-04-24 08:58:51,339] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=19, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-19 (state.change.logger) grafana | logger=migrator t=2024-04-24T08:58:12.694460579Z level=info msg="Migration successfully executed" id="create playlist item table v2" duration=1.235944ms policy-db-migrator | policy-pap | ssl.keystore.type = JKS kafka | [2024-04-24 08:58:51,339] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=32, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-32 (state.change.logger) grafana | logger=migrator t=2024-04-24T08:58:12.701749988Z level=info msg="Executing migration" id="Update playlist table charset" policy-db-migrator | policy-pap | ssl.protocol = TLSv1.3 kafka | [2024-04-24 08:58:51,339] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=28, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-28 (state.change.logger) grafana | logger=migrator t=2024-04-24T08:58:12.701773058Z level=info msg="Migration successfully executed" id="Update playlist table charset" duration=25.27µs policy-db-migrator | > upgrade 0630-toscanodetype.sql kafka | [2024-04-24 08:58:51,339] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=7, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-7 (state.change.logger) kafka | [2024-04-24 08:58:51,339] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=40, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-40 (state.change.logger) policy-pap | ssl.provider = null grafana | logger=migrator t=2024-04-24T08:58:12.705322427Z level=info msg="Executing migration" id="Update playlist_item table charset" kafka | [2024-04-24 08:58:51,339] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=3, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-3 (state.change.logger) policy-pap | ssl.secure.random.implementation = null policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-24T08:58:12.705343767Z 
level=info msg="Migration successfully executed" id="Update playlist_item table charset" duration=22.41µs kafka | [2024-04-24 08:58:51,339] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=36, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-36 (state.change.logger) policy-pap | ssl.trustmanager.algorithm = PKIX policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, requirementsVersion VARCHAR(20) NULL, requirementsName VARCHAR(120) NULL, PRIMARY KEY PK_TOSCANODETYPE (name, version)) grafana | logger=migrator t=2024-04-24T08:58:12.710397283Z level=info msg="Executing migration" id="Add playlist column created_at" kafka | [2024-04-24 08:58:51,340] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=47, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-47 (state.change.logger) policy-pap | ssl.truststore.certificates = null policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-24T08:58:12.714701625Z level=info msg="Migration successfully executed" id="Add playlist column created_at" duration=4.305602ms kafka | [2024-04-24 08:58:51,340] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=14, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-14 (state.change.logger) policy-pap | ssl.truststore.location = null policy-db-migrator | grafana | logger=migrator t=2024-04-24T08:58:12.719367485Z level=info msg="Executing migration" id="Add playlist column updated_at" kafka | [2024-04-24 08:58:51,340] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=43, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-43 (state.change.logger) policy-pap | ssl.truststore.password = null policy-db-migrator | grafana | logger=migrator t=2024-04-24T08:58:12.722534694Z level=info msg="Migration successfully executed" id="Add playlist column updated_at" duration=3.167169ms kafka | [2024-04-24 08:58:51,340] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=10, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-10 (state.change.logger) policy-pap | ssl.truststore.type = JKS policy-db-migrator | > upgrade 0640-toscanodetypes.sql grafana | logger=migrator t=2024-04-24T08:58:12.729050838Z 
level=info msg="Executing migration" id="drop preferences table v2" kafka | [2024-04-24 08:58:51,340] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=22, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-22 (state.change.logger) policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-24T08:58:12.729131771Z level=info msg="Migration successfully executed" id="drop preferences table v2" duration=81.273µs kafka | [2024-04-24 08:58:51,340] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=18, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-18 (state.change.logger) policy-pap | policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetypes (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCANODETYPES (name, version)) grafana | logger=migrator t=2024-04-24T08:58:12.733951112Z level=info msg="Executing migration" id="drop preferences table v3" kafka | [2024-04-24 08:58:51,340] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=31, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-31 (state.change.logger) policy-pap | [2024-04-24T08:58:50.645+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-24T08:58:12.734182457Z level=info msg="Migration successfully executed" id="drop preferences table v3" duration=230.595µs kafka | [2024-04-24 08:58:51,340] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=27, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-27 (state.change.logger) policy-pap | [2024-04-24T08:58:50.645+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 policy-db-migrator | grafana | logger=migrator t=2024-04-24T08:58:12.73857675Z level=info msg="Executing migration" id="create preferences table v3" kafka | [2024-04-24 08:58:51,341] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=39, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-39 (state.change.logger) policy-pap | [2024-04-24T08:58:50.645+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1713949130645 policy-db-migrator | grafana | logger=migrator t=2024-04-24T08:58:12.740174171Z level=info msg="Migration successfully executed" id="create preferences table v3" 
duration=1.596671ms kafka | [2024-04-24 08:58:51,341] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=6, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-6 (state.change.logger) policy-pap | [2024-04-24T08:58:50.645+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-c2598a93-7b5f-4e4e-b23a-b864ffd9a18a-3, groupId=c2598a93-7b5f-4e4e-b23a-b864ffd9a18a] Subscribed to topic(s): policy-pdp-pap policy-db-migrator | > upgrade 0650-toscanodetypes_toscanodetype.sql grafana | logger=migrator t=2024-04-24T08:58:12.746604024Z level=info msg="Executing migration" id="Update preferences table charset" kafka | [2024-04-24 08:58:51,341] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=35, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-35 (state.change.logger) policy-pap | [2024-04-24T08:58:50.646+00:00|INFO|ServiceManager|main] Policy PAP starting Heartbeat Message Dispatcher policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-24T08:58:12.746641565Z level=info msg="Migration successfully executed" id="Update preferences table charset" duration=36.961µs kafka | [2024-04-24 08:58:51,341] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='policy-pdp-pap', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition policy-pdp-pap-0 (state.change.logger) policy-pap | [2024-04-24T08:58:50.646+00:00|INFO|TopicBase|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=policy-pap, consumerInstance=ecdaf812-364f-4159-9e55-85c348169a99, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-heartbeat, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: registering org.onap.policy.common.endpoints.listeners.MessageTypeDispatcher@6f651ac policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetypes_toscanodetype (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCANODETYPES_TOSCANODETYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) grafana | logger=migrator t=2024-04-24T08:58:12.751727131Z level=info msg="Executing migration" id="Add column team_id in preferences" kafka | [2024-04-24 08:58:51,341] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=2, controllerEpoch=1, leader=1, 
leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-2 (state.change.logger) policy-pap | [2024-04-24T08:58:50.646+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=policy-pap, consumerInstance=ecdaf812-364f-4159-9e55-85c348169a99, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-heartbeat, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-24T08:58:12.759066221Z level=info msg="Migration successfully executed" id="Add column team_id in preferences" duration=7.33334ms kafka | [2024-04-24 08:58:51,344] INFO [Controller id=1 epoch=1] Sending LeaderAndIsr request to broker 1 with 51 become-leader and 0 become-follower partitions (state.change.logger) policy-pap | [2024-04-24T08:58:50.646+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: policy-db-migrator | grafana | logger=migrator t=2024-04-24T08:58:12.764498905Z level=info msg="Executing migration" id="Update team_id column values in preferences" kafka | [2024-04-24 08:58:51,347] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet(1) for 51 partitions (state.change.logger) policy-pap | allow.auto.create.topics = true policy-db-migrator | grafana | logger=migrator t=2024-04-24T08:58:12.764679928Z level=info msg="Migration successfully executed" id="Update team_id column values in preferences" duration=179.693µs kafka | [2024-04-24 08:58:51,349] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-32 from NewReplica to OnlineReplica (state.change.logger) policy-pap | auto.commit.interval.ms = 5000 policy-db-migrator | > upgrade 0660-toscaparameter.sql grafana | logger=migrator t=2024-04-24T08:58:12.769133004Z level=info msg="Executing migration" id="Add column week_start in preferences" kafka | [2024-04-24 08:58:51,350] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-5 from NewReplica to OnlineReplica (state.change.logger) policy-pap | auto.include.jmx.reporter = true policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-24T08:58:12.772541469Z level=info msg="Migration successfully executed" id="Add column week_start in preferences" duration=3.408505ms kafka | [2024-04-24 08:58:51,350] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-44 from NewReplica to OnlineReplica (state.change.logger) policy-pap | auto.offset.reset = latest grafana | logger=migrator t=2024-04-24T08:58:12.776574145Z level=info msg="Executing migration" id="Add column preferences.json_data" kafka | [2024-04-24 08:58:51,350] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-48 from NewReplica to OnlineReplica (state.change.logger) policy-pap | bootstrap.servers = [kafka:9092] policy-db-migrator | CREATE TABLE IF NOT EXISTS toscaparameter (VALUE VARCHAR(255) NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion 
VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCAPARAMETER (parentLocalName, localName, parentKeyVersion, parentKeyName)) grafana | logger=migrator t=2024-04-24T08:58:12.77998427Z level=info msg="Migration successfully executed" id="Add column preferences.json_data" duration=3.409445ms kafka | [2024-04-24 08:58:51,350] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-46 from NewReplica to OnlineReplica (state.change.logger) policy-pap | check.crcs = true policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-24T08:58:12.997242394Z level=info msg="Executing migration" id="alter preferences.json_data to mediumtext v1" kafka | [2024-04-24 08:58:51,350] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-20 from NewReplica to OnlineReplica (state.change.logger) policy-pap | client.dns.lookup = use_all_dns_ips policy-db-migrator | grafana | logger=migrator t=2024-04-24T08:58:12.997378246Z level=info msg="Migration successfully executed" id="alter preferences.json_data to mediumtext v1" duration=144.012µs kafka | [2024-04-24 08:58:51,350] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition policy-pdp-pap-0 from NewReplica to OnlineReplica (state.change.logger) policy-pap | client.id = consumer-policy-pap-4 policy-db-migrator | grafana | logger=migrator t=2024-04-24T08:58:13.093357319Z level=info msg="Executing migration" id="Add preferences index org_id" kafka | [2024-04-24 08:58:51,350] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-43 from NewReplica to OnlineReplica (state.change.logger) policy-pap | client.rack = grafana | logger=migrator t=2024-04-24T08:58:13.094287826Z level=info msg="Migration successfully executed" id="Add preferences index org_id" duration=933.018µs kafka | [2024-04-24 08:58:51,350] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-24 from NewReplica to OnlineReplica (state.change.logger) policy-pap | connections.max.idle.ms = 540000 policy-db-migrator | > upgrade 0670-toscapolicies.sql grafana | logger=migrator t=2024-04-24T08:58:13.104027974Z level=info msg="Executing migration" id="Add preferences index user_id" kafka | [2024-04-24 08:58:51,350] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-6 from NewReplica to OnlineReplica (state.change.logger) policy-pap | default.api.timeout.ms = 60000 policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-24T08:58:13.105482452Z level=info msg="Migration successfully executed" id="Add preferences index user_id" duration=1.459118ms kafka | [2024-04-24 08:58:51,351] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-18 from NewReplica to OnlineReplica (state.change.logger) policy-pap | enable.auto.commit = true policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicies (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCAPOLICIES (name, version)) grafana | logger=migrator t=2024-04-24T08:58:13.114619839Z level=info msg="Executing migration" id="create alert table v1" kafka | [2024-04-24 08:58:51,351] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-21 from NewReplica to OnlineReplica (state.change.logger) policy-pap | exclude.internal.topics = true policy-db-migrator | 
-------------- grafana | logger=migrator t=2024-04-24T08:58:13.116298552Z level=info msg="Migration successfully executed" id="create alert table v1" duration=1.678993ms kafka | [2024-04-24 08:58:51,351] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-1 from NewReplica to OnlineReplica (state.change.logger) policy-pap | fetch.max.bytes = 52428800 policy-db-migrator | grafana | logger=migrator t=2024-04-24T08:58:13.128468676Z level=info msg="Executing migration" id="add index alert org_id & id " kafka | [2024-04-24 08:58:51,351] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-14 from NewReplica to OnlineReplica (state.change.logger) policy-pap | fetch.max.wait.ms = 500 policy-db-migrator | grafana | logger=migrator t=2024-04-24T08:58:13.129822823Z level=info msg="Migration successfully executed" id="add index alert org_id & id " duration=1.351657ms policy-pap | fetch.min.bytes = 1 kafka | [2024-04-24 08:58:51,351] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-34 from NewReplica to OnlineReplica (state.change.logger) policy-db-migrator | > upgrade 0680-toscapolicies_toscapolicy.sql grafana | logger=migrator t=2024-04-24T08:58:13.157724052Z level=info msg="Executing migration" id="add index alert state" policy-pap | group.id = policy-pap policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-24T08:58:13.159011617Z level=info msg="Migration successfully executed" id="add index alert state" duration=1.286655ms policy-pap | group.instance.id = null kafka | [2024-04-24 08:58:51,351] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-16 from NewReplica to OnlineReplica (state.change.logger) policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicies_toscapolicy (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCAPOLICIES_TOSCAPOLICY (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) grafana | logger=migrator t=2024-04-24T08:58:13.179790857Z level=info msg="Executing migration" id="add index alert dashboard_id" policy-pap | heartbeat.interval.ms = 3000 kafka | [2024-04-24 08:58:51,351] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-29 from NewReplica to OnlineReplica (state.change.logger) policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-24T08:58:13.181377648Z level=info msg="Migration successfully executed" id="add index alert dashboard_id" duration=1.585911ms policy-pap | interceptor.classes = [] kafka | [2024-04-24 08:58:51,351] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-11 from NewReplica to OnlineReplica (state.change.logger) policy-db-migrator | grafana | logger=migrator t=2024-04-24T08:58:13.192767478Z level=info msg="Executing migration" id="Create alert_rule_tag table v1" policy-pap | internal.leave.group.on.close = true kafka | [2024-04-24 08:58:51,351] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-0 from NewReplica to OnlineReplica (state.change.logger) policy-db-migrator | grafana | logger=migrator t=2024-04-24T08:58:13.195129223Z level=info msg="Migration successfully executed" 
id="Create alert_rule_tag table v1" duration=2.364555ms policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false kafka | [2024-04-24 08:58:51,352] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-22 from NewReplica to OnlineReplica (state.change.logger) policy-db-migrator | > upgrade 0690-toscapolicy.sql grafana | logger=migrator t=2024-04-24T08:58:13.205022904Z level=info msg="Executing migration" id="Add unique index alert_rule_tag.alert_id_tag_id" policy-pap | isolation.level = read_uncommitted kafka | [2024-04-24 08:58:51,352] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-47 from NewReplica to OnlineReplica (state.change.logger) policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-24T08:58:13.205951943Z level=info msg="Migration successfully executed" id="Add unique index alert_rule_tag.alert_id_tag_id" duration=931.019µs policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer kafka | [2024-04-24 08:58:51,352] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-36 from NewReplica to OnlineReplica (state.change.logger) policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicy (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, type_name VARCHAR(255) NULL, type_version VARCHAR(255) NULL, PRIMARY KEY PK_TOSCAPOLICY (name, version)) grafana | logger=migrator t=2024-04-24T08:58:13.262855011Z level=info msg="Executing migration" id="drop index UQE_alert_rule_tag_alert_id_tag_id - v1" policy-pap | max.partition.fetch.bytes = 1048576 kafka | [2024-04-24 08:58:51,352] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-28 from NewReplica to OnlineReplica (state.change.logger) policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-24T08:58:13.264178717Z level=info msg="Migration successfully executed" id="drop index UQE_alert_rule_tag_alert_id_tag_id - v1" duration=1.323886ms policy-pap | max.poll.interval.ms = 300000 kafka | [2024-04-24 08:58:51,352] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-42 from NewReplica to OnlineReplica (state.change.logger) policy-db-migrator | grafana | logger=migrator t=2024-04-24T08:58:13.292983973Z level=info msg="Executing migration" id="Rename table alert_rule_tag to alert_rule_tag_v1 - v1" policy-pap | max.poll.records = 500 kafka | [2024-04-24 08:58:51,352] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-9 from NewReplica to OnlineReplica (state.change.logger) policy-db-migrator | grafana | logger=migrator t=2024-04-24T08:58:13.299889637Z level=info msg="Migration successfully executed" id="Rename table alert_rule_tag to alert_rule_tag_v1 - v1" duration=6.904274ms policy-pap | metadata.max.age.ms = 300000 kafka | [2024-04-24 08:58:51,352] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-37 from NewReplica to OnlineReplica (state.change.logger) policy-db-migrator | > upgrade 0700-toscapolicytype.sql grafana | logger=migrator t=2024-04-24T08:58:13.343723032Z level=info msg="Executing migration" id="Create alert_rule_tag table v2" policy-pap | metric.reporters = [] kafka | [2024-04-24 08:58:51,352] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for 
partition __consumer_offsets-13 from NewReplica to OnlineReplica (state.change.logger) policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-24T08:58:13.345058388Z level=info msg="Migration successfully executed" id="Create alert_rule_tag table v2" duration=1.337886ms policy-pap | metrics.num.samples = 2 kafka | [2024-04-24 08:58:51,352] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-30 from NewReplica to OnlineReplica (state.change.logger) policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicytype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCAPOLICYTYPE (name, version)) grafana | logger=migrator t=2024-04-24T08:58:13.352229437Z level=info msg="Executing migration" id="create index UQE_alert_rule_tag_alert_id_tag_id - Add unique index alert_rule_tag.alert_id_tag_id V2" policy-pap | metrics.recording.level = INFO kafka | [2024-04-24 08:58:51,353] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-35 from NewReplica to OnlineReplica (state.change.logger) policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-24T08:58:13.353669985Z level=info msg="Migration successfully executed" id="create index UQE_alert_rule_tag_alert_id_tag_id - Add unique index alert_rule_tag.alert_id_tag_id V2" duration=1.440318ms policy-pap | metrics.sample.window.ms = 30000 kafka | [2024-04-24 08:58:51,353] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-39 from NewReplica to OnlineReplica (state.change.logger) policy-db-migrator | grafana | logger=migrator t=2024-04-24T08:58:13.357723633Z level=info msg="Executing migration" id="copy alert_rule_tag v1 to v2" policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] kafka | [2024-04-24 08:58:51,353] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-12 from NewReplica to OnlineReplica (state.change.logger) policy-db-migrator | grafana | logger=migrator t=2024-04-24T08:58:13.358185532Z level=info msg="Migration successfully executed" id="copy alert_rule_tag v1 to v2" duration=459.139µs policy-pap | receive.buffer.bytes = 65536 kafka | [2024-04-24 08:58:51,353] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-27 from NewReplica to OnlineReplica (state.change.logger) policy-db-migrator | > upgrade 0710-toscapolicytypes.sql grafana | logger=migrator t=2024-04-24T08:58:13.361940454Z level=info msg="Executing migration" id="drop table alert_rule_tag_v1" policy-pap | reconnect.backoff.max.ms = 1000 kafka | [2024-04-24 08:58:51,353] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-45 from NewReplica to OnlineReplica (state.change.logger) policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-24T08:58:13.362641048Z level=info msg="Migration successfully executed" id="drop table alert_rule_tag_v1" duration=700.184µs policy-pap | reconnect.backoff.ms = 50 kafka | [2024-04-24 08:58:51,353] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-19 from NewReplica to OnlineReplica (state.change.logger) policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicytypes (name VARCHAR(120) NOT 
NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCAPOLICYTYPES (name, version)) grafana | logger=migrator t=2024-04-24T08:58:13.369178595Z level=info msg="Executing migration" id="create alert_notification table v1" policy-pap | request.timeout.ms = 30000 kafka | [2024-04-24 08:58:51,353] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-49 from NewReplica to OnlineReplica (state.change.logger) policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-24T08:58:13.369929529Z level=info msg="Migration successfully executed" id="create alert_notification table v1" duration=750.834µs policy-pap | retry.backoff.ms = 100 kafka | [2024-04-24 08:58:51,353] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-40 from NewReplica to OnlineReplica (state.change.logger) policy-db-migrator | grafana | logger=migrator t=2024-04-24T08:58:13.374583149Z level=info msg="Executing migration" id="Add column is_default" policy-pap | sasl.client.callback.handler.class = null kafka | [2024-04-24 08:58:51,353] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-41 from NewReplica to OnlineReplica (state.change.logger) policy-db-migrator | grafana | logger=migrator t=2024-04-24T08:58:13.38032529Z level=info msg="Migration successfully executed" id="Add column is_default" duration=5.742111ms policy-pap | sasl.jaas.config = null kafka | [2024-04-24 08:58:51,354] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-38 from NewReplica to OnlineReplica (state.change.logger) policy-db-migrator | > upgrade 0720-toscapolicytypes_toscapolicytype.sql grafana | logger=migrator t=2024-04-24T08:58:13.384300006Z level=info msg="Executing migration" id="Add column frequency" policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit kafka | [2024-04-24 08:58:51,354] INFO [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 for 51 partitions (state.change.logger) policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-24T08:58:13.387711882Z level=info msg="Migration successfully executed" id="Add column frequency" duration=3.411056ms policy-pap | sasl.kerberos.min.time.before.relogin = 60000 kafka | [2024-04-24 08:58:51,354] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-8 from NewReplica to OnlineReplica (state.change.logger) policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicytypes_toscapolicytype (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCAPOLICYTYPES_TOSCAPOLICYTYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) grafana | logger=migrator t=2024-04-24T08:58:13.393871371Z level=info msg="Executing migration" id="Add column send_reminder" policy-pap | sasl.kerberos.service.name = null kafka | [2024-04-24 08:58:51,355] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-7 from NewReplica to OnlineReplica (state.change.logger) policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-24T08:58:13.397273037Z level=info msg="Migration successfully executed" id="Add column send_reminder" duration=3.401166ms policy-pap | sasl.kerberos.ticket.renew.jitter = 
0.05 kafka | [2024-04-24 08:58:51,355] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-33 from NewReplica to OnlineReplica (state.change.logger) policy-db-migrator | grafana | logger=migrator t=2024-04-24T08:58:13.401504619Z level=info msg="Executing migration" id="Add column disable_resolve_message" policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 kafka | [2024-04-24 08:58:51,355] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-25 from NewReplica to OnlineReplica (state.change.logger) policy-db-migrator | grafana | logger=migrator t=2024-04-24T08:58:13.405282291Z level=info msg="Migration successfully executed" id="Add column disable_resolve_message" duration=3.779332ms policy-pap | sasl.login.callback.handler.class = null kafka | [2024-04-24 08:58:51,355] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='policy-pdp-pap', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-db-migrator | > upgrade 0730-toscaproperty.sql grafana | logger=migrator t=2024-04-24T08:58:13.4082787Z level=info msg="Executing migration" id="add index alert_notification org_id & name" policy-pap | sasl.login.class = null kafka | [2024-04-24 08:58:51,355] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=13, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-24T08:58:13.409080435Z level=info msg="Migration successfully executed" id="add index alert_notification org_id & name" duration=801.355µs policy-pap | sasl.login.connect.timeout.ms = null kafka | [2024-04-24 08:58:51,355] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=46, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-db-migrator | CREATE TABLE IF NOT EXISTS toscaproperty (DEFAULTVALUE VARCHAR(255) DEFAULT NULL, `DESCRIPTION` VARCHAR(255) DEFAULT NULL, ENTRYSCHEMA LONGBLOB DEFAULT NULL, REQUIRED BOOLEAN DEFAULT 0, STATUS INT DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, PRIMARY KEY PK_TOSCAPROPERTY (parentLocalName, localName, parentKeyVersion, parentKeyName)) grafana | logger=migrator t=2024-04-24T08:58:13.414123932Z level=info msg="Executing migration" id="Update alert table charset" policy-pap | sasl.login.read.timeout.ms = null kafka | [2024-04-24 08:58:51,356] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=9, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 
epoch 1 (state.change.logger) policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-24T08:58:13.414146063Z level=info msg="Migration successfully executed" id="Update alert table charset" duration=22.291µs policy-pap | sasl.login.refresh.buffer.seconds = 300 kafka | [2024-04-24 08:58:51,356] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=42, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-db-migrator | grafana | logger=migrator t=2024-04-24T08:58:13.419616628Z level=info msg="Executing migration" id="Update alert_notification table charset" kafka | [2024-04-24 08:58:51,356] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=21, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-db-migrator | grafana | logger=migrator t=2024-04-24T08:58:13.419653149Z level=info msg="Migration successfully executed" id="Update alert_notification table charset" duration=37.251µs policy-pap | sasl.login.refresh.min.period.seconds = 60 kafka | [2024-04-24 08:58:51,356] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=17, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-db-migrator | > upgrade 0740-toscarelationshiptype.sql policy-db-migrator | -------------- policy-pap | sasl.login.refresh.window.factor = 0.8 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarelationshiptype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCARELATIONSHIPTYPE (name, version)) kafka | [2024-04-24 08:58:51,356] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=30, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-pap | sasl.login.refresh.window.jitter = 0.05 policy-db-migrator | -------------- kafka | [2024-04-24 08:58:51,356] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=26, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) grafana | logger=migrator t=2024-04-24T08:58:13.424075324Z level=info msg="Executing migration" id="create notification_journal table v1" policy-pap | sasl.login.retry.backoff.max.ms = 10000 policy-db-migrator | kafka | [2024-04-24 08:58:51,356] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=5, controllerEpoch=1, 
leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) grafana | logger=migrator t=2024-04-24T08:58:13.425226757Z level=info msg="Migration successfully executed" id="create notification_journal table v1" duration=1.151383ms policy-pap | sasl.login.retry.backoff.ms = 100 policy-db-migrator | kafka | [2024-04-24 08:58:51,355] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-31 from NewReplica to OnlineReplica (state.change.logger) grafana | logger=migrator t=2024-04-24T08:58:13.428783655Z level=info msg="Executing migration" id="add index notification_journal org_id & alert_id & notifier_id" policy-pap | sasl.mechanism = GSSAPI policy-db-migrator | > upgrade 0750-toscarelationshiptypes.sql kafka | [2024-04-24 08:58:51,358] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=38, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) grafana | logger=migrator t=2024-04-24T08:58:13.430302865Z level=info msg="Migration successfully executed" id="add index notification_journal org_id & alert_id & notifier_id" duration=1.518769ms policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 policy-db-migrator | -------------- kafka | [2024-04-24 08:58:51,358] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=1, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) grafana | logger=migrator t=2024-04-24T08:58:13.438620915Z level=info msg="Executing migration" id="drop alert_notification_journal" policy-pap | sasl.oauthbearer.expected.audience = null policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarelationshiptypes (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCARELATIONSHIPTYPES (name, version)) kafka | [2024-04-24 08:58:51,358] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-23 from NewReplica to OnlineReplica (state.change.logger) policy-pap | sasl.oauthbearer.expected.issuer = null policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-24T08:58:13.439288018Z level=info msg="Migration successfully executed" id="drop alert_notification_journal" duration=667.253µs kafka | [2024-04-24 08:58:51,358] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=34, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 policy-db-migrator | grafana | logger=migrator t=2024-04-24T08:58:13.443281256Z level=info msg="Executing migration" id="create alert_notification_state table v1" kafka | [2024-04-24 08:58:51,358] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=16, 
controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 policy-db-migrator | grafana | logger=migrator t=2024-04-24T08:58:13.444473618Z level=info msg="Migration successfully executed" id="create alert_notification_state table v1" duration=1.192052ms kafka | [2024-04-24 08:58:51,358] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-10 from NewReplica to OnlineReplica (state.change.logger) policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 policy-db-migrator | > upgrade 0760-toscarelationshiptypes_toscarelationshiptype.sql grafana | logger=migrator t=2024-04-24T08:58:13.449026526Z level=info msg="Executing migration" id="add index alert_notification_state org_id & alert_id & notifier_id" kafka | [2024-04-24 08:58:51,358] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=45, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-pap | sasl.oauthbearer.jwks.endpoint.url = null policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-24T08:58:13.450342942Z level=info msg="Migration successfully executed" id="add index alert_notification_state org_id & alert_id & notifier_id" duration=1.317936ms kafka | [2024-04-24 08:58:51,358] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=12, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-pap | sasl.oauthbearer.scope.claim.name = scope policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarelationshiptypes_toscarelationshiptype (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCARELATIONSHIPTYPES_TOSCARELATIONSHIPTYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) grafana | logger=migrator t=2024-04-24T08:58:13.457208365Z level=info msg="Executing migration" id="Add for to alert table" kafka | [2024-04-24 08:58:51,358] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=41, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-pap | sasl.oauthbearer.sub.claim.name = sub policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-24T08:58:13.462913345Z level=info msg="Migration successfully executed" id="Add for to alert table" duration=5.70406ms kafka | [2024-04-24 08:58:51,358] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=24, controllerEpoch=1, leader=1, 
leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-pap | sasl.oauthbearer.token.endpoint.url = null policy-db-migrator | grafana | logger=migrator t=2024-04-24T08:58:13.466153627Z level=info msg="Executing migration" id="Add column uid in alert_notification" kafka | [2024-04-24 08:58:51,358] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-2 from NewReplica to OnlineReplica (state.change.logger) policy-pap | security.protocol = PLAINTEXT policy-db-migrator | grafana | logger=migrator t=2024-04-24T08:58:13.470053862Z level=info msg="Migration successfully executed" id="Add column uid in alert_notification" duration=3.900195ms kafka | [2024-04-24 08:58:51,358] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=20, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-pap | security.providers = null policy-db-migrator | > upgrade 0770-toscarequirement.sql grafana | logger=migrator t=2024-04-24T08:58:13.47405959Z level=info msg="Executing migration" id="Update uid column values in alert_notification" kafka | [2024-04-24 08:58:51,358] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=49, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-pap | send.buffer.bytes = 131072 policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-24T08:58:13.474224303Z level=info msg="Migration successfully executed" id="Update uid column values in alert_notification" duration=164.573µs kafka | [2024-04-24 08:58:51,358] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-pap | session.timeout.ms = 45000 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarequirement (CAPABILITY VARCHAR(255) NULL, `DESCRIPTION` VARCHAR(255) NULL, NODE VARCHAR(255) NULL, RELATIONSHIP VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, type_name VARCHAR(255) NULL, type_version VARCHAR(255) NULL, PRIMARY KEY PK_TOSCAREQUIREMENT (name, version)) grafana | logger=migrator t=2024-04-24T08:58:13.481118106Z level=info msg="Executing migration" id="Add unique index alert_notification_org_id_uid" kafka | [2024-04-24 08:58:51,358] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=29, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-pap | socket.connection.setup.timeout.max.ms = 30000 
policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-24T08:58:13.481980913Z level=info msg="Migration successfully executed" id="Add unique index alert_notification_org_id_uid" duration=863.707µs kafka | [2024-04-24 08:58:51,359] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=25, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-pap | socket.connection.setup.timeout.ms = 10000 policy-db-migrator | grafana | logger=migrator t=2024-04-24T08:58:13.48651428Z level=info msg="Executing migration" id="Remove unique index org_id_name" kafka | [2024-04-24 08:58:51,359] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=8, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-pap | ssl.cipher.suites = null policy-db-migrator | grafana | logger=migrator t=2024-04-24T08:58:13.48805126Z level=info msg="Migration successfully executed" id="Remove unique index org_id_name" duration=1.53705ms kafka | [2024-04-24 08:58:51,359] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=37, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] policy-db-migrator | > upgrade 0780-toscarequirements.sql grafana | logger=migrator t=2024-04-24T08:58:13.49167283Z level=info msg="Executing migration" id="Add column secure_settings in alert_notification" kafka | [2024-04-24 08:58:51,359] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=4, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-pap | ssl.endpoint.identification.algorithm = https policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-24T08:58:13.496144506Z level=info msg="Migration successfully executed" id="Add column secure_settings in alert_notification" duration=4.471686ms policy-pap | ssl.engine.factory.class = null kafka | [2024-04-24 08:58:51,359] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=33, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarequirements (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCAREQUIREMENTS (name, version)) grafana | logger=migrator t=2024-04-24T08:58:13.501160043Z level=info msg="Executing migration" id="alter alert.settings to mediumtext" policy-pap | ssl.key.password = null kafka | [2024-04-24 08:58:51,359] TRACE 
[Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=15, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-24T08:58:13.501216614Z level=info msg="Migration successfully executed" id="alter alert.settings to mediumtext" duration=56.702µs policy-pap | ssl.keymanager.algorithm = SunX509 kafka | [2024-04-24 08:58:51,359] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=48, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-db-migrator | grafana | logger=migrator t=2024-04-24T08:58:13.50515778Z level=info msg="Executing migration" id="Add non-unique index alert_notification_state_alert_id" policy-pap | ssl.keystore.certificate.chain = null kafka | [2024-04-24 08:58:51,359] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=11, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-db-migrator | grafana | logger=migrator t=2024-04-24T08:58:13.505954925Z level=info msg="Migration successfully executed" id="Add non-unique index alert_notification_state_alert_id" duration=796.695µs policy-pap | ssl.keystore.key = null kafka | [2024-04-24 08:58:51,359] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=44, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-db-migrator | > upgrade 0790-toscarequirements_toscarequirement.sql grafana | logger=migrator t=2024-04-24T08:58:13.510049584Z level=info msg="Executing migration" id="Add non-unique index alert_rule_tag_alert_id" policy-pap | ssl.keystore.location = null kafka | [2024-04-24 08:58:51,359] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=23, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-24T08:58:13.510914641Z level=info msg="Migration successfully executed" id="Add non-unique index alert_rule_tag_alert_id" duration=864.817µs policy-pap | ssl.keystore.password = null kafka | [2024-04-24 08:58:51,359] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=19, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-db-migrator | CREATE 
TABLE IF NOT EXISTS toscarequirements_toscarequirement (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCAREQUIREMENTS_TOSCAREQUIREMENT (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) grafana | logger=migrator t=2024-04-24T08:58:13.517201332Z level=info msg="Executing migration" id="Drop old annotation table v4" policy-pap | ssl.keystore.type = JKS kafka | [2024-04-24 08:58:51,359] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=32, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-24T08:58:13.517326385Z level=info msg="Migration successfully executed" id="Drop old annotation table v4" duration=126.083µs policy-pap | ssl.protocol = TLSv1.3 kafka | [2024-04-24 08:58:51,359] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=28, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-db-migrator | grafana | logger=migrator t=2024-04-24T08:58:13.521314822Z level=info msg="Executing migration" id="create annotation table v5" policy-pap | ssl.provider = null kafka | [2024-04-24 08:58:51,359] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=7, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-db-migrator | grafana | logger=migrator t=2024-04-24T08:58:13.522188249Z level=info msg="Migration successfully executed" id="create annotation table v5" duration=873.387µs policy-pap | ssl.secure.random.implementation = null kafka | [2024-04-24 08:58:51,359] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=40, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-db-migrator | > upgrade 0800-toscaservicetemplate.sql grafana | logger=migrator t=2024-04-24T08:58:13.526358359Z level=info msg="Executing migration" id="add index annotation 0 v3" policy-pap | ssl.trustmanager.algorithm = PKIX kafka | [2024-04-24 08:58:51,359] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=3, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-24T08:58:13.527293808Z level=info msg="Migration 
successfully executed" id="add index annotation 0 v3" duration=933.959µs policy-pap | ssl.truststore.certificates = null kafka | [2024-04-24 08:58:51,359] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=36, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-db-migrator | CREATE TABLE IF NOT EXISTS toscaservicetemplate (`DESCRIPTION` VARCHAR(255) NULL, TOSCADEFINITIONSVERSION VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, capabilityTypesVersion VARCHAR(20) NULL, capabilityTypesName VARCHAR(120) NULL, dataTypesName VARCHAR(120) NULL, dataTypesVersion VARCHAR(20) NULL, nodeTypesVersion VARCHAR(20) NULL, nodeTypesName VARCHAR(120) NULL, policyTypesName VARCHAR(120) NULL, policyTypesVersion VARCHAR(20) NULL, relationshipTypesVersion VARCHAR(20) NULL, relationshipTypesName VARCHAR(120) NULL, topologyTemplateLocalName VARCHAR(120) NULL, topologyTemplateParentKeyName VARCHAR(120) NULL, topologyTemplateParentKeyVersion VARCHAR(15) NULL, topologyTemplateParentLocalName VARCHAR(120) NULL, PRIMARY KEY PK_TOSCASERVICETEMPLATE (name, version)) grafana | logger=migrator t=2024-04-24T08:58:13.532883906Z level=info msg="Executing migration" id="add index annotation 1 v3" policy-pap | ssl.truststore.location = null kafka | [2024-04-24 08:58:51,359] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=47, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-24T08:58:13.534206391Z level=info msg="Migration successfully executed" id="add index annotation 1 v3" duration=1.317475ms policy-pap | ssl.truststore.password = null kafka | [2024-04-24 08:58:51,358] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-17 from NewReplica to OnlineReplica (state.change.logger) policy-db-migrator | grafana | logger=migrator t=2024-04-24T08:58:13.53881206Z level=info msg="Executing migration" id="add index annotation 2 v3" policy-pap | ssl.truststore.type = JKS kafka | [2024-04-24 08:58:51,359] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=14, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-db-migrator | grafana | logger=migrator t=2024-04-24T08:58:13.539606905Z level=info msg="Migration successfully executed" id="add index annotation 2 v3" duration=792.865µs policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer kafka | [2024-04-24 08:58:51,359] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=43, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, 
leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-db-migrator | > upgrade 0810-toscatopologytemplate.sql grafana | logger=migrator t=2024-04-24T08:58:13.543787006Z level=info msg="Executing migration" id="add index annotation 3 v3" policy-pap | kafka | [2024-04-24 08:58:51,360] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=10, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-pap | [2024-04-24T08:58:50.651+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 kafka | [2024-04-24 08:58:51,360] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=22, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-24T08:58:13.545155822Z level=info msg="Migration successfully executed" id="add index annotation 3 v3" duration=1.366766ms policy-pap | [2024-04-24T08:58:50.651+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 kafka | [2024-04-24 08:58:51,360] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=18, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) grafana | logger=migrator t=2024-04-24T08:58:13.551545265Z level=info msg="Executing migration" id="add index annotation 4 v3" policy-pap | [2024-04-24T08:58:50.651+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1713949130651 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscatopologytemplate (`description` VARCHAR(255) NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, nodeTemplatessVersion VARCHAR(20) NULL, nodeTemplatesName VARCHAR(120) NULL, policyVersion VARCHAR(20) NULL, policyName VARCHAR(120) NULL, PRIMARY KEY PK_TOSCATOPOLOGYTEMPLATE (parentLocalName, localName, parentKeyVersion, parentKeyName)) kafka | [2024-04-24 08:58:51,360] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=31, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) grafana | logger=migrator t=2024-04-24T08:58:13.552982783Z level=info msg="Migration successfully executed" id="add index annotation 4 v3" duration=1.437068ms policy-pap | [2024-04-24T08:58:50.651+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Subscribed to topic(s): policy-pdp-pap policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-24T08:58:13.556736276Z level=info msg="Executing migration" id="Update annotation table charset" kafka | [2024-04-24 08:58:51,360] TRACE [Broker id=1] Received LeaderAndIsr request 
LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=27, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-pap | [2024-04-24T08:58:50.652+00:00|INFO|ServiceManager|main] Policy PAP starting topics policy-db-migrator | grafana | logger=migrator t=2024-04-24T08:58:13.556761877Z level=info msg="Migration successfully executed" id="Update annotation table charset" duration=25.971µs kafka | [2024-04-24 08:58:51,359] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-4 from NewReplica to OnlineReplica (state.change.logger) policy-pap | [2024-04-24T08:58:50.652+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=policy-pap, consumerInstance=ecdaf812-364f-4159-9e55-85c348169a99, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-heartbeat,5,main], topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-heartbeat, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting policy-db-migrator | grafana | logger=migrator t=2024-04-24T08:58:13.562772442Z level=info msg="Executing migration" id="Add column region_id to annotation table" kafka | [2024-04-24 08:58:51,360] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=39, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-pap | [2024-04-24T08:58:50.652+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=c2598a93-7b5f-4e4e-b23a-b864ffd9a18a, consumerInstance=policy-pap, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-pdp-pap,5,main], topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting policy-db-migrator | > upgrade 0820-toscatrigger.sql grafana | logger=migrator t=2024-04-24T08:58:13.56888132Z level=info msg="Migration successfully executed" id="Add column region_id to annotation table" duration=6.107248ms kafka | [2024-04-24 08:58:51,360] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=6, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-pap | [2024-04-24T08:58:50.652+00:00|INFO|InlineBusTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink 
[partitionId=93375c45-af5c-44c3-a127-0d1a90ab70ea, alive=false, publisher=null]]: starting policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-24T08:58:13.573667463Z level=info msg="Executing migration" id="Drop category_id index" kafka | [2024-04-24 08:58:51,360] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=35, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-pap | [2024-04-24T08:58:50.666+00:00|INFO|ProducerConfig|main] ProducerConfig values: policy-db-migrator | CREATE TABLE IF NOT EXISTS toscatrigger (ACTION VARCHAR(255) NULL, toscaCondition LONGBLOB DEFAULT NULL, toscaConstraint LONGBLOB DEFAULT NULL, `DESCRIPTION` VARCHAR(255) NULL, EVALUATIONS INT DEFAULT NULL, EVENTTYPE VARCHAR(255) NULL, METHOD VARCHAR(255) NULL, `PERIOD` LONGBLOB DEFAULT NULL, SCHEDULE LONGBLOB DEFAULT NULL, TARGETFILTER LONGBLOB DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_TOSCATRIGGER (parentLocalName, localName, parentKeyVersion, parentKeyName)) grafana | logger=migrator t=2024-04-24T08:58:13.574804965Z level=info msg="Migration successfully executed" id="Drop category_id index" duration=1.135592ms kafka | [2024-04-24 08:58:51,360] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=2, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-pap | acks = -1 policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-24T08:58:13.578814092Z level=info msg="Executing migration" id="Add column tags to annotation table" kafka | [2024-04-24 08:58:51,360] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-15 from NewReplica to OnlineReplica (state.change.logger) policy-pap | auto.include.jmx.reporter = true policy-db-migrator | grafana | logger=migrator t=2024-04-24T08:58:13.58492776Z level=info msg="Migration successfully executed" id="Add column tags to annotation table" duration=6.111818ms kafka | [2024-04-24 08:58:51,361] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-26 from NewReplica to OnlineReplica (state.change.logger) grafana | logger=migrator t=2024-04-24T08:58:13.588971329Z level=info msg="Executing migration" id="Create annotation_tag table v2" policy-pap | batch.size = 16384 policy-db-migrator | kafka | [2024-04-24 08:58:51,361] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-3 from NewReplica to OnlineReplica (state.change.logger) grafana | logger=migrator t=2024-04-24T08:58:13.589760454Z level=info msg="Migration successfully executed" id="Create annotation_tag table v2" duration=788.815µs policy-pap | bootstrap.servers = [kafka:9092] policy-db-migrator | > upgrade 0830-FK_ToscaNodeTemplate_capabilitiesName.sql kafka | [2024-04-24 08:58:51,361] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger) grafana | logger=migrator 
t=2024-04-24T08:58:13.596180918Z level=info msg="Executing migration" id="Add unique index annotation_tag.annotation_id_tag_id" policy-pap | buffer.memory = 33554432 policy-db-migrator | -------------- kafka | [2024-04-24 08:58:51,396] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-3 (state.change.logger) grafana | logger=migrator t=2024-04-24T08:58:13.597115905Z level=info msg="Migration successfully executed" id="Add unique index annotation_tag.annotation_id_tag_id" duration=931.187µs policy-pap | client.dns.lookup = use_all_dns_ips policy-db-migrator | CREATE INDEX FK_ToscaNodeTemplate_capabilitiesName ON toscanodetemplate(capabilitiesName, capabilitiesVersion) kafka | [2024-04-24 08:58:51,396] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-18 (state.change.logger) grafana | logger=migrator t=2024-04-24T08:58:13.601496381Z level=info msg="Executing migration" id="drop index UQE_annotation_tag_annotation_id_tag_id - v2" policy-pap | client.id = producer-1 policy-db-migrator | -------------- kafka | [2024-04-24 08:58:51,396] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-41 (state.change.logger) grafana | logger=migrator t=2024-04-24T08:58:13.602776605Z level=info msg="Migration successfully executed" id="drop index UQE_annotation_tag_annotation_id_tag_id - v2" duration=1.280743ms policy-pap | compression.type = none policy-db-migrator | kafka | [2024-04-24 08:58:51,396] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-10 (state.change.logger) policy-pap | connections.max.idle.ms = 540000 grafana | logger=migrator t=2024-04-24T08:58:13.608842492Z level=info msg="Executing migration" id="Rename table annotation_tag to annotation_tag_v2 - v2" policy-db-migrator | kafka | [2024-04-24 08:58:51,396] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-33 (state.change.logger) policy-pap | delivery.timeout.ms = 120000 grafana | logger=migrator t=2024-04-24T08:58:13.624119747Z level=info msg="Migration successfully executed" id="Rename table annotation_tag to annotation_tag_v2 - v2" duration=15.282835ms policy-db-migrator | > upgrade 0840-FK_ToscaNodeTemplate_requirementsName.sql kafka | [2024-04-24 08:58:51,396] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-48 (state.change.logger) policy-pap | enable.idempotence = true grafana | logger=migrator t=2024-04-24T08:58:13.628457181Z level=info msg="Executing migration" id="Create annotation_tag table v3" policy-db-migrator | -------------- kafka | [2024-04-24 08:58:51,396] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-19 (state.change.logger) policy-pap | interceptor.classes = [] grafana | logger=migrator t=2024-04-24T08:58:13.629014842Z level=info msg="Migration successfully executed" id="Create annotation_tag table v3" duration=557.681µs policy-db-migrator 
| CREATE INDEX FK_ToscaNodeTemplate_requirementsName ON toscanodetemplate(requirementsName, requirementsVersion) kafka | [2024-04-24 08:58:51,396] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-34 (state.change.logger) policy-pap | key.serializer = class org.apache.kafka.common.serialization.StringSerializer grafana | logger=migrator t=2024-04-24T08:58:13.633576839Z level=info msg="Executing migration" id="create index UQE_annotation_tag_annotation_id_tag_id - Add unique index annotation_tag.annotation_id_tag_id V3" policy-db-migrator | -------------- kafka | [2024-04-24 08:58:51,396] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-4 (state.change.logger) policy-pap | linger.ms = 0 grafana | logger=migrator t=2024-04-24T08:58:13.63463645Z level=info msg="Migration successfully executed" id="create index UQE_annotation_tag_annotation_id_tag_id - Add unique index annotation_tag.annotation_id_tag_id V3" duration=1.057771ms policy-db-migrator | kafka | [2024-04-24 08:58:51,396] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-11 (state.change.logger) policy-pap | max.block.ms = 60000 grafana | logger=migrator t=2024-04-24T08:58:13.63878944Z level=info msg="Executing migration" id="copy annotation_tag v2 to v3" policy-db-migrator | kafka | [2024-04-24 08:58:51,396] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-26 (state.change.logger) policy-pap | max.in.flight.requests.per.connection = 5 grafana | logger=migrator t=2024-04-24T08:58:13.63925096Z level=info msg="Migration successfully executed" id="copy annotation_tag v2 to v3" duration=461.34µs policy-db-migrator | > upgrade 0850-FK_ToscaNodeType_requirementsName.sql kafka | [2024-04-24 08:58:51,396] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-49 (state.change.logger) policy-pap | max.request.size = 1048576 grafana | logger=migrator t=2024-04-24T08:58:13.644047812Z level=info msg="Executing migration" id="drop table annotation_tag_v2" policy-db-migrator | -------------- kafka | [2024-04-24 08:58:51,397] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-39 (state.change.logger) policy-pap | metadata.max.age.ms = 300000 grafana | logger=migrator t=2024-04-24T08:58:13.644564952Z level=info msg="Migration successfully executed" id="drop table annotation_tag_v2" duration=516.741µs policy-db-migrator | CREATE INDEX FK_ToscaNodeType_requirementsName ON toscanodetype(requirementsName, requirementsVersion) kafka | [2024-04-24 08:58:51,397] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-9 (state.change.logger) policy-pap | metadata.max.idle.ms = 300000 grafana | logger=migrator t=2024-04-24T08:58:13.650323483Z level=info msg="Executing migration" id="Update alert annotations and set TEXT to empty" policy-db-migrator | -------------- kafka | [2024-04-24 
08:58:51,397] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-24 (state.change.logger) policy-pap | metric.reporters = [] grafana | logger=migrator t=2024-04-24T08:58:13.650621849Z level=info msg="Migration successfully executed" id="Update alert annotations and set TEXT to empty" duration=299.356µs policy-db-migrator | kafka | [2024-04-24 08:58:51,397] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-31 (state.change.logger) policy-pap | metrics.num.samples = 2 grafana | logger=migrator t=2024-04-24T08:58:13.65688418Z level=info msg="Executing migration" id="Add created time to annotation table" policy-db-migrator | kafka | [2024-04-24 08:58:51,397] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-46 (state.change.logger) policy-pap | metrics.recording.level = INFO grafana | logger=migrator t=2024-04-24T08:58:13.663363525Z level=info msg="Migration successfully executed" id="Add created time to annotation table" duration=6.475695ms policy-db-migrator | > upgrade 0860-FK_ToscaServiceTemplate_capabilityTypesName.sql kafka | [2024-04-24 08:58:51,397] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-1 (state.change.logger) policy-pap | metrics.sample.window.ms = 30000 grafana | logger=migrator t=2024-04-24T08:58:13.668484673Z level=info msg="Executing migration" id="Add updated time to annotation table" policy-db-migrator | -------------- kafka | [2024-04-24 08:58:51,397] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-16 (state.change.logger) policy-pap | partitioner.adaptive.partitioning.enable = true grafana | logger=migrator t=2024-04-24T08:58:13.6724248Z level=info msg="Migration successfully executed" id="Add updated time to annotation table" duration=3.940767ms policy-db-migrator | CREATE INDEX FK_ToscaServiceTemplate_capabilityTypesName ON toscaservicetemplate(capabilityTypesName, capabilityTypesVersion) kafka | [2024-04-24 08:58:51,397] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-2 (state.change.logger) policy-pap | partitioner.availability.timeout.ms = 0 grafana | logger=migrator t=2024-04-24T08:58:13.675397407Z level=info msg="Executing migration" id="Add index for created in annotation table" policy-db-migrator | -------------- kafka | [2024-04-24 08:58:51,397] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-25 (state.change.logger) policy-pap | partitioner.class = null grafana | logger=migrator t=2024-04-24T08:58:13.676411157Z level=info msg="Migration successfully executed" id="Add index for created in annotation table" duration=1.011ms policy-db-migrator | kafka | [2024-04-24 08:58:51,397] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-40 (state.change.logger) policy-pap | 
partitioner.ignore.keys = false grafana | logger=migrator t=2024-04-24T08:58:13.681662698Z level=info msg="Executing migration" id="Add index for updated in annotation table" policy-db-migrator | kafka | [2024-04-24 08:58:51,397] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-47 (state.change.logger) policy-pap | receive.buffer.bytes = 32768 grafana | logger=migrator t=2024-04-24T08:58:13.682996964Z level=info msg="Migration successfully executed" id="Add index for updated in annotation table" duration=1.333866ms policy-db-migrator | > upgrade 0870-FK_ToscaServiceTemplate_dataTypesName.sql kafka | [2024-04-24 08:58:51,397] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-17 (state.change.logger) policy-pap | reconnect.backoff.max.ms = 1000 grafana | logger=migrator t=2024-04-24T08:58:13.686366159Z level=info msg="Executing migration" id="Convert existing annotations from seconds to milliseconds" policy-db-migrator | -------------- policy-pap | reconnect.backoff.ms = 50 kafka | [2024-04-24 08:58:51,397] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-32 (state.change.logger) grafana | logger=migrator t=2024-04-24T08:58:13.686690775Z level=info msg="Migration successfully executed" id="Convert existing annotations from seconds to milliseconds" duration=324.336µs policy-db-migrator | CREATE INDEX FK_ToscaServiceTemplate_dataTypesName ON toscaservicetemplate(dataTypesName, dataTypesVersion) policy-pap | request.timeout.ms = 30000 kafka | [2024-04-24 08:58:51,397] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-37 (state.change.logger) grafana | logger=migrator t=2024-04-24T08:58:13.692559819Z level=info msg="Executing migration" id="Add epoch_end column" policy-db-migrator | -------------- policy-pap | retries = 2147483647 kafka | [2024-04-24 08:58:51,397] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-7 (state.change.logger) grafana | logger=migrator t=2024-04-24T08:58:13.696707729Z level=info msg="Migration successfully executed" id="Add epoch_end column" duration=4.14698ms policy-db-migrator | policy-pap | retry.backoff.ms = 100 kafka | [2024-04-24 08:58:51,397] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-22 (state.change.logger) grafana | logger=migrator t=2024-04-24T08:58:13.75945845Z level=info msg="Executing migration" id="Add index for epoch_end" policy-db-migrator | policy-pap | sasl.client.callback.handler.class = null kafka | [2024-04-24 08:58:51,397] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-29 (state.change.logger) policy-db-migrator | > upgrade 0880-FK_ToscaServiceTemplate_nodeTypesName.sql policy-pap | sasl.jaas.config = null grafana | logger=migrator t=2024-04-24T08:58:13.761250085Z level=info msg="Migration successfully executed" id="Add index for epoch_end" 
duration=1.791795ms kafka | [2024-04-24 08:58:51,397] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-44 (state.change.logger) policy-db-migrator | -------------- policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit grafana | logger=migrator t=2024-04-24T08:58:13.770169598Z level=info msg="Executing migration" id="Make epoch_end the same as epoch" kafka | [2024-04-24 08:58:51,397] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-14 (state.change.logger) policy-db-migrator | CREATE INDEX FK_ToscaServiceTemplate_nodeTypesName ON toscaservicetemplate(nodeTypesName, nodeTypesVersion) policy-pap | sasl.kerberos.min.time.before.relogin = 60000 grafana | logger=migrator t=2024-04-24T08:58:13.770438493Z level=info msg="Migration successfully executed" id="Make epoch_end the same as epoch" duration=269.094µs kafka | [2024-04-24 08:58:51,397] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-23 (state.change.logger) policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-24T08:58:13.779928256Z level=info msg="Executing migration" id="Move region to single row" policy-db-migrator | policy-pap | sasl.kerberos.service.name = null kafka | [2024-04-24 08:58:51,397] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-38 (state.change.logger) grafana | logger=migrator t=2024-04-24T08:58:13.780459576Z level=info msg="Migration successfully executed" id="Move region to single row" duration=535.551µs policy-db-migrator | policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 kafka | [2024-04-24 08:58:51,397] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-8 (state.change.logger) grafana | logger=migrator t=2024-04-24T08:58:13.788245156Z level=info msg="Executing migration" id="Remove index org_id_epoch from annotation table" policy-db-migrator | > upgrade 0890-FK_ToscaServiceTemplate_policyTypesName.sql policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 kafka | [2024-04-24 08:58:51,397] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition policy-pdp-pap-0 (state.change.logger) grafana | logger=migrator t=2024-04-24T08:58:13.797910863Z level=info msg="Migration successfully executed" id="Remove index org_id_epoch from annotation table" duration=9.664467ms policy-db-migrator | -------------- policy-pap | sasl.login.callback.handler.class = null kafka | [2024-04-24 08:58:51,397] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-45 (state.change.logger) grafana | logger=migrator t=2024-04-24T08:58:13.837173971Z level=info msg="Executing migration" id="Remove index org_id_dashboard_id_panel_id_epoch from annotation table" policy-db-migrator | CREATE INDEX FK_ToscaServiceTemplate_policyTypesName ON toscaservicetemplate(policyTypesName, policyTypesVersion) policy-pap | sasl.login.class = null kafka | [2024-04-24 08:58:51,397] TRACE 
[Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-15 (state.change.logger) grafana | logger=migrator t=2024-04-24T08:58:13.838535167Z level=info msg="Migration successfully executed" id="Remove index org_id_dashboard_id_panel_id_epoch from annotation table" duration=1.363556ms policy-db-migrator | -------------- policy-pap | sasl.login.connect.timeout.ms = null kafka | [2024-04-24 08:58:51,397] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-30 (state.change.logger) grafana | logger=migrator t=2024-04-24T08:58:13.844386011Z level=info msg="Executing migration" id="Add index for org_id_dashboard_id_epoch_end_epoch on annotation table" policy-db-migrator | policy-pap | sasl.login.read.timeout.ms = null kafka | [2024-04-24 08:58:51,397] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-0 (state.change.logger) grafana | logger=migrator t=2024-04-24T08:58:13.845786088Z level=info msg="Migration successfully executed" id="Add index for org_id_dashboard_id_epoch_end_epoch on annotation table" duration=1.399967ms policy-db-migrator | kafka | [2024-04-24 08:58:51,397] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-35 (state.change.logger) grafana | logger=migrator t=2024-04-24T08:58:13.8500702Z level=info msg="Executing migration" id="Add index for org_id_epoch_end_epoch on annotation table" policy-db-migrator | > upgrade 0900-FK_ToscaServiceTemplate_relationshipTypesName.sql policy-pap | sasl.login.refresh.buffer.seconds = 300 kafka | [2024-04-24 08:58:51,397] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-5 (state.change.logger) grafana | logger=migrator t=2024-04-24T08:58:13.851484618Z level=info msg="Migration successfully executed" id="Add index for org_id_epoch_end_epoch on annotation table" duration=1.414718ms policy-db-migrator | -------------- policy-pap | sasl.login.refresh.min.period.seconds = 60 kafka | [2024-04-24 08:58:51,397] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-20 (state.change.logger) grafana | logger=migrator t=2024-04-24T08:58:13.85781549Z level=info msg="Executing migration" id="Remove index org_id_epoch_epoch_end from annotation table" policy-db-migrator | CREATE INDEX FK_ToscaServiceTemplate_relationshipTypesName ON toscaservicetemplate(relationshipTypesName, relationshipTypesVersion) policy-pap | sasl.login.refresh.window.factor = 0.8 kafka | [2024-04-24 08:58:51,397] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-27 (state.change.logger) grafana | logger=migrator t=2024-04-24T08:58:13.859681885Z level=info msg="Migration successfully executed" id="Remove index org_id_epoch_epoch_end from annotation table" duration=1.864805ms policy-db-migrator | -------------- policy-pap | sasl.login.refresh.window.jitter = 0.05 kafka | [2024-04-24 08:58:51,397] TRACE [Broker id=1] Handling 
LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-42 (state.change.logger) grafana | logger=migrator t=2024-04-24T08:58:13.862889548Z level=info msg="Executing migration" id="Add index for alert_id on annotation table" policy-db-migrator | policy-pap | sasl.login.retry.backoff.max.ms = 10000 kafka | [2024-04-24 08:58:51,397] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-12 (state.change.logger) grafana | logger=migrator t=2024-04-24T08:58:13.864245924Z level=info msg="Migration successfully executed" id="Add index for alert_id on annotation table" duration=1.356956ms policy-db-migrator | policy-pap | sasl.login.retry.backoff.ms = 100 kafka | [2024-04-24 08:58:51,397] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-21 (state.change.logger) grafana | logger=migrator t=2024-04-24T08:58:13.877376078Z level=info msg="Executing migration" id="Increase tags column to length 4096" policy-db-migrator | > upgrade 0910-FK_ToscaTopologyTemplate_nodeTemplatesName.sql policy-pap | sasl.mechanism = GSSAPI kafka | [2024-04-24 08:58:51,397] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-36 (state.change.logger) grafana | logger=migrator t=2024-04-24T08:58:13.87750599Z level=info msg="Migration successfully executed" id="Increase tags column to length 4096" duration=131.442µs policy-db-migrator | -------------- policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 kafka | [2024-04-24 08:58:51,397] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-6 (state.change.logger) grafana | logger=migrator t=2024-04-24T08:58:13.884562546Z level=info msg="Executing migration" id="create test_data table" policy-db-migrator | CREATE INDEX FK_ToscaTopologyTemplate_nodeTemplatesName ON toscatopologytemplate(nodeTemplatesName, nodeTemplatessVersion) policy-pap | sasl.oauthbearer.expected.audience = null kafka | [2024-04-24 08:58:51,397] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-43 (state.change.logger) grafana | logger=migrator t=2024-04-24T08:58:13.885862641Z level=info msg="Migration successfully executed" id="create test_data table" duration=1.302775ms policy-db-migrator | -------------- policy-pap | sasl.oauthbearer.expected.issuer = null kafka | [2024-04-24 08:58:51,397] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-13 (state.change.logger) grafana | logger=migrator t=2024-04-24T08:58:13.891659833Z level=info msg="Executing migration" id="create dashboard_version table v1" policy-db-migrator | policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 kafka | [2024-04-24 08:58:51,397] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-28 (state.change.logger) grafana | logger=migrator t=2024-04-24T08:58:13.892348756Z level=info 
msg="Migration successfully executed" id="create dashboard_version table v1" duration=688.943µs policy-db-migrator | policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 kafka | [2024-04-24 08:58:51,398] INFO [ReplicaFetcherManager on broker 1] Removed fetcher for partitions HashSet(__consumer_offsets-22, __consumer_offsets-30, __consumer_offsets-25, __consumer_offsets-35, __consumer_offsets-38, __consumer_offsets-13, __consumer_offsets-8, __consumer_offsets-21, __consumer_offsets-4, __consumer_offsets-27, __consumer_offsets-7, __consumer_offsets-9, __consumer_offsets-46, __consumer_offsets-41, __consumer_offsets-33, __consumer_offsets-23, __consumer_offsets-49, __consumer_offsets-47, __consumer_offsets-16, __consumer_offsets-28, __consumer_offsets-31, __consumer_offsets-36, __consumer_offsets-42, __consumer_offsets-3, __consumer_offsets-18, __consumer_offsets-37, policy-pdp-pap-0, __consumer_offsets-15, __consumer_offsets-24, __consumer_offsets-17, __consumer_offsets-48, __consumer_offsets-19, __consumer_offsets-11, __consumer_offsets-2, __consumer_offsets-43, __consumer_offsets-6, __consumer_offsets-14, __consumer_offsets-20, __consumer_offsets-0, __consumer_offsets-44, __consumer_offsets-39, __consumer_offsets-12, __consumer_offsets-45, __consumer_offsets-1, __consumer_offsets-5, __consumer_offsets-26, __consumer_offsets-29, __consumer_offsets-34, __consumer_offsets-10, __consumer_offsets-32, __consumer_offsets-40) (kafka.server.ReplicaFetcherManager) grafana | logger=migrator t=2024-04-24T08:58:13.895864474Z level=info msg="Executing migration" id="add index dashboard_version.dashboard_id" policy-db-migrator | > upgrade 0920-FK_ToscaTopologyTemplate_policyName.sql policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null kafka | [2024-04-24 08:58:51,398] INFO [Broker id=1] Stopped fetchers as part of LeaderAndIsr request correlationId 1 from controller 1 epoch 1 as part of the become-leader transition for 51 partitions (state.change.logger) grafana | logger=migrator t=2024-04-24T08:58:13.896647479Z level=info msg="Migration successfully executed" id="add index dashboard_version.dashboard_id" duration=782.715µs policy-db-migrator | -------------- policy-pap | sasl.oauthbearer.scope.claim.name = scope kafka | [2024-04-24 08:58:51,437] INFO [LogLoader partition=__consumer_offsets-3, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) policy-db-migrator | CREATE INDEX FK_ToscaTopologyTemplate_policyName ON toscatopologytemplate(policyName, policyVersion) grafana | logger=migrator t=2024-04-24T08:58:13.901570004Z level=info msg="Executing migration" id="add unique index dashboard_version.dashboard_id and dashboard_version.version" policy-pap | sasl.oauthbearer.sub.claim.name = sub kafka | [2024-04-24 08:58:51,450] INFO Created log for partition __consumer_offsets-3 in /var/lib/kafka/data/__consumer_offsets-3 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-24T08:58:13.902485453Z level=info msg="Migration successfully executed" id="add unique index dashboard_version.dashboard_id and dashboard_version.version" duration=914.839µs policy-pap | sasl.oauthbearer.token.endpoint.url = null kafka | [2024-04-24 08:58:51,452] INFO [Partition __consumer_offsets-3 broker=1] No checkpointed highwatermark is found for 
partition __consumer_offsets-3 (kafka.cluster.Partition) policy-db-migrator | grafana | logger=migrator t=2024-04-24T08:58:13.906369147Z level=info msg="Executing migration" id="Set dashboard version to 1 where 0" policy-pap | security.protocol = PLAINTEXT kafka | [2024-04-24 08:58:51,453] INFO [Partition __consumer_offsets-3 broker=1] Log loaded for partition __consumer_offsets-3 with initial high watermark 0 (kafka.cluster.Partition) policy-db-migrator | grafana | logger=migrator t=2024-04-24T08:58:13.906640552Z level=info msg="Migration successfully executed" id="Set dashboard version to 1 where 0" duration=271.445µs policy-pap | security.providers = null kafka | [2024-04-24 08:58:51,455] INFO [Broker id=1] Leader __consumer_offsets-3 with topic id Some(3d7pexomSuav55xzl5U12w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) policy-db-migrator | > upgrade 0940-PdpPolicyStatus_PdpGroup.sql grafana | logger=migrator t=2024-04-24T08:58:13.909480058Z level=info msg="Executing migration" id="save existing dashboard data in dashboard_version table v1" policy-pap | send.buffer.bytes = 131072 kafka | [2024-04-24 08:58:51,925] INFO [LogLoader partition=__consumer_offsets-18, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-24T08:58:13.910034608Z level=info msg="Migration successfully executed" id="save existing dashboard data in dashboard_version table v1" duration=554.37µs policy-pap | socket.connection.setup.timeout.max.ms = 30000 kafka | [2024-04-24 08:58:51,925] INFO Created log for partition __consumer_offsets-18 in /var/lib/kafka/data/__consumer_offsets-18 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) policy-db-migrator | CREATE INDEX PdpPolicyStatus_PdpGroup ON pdppolicystatus(PDPGROUP) grafana | logger=migrator t=2024-04-24T08:58:13.912689489Z level=info msg="Executing migration" id="alter dashboard_version.data to mediumtext v1" policy-pap | socket.connection.setup.timeout.ms = 10000 kafka | [2024-04-24 08:58:51,926] INFO [Partition __consumer_offsets-18 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-18 (kafka.cluster.Partition) policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-24T08:58:13.912842382Z level=info msg="Migration successfully executed" id="alter dashboard_version.data to mediumtext v1" duration=151.233µs policy-pap | ssl.cipher.suites = null kafka | [2024-04-24 08:58:51,926] INFO [Partition __consumer_offsets-18 broker=1] Log loaded for partition __consumer_offsets-18 with initial high watermark 0 (kafka.cluster.Partition) policy-db-migrator | grafana | logger=migrator t=2024-04-24T08:58:13.921823875Z level=info msg="Executing migration" id="create team table" policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] kafka | [2024-04-24 08:58:51,926] INFO [Broker id=1] Leader __consumer_offsets-18 with topic id Some(3d7pexomSuav55xzl5U12w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) policy-db-migrator | grafana | logger=migrator t=2024-04-24T08:58:13.922935987Z level=info msg="Migration successfully executed" id="create team table" duration=1.110582ms policy-pap | ssl.endpoint.identification.algorithm = https kafka | [2024-04-24 08:58:51,933] INFO [LogLoader partition=__consumer_offsets-41, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) policy-db-migrator | > upgrade 0950-TscaServiceTemplatetopologyTemplateParentLocalName.sql grafana | logger=migrator t=2024-04-24T08:58:13.928503345Z level=info msg="Executing migration" id="add index team.org_id" policy-pap | ssl.engine.factory.class = null kafka | [2024-04-24 08:58:51,933] INFO Created log for partition __consumer_offsets-41 in /var/lib/kafka/data/__consumer_offsets-41 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-24T08:58:13.930446243Z level=info msg="Migration successfully executed" id="add index team.org_id" duration=1.942057ms policy-pap | ssl.key.password = null kafka | [2024-04-24 08:58:51,933] INFO [Partition __consumer_offsets-41 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-41 (kafka.cluster.Partition) policy-db-migrator | CREATE INDEX TscaServiceTemplatetopologyTemplateParentLocalName ON toscaservicetemplate(topologyTemplateParentLocalName, topologyTemplateLocalName, topologyTemplateParentKeyVersion, topologyTemplateParentKeyName) grafana | logger=migrator t=2024-04-24T08:58:13.934968589Z level=info msg="Executing migration" id="add unique index team_org_id_name" policy-pap | ssl.keymanager.algorithm = SunX509 kafka | [2024-04-24 08:58:51,933] INFO [Partition __consumer_offsets-41 broker=1] Log loaded for partition __consumer_offsets-41 with initial high watermark 0 (kafka.cluster.Partition) policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-24T08:58:13.935979779Z level=info msg="Migration successfully executed" id="add unique index team_org_id_name" duration=1.01095ms policy-pap | ssl.keystore.certificate.chain = null kafka | [2024-04-24 08:58:51,934] INFO [Broker id=1] Leader __consumer_offsets-41 with topic id Some(3d7pexomSuav55xzl5U12w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) policy-db-migrator | grafana | logger=migrator t=2024-04-24T08:58:13.940416815Z level=info msg="Executing migration" id="Add column uid in team" policy-pap | ssl.keystore.key = null kafka | [2024-04-24 08:58:51,939] INFO [LogLoader partition=__consumer_offsets-10, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) policy-db-migrator | grafana | logger=migrator t=2024-04-24T08:58:13.946224007Z level=info msg="Migration successfully executed" id="Add column uid in team" duration=5.807622ms policy-pap | ssl.keystore.location = null kafka | [2024-04-24 08:58:51,940] INFO Created log for partition __consumer_offsets-10 in /var/lib/kafka/data/__consumer_offsets-10 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) policy-db-migrator | > upgrade 0960-FK_ToscaNodeTemplate_capabilitiesName.sql grafana | logger=migrator t=2024-04-24T08:58:13.949094052Z level=info msg="Executing migration" id="Update uid column values in team" policy-pap | ssl.keystore.password = null kafka | [2024-04-24 08:58:51,940] INFO [Partition __consumer_offsets-10 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-10 (kafka.cluster.Partition) policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-24T08:58:13.949239225Z level=info msg="Migration successfully executed" id="Update uid column values in team" duration=144.823µs policy-pap | ssl.keystore.type = JKS kafka | [2024-04-24 08:58:51,940] INFO [Partition __consumer_offsets-10 broker=1] Log loaded for partition __consumer_offsets-10 with initial high watermark 0 (kafka.cluster.Partition) policy-db-migrator | ALTER TABLE toscanodetemplate ADD CONSTRAINT FK_ToscaNodeTemplate_capabilitiesName FOREIGN KEY (capabilitiesName, capabilitiesVersion) REFERENCES toscacapabilityassignments (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT grafana | logger=migrator t=2024-04-24T08:58:13.952007588Z level=info msg="Executing migration" id="Add unique index team_org_id_uid" policy-pap | ssl.protocol = TLSv1.3 kafka | [2024-04-24 08:58:51,940] INFO [Broker id=1] Leader __consumer_offsets-10 with topic id Some(3d7pexomSuav55xzl5U12w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-24T08:58:13.952645481Z level=info msg="Migration successfully executed" id="Add unique index team_org_id_uid" duration=637.523µs policy-pap | ssl.provider = null kafka | [2024-04-24 08:58:51,953] INFO [LogLoader partition=__consumer_offsets-33, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) policy-db-migrator | grafana | logger=migrator t=2024-04-24T08:58:13.959557684Z level=info msg="Executing migration" id="create team member table" policy-pap | ssl.secure.random.implementation = null kafka | [2024-04-24 08:58:51,954] INFO Created log for partition __consumer_offsets-33 in /var/lib/kafka/data/__consumer_offsets-33 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) policy-db-migrator | grafana | logger=migrator t=2024-04-24T08:58:13.960479782Z level=info msg="Migration successfully executed" id="create team member table" duration=919.608µs policy-pap | ssl.trustmanager.algorithm = PKIX kafka | [2024-04-24 08:58:51,954] INFO [Partition __consumer_offsets-33 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-33 (kafka.cluster.Partition) policy-db-migrator | > upgrade 0970-FK_ToscaNodeTemplate_requirementsName.sql grafana | logger=migrator t=2024-04-24T08:58:13.963319596Z level=info msg="Executing migration" id="add index team_member.org_id" policy-pap | ssl.truststore.certificates = null kafka | [2024-04-24 08:58:51,954] INFO [Partition __consumer_offsets-33 broker=1] Log loaded for partition __consumer_offsets-33 with initial high watermark 0 (kafka.cluster.Partition) policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-24T08:58:13.964343197Z level=info msg="Migration successfully executed" id="add index team_member.org_id" duration=1.021181ms policy-pap | ssl.truststore.location = null kafka | [2024-04-24 08:58:51,954] INFO [Broker id=1] Leader __consumer_offsets-33 with topic id Some(3d7pexomSuav55xzl5U12w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) policy-db-migrator | ALTER TABLE toscanodetemplate ADD CONSTRAINT FK_ToscaNodeTemplate_requirementsName FOREIGN KEY (requirementsName, requirementsVersion) REFERENCES toscarequirements (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT grafana | logger=migrator t=2024-04-24T08:58:13.967168191Z level=info msg="Executing migration" id="add unique index team_member_org_id_team_id_user_id" policy-pap | ssl.truststore.password = null kafka | [2024-04-24 08:58:51,963] INFO [LogLoader partition=__consumer_offsets-48, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-24T08:58:13.968226341Z level=info msg="Migration successfully executed" id="add unique index team_member_org_id_team_id_user_id" duration=1.05751ms policy-pap | ssl.truststore.type = JKS kafka | [2024-04-24 08:58:51,964] INFO Created log for partition __consumer_offsets-48 in /var/lib/kafka/data/__consumer_offsets-48 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) policy-db-migrator | grafana | logger=migrator t=2024-04-24T08:58:13.971082037Z level=info msg="Executing migration" id="add index team_member.team_id" policy-pap | transaction.timeout.ms = 60000 kafka | [2024-04-24 08:58:51,964] INFO [Partition __consumer_offsets-48 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-48 (kafka.cluster.Partition) policy-db-migrator | grafana | logger=migrator t=2024-04-24T08:58:13.972063766Z level=info msg="Migration successfully executed" id="add index team_member.team_id" duration=982.009µs policy-pap | transactional.id = null kafka | [2024-04-24 08:58:51,964] INFO [Partition __consumer_offsets-48 broker=1] Log loaded for partition __consumer_offsets-48 with initial high watermark 0 (kafka.cluster.Partition) policy-db-migrator | > upgrade 0980-FK_ToscaNodeType_requirementsName.sql grafana | logger=migrator t=2024-04-24T08:58:13.976362819Z level=info msg="Executing migration" id="Add column email to team table" policy-pap | value.serializer = class org.apache.kafka.common.serialization.StringSerializer kafka | [2024-04-24 08:58:51,965] INFO [Broker id=1] Leader __consumer_offsets-48 with topic id Some(3d7pexomSuav55xzl5U12w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-24T08:58:13.980354036Z level=info msg="Migration successfully executed" id="Add column email to team table" duration=3.989957ms policy-pap | kafka | [2024-04-24 08:58:51,978] INFO [LogLoader partition=__consumer_offsets-19, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) policy-db-migrator | ALTER TABLE toscanodetype ADD CONSTRAINT FK_ToscaNodeType_requirementsName FOREIGN KEY (requirementsName, requirementsVersion) REFERENCES toscarequirements (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT grafana | logger=migrator t=2024-04-24T08:58:13.983335233Z level=info msg="Executing migration" id="Add column external to team_member table" policy-pap | [2024-04-24T08:58:50.675+00:00|INFO|KafkaProducer|main] [Producer clientId=producer-1] Instantiated an idempotent producer. 
kafka | [2024-04-24 08:58:51,979] INFO Created log for partition __consumer_offsets-19 in /var/lib/kafka/data/__consumer_offsets-19 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-24T08:58:13.987237209Z level=info msg="Migration successfully executed" id="Add column external to team_member table" duration=3.900756ms policy-pap | [2024-04-24T08:58:50.688+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 kafka | [2024-04-24 08:58:51,979] INFO [Partition __consumer_offsets-19 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-19 (kafka.cluster.Partition) policy-db-migrator | grafana | logger=migrator t=2024-04-24T08:58:13.990010533Z level=info msg="Executing migration" id="Add column permission to team_member table" policy-pap | [2024-04-24T08:58:50.688+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 kafka | [2024-04-24 08:58:51,979] INFO [Partition __consumer_offsets-19 broker=1] Log loaded for partition __consumer_offsets-19 with initial high watermark 0 (kafka.cluster.Partition) policy-db-migrator | grafana | logger=migrator t=2024-04-24T08:58:13.995145502Z level=info msg="Migration successfully executed" id="Add column permission to team_member table" duration=5.134428ms policy-pap | [2024-04-24T08:58:50.688+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1713949130688 kafka | [2024-04-24 08:58:51,979] INFO [Broker id=1] Leader __consumer_offsets-19 with topic id Some(3d7pexomSuav55xzl5U12w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) policy-db-migrator | > upgrade 0990-FK_ToscaServiceTemplate_capabilityTypesName.sql grafana | logger=migrator t=2024-04-24T08:58:13.999836942Z level=info msg="Executing migration" id="create dashboard acl table" policy-pap | [2024-04-24T08:58:50.688+00:00|INFO|InlineKafkaTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=93375c45-af5c-44c3-a127-0d1a90ab70ea, alive=false, publisher=KafkaPublisherWrapper []]]: KAFKA SINK created kafka | [2024-04-24 08:58:51,989] INFO [LogLoader partition=__consumer_offsets-34, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-24T08:58:14.000956873Z level=info msg="Migration successfully executed" id="create dashboard acl table" duration=1.118931ms policy-pap | [2024-04-24T08:58:50.688+00:00|INFO|InlineBusTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=0583e7e0-8980-4e61-8167-9e42f04d3bdd, alive=false, publisher=null]]: starting policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT FK_ToscaServiceTemplate_capabilityTypesName FOREIGN KEY (capabilityTypesName, capabilityTypesVersion) REFERENCES toscacapabilitytypes (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT grafana | logger=migrator t=2024-04-24T08:58:14.003928321Z level=info msg="Executing migration" id="add index dashboard_acl_dashboard_id" kafka | [2024-04-24 08:58:51,990] INFO Created log for partition __consumer_offsets-34 in /var/lib/kafka/data/__consumer_offsets-34 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) policy-pap | [2024-04-24T08:58:50.689+00:00|INFO|ProducerConfig|main] ProducerConfig values: policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-24T08:58:14.005055683Z level=info msg="Migration successfully executed" id="add index dashboard_acl_dashboard_id" duration=1.126592ms kafka | [2024-04-24 08:58:51,990] INFO [Partition __consumer_offsets-34 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-34 (kafka.cluster.Partition) policy-pap | acks = -1 policy-db-migrator | grafana | logger=migrator t=2024-04-24T08:58:14.008189775Z level=info msg="Executing migration" id="add unique index dashboard_acl_dashboard_id_user_id" kafka | [2024-04-24 08:58:51,990] INFO [Partition __consumer_offsets-34 broker=1] Log loaded for partition __consumer_offsets-34 with initial high watermark 0 (kafka.cluster.Partition) policy-pap | auto.include.jmx.reporter = true policy-db-migrator | grafana | logger=migrator t=2024-04-24T08:58:14.00921601Z level=info msg="Migration successfully executed" id="add unique index dashboard_acl_dashboard_id_user_id" duration=1.026615ms kafka | [2024-04-24 08:58:51,991] INFO [Broker id=1] Leader __consumer_offsets-34 with topic id Some(3d7pexomSuav55xzl5U12w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) policy-pap | batch.size = 16384 policy-db-migrator | > upgrade 1000-FK_ToscaServiceTemplate_dataTypesName.sql grafana | logger=migrator t=2024-04-24T08:58:14.015367955Z level=info msg="Executing migration" id="add unique index dashboard_acl_dashboard_id_team_id" kafka | [2024-04-24 08:58:51,997] INFO [LogLoader partition=__consumer_offsets-4, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) policy-pap | bootstrap.servers = [kafka:9092] policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-24T08:58:14.01633435Z level=info msg="Migration successfully executed" id="add unique index dashboard_acl_dashboard_id_team_id" duration=965.635µs kafka | [2024-04-24 08:58:51,997] INFO Created log for partition __consumer_offsets-4 in /var/lib/kafka/data/__consumer_offsets-4 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) policy-pap | buffer.memory = 33554432 policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT FK_ToscaServiceTemplate_dataTypesName FOREIGN KEY (dataTypesName, dataTypesVersion) REFERENCES toscadatatypes (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT kafka | [2024-04-24 08:58:51,997] INFO [Partition __consumer_offsets-4 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-4 (kafka.cluster.Partition) policy-pap | client.dns.lookup = use_all_dns_ips policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-24T08:58:14.019327749Z level=info msg="Executing migration" id="add index dashboard_acl_user_id" kafka | [2024-04-24 08:58:51,997] INFO [Partition __consumer_offsets-4 broker=1] Log loaded for partition __consumer_offsets-4 with initial high watermark 0 (kafka.cluster.Partition) policy-pap | client.id = producer-2 policy-db-migrator | grafana | logger=migrator t=2024-04-24T08:58:14.020435128Z level=info msg="Migration successfully executed" id="add index dashboard_acl_user_id" duration=1.106449ms kafka | [2024-04-24 08:58:51,998] INFO [Broker id=1] Leader __consumer_offsets-4 with topic id Some(3d7pexomSuav55xzl5U12w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) policy-pap | compression.type = none policy-db-migrator | grafana | logger=migrator t=2024-04-24T08:58:14.024883261Z level=info msg="Executing migration" id="add index dashboard_acl_team_id" kafka | [2024-04-24 08:58:52,005] INFO [LogLoader partition=__consumer_offsets-11, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) policy-pap | connections.max.idle.ms = 540000 policy-db-migrator | > upgrade 1010-FK_ToscaServiceTemplate_nodeTypesName.sql grafana | logger=migrator t=2024-04-24T08:58:14.02602794Z level=info msg="Migration successfully executed" id="add index dashboard_acl_team_id" duration=1.139589ms kafka | [2024-04-24 08:58:52,006] INFO Created log for partition __consumer_offsets-11 in /var/lib/kafka/data/__consumer_offsets-11 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) policy-pap | delivery.timeout.ms = 120000 policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-24T08:58:14.029263954Z level=info msg="Executing migration" id="add index dashboard_acl_org_id_role" kafka | [2024-04-24 08:58:52,006] INFO [Partition __consumer_offsets-11 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-11 (kafka.cluster.Partition) policy-pap | enable.idempotence = true policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT FK_ToscaServiceTemplate_nodeTypesName FOREIGN KEY (nodeTypesName, nodeTypesVersion) REFERENCES toscanodetypes (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT grafana | logger=migrator t=2024-04-24T08:58:14.030418742Z level=info msg="Migration successfully executed" id="add index dashboard_acl_org_id_role" duration=1.154668ms kafka | [2024-04-24 08:58:52,006] INFO [Partition __consumer_offsets-11 broker=1] Log loaded for partition __consumer_offsets-11 with initial high watermark 0 (kafka.cluster.Partition) policy-pap | interceptor.classes = [] policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-24T08:58:14.03335012Z level=info msg="Executing migration" id="add index dashboard_permission" kafka | [2024-04-24 08:58:52,006] INFO [Broker id=1] Leader __consumer_offsets-11 with topic id Some(3d7pexomSuav55xzl5U12w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) policy-pap | key.serializer = class org.apache.kafka.common.serialization.StringSerializer policy-db-migrator | grafana | logger=migrator t=2024-04-24T08:58:14.034364647Z level=info msg="Migration successfully executed" id="add index dashboard_permission" duration=1.014927ms kafka | [2024-04-24 08:58:52,014] INFO [LogLoader partition=__consumer_offsets-26, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) policy-pap | linger.ms = 0 policy-db-migrator | grafana | logger=migrator t=2024-04-24T08:58:14.037239874Z level=info msg="Executing migration" id="save default acl rules in dashboard_acl table" kafka | [2024-04-24 08:58:52,015] INFO Created log for partition __consumer_offsets-26 in /var/lib/kafka/data/__consumer_offsets-26 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) policy-pap | max.block.ms = 60000 policy-db-migrator | > upgrade 1020-FK_ToscaServiceTemplate_policyTypesName.sql grafana | logger=migrator t=2024-04-24T08:58:14.037635301Z level=info msg="Migration successfully executed" id="save default acl rules in dashboard_acl table" duration=397.207µs kafka | [2024-04-24 08:58:52,015] INFO [Partition __consumer_offsets-26 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-26 (kafka.cluster.Partition) policy-pap | max.in.flight.requests.per.connection = 5 policy-db-migrator | -------------- kafka | [2024-04-24 08:58:52,015] INFO [Partition __consumer_offsets-26 broker=1] Log loaded for partition __consumer_offsets-26 with initial high watermark 0 (kafka.cluster.Partition) policy-pap | max.request.size = 1048576 policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT FK_ToscaServiceTemplate_policyTypesName FOREIGN KEY (policyTypesName, policyTypesVersion) REFERENCES toscapolicytypes (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT kafka | [2024-04-24 08:58:52,015] INFO [Broker id=1] Leader __consumer_offsets-26 with topic id Some(3d7pexomSuav55xzl5U12w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) policy-pap | metadata.max.age.ms = 300000 policy-db-migrator | -------------- kafka | [2024-04-24 08:58:52,021] INFO [LogLoader partition=__consumer_offsets-49, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) policy-pap | metadata.max.idle.ms = 300000 grafana | logger=migrator t=2024-04-24T08:58:14.043477487Z level=info msg="Executing migration" id="delete acl rules for deleted dashboards and folders" policy-db-migrator | kafka | [2024-04-24 08:58:52,021] INFO Created log for partition __consumer_offsets-49 in /var/lib/kafka/data/__consumer_offsets-49 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) policy-pap | metric.reporters = [] grafana | logger=migrator t=2024-04-24T08:58:14.04367149Z level=info msg="Migration successfully executed" id="delete acl rules for deleted dashboards and folders" duration=194.253µs policy-db-migrator | kafka | [2024-04-24 08:58:52,021] INFO [Partition __consumer_offsets-49 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-49 (kafka.cluster.Partition) policy-pap | metrics.num.samples = 2 grafana | logger=migrator t=2024-04-24T08:58:14.045741394Z level=info msg="Executing migration" id="create tag table" policy-db-migrator | > upgrade 1030-FK_ToscaServiceTemplate_relationshipTypesName.sql kafka | [2024-04-24 08:58:52,022] INFO [Partition __consumer_offsets-49 broker=1] Log loaded for partition __consumer_offsets-49 with initial high watermark 0 (kafka.cluster.Partition) policy-pap | metrics.recording.level = INFO grafana | logger=migrator t=2024-04-24T08:58:14.046377525Z level=info msg="Migration successfully executed" id="create tag table" duration=635.591µs policy-db-migrator | -------------- kafka | [2024-04-24 08:58:52,022] INFO [Broker id=1] Leader __consumer_offsets-49 with topic id Some(3d7pexomSuav55xzl5U12w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) policy-pap | metrics.sample.window.ms = 30000 grafana | logger=migrator t=2024-04-24T08:58:14.050284389Z level=info msg="Executing migration" id="add index tag.key_value" policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT FK_ToscaServiceTemplate_relationshipTypesName FOREIGN KEY (relationshipTypesName, relationshipTypesVersion) REFERENCES toscarelationshiptypes (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT kafka | [2024-04-24 08:58:52,028] INFO [LogLoader partition=__consumer_offsets-39, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) policy-pap | partitioner.adaptive.partitioning.enable = true grafana | logger=migrator t=2024-04-24T08:58:14.051717103Z level=info msg="Migration successfully executed" id="add index tag.key_value" duration=1.432604ms policy-db-migrator | -------------- kafka | [2024-04-24 08:58:52,028] INFO Created log for partition __consumer_offsets-39 in /var/lib/kafka/data/__consumer_offsets-39 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) policy-pap | partitioner.availability.timeout.ms = 0 grafana | logger=migrator t=2024-04-24T08:58:14.055297142Z level=info msg="Executing migration" id="create login attempt table" kafka | [2024-04-24 08:58:52,028] INFO [Partition __consumer_offsets-39 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-39 (kafka.cluster.Partition) policy-pap | partitioner.class = null grafana | logger=migrator t=2024-04-24T08:58:14.055955062Z level=info msg="Migration successfully executed" id="create login attempt table" duration=657.41µs policy-db-migrator | policy-pap | partitioner.ignore.keys = false grafana | logger=migrator t=2024-04-24T08:58:14.059293058Z level=info msg="Executing migration" id="add index login_attempt.username" policy-db-migrator | kafka | [2024-04-24 08:58:52,029] INFO [Partition __consumer_offsets-39 broker=1] Log loaded for partition __consumer_offsets-39 with initial high watermark 0 (kafka.cluster.Partition) policy-pap | receive.buffer.bytes = 32768 grafana | logger=migrator t=2024-04-24T08:58:14.060319984Z level=info msg="Migration successfully executed" id="add index login_attempt.username" duration=1.023506ms policy-db-migrator | > upgrade 1040-FK_ToscaTopologyTemplate_nodeTemplatesName.sql kafka | [2024-04-24 08:58:52,029] INFO [Broker id=1] Leader __consumer_offsets-39 with topic id Some(3d7pexomSuav55xzl5U12w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
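[editor's note] The kafka broker lines around here show the __consumer_offsets partitions being created with cleanup.policy=compact, compression.type="producer" and segment.bytes=104857600. Purely as an illustration (not part of this job), a minimal sketch of reading such per-topic settings back with the Kafka Java AdminClient; the bootstrap address is an assumption.

import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.Config;
import org.apache.kafka.common.config.ConfigResource;

public class TopicConfigCheck {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        // Assumed broker address; the CSIT stack exposes Kafka on its own host/port.
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        try (AdminClient admin = AdminClient.create(props)) {
            ConfigResource topic = new ConfigResource(ConfigResource.Type.TOPIC, "__consumer_offsets");
            Config config = admin.describeConfigs(Collections.singleton(topic))
                                 .all().get().get(topic);
            // Prints cleanup.policy, compression.type, segment.bytes, etc.
            config.entries().forEach(e -> System.out.println(e.name() + " = " + e.value()));
        }
    }
}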
(state.change.logger) policy-pap | reconnect.backoff.max.ms = 1000 grafana | logger=migrator t=2024-04-24T08:58:14.064487293Z level=info msg="Executing migration" id="drop index IDX_login_attempt_username - v1" policy-db-migrator | -------------- kafka | [2024-04-24 08:58:52,035] INFO [LogLoader partition=__consumer_offsets-9, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) policy-pap | reconnect.backoff.ms = 50 grafana | logger=migrator t=2024-04-24T08:58:14.065613292Z level=info msg="Migration successfully executed" id="drop index IDX_login_attempt_username - v1" duration=1.126449ms policy-db-migrator | ALTER TABLE toscatopologytemplate ADD CONSTRAINT FK_ToscaTopologyTemplate_nodeTemplatesName FOREIGN KEY (nodeTemplatesName, nodeTemplatessVersion) REFERENCES toscanodetemplates (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT kafka | [2024-04-24 08:58:52,036] INFO Created log for partition __consumer_offsets-9 in /var/lib/kafka/data/__consumer_offsets-9 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) grafana | logger=migrator t=2024-04-24T08:58:14.068864644Z level=info msg="Executing migration" id="Rename table login_attempt to login_attempt_tmp_qwerty - v1" policy-db-migrator | -------------- policy-pap | request.timeout.ms = 30000 kafka | [2024-04-24 08:58:52,036] INFO [Partition __consumer_offsets-9 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-9 (kafka.cluster.Partition) grafana | logger=migrator t=2024-04-24T08:58:14.088164762Z level=info msg="Migration successfully executed" id="Rename table login_attempt to login_attempt_tmp_qwerty - v1" duration=19.300008ms policy-db-migrator | policy-pap | retries = 2147483647 kafka | [2024-04-24 08:58:52,036] INFO [Partition __consumer_offsets-9 broker=1] Log loaded for partition __consumer_offsets-9 with initial high watermark 0 (kafka.cluster.Partition) grafana | logger=migrator t=2024-04-24T08:58:14.114570687Z level=info msg="Executing migration" id="create login_attempt v2" policy-db-migrator | policy-pap | retry.backoff.ms = 100 kafka | [2024-04-24 08:58:52,036] INFO [Broker id=1] Leader __consumer_offsets-9 with topic id Some(3d7pexomSuav55xzl5U12w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) grafana | logger=migrator t=2024-04-24T08:58:14.116106052Z level=info msg="Migration successfully executed" id="create login_attempt v2" duration=1.534026ms policy-db-migrator | > upgrade 1050-FK_ToscaTopologyTemplate_policyName.sql grafana | logger=migrator t=2024-04-24T08:58:14.121677444Z level=info msg="Executing migration" id="create index IDX_login_attempt_username - v2" policy-pap | sasl.client.callback.handler.class = null kafka | [2024-04-24 08:58:52,045] INFO [LogLoader partition=__consumer_offsets-24, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-24T08:58:14.123356901Z level=info msg="Migration successfully executed" id="create index IDX_login_attempt_username - v2" duration=1.674807ms policy-pap | sasl.jaas.config = null kafka | [2024-04-24 08:58:52,046] INFO Created log for partition __consumer_offsets-24 in /var/lib/kafka/data/__consumer_offsets-24 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit policy-db-migrator | ALTER TABLE toscatopologytemplate ADD CONSTRAINT FK_ToscaTopologyTemplate_policyName FOREIGN KEY (policyName, policyVersion) REFERENCES toscapolicies (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT grafana | logger=migrator t=2024-04-24T08:58:14.127308556Z level=info msg="Executing migration" id="copy login_attempt v1 to v2" kafka | [2024-04-24 08:58:52,047] INFO [Partition __consumer_offsets-24 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-24 (kafka.cluster.Partition) policy-pap | sasl.kerberos.min.time.before.relogin = 60000 policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-24T08:58:14.127896696Z level=info msg="Migration successfully executed" id="copy login_attempt v1 to v2" duration=588.52µs kafka | [2024-04-24 08:58:52,047] INFO [Partition __consumer_offsets-24 broker=1] Log loaded for partition __consumer_offsets-24 with initial high watermark 0 (kafka.cluster.Partition) policy-pap | sasl.kerberos.service.name = null policy-db-migrator | grafana | logger=migrator t=2024-04-24T08:58:14.131439674Z level=info msg="Executing migration" id="drop login_attempt_tmp_qwerty" kafka | [2024-04-24 08:58:52,047] INFO [Broker id=1] Leader __consumer_offsets-24 with topic id Some(3d7pexomSuav55xzl5U12w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 policy-db-migrator | grafana | logger=migrator t=2024-04-24T08:58:14.132106475Z level=info msg="Migration successfully executed" id="drop login_attempt_tmp_qwerty" duration=666.401µs kafka | [2024-04-24 08:58:52,056] INFO [LogLoader partition=__consumer_offsets-31, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 policy-db-migrator | > upgrade 1060-TscaServiceTemplatetopologyTemplateParentLocalName.sql grafana | logger=migrator t=2024-04-24T08:58:14.145041168Z level=info msg="Executing migration" id="create user auth table" kafka | [2024-04-24 08:58:52,057] INFO Created log for partition __consumer_offsets-31 in /var/lib/kafka/data/__consumer_offsets-31 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) policy-pap | sasl.login.callback.handler.class = null policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-24T08:58:14.146469481Z level=info msg="Migration successfully executed" id="create user auth table" duration=1.424853ms kafka | [2024-04-24 08:58:52,057] INFO [Partition __consumer_offsets-31 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-31 (kafka.cluster.Partition) policy-pap | sasl.login.class = null policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT TscaServiceTemplatetopologyTemplateParentLocalName FOREIGN KEY (topologyTemplateParentLocalName, topologyTemplateLocalName, topologyTemplateParentKeyVersion, topologyTemplateParentKeyName) REFERENCES toscatopologytemplate (parentLocalName, localName, parentKeyVersion, parentKeyName) ON UPDATE RESTRICT ON DELETE RESTRICT grafana | logger=migrator t=2024-04-24T08:58:14.182140998Z level=info msg="Executing migration" id="create index IDX_user_auth_auth_module_auth_id - v1" kafka | [2024-04-24 08:58:52,057] INFO [Partition __consumer_offsets-31 broker=1] Log loaded for partition __consumer_offsets-31 with initial high watermark 0 (kafka.cluster.Partition) policy-pap | sasl.login.connect.timeout.ms = null policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-24T08:58:14.183903747Z level=info msg="Migration successfully executed" id="create index IDX_user_auth_auth_module_auth_id - v1" duration=1.735048ms kafka | [2024-04-24 08:58:52,057] INFO [Broker id=1] Leader __consumer_offsets-31 with topic id Some(3d7pexomSuav55xzl5U12w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
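[editor's note] The db-migrator steps 1010 through 1060 above add the foreign-key constraints that tie toscaservicetemplate and toscatopologytemplate to the node/policy/relationship type tables. As an illustration only (not part of the migrator; JDBC URL and credentials are placeholders), a JDBC sketch that checks whether one of these constraints is already present before re-running an ALTER:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class FkConstraintCheck {
    public static void main(String[] args) throws Exception {
        // Placeholder connection details; the CSIT stack runs its own mariadb service.
        try (Connection conn = DriverManager.getConnection(
                "jdbc:mariadb://localhost:3306/policyadmin", "policy_user", "policy_password")) {
            String sql = "SELECT COUNT(*) FROM information_schema.TABLE_CONSTRAINTS "
                       + "WHERE CONSTRAINT_SCHEMA = DATABASE() "
                       + "AND TABLE_NAME = 'toscaservicetemplate' "
                       + "AND CONSTRAINT_NAME = 'FK_ToscaServiceTemplate_nodeTypesName'";
            try (PreparedStatement ps = conn.prepareStatement(sql);
                 ResultSet rs = ps.executeQuery()) {
                rs.next();
                // A non-zero count means the 1010 upgrade has already been applied.
                System.out.println("FK_ToscaServiceTemplate_nodeTypesName present: " + (rs.getInt(1) > 0));
            }
        }
    }
}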
(state.change.logger) policy-pap | sasl.login.read.timeout.ms = null policy-db-migrator | grafana | logger=migrator t=2024-04-24T08:58:14.207318352Z level=info msg="Executing migration" id="alter user_auth.auth_id to length 190" kafka | [2024-04-24 08:58:52,063] INFO [LogLoader partition=__consumer_offsets-46, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) policy-pap | sasl.login.refresh.buffer.seconds = 300 policy-db-migrator | grafana | logger=migrator t=2024-04-24T08:58:14.207568836Z level=info msg="Migration successfully executed" id="alter user_auth.auth_id to length 190" duration=250.924µs kafka | [2024-04-24 08:58:52,064] INFO Created log for partition __consumer_offsets-46 in /var/lib/kafka/data/__consumer_offsets-46 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) policy-pap | sasl.login.refresh.min.period.seconds = 60 policy-db-migrator | > upgrade 0100-pdp.sql grafana | logger=migrator t=2024-04-24T08:58:14.232672009Z level=info msg="Executing migration" id="Add OAuth access token to user_auth" kafka | [2024-04-24 08:58:52,064] INFO [Partition __consumer_offsets-46 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-46 (kafka.cluster.Partition) policy-pap | sasl.login.refresh.window.factor = 0.8 policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-24T08:58:14.241964601Z level=info msg="Migration successfully executed" id="Add OAuth access token to user_auth" duration=9.289362ms kafka | [2024-04-24 08:58:52,064] INFO [Partition __consumer_offsets-46 broker=1] Log loaded for partition __consumer_offsets-46 with initial high watermark 0 (kafka.cluster.Partition) policy-pap | sasl.login.refresh.window.jitter = 0.05 policy-db-migrator | ALTER TABLE pdp ADD COLUMN LASTUPDATE datetime NOT NULL DEFAULT CURRENT_TIMESTAMP AFTER HEALTHY grafana | logger=migrator t=2024-04-24T08:58:14.254141842Z level=info msg="Executing migration" id="Add OAuth refresh token to user_auth" kafka | [2024-04-24 08:58:52,064] INFO [Broker id=1] Leader __consumer_offsets-46 with topic id Some(3d7pexomSuav55xzl5U12w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) policy-pap | sasl.login.retry.backoff.max.ms = 10000 policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-24T08:58:14.260916534Z level=info msg="Migration successfully executed" id="Add OAuth refresh token to user_auth" duration=6.780272ms kafka | [2024-04-24 08:58:52,071] INFO [LogLoader partition=__consumer_offsets-1, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) policy-pap | sasl.login.retry.backoff.ms = 100 policy-db-migrator | grafana | logger=migrator t=2024-04-24T08:58:14.269414444Z level=info msg="Executing migration" id="Add OAuth token type to user_auth" kafka | [2024-04-24 08:58:52,072] INFO Created log for partition __consumer_offsets-1 in /var/lib/kafka/data/__consumer_offsets-1 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) policy-pap | sasl.mechanism = GSSAPI policy-db-migrator | grafana | logger=migrator t=2024-04-24T08:58:14.272978622Z level=info msg="Migration successfully executed" id="Add OAuth token type to user_auth" duration=3.572838ms kafka | [2024-04-24 08:58:52,072] INFO [Partition __consumer_offsets-1 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-1 (kafka.cluster.Partition) policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 policy-db-migrator | > upgrade 0110-idx_tsidx1.sql grafana | logger=migrator t=2024-04-24T08:58:14.285122042Z level=info msg="Executing migration" id="Add OAuth expiry to user_auth" kafka | [2024-04-24 08:58:52,072] INFO [Partition __consumer_offsets-1 broker=1] Log loaded for partition __consumer_offsets-1 with initial high watermark 0 (kafka.cluster.Partition) policy-pap | sasl.oauthbearer.expected.audience = null policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-24T08:58:14.290560291Z level=info msg="Migration successfully executed" id="Add OAuth expiry to user_auth" duration=5.441509ms kafka | [2024-04-24 08:58:52,073] INFO [Broker id=1] Leader __consumer_offsets-1 with topic id Some(3d7pexomSuav55xzl5U12w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) policy-pap | sasl.oauthbearer.expected.issuer = null policy-db-migrator | CREATE INDEX IDX_TSIDX1 ON pdpstatistics(timeStamp, name, version) grafana | logger=migrator t=2024-04-24T08:58:14.296662782Z level=info msg="Executing migration" id="Add index to user_id column in user_auth" kafka | [2024-04-24 08:58:52,080] INFO [LogLoader partition=__consumer_offsets-16, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-24T08:58:14.298216017Z level=info msg="Migration successfully executed" id="Add index to user_id column in user_auth" duration=1.552835ms policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 policy-db-migrator | grafana | logger=migrator t=2024-04-24T08:58:14.302443697Z level=info msg="Executing migration" id="Add OAuth ID token to user_auth" kafka | [2024-04-24 08:58:52,081] INFO Created log for partition __consumer_offsets-16 in /var/lib/kafka/data/__consumer_offsets-16 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 policy-db-migrator | grafana | logger=migrator t=2024-04-24T08:58:14.307796175Z level=info msg="Migration successfully executed" id="Add OAuth ID token to user_auth" duration=5.352008ms kafka | [2024-04-24 08:58:52,081] INFO [Partition __consumer_offsets-16 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-16 (kafka.cluster.Partition) policy-pap | sasl.oauthbearer.jwks.endpoint.url = null policy-db-migrator | > upgrade 0120-pk_pdpstatistics.sql grafana | logger=migrator t=2024-04-24T08:58:14.312275528Z level=info msg="Executing migration" id="create server_lock table" kafka | [2024-04-24 08:58:52,081] INFO [Partition __consumer_offsets-16 broker=1] Log loaded for partition __consumer_offsets-16 with initial high watermark 0 (kafka.cluster.Partition) policy-pap | sasl.oauthbearer.scope.claim.name = scope policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-24T08:58:14.313144363Z level=info msg="Migration successfully executed" id="create server_lock table" duration=869.264µs kafka | [2024-04-24 08:58:52,081] INFO [Broker id=1] Leader __consumer_offsets-16 with topic id Some(3d7pexomSuav55xzl5U12w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) policy-pap | sasl.oauthbearer.sub.claim.name = sub policy-db-migrator | ALTER TABLE pdpstatistics DROP PRIMARY KEY grafana | logger=migrator t=2024-04-24T08:58:14.321595781Z level=info msg="Executing migration" id="add index server_lock.operation_uid" kafka | [2024-04-24 08:58:52,093] INFO [LogLoader partition=__consumer_offsets-2, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) policy-pap | sasl.oauthbearer.token.endpoint.url = null policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-24T08:58:14.323033615Z level=info msg="Migration successfully executed" id="add index server_lock.operation_uid" duration=1.444114ms kafka | [2024-04-24 08:58:52,094] INFO Created log for partition __consumer_offsets-2 in /var/lib/kafka/data/__consumer_offsets-2 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) policy-pap | security.protocol = PLAINTEXT policy-db-migrator | grafana | logger=migrator t=2024-04-24T08:58:14.330961886Z level=info msg="Executing migration" id="create user auth token table" kafka | [2024-04-24 08:58:52,094] INFO [Partition __consumer_offsets-2 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-2 (kafka.cluster.Partition) policy-pap | security.providers = null policy-db-migrator | grafana | logger=migrator t=2024-04-24T08:58:14.332015013Z level=info msg="Migration successfully executed" id="create user auth token table" duration=1.055837ms kafka | [2024-04-24 08:58:52,094] INFO [Partition __consumer_offsets-2 broker=1] Log loaded for partition __consumer_offsets-2 with initial high watermark 0 (kafka.cluster.Partition) policy-pap | send.buffer.bytes = 131072 policy-db-migrator | > upgrade 0130-pdpstatistics.sql grafana | logger=migrator t=2024-04-24T08:58:14.337871439Z level=info msg="Executing migration" id="add unique index user_auth_token.auth_token" kafka | [2024-04-24 08:58:52,094] INFO [Broker id=1] Leader __consumer_offsets-2 with topic id Some(3d7pexomSuav55xzl5U12w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) policy-pap | socket.connection.setup.timeout.max.ms = 30000 policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-24T08:58:14.338768634Z level=info msg="Migration successfully executed" id="add unique index user_auth_token.auth_token" duration=897.575µs kafka | [2024-04-24 08:58:52,102] INFO [LogLoader partition=__consumer_offsets-25, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) policy-pap | socket.connection.setup.timeout.ms = 10000 policy-db-migrator | ALTER TABLE pdpstatistics ADD COLUMN POLICYUNDEPLOYCOUNT BIGINT DEFAULT NULL AFTER POLICYEXECUTEDSUCCESSCOUNT, ADD COLUMN POLICYUNDEPLOYFAILCOUNT BIGINT DEFAULT NULL, ADD COLUMN POLICYUNDEPLOYSUCCESSCOUNT BIGINT DEFAULT NULL, ADD COLUMN ID BIGINT NOT NULL grafana | logger=migrator t=2024-04-24T08:58:14.342863092Z level=info msg="Executing migration" id="add unique index user_auth_token.prev_auth_token" kafka | [2024-04-24 08:58:52,103] INFO Created log for partition __consumer_offsets-25 in /var/lib/kafka/data/__consumer_offsets-25 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) policy-pap | ssl.cipher.suites = null policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-24T08:58:14.343772077Z level=info msg="Migration successfully executed" id="add unique index user_auth_token.prev_auth_token" duration=909.335µs kafka | [2024-04-24 08:58:52,103] INFO [Partition __consumer_offsets-25 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-25 (kafka.cluster.Partition) policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] policy-db-migrator | grafana | logger=migrator t=2024-04-24T08:58:14.34701723Z level=info msg="Executing migration" id="add index user_auth_token.user_id" kafka | [2024-04-24 08:58:52,103] INFO [Partition __consumer_offsets-25 broker=1] Log loaded for partition __consumer_offsets-25 with initial high watermark 0 (kafka.cluster.Partition) policy-pap | ssl.endpoint.identification.algorithm = https policy-db-migrator | grafana | logger=migrator t=2024-04-24T08:58:14.348005896Z level=info msg="Migration successfully executed" id="add index user_auth_token.user_id" duration=988.686µs kafka | [2024-04-24 08:58:52,103] INFO [Broker id=1] Leader __consumer_offsets-25 with topic id Some(3d7pexomSuav55xzl5U12w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) policy-pap | ssl.engine.factory.class = null policy-db-migrator | > upgrade 0140-pk_pdpstatistics.sql kafka | [2024-04-24 08:58:52,113] INFO [LogLoader partition=__consumer_offsets-40, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) grafana | logger=migrator t=2024-04-24T08:58:14.352778285Z level=info msg="Executing migration" id="Add revoked_at to the user auth token" policy-pap | ssl.key.password = null policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-24T08:58:14.358531309Z level=info msg="Migration successfully executed" id="Add revoked_at to the user auth token" duration=5.752774ms kafka | [2024-04-24 08:58:52,114] INFO Created log for partition __consumer_offsets-40 in /var/lib/kafka/data/__consumer_offsets-40 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) policy-pap | ssl.keymanager.algorithm = SunX509 policy-db-migrator | UPDATE pdpstatistics as p JOIN (SELECT name, version, timeStamp, ROW_NUMBER() OVER (ORDER BY timeStamp ASC) AS row_num FROM pdpstatistics GROUP BY name, version, timeStamp) AS t ON (p.name=t.name AND p.version=t.version AND p.timeStamp = t.timeStamp) SET p.id=t.row_num grafana | logger=migrator t=2024-04-24T08:58:14.36283987Z level=info msg="Executing migration" id="add index user_auth_token.revoked_at" kafka | [2024-04-24 08:58:52,114] INFO [Partition __consumer_offsets-40 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-40 (kafka.cluster.Partition) policy-pap | ssl.keystore.certificate.chain = null policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-24T08:58:14.363776835Z level=info msg="Migration successfully executed" id="add index user_auth_token.revoked_at" duration=934.145µs kafka | [2024-04-24 08:58:52,114] INFO [Partition __consumer_offsets-40 broker=1] Log loaded for partition __consumer_offsets-40 with initial high watermark 0 (kafka.cluster.Partition) policy-pap | ssl.keystore.key = null policy-db-migrator | grafana | logger=migrator t=2024-04-24T08:58:14.368746868Z level=info msg="Executing migration" id="create cache_data table" kafka | [2024-04-24 08:58:52,114] INFO [Broker id=1] Leader __consumer_offsets-40 with topic id Some(3d7pexomSuav55xzl5U12w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) policy-pap | ssl.keystore.location = null policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-24T08:58:14.369568581Z level=info msg="Migration successfully executed" id="create cache_data table" duration=821.833µs kafka | [2024-04-24 08:58:52,121] INFO [LogLoader partition=__consumer_offsets-47, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) policy-pap | ssl.keystore.password = null policy-db-migrator | ALTER TABLE pdpstatistics ADD CONSTRAINT PK_PDPSTATISTICS PRIMARY KEY (ID, name, version) grafana | logger=migrator t=2024-04-24T08:58:14.374495252Z level=info msg="Executing migration" id="add unique index cache_data.cache_key" kafka | [2024-04-24 08:58:52,121] INFO Created log for partition __consumer_offsets-47 in /var/lib/kafka/data/__consumer_offsets-47 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) policy-pap | ssl.keystore.type = JKS policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-24T08:58:14.375402047Z level=info msg="Migration successfully executed" id="add unique index cache_data.cache_key" duration=906.615µs kafka | [2024-04-24 08:58:52,121] INFO [Partition __consumer_offsets-47 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-47 (kafka.cluster.Partition) policy-pap | ssl.protocol = TLSv1.3 policy-db-migrator | grafana | logger=migrator t=2024-04-24T08:58:14.379968772Z level=info msg="Executing migration" id="create short_url table v1" kafka | [2024-04-24 08:58:52,121] INFO [Partition __consumer_offsets-47 broker=1] Log loaded for partition __consumer_offsets-47 with initial high watermark 0 (kafka.cluster.Partition) policy-pap | ssl.provider = null policy-db-migrator | grafana | logger=migrator t=2024-04-24T08:58:14.381029779Z level=info msg="Migration successfully executed" id="create short_url table v1" duration=1.062107ms policy-pap | ssl.secure.random.implementation = null grafana | logger=migrator t=2024-04-24T08:58:14.384271763Z level=info msg="Executing migration" id="add index short_url.org_id-uid" kafka | [2024-04-24 08:58:52,121] INFO [Broker id=1] Leader __consumer_offsets-47 with topic id Some(3d7pexomSuav55xzl5U12w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
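[editor's note] Steps 0120 through 0140 rework the pdpstatistics primary key: the old PK is dropped, POLICYUNDEPLOY* counters and an ID column are added, ID is backfilled per (name, version, timeStamp) row with ROW_NUMBER(), and the composite PK_PDPSTATISTICS (ID, name, version) is re-created. A small JDBC sketch (illustrative only, placeholder credentials) that would confirm the backfilled key is unique before the new constraint is applied:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class PdpStatisticsKeyCheck {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection(
                "jdbc:mariadb://localhost:3306/policyadmin", "policy_user", "policy_password");
             Statement st = conn.createStatement();
             // Any row returned here would violate the new PK_PDPSTATISTICS (ID, name, version).
             ResultSet rs = st.executeQuery(
                 "SELECT id, name, version, COUNT(*) AS cnt FROM pdpstatistics "
               + "GROUP BY id, name, version HAVING COUNT(*) > 1")) {
            while (rs.next()) {
                System.out.printf("duplicate key: id=%d name=%s version=%s cnt=%d%n",
                        rs.getLong("id"), rs.getString("name"),
                        rs.getString("version"), rs.getLong("cnt"));
            }
        }
    }
}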
(state.change.logger) policy-db-migrator | > upgrade 0150-pdpstatistics.sql policy-pap | ssl.trustmanager.algorithm = PKIX grafana | logger=migrator t=2024-04-24T08:58:14.385428351Z level=info msg="Migration successfully executed" id="add index short_url.org_id-uid" duration=1.156768ms kafka | [2024-04-24 08:58:52,128] INFO [LogLoader partition=__consumer_offsets-17, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) policy-db-migrator | -------------- policy-pap | ssl.truststore.certificates = null grafana | logger=migrator t=2024-04-24T08:58:14.3913824Z level=info msg="Executing migration" id="alter table short_url alter column created_by type to bigint" kafka | [2024-04-24 08:58:52,128] INFO Created log for partition __consumer_offsets-17 in /var/lib/kafka/data/__consumer_offsets-17 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) policy-db-migrator | ALTER TABLE pdpstatistics MODIFY COLUMN timeStamp datetime(6) NULL policy-pap | ssl.truststore.location = null grafana | logger=migrator t=2024-04-24T08:58:14.391584083Z level=info msg="Migration successfully executed" id="alter table short_url alter column created_by type to bigint" duration=223.144µs kafka | [2024-04-24 08:58:52,128] INFO [Partition __consumer_offsets-17 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-17 (kafka.cluster.Partition) policy-db-migrator | -------------- policy-pap | ssl.truststore.password = null grafana | logger=migrator t=2024-04-24T08:58:14.394811006Z level=info msg="Executing migration" id="delete alert_definition table" kafka | [2024-04-24 08:58:52,128] INFO [Partition __consumer_offsets-17 broker=1] Log loaded for partition __consumer_offsets-17 with initial high watermark 0 (kafka.cluster.Partition) policy-db-migrator | policy-pap | ssl.truststore.type = JKS grafana | logger=migrator t=2024-04-24T08:58:14.394904278Z level=info msg="Migration successfully executed" id="delete alert_definition table" duration=93.722µs kafka | [2024-04-24 08:58:52,128] INFO [Broker id=1] Leader __consumer_offsets-17 with topic id Some(3d7pexomSuav55xzl5U12w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
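[editor's note] The producer settings dumped through this stretch show the PAP publisher running with security.protocol = PLAINTEXT and the sasl.*/ssl.* options at their defaults (sasl.mechanism = GSSAPI, no keystore or truststore configured). For a secured deployment these would be overridden; a hedged sketch of what such overrides could look like with SASL_SSL and SCRAM, which does not reflect this job's actual configuration, credentials and paths are placeholders:

import java.util.Properties;
import org.apache.kafka.clients.CommonClientConfigs;
import org.apache.kafka.common.config.SaslConfigs;
import org.apache.kafka.common.config.SslConfigs;

public class SecuredProducerProps {
    // Security overrides for an assumed SASL_SSL + SCRAM-SHA-512 cluster (illustrative only).
    static Properties securityOverrides() {
        Properties props = new Properties();
        props.put(CommonClientConfigs.SECURITY_PROTOCOL_CONFIG, "SASL_SSL");
        props.put(SaslConfigs.SASL_MECHANISM, "SCRAM-SHA-512");
        props.put(SaslConfigs.SASL_JAAS_CONFIG,
                "org.apache.kafka.common.security.scram.ScramLoginModule required "
              + "username=\"policy\" password=\"changeit\";");              // placeholder credentials
        props.put(SslConfigs.SSL_TRUSTSTORE_LOCATION_CONFIG, "/opt/app/truststore.jks"); // assumed path
        props.put(SslConfigs.SSL_TRUSTSTORE_PASSWORD_CONFIG, "changeit");
        return props;
    }
}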
(state.change.logger) policy-db-migrator | policy-pap | transaction.timeout.ms = 60000 grafana | logger=migrator t=2024-04-24T08:58:14.401014208Z level=info msg="Executing migration" id="recreate alert_definition table" kafka | [2024-04-24 08:58:52,134] INFO [LogLoader partition=__consumer_offsets-32, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) policy-db-migrator | > upgrade 0160-jpapdpstatistics_enginestats.sql policy-pap | transactional.id = null grafana | logger=migrator t=2024-04-24T08:58:14.402034005Z level=info msg="Migration successfully executed" id="recreate alert_definition table" duration=1.021627ms kafka | [2024-04-24 08:58:52,134] INFO Created log for partition __consumer_offsets-32 in /var/lib/kafka/data/__consumer_offsets-32 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) policy-db-migrator | -------------- policy-pap | value.serializer = class org.apache.kafka.common.serialization.StringSerializer grafana | logger=migrator t=2024-04-24T08:58:14.405159177Z level=info msg="Executing migration" id="add index in alert_definition on org_id and title columns" kafka | [2024-04-24 08:58:52,134] INFO [Partition __consumer_offsets-32 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-32 (kafka.cluster.Partition) policy-db-migrator | ALTER TABLE jpapdpstatistics_enginestats ADD COLUMN ID BIGINT DEFAULT NULL AFTER UPTIME policy-pap | grafana | logger=migrator t=2024-04-24T08:58:14.405908039Z level=info msg="Migration successfully executed" id="add index in alert_definition on org_id and title columns" duration=745.762µs kafka | [2024-04-24 08:58:52,134] INFO [Partition __consumer_offsets-32 broker=1] Log loaded for partition __consumer_offsets-32 with initial high watermark 0 (kafka.cluster.Partition) policy-db-migrator | -------------- policy-pap | [2024-04-24T08:58:50.690+00:00|INFO|KafkaProducer|main] [Producer clientId=producer-2] Instantiated an idempotent producer. grafana | logger=migrator t=2024-04-24T08:58:14.411529541Z level=info msg="Executing migration" id="add index in alert_definition on org_id and uid columns" kafka | [2024-04-24 08:58:52,134] INFO [Broker id=1] Leader __consumer_offsets-32 with topic id Some(3d7pexomSuav55xzl5U12w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) policy-db-migrator | policy-pap | [2024-04-24T08:58:50.692+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 grafana | logger=migrator t=2024-04-24T08:58:14.41328035Z level=info msg="Migration successfully executed" id="add index in alert_definition on org_id and uid columns" duration=1.752879ms kafka | [2024-04-24 08:58:52,141] INFO [LogLoader partition=__consumer_offsets-37, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) policy-db-migrator | policy-pap | [2024-04-24T08:58:50.692+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 grafana | logger=migrator t=2024-04-24T08:58:14.416628825Z level=info msg="Executing migration" id="alter alert_definition table data column to mediumtext in mysql" kafka | [2024-04-24 08:58:52,141] INFO Created log for partition __consumer_offsets-37 in /var/lib/kafka/data/__consumer_offsets-37 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) policy-db-migrator | > upgrade 0170-jpapdpstatistics_enginestats.sql policy-pap | [2024-04-24T08:58:50.692+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1713949130692 grafana | logger=migrator t=2024-04-24T08:58:14.416695326Z level=info msg="Migration successfully executed" id="alter alert_definition table data column to mediumtext in mysql" duration=67.431µs kafka | [2024-04-24 08:58:52,141] INFO [Partition __consumer_offsets-37 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-37 (kafka.cluster.Partition) policy-db-migrator | -------------- policy-pap | [2024-04-24T08:58:50.692+00:00|INFO|InlineKafkaTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=0583e7e0-8980-4e61-8167-9e42f04d3bdd, alive=false, publisher=KafkaPublisherWrapper []]]: KAFKA SINK created grafana | logger=migrator t=2024-04-24T08:58:14.421634777Z level=info msg="Executing migration" id="drop index in alert_definition on org_id and title columns" kafka | [2024-04-24 08:58:52,141] INFO [Partition __consumer_offsets-37 broker=1] Log loaded for partition __consumer_offsets-37 with initial high watermark 0 (kafka.cluster.Partition) policy-db-migrator | UPDATE jpapdpstatistics_enginestats a policy-pap | [2024-04-24T08:58:50.692+00:00|INFO|ServiceManager|main] Policy PAP starting PAP Activator grafana | logger=migrator t=2024-04-24T08:58:14.422576832Z level=info msg="Migration successfully executed" id="drop index in alert_definition on org_id and title columns" duration=942.195µs kafka | [2024-04-24 08:58:52,142] INFO [Broker id=1] Leader __consumer_offsets-37 with topic id Some(3d7pexomSuav55xzl5U12w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
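[editor's note] The policy-pap lines here show producer-2 being created against Kafka 3.6.1 with enable.idempotence = true, StringSerializer for key and value, linger.ms = 0 and retries = 2147483647 ("Instantiated an idempotent producer"), then the KAFKA SINK and the PAP activator starting. A minimal sketch of building an equivalent producer with the Kafka Java client; the bootstrap address, topic and payload are assumptions for illustration:

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class PapStyleProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");      // assumed address
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, true);  // matches enable.idempotence = true
        props.put(ProducerConfig.LINGER_MS_CONFIG, 0);              // matches linger.ms = 0
        props.put(ProducerConfig.MAX_IN_FLIGHT_REQUESTS_PER_CONNECTION, 5);
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // Illustrative payload only; real PAP messages are built by its publisher classes.
            producer.send(new ProducerRecord<>("policy-pdp-pap", "key", "{\"messageName\":\"PDP_UPDATE\"}"));
            producer.flush();
        }
    }
}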
(state.change.logger) policy-db-migrator | JOIN pdpstatistics b policy-pap | [2024-04-24T08:58:50.692+00:00|INFO|ServiceManager|main] Policy PAP starting PDP publisher grafana | logger=migrator t=2024-04-24T08:58:14.426313734Z level=info msg="Executing migration" id="drop index in alert_definition on org_id and uid columns" kafka | [2024-04-24 08:58:52,149] INFO [LogLoader partition=__consumer_offsets-7, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) policy-db-migrator | ON a.name = b.name AND a.version = b.version AND a.timeStamp = b.timeStamp policy-pap | [2024-04-24T08:58:50.696+00:00|INFO|ServiceManager|main] Policy PAP starting Policy Notification publisher grafana | logger=migrator t=2024-04-24T08:58:14.427215199Z level=info msg="Migration successfully executed" id="drop index in alert_definition on org_id and uid columns" duration=904.315µs kafka | [2024-04-24 08:58:52,149] INFO Created log for partition __consumer_offsets-7 in /var/lib/kafka/data/__consumer_offsets-7 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) policy-db-migrator | SET a.id = b.id policy-pap | [2024-04-24T08:58:50.696+00:00|INFO|ServiceManager|main] Policy PAP starting PDP update timers grafana | logger=migrator t=2024-04-24T08:58:14.431401658Z level=info msg="Executing migration" id="add unique index in alert_definition on org_id and title columns" kafka | [2024-04-24 08:58:52,150] INFO [Partition __consumer_offsets-7 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-7 (kafka.cluster.Partition) policy-pap | [2024-04-24T08:58:50.702+00:00|INFO|ServiceManager|main] Policy PAP starting PDP state-change timers grafana | logger=migrator t=2024-04-24T08:58:14.432321443Z level=info msg="Migration successfully executed" id="add unique index in alert_definition on org_id and title columns" duration=922.145µs policy-pap | [2024-04-24T08:58:50.706+00:00|INFO|ServiceManager|main] Policy PAP starting PDP modification lock grafana | logger=migrator t=2024-04-24T08:58:14.436831017Z level=info msg="Executing migration" id="add unique index in alert_definition on org_id and uid columns" policy-db-migrator | -------------- kafka | [2024-04-24 08:58:52,150] INFO [Partition __consumer_offsets-7 broker=1] Log loaded for partition __consumer_offsets-7 with initial high watermark 0 (kafka.cluster.Partition) policy-pap | [2024-04-24T08:58:50.706+00:00|INFO|ServiceManager|main] Policy PAP starting PDP modification requests grafana | logger=migrator t=2024-04-24T08:58:14.437769452Z level=info msg="Migration successfully executed" id="add unique index in alert_definition on org_id and uid columns" duration=938.405µs policy-db-migrator | kafka | [2024-04-24 08:58:52,150] INFO [Broker id=1] Leader __consumer_offsets-7 with topic id Some(3d7pexomSuav55xzl5U12w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) policy-pap | [2024-04-24T08:58:50.706+00:00|INFO|ServiceManager|main] Policy PAP starting PDP expiration timer grafana | logger=migrator t=2024-04-24T08:58:14.442241687Z level=info msg="Executing migration" id="Add column paused in alert_definition" policy-db-migrator | kafka | [2024-04-24 08:58:52,157] INFO [LogLoader partition=__consumer_offsets-22, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) policy-pap | [2024-04-24T08:58:50.707+00:00|INFO|ServiceManager|main] Policy PAP started grafana | logger=migrator t=2024-04-24T08:58:14.449029398Z level=info msg="Migration successfully executed" id="Add column paused in alert_definition" duration=6.792821ms policy-db-migrator | > upgrade 0180-jpapdpstatistics_enginestats.sql kafka | [2024-04-24 08:58:52,157] INFO Created log for partition __consumer_offsets-22 in /var/lib/kafka/data/__consumer_offsets-22 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) policy-pap | [2024-04-24T08:58:50.708+00:00|INFO|TimerManager|Thread-10] timer manager state-change started grafana | logger=migrator t=2024-04-24T08:58:14.452528375Z level=info msg="Executing migration" id="drop alert_definition table" policy-db-migrator | -------------- policy-pap | [2024-04-24T08:58:50.708+00:00|INFO|TimerManager|Thread-9] timer manager update started grafana | logger=migrator t=2024-04-24T08:58:14.453456711Z level=info msg="Migration successfully executed" id="drop alert_definition table" duration=928.376µs kafka | [2024-04-24 08:58:52,157] INFO [Partition __consumer_offsets-22 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-22 (kafka.cluster.Partition) policy-pap | [2024-04-24T08:58:50.708+00:00|INFO|PolicyPapApplication|main] Started PolicyPapApplication in 9.893 seconds (process running for 10.499) kafka | [2024-04-24 08:58:52,157] INFO [Partition __consumer_offsets-22 broker=1] Log loaded for partition __consumer_offsets-22 with initial high watermark 0 (kafka.cluster.Partition) policy-db-migrator | ALTER TABLE jpapdpstatistics_enginestats DROP COLUMN timeStamp grafana | logger=migrator t=2024-04-24T08:58:14.45953079Z level=info msg="Executing migration" id="delete alert_definition_version table" policy-pap | [2024-04-24T08:58:51.119+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 2 : {policy-pdp-pap=UNKNOWN_TOPIC_OR_PARTITION} policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-24T08:58:14.459612541Z level=info msg="Migration successfully executed" id="delete alert_definition_version table" duration=82.391µs kafka | [2024-04-24 08:58:52,157] INFO [Broker id=1] Leader __consumer_offsets-22 with topic id Some(3d7pexomSuav55xzl5U12w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) policy-pap | [2024-04-24T08:58:51.119+00:00|INFO|Metadata|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Cluster ID: FWpz7Mn1RFGDoEChXT3QPg policy-db-migrator | grafana | logger=migrator t=2024-04-24T08:58:14.462404727Z level=info msg="Executing migration" id="recreate alert_definition_version table" kafka | [2024-04-24 08:58:52,164] INFO [LogLoader partition=__consumer_offsets-29, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) policy-pap | [2024-04-24T08:58:51.119+00:00|INFO|Metadata|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Cluster ID: FWpz7Mn1RFGDoEChXT3QPg policy-db-migrator | grafana | logger=migrator t=2024-04-24T08:58:14.463826051Z level=info msg="Migration successfully executed" id="recreate alert_definition_version table" duration=1.418954ms kafka | [2024-04-24 08:58:52,164] INFO Created log for partition __consumer_offsets-29 in /var/lib/kafka/data/__consumer_offsets-29 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) policy-pap | [2024-04-24T08:58:51.123+00:00|INFO|Metadata|kafka-producer-network-thread | producer-2] [Producer clientId=producer-2] Cluster ID: FWpz7Mn1RFGDoEChXT3QPg policy-db-migrator | > upgrade 0190-jpapolicyaudit.sql grafana | logger=migrator t=2024-04-24T08:58:14.510959047Z level=info msg="Executing migration" id="add index in alert_definition_version table on alert_definition_id and version columns" kafka | [2024-04-24 08:58:52,164] INFO [Partition __consumer_offsets-29 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-29 (kafka.cluster.Partition) policy-pap | [2024-04-24T08:58:51.161+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-c2598a93-7b5f-4e4e-b23a-b864ffd9a18a-3, groupId=c2598a93-7b5f-4e4e-b23a-b864ffd9a18a] Error while fetching metadata with correlation id 2 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-db-migrator | -------------- kafka | [2024-04-24 08:58:52,164] INFO [Partition __consumer_offsets-29 broker=1] Log loaded for partition __consumer_offsets-29 with initial high watermark 0 (kafka.cluster.Partition) policy-pap | [2024-04-24T08:58:51.161+00:00|INFO|Metadata|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-c2598a93-7b5f-4e4e-b23a-b864ffd9a18a-3, groupId=c2598a93-7b5f-4e4e-b23a-b864ffd9a18a] Cluster ID: FWpz7Mn1RFGDoEChXT3QPg policy-pap | [2024-04-24T08:58:51.230+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 4 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} kafka | [2024-04-24 08:58:52,164] INFO [Broker id=1] Leader __consumer_offsets-29 with topic id Some(3d7pexomSuav55xzl5U12w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
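[editor's note] The repeated UNKNOWN_TOPIC_OR_PARTITION / LEADER_NOT_AVAILABLE warnings from the policy-pap consumers are metadata retries while the policy-pdp-pap topic is still being auto-created; they normally stop once the topic's leader is available. If the topic were pre-created instead of relying on auto-creation, the warnings would not appear. A hedged AdminClient sketch; broker address, partition count and replication factor are assumptions matching a one-broker test stack:

import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.NewTopic;

public class PreCreatePdpPapTopic {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // assumed address
        try (AdminClient admin = AdminClient.create(props)) {
            // Single partition, replication factor 1: enough for a single-broker CSIT environment.
            NewTopic topic = new NewTopic("policy-pdp-pap", 1, (short) 1);
            admin.createTopics(Collections.singletonList(topic)).all().get();
            System.out.println("policy-pdp-pap created");
        }
    }
}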
(state.change.logger) policy-pap | [2024-04-24T08:58:51.233+00:00|INFO|TransactionManager|kafka-producer-network-thread | producer-2] [Producer clientId=producer-2] ProducerId set to 1 with epoch 0 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapolicyaudit (ACTION INT DEFAULT NULL, PDPGROUP VARCHAR(255) NULL, PDPTYPE VARCHAR(255) NULL, TIMESTAMP datetime DEFAULT NULL, USER VARCHAR(255) NULL, ID BIGINT NOT NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_JPAPOLICYAUDIT (ID, name, version)) kafka | [2024-04-24 08:58:52,176] INFO [LogLoader partition=__consumer_offsets-44, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) grafana | logger=migrator t=2024-04-24T08:58:14.513341536Z level=info msg="Migration successfully executed" id="add index in alert_definition_version table on alert_definition_id and version columns" duration=2.385908ms policy-pap | [2024-04-24T08:58:51.233+00:00|INFO|TransactionManager|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] ProducerId set to 0 with epoch 0 policy-db-migrator | -------------- kafka | [2024-04-24 08:58:52,176] INFO Created log for partition __consumer_offsets-44 in /var/lib/kafka/data/__consumer_offsets-44 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) grafana | logger=migrator t=2024-04-24T08:58:14.519747471Z level=info msg="Executing migration" id="add index in alert_definition_version table on alert_definition_uid and version columns" policy-pap | [2024-04-24T08:58:51.303+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-c2598a93-7b5f-4e4e-b23a-b864ffd9a18a-3, groupId=c2598a93-7b5f-4e4e-b23a-b864ffd9a18a] Error while fetching metadata with correlation id 4 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-db-migrator | kafka | [2024-04-24 08:58:52,177] INFO [Partition __consumer_offsets-44 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-44 (kafka.cluster.Partition) grafana | logger=migrator t=2024-04-24T08:58:14.520826539Z level=info msg="Migration successfully executed" id="add index in alert_definition_version table on alert_definition_uid and version columns" duration=1.079078ms policy-pap | [2024-04-24T08:58:51.340+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 6 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-db-migrator | kafka | [2024-04-24 08:58:52,177] INFO [Partition __consumer_offsets-44 broker=1] Log loaded for partition __consumer_offsets-44 with initial high watermark 0 (kafka.cluster.Partition) grafana | logger=migrator t=2024-04-24T08:58:14.526200907Z level=info msg="Executing migration" id="alter alert_definition_version table data column to mediumtext in mysql" policy-pap | [2024-04-24T08:58:51.409+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-c2598a93-7b5f-4e4e-b23a-b864ffd9a18a-3, groupId=c2598a93-7b5f-4e4e-b23a-b864ffd9a18a] Error while fetching metadata with correlation id 6 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-db-migrator | > upgrade 0200-JpaPolicyAuditIndex_timestamp.sql kafka | [2024-04-24 08:58:52,177] INFO [Broker id=1] Leader __consumer_offsets-44 with topic id Some(3d7pexomSuav55xzl5U12w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and 
removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) grafana | logger=migrator t=2024-04-24T08:58:14.52638293Z level=info msg="Migration successfully executed" id="alter alert_definition_version table data column to mediumtext in mysql" duration=144.743µs policy-pap | [2024-04-24T08:58:51.446+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 8 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-db-migrator | -------------- kafka | [2024-04-24 08:58:52,185] INFO [LogLoader partition=__consumer_offsets-14, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) grafana | logger=migrator t=2024-04-24T08:58:14.534432872Z level=info msg="Executing migration" id="drop alert_definition_version table" policy-db-migrator | CREATE INDEX JpaPolicyAuditIndex_timestamp ON jpapolicyaudit(TIMESTAMP) policy-pap | [2024-04-24T08:58:51.515+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-c2598a93-7b5f-4e4e-b23a-b864ffd9a18a-3, groupId=c2598a93-7b5f-4e4e-b23a-b864ffd9a18a] Error while fetching metadata with correlation id 8 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} kafka | [2024-04-24 08:58:52,185] INFO Created log for partition __consumer_offsets-14 in /var/lib/kafka/data/__consumer_offsets-14 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) grafana | logger=migrator t=2024-04-24T08:58:14.536350354Z level=info msg="Migration successfully executed" id="drop alert_definition_version table" duration=1.918202ms policy-db-migrator | -------------- policy-pap | [2024-04-24T08:58:51.553+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 10 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} kafka | [2024-04-24 08:58:52,185] INFO [Partition __consumer_offsets-14 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-14 (kafka.cluster.Partition) grafana | logger=migrator t=2024-04-24T08:58:14.541901475Z level=info msg="Executing migration" id="create alert_instance table" policy-db-migrator | policy-pap | [2024-04-24T08:58:51.622+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-c2598a93-7b5f-4e4e-b23a-b864ffd9a18a-3, groupId=c2598a93-7b5f-4e4e-b23a-b864ffd9a18a] Error while fetching metadata with correlation id 10 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} kafka | [2024-04-24 08:58:52,185] INFO [Partition __consumer_offsets-14 broker=1] Log loaded for partition __consumer_offsets-14 with initial high watermark 0 (kafka.cluster.Partition) policy-db-migrator | grafana | logger=migrator t=2024-04-24T08:58:14.543510182Z level=info msg="Migration successfully executed" id="create alert_instance table" duration=1.610267ms policy-pap | [2024-04-24T08:58:51.659+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 12 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} kafka | [2024-04-24 08:58:52,185] INFO [Broker id=1] Leader __consumer_offsets-14 with topic id Some(3d7pexomSuav55xzl5U12w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . 
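[editor's note] Steps 0190 and 0200 create the jpapolicyaudit table and the JpaPolicyAuditIndex_timestamp index on its TIMESTAMP column, which is what time-ordered audit queries would lean on. For illustration only (placeholder connection details), a JDBC sketch listing the most recent audit records:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class RecentPolicyAudit {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection(
                "jdbc:mariadb://localhost:3306/policyadmin", "policy_user", "policy_password");
             Statement st = conn.createStatement();
             // The 0200 index on TIMESTAMP is the natural access path for this ordering.
             ResultSet rs = st.executeQuery(
                 "SELECT name, version, ACTION, PDPGROUP, TIMESTAMP FROM jpapolicyaudit "
               + "ORDER BY TIMESTAMP DESC LIMIT 10")) {
            while (rs.next()) {
                System.out.printf("%s/%s action=%d group=%s at %s%n",
                        rs.getString("name"), rs.getString("version"),
                        rs.getInt("ACTION"), rs.getString("PDPGROUP"),
                        rs.getTimestamp("TIMESTAMP"));
            }
        }
    }
}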
Previous leader None and previous leader epoch was -1. (state.change.logger) policy-db-migrator | > upgrade 0210-sequence.sql grafana | logger=migrator t=2024-04-24T08:58:14.547973036Z level=info msg="Executing migration" id="add index in alert_instance table on def_org_id, def_uid and current_state columns" policy-pap | [2024-04-24T08:58:51.727+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-c2598a93-7b5f-4e4e-b23a-b864ffd9a18a-3, groupId=c2598a93-7b5f-4e4e-b23a-b864ffd9a18a] Error while fetching metadata with correlation id 12 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} kafka | [2024-04-24 08:58:52,197] INFO [LogLoader partition=__consumer_offsets-23, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-24T08:58:14.549660643Z level=info msg="Migration successfully executed" id="add index in alert_instance table on def_org_id, def_uid and current_state columns" duration=1.690997ms policy-pap | [2024-04-24T08:58:51.765+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 14 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} kafka | [2024-04-24 08:58:52,198] INFO Created log for partition __consumer_offsets-23 in /var/lib/kafka/data/__consumer_offsets-23 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) policy-db-migrator | CREATE TABLE IF NOT EXISTS sequence (SEQ_NAME VARCHAR(50) NOT NULL, SEQ_COUNT DECIMAL(38) DEFAULT NULL, PRIMARY KEY PK_SEQUENCE (SEQ_NAME)) grafana | logger=migrator t=2024-04-24T08:58:14.555439758Z level=info msg="Executing migration" id="add index in alert_instance table on def_org_id, current_state columns" policy-pap | [2024-04-24T08:58:51.832+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-c2598a93-7b5f-4e4e-b23a-b864ffd9a18a-3, groupId=c2598a93-7b5f-4e4e-b23a-b864ffd9a18a] Error while fetching metadata with correlation id 14 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} kafka | [2024-04-24 08:58:52,198] INFO [Partition __consumer_offsets-23 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-23 (kafka.cluster.Partition) policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-24T08:58:14.556543786Z level=info msg="Migration successfully executed" id="add index in alert_instance table on def_org_id, current_state columns" duration=1.103958ms policy-pap | [2024-04-24T08:58:51.869+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 16 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} kafka | [2024-04-24 08:58:52,198] INFO [Partition __consumer_offsets-23 broker=1] Log loaded for partition __consumer_offsets-23 with initial high watermark 0 (kafka.cluster.Partition) policy-db-migrator | grafana | logger=migrator t=2024-04-24T08:58:14.560907768Z level=info msg="Executing migration" id="add column current_state_end to alert_instance" policy-pap | [2024-04-24T08:58:51.939+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-c2598a93-7b5f-4e4e-b23a-b864ffd9a18a-3, groupId=c2598a93-7b5f-4e4e-b23a-b864ffd9a18a] Error while fetching metadata with correlation id 16 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} kafka | [2024-04-24 08:58:52,198] INFO 
[Broker id=1] Leader __consumer_offsets-23 with topic id Some(3d7pexomSuav55xzl5U12w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) grafana | logger=migrator t=2024-04-24T08:58:14.567284883Z level=info msg="Migration successfully executed" id="add column current_state_end to alert_instance" duration=6.375435ms policy-pap | [2024-04-24T08:58:51.975+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 18 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} kafka | [2024-04-24 08:58:52,231] INFO [LogLoader partition=__consumer_offsets-38, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) policy-db-migrator | grafana | logger=migrator t=2024-04-24T08:58:14.619909408Z level=info msg="Executing migration" id="remove index def_org_id, def_uid, current_state on alert_instance" policy-pap | [2024-04-24T08:58:52.044+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-c2598a93-7b5f-4e4e-b23a-b864ffd9a18a-3, groupId=c2598a93-7b5f-4e4e-b23a-b864ffd9a18a] Error while fetching metadata with correlation id 18 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} kafka | [2024-04-24 08:58:52,232] INFO Created log for partition __consumer_offsets-38 in /var/lib/kafka/data/__consumer_offsets-38 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) policy-db-migrator | > upgrade 0220-sequence.sql grafana | logger=migrator t=2024-04-24T08:58:14.621127338Z level=info msg="Migration successfully executed" id="remove index def_org_id, def_uid, current_state on alert_instance" duration=1.22232ms policy-pap | [2024-04-24T08:58:52.079+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 20 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} kafka | [2024-04-24 08:58:52,232] INFO [Partition __consumer_offsets-38 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-38 (kafka.cluster.Partition) policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-24T08:58:14.625197496Z level=info msg="Executing migration" id="remove index def_org_id, current_state on alert_instance" policy-pap | [2024-04-24T08:58:52.152+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-c2598a93-7b5f-4e4e-b23a-b864ffd9a18a-3, groupId=c2598a93-7b5f-4e4e-b23a-b864ffd9a18a] Error while fetching metadata with correlation id 20 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} kafka | [2024-04-24 08:58:52,232] INFO [Partition __consumer_offsets-38 broker=1] Log loaded for partition __consumer_offsets-38 with initial high watermark 0 (kafka.cluster.Partition) policy-db-migrator | INSERT INTO sequence(SEQ_NAME, SEQ_COUNT) VALUES('SEQ_GEN', (SELECT IFNULL(max(id),0) FROM pdpstatistics)) grafana | logger=migrator t=2024-04-24T08:58:14.626201422Z level=info msg="Migration successfully executed" id="remove index def_org_id, current_state on alert_instance" duration=1.004196ms policy-pap | [2024-04-24T08:58:52.188+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 22 : 
{policy-pdp-pap=LEADER_NOT_AVAILABLE} kafka | [2024-04-24 08:58:52,233] INFO [Broker id=1] Leader __consumer_offsets-38 with topic id Some(3d7pexomSuav55xzl5U12w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-24T08:58:14.6388504Z level=info msg="Executing migration" id="rename def_org_id to rule_org_id in alert_instance" policy-pap | [2024-04-24T08:58:52.261+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-c2598a93-7b5f-4e4e-b23a-b864ffd9a18a-3, groupId=c2598a93-7b5f-4e4e-b23a-b864ffd9a18a] Error while fetching metadata with correlation id 22 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} kafka | [2024-04-24 08:58:52,239] INFO [LogLoader partition=__consumer_offsets-8, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) policy-db-migrator | grafana | logger=migrator t=2024-04-24T08:58:14.670324917Z level=info msg="Migration successfully executed" id="rename def_org_id to rule_org_id in alert_instance" duration=31.475937ms policy-pap | [2024-04-24T08:58:52.292+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 24 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} kafka | [2024-04-24 08:58:52,240] INFO Created log for partition __consumer_offsets-8 in /var/lib/kafka/data/__consumer_offsets-8 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) policy-db-migrator | grafana | logger=migrator t=2024-04-24T08:58:14.674755601Z level=info msg="Executing migration" id="rename def_uid to rule_uid in alert_instance" policy-pap | [2024-04-24T08:58:52.372+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-c2598a93-7b5f-4e4e-b23a-b864ffd9a18a-3, groupId=c2598a93-7b5f-4e4e-b23a-b864ffd9a18a] Error while fetching metadata with correlation id 24 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} kafka | [2024-04-24 08:58:52,240] INFO [Partition __consumer_offsets-8 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-8 (kafka.cluster.Partition) policy-db-migrator | > upgrade 0100-jpatoscapolicy_targets.sql grafana | logger=migrator t=2024-04-24T08:58:14.700814889Z level=info msg="Migration successfully executed" id="rename def_uid to rule_uid in alert_instance" duration=26.060698ms policy-pap | [2024-04-24T08:58:52.396+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 26 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} kafka | [2024-04-24 08:58:52,240] INFO [Partition __consumer_offsets-8 broker=1] Log loaded for partition __consumer_offsets-8 with initial high watermark 0 (kafka.cluster.Partition) policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-24T08:58:14.704358278Z level=info msg="Executing migration" id="add index rule_org_id, rule_uid, current_state on alert_instance" policy-pap | [2024-04-24T08:58:52.480+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-c2598a93-7b5f-4e4e-b23a-b864ffd9a18a-3, groupId=c2598a93-7b5f-4e4e-b23a-b864ffd9a18a] Error while fetching metadata with correlation id 26 : 
{policy-pdp-pap=LEADER_NOT_AVAILABLE} kafka | [2024-04-24 08:58:52,240] INFO [Broker id=1] Leader __consumer_offsets-8 with topic id Some(3d7pexomSuav55xzl5U12w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) policy-db-migrator | ALTER TABLE jpatoscapolicy_targets ADD COLUMN toscaPolicyName VARCHAR(120) NOT NULL, ADD COLUMN toscaPolicyVersion VARCHAR(20) NOT NULL, ADD CONSTRAINT PK_JPATOSCAPOLICY_TARGETS PRIMARY KEY (toscaPolicyName, toscaPolicyVersion) grafana | logger=migrator t=2024-04-24T08:58:14.705359954Z level=info msg="Migration successfully executed" id="add index rule_org_id, rule_uid, current_state on alert_instance" duration=1.001096ms policy-pap | [2024-04-24T08:58:52.534+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Discovered group coordinator kafka:9092 (id: 2147483646 rack: null) kafka | [2024-04-24 08:58:52,248] INFO [LogLoader partition=policy-pdp-pap-0, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-24T08:58:14.709106566Z level=info msg="Executing migration" id="add index rule_org_id, current_state on alert_instance" policy-pap | [2024-04-24T08:58:52.540+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] (Re-)joining group kafka | [2024-04-24 08:58:52,248] INFO Created log for partition policy-pdp-pap-0 in /var/lib/kafka/data/policy-pdp-pap-0 with properties {} (kafka.log.LogManager) policy-db-migrator | grafana | logger=migrator t=2024-04-24T08:58:14.710146373Z level=info msg="Migration successfully executed" id="add index rule_org_id, current_state on alert_instance" duration=1.039667ms policy-pap | [2024-04-24T08:58:52.565+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Request joining group due to: need to re-join with the given member-id: consumer-policy-pap-4-b2dc9f1d-b06d-4078-927e-cc7dc2d2688c kafka | [2024-04-24 08:58:52,249] INFO [Partition policy-pdp-pap-0 broker=1] No checkpointed highwatermark is found for partition policy-pdp-pap-0 (kafka.cluster.Partition) policy-db-migrator | grafana | logger=migrator t=2024-04-24T08:58:14.718565481Z level=info msg="Executing migration" id="add current_reason column related to current_state" policy-pap | [2024-04-24T08:58:52.566+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Request joining group due to: rebalance failed due to 'The group member needs to have a valid member id before actually entering a consumer group.' 
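Note on the repeated WARN lines above: the "Error while fetching metadata ... {policy-pdp-pap=LEADER_NOT_AVAILABLE}" entries are the two policy-pap consumers asking for metadata on the freshly created policy-pdp-pap topic before the broker has finished electing a leader; they stop on their own once the state.change.logger entries report "Leader policy-pdp-pap-0 ... starts at leader epoch 0". A minimal sketch of how a test client could wait for that election with the Kafka AdminClient follows; the broker address kafka:9092 and topic name are taken from the log, while the class name, retry count and timeout are assumptions of this note, not part of the CSIT code.

    import java.util.Collections;
    import java.util.Properties;
    import java.util.concurrent.TimeUnit;
    import org.apache.kafka.clients.admin.AdminClient;
    import org.apache.kafka.clients.admin.AdminClientConfig;
    import org.apache.kafka.clients.admin.TopicDescription;

    public class WaitForPdpPapLeader {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");
            try (AdminClient admin = AdminClient.create(props)) {
                for (int attempt = 0; attempt < 60; attempt++) {
                    try {
                        TopicDescription desc = admin
                                .describeTopics(Collections.singletonList("policy-pdp-pap"))
                                .all().get().get("policy-pdp-pap");
                        // LEADER_NOT_AVAILABLE corresponds to a partition that has no usable leader yet.
                        boolean ready = desc.partitions().stream()
                                .allMatch(p -> p.leader() != null && !p.leader().isEmpty());
                        if (ready) {
                            System.out.println("policy-pdp-pap leader elected");
                            return;
                        }
                    } catch (Exception e) {
                        // Topic metadata may not exist yet; retry.
                    }
                    TimeUnit.MILLISECONDS.sleep(500);
                }
            }
        }
    }

In the CSIT run no such polling is needed, since the consumers simply retry their metadata fetches until the leader appears; the sketch only makes that retry behaviour explicit.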
(MemberIdRequiredException) kafka | [2024-04-24 08:58:52,249] INFO [Partition policy-pdp-pap-0 broker=1] Log loaded for partition policy-pdp-pap-0 with initial high watermark 0 (kafka.cluster.Partition) policy-db-migrator | > upgrade 0110-jpatoscapolicytype_targets.sql grafana | logger=migrator t=2024-04-24T08:58:14.726825867Z level=info msg="Migration successfully executed" id="add current_reason column related to current_state" duration=8.252636ms policy-pap | [2024-04-24T08:58:52.566+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] (Re-)joining group kafka | [2024-04-24 08:58:52,249] INFO [Broker id=1] Leader policy-pdp-pap-0 with topic id Some(UfYjnzzkRPeYang4gRgPIg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-24T08:58:14.731009456Z level=info msg="Executing migration" id="add result_fingerprint column to alert_instance" policy-pap | [2024-04-24T08:58:52.583+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-c2598a93-7b5f-4e4e-b23a-b864ffd9a18a-3, groupId=c2598a93-7b5f-4e4e-b23a-b864ffd9a18a] Discovered group coordinator kafka:9092 (id: 2147483646 rack: null) policy-db-migrator | ALTER TABLE jpatoscapolicytype_targets ADD COLUMN toscaPolicyTypeName VARCHAR(120) NOT NULL, ADD COLUMN toscaPolicyTypeVersion VARCHAR(20) NOT NULL, ADD CONSTRAINT PK_JPATOSCAPOLICYTYPE_TARGETS PRIMARY KEY (toscaPolicyTypeName, toscaPolicyTypeVersion) kafka | [2024-04-24 08:58:52,255] INFO [LogLoader partition=__consumer_offsets-45, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) grafana | logger=migrator t=2024-04-24T08:58:14.740350679Z level=info msg="Migration successfully executed" id="add result_fingerprint column to alert_instance" duration=9.340823ms policy-pap | [2024-04-24T08:58:52.586+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-c2598a93-7b5f-4e4e-b23a-b864ffd9a18a-3, groupId=c2598a93-7b5f-4e4e-b23a-b864ffd9a18a] (Re-)joining group policy-db-migrator | -------------- kafka | [2024-04-24 08:58:52,256] INFO Created log for partition __consumer_offsets-45 in /var/lib/kafka/data/__consumer_offsets-45 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) grafana | logger=migrator t=2024-04-24T08:58:14.743826736Z level=info msg="Executing migration" id="create alert_rule table" policy-pap | [2024-04-24T08:58:52.598+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-c2598a93-7b5f-4e4e-b23a-b864ffd9a18a-3, groupId=c2598a93-7b5f-4e4e-b23a-b864ffd9a18a] Request joining group due to: need to re-join with the given member-id: consumer-c2598a93-7b5f-4e4e-b23a-b864ffd9a18a-3-2e3abf31-158b-4904-8a97-f271619f738d policy-db-migrator | kafka | [2024-04-24 08:58:52,256] INFO [Partition __consumer_offsets-45 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-45 (kafka.cluster.Partition) grafana | logger=migrator t=2024-04-24T08:58:14.74524688Z level=info msg="Migration successfully executed" id="create alert_rule table" duration=1.416814ms policy-pap | [2024-04-24T08:58:52.598+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] 
[Consumer clientId=consumer-c2598a93-7b5f-4e4e-b23a-b864ffd9a18a-3, groupId=c2598a93-7b5f-4e4e-b23a-b864ffd9a18a] Request joining group due to: rebalance failed due to 'The group member needs to have a valid member id before actually entering a consumer group.' (MemberIdRequiredException) policy-db-migrator | kafka | [2024-04-24 08:58:52,256] INFO [Partition __consumer_offsets-45 broker=1] Log loaded for partition __consumer_offsets-45 with initial high watermark 0 (kafka.cluster.Partition) grafana | logger=migrator t=2024-04-24T08:58:14.74953087Z level=info msg="Executing migration" id="add index in alert_rule on org_id and title columns" policy-pap | [2024-04-24T08:58:52.598+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-c2598a93-7b5f-4e4e-b23a-b864ffd9a18a-3, groupId=c2598a93-7b5f-4e4e-b23a-b864ffd9a18a] (Re-)joining group policy-db-migrator | > upgrade 0120-toscatrigger.sql kafka | [2024-04-24 08:58:52,256] INFO [Broker id=1] Leader __consumer_offsets-45 with topic id Some(3d7pexomSuav55xzl5U12w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) grafana | logger=migrator t=2024-04-24T08:58:14.750592838Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id and title columns" duration=1.061508ms policy-pap | [2024-04-24T08:58:55.590+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Successfully joined group with generation Generation{generationId=1, memberId='consumer-policy-pap-4-b2dc9f1d-b06d-4078-927e-cc7dc2d2688c', protocol='range'} policy-db-migrator | -------------- kafka | [2024-04-24 08:58:52,277] INFO [LogLoader partition=__consumer_offsets-15, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) grafana | logger=migrator t=2024-04-24T08:58:14.753801541Z level=info msg="Executing migration" id="add index in alert_rule on org_id and uid columns" policy-db-migrator | DROP TABLE IF EXISTS toscatrigger policy-pap | [2024-04-24T08:58:55.599+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Finished assignment for group at generation 1: {consumer-policy-pap-4-b2dc9f1d-b06d-4078-927e-cc7dc2d2688c=Assignment(partitions=[policy-pdp-pap-0])} kafka | [2024-04-24 08:58:52,278] INFO Created log for partition __consumer_offsets-15 in /var/lib/kafka/data/__consumer_offsets-15 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) grafana | logger=migrator t=2024-04-24T08:58:14.754942599Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id and uid columns" duration=1.140688ms policy-db-migrator | -------------- policy-pap | [2024-04-24T08:58:55.605+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-c2598a93-7b5f-4e4e-b23a-b864ffd9a18a-3, groupId=c2598a93-7b5f-4e4e-b23a-b864ffd9a18a] Successfully joined group with generation Generation{generationId=1, memberId='consumer-c2598a93-7b5f-4e4e-b23a-b864ffd9a18a-3-2e3abf31-158b-4904-8a97-f271619f738d', protocol='range'} kafka | [2024-04-24 08:58:52,278] INFO [Partition __consumer_offsets-15 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-15 
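The MemberIdRequiredException and "(Re-)joining group" INFO lines above are the normal first JoinGroup round trip: the broker rejects the initial request so the consumer can retry with the member id it was just assigned, after which both consumers join generation 1 and are handed policy-pdp-pap-0. A minimal consumer with the same group semantics might look like the sketch below; the group id policy-pap, topic name and broker address come from the log, while the string deserializers and poll timeout are assumptions.

    import java.time.Duration;
    import java.util.Collections;
    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerConfig;
    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.common.serialization.StringDeserializer;

    public class PolicyPapConsumerSketch {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");
            props.put(ConsumerConfig.GROUP_ID_CONFIG, "policy-pap");
            props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
            props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                consumer.subscribe(Collections.singletonList("policy-pdp-pap"));
                // The first poll drives the JoinGroup/SyncGroup exchange logged above
                // (member-id handshake, then assignment of policy-pdp-pap-0).
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(5));
                for (ConsumerRecord<String, String> record : records) {
                    System.out.printf("partition=%d offset=%d value=%s%n",
                            record.partition(), record.offset(), record.value());
                }
            }
        }
    }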
(kafka.cluster.Partition) grafana | logger=migrator t=2024-04-24T08:58:14.758561319Z level=info msg="Executing migration" id="add index in alert_rule on org_id, namespace_uid, group_uid columns" policy-db-migrator | policy-pap | [2024-04-24T08:58:55.606+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-c2598a93-7b5f-4e4e-b23a-b864ffd9a18a-3, groupId=c2598a93-7b5f-4e4e-b23a-b864ffd9a18a] Finished assignment for group at generation 1: {consumer-c2598a93-7b5f-4e4e-b23a-b864ffd9a18a-3-2e3abf31-158b-4904-8a97-f271619f738d=Assignment(partitions=[policy-pdp-pap-0])} kafka | [2024-04-24 08:58:52,278] INFO [Partition __consumer_offsets-15 broker=1] Log loaded for partition __consumer_offsets-15 with initial high watermark 0 (kafka.cluster.Partition) grafana | logger=migrator t=2024-04-24T08:58:14.75985723Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, namespace_uid, group_uid columns" duration=1.295441ms policy-db-migrator | policy-pap | [2024-04-24T08:58:55.622+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Successfully synced group in generation Generation{generationId=1, memberId='consumer-policy-pap-4-b2dc9f1d-b06d-4078-927e-cc7dc2d2688c', protocol='range'} kafka | [2024-04-24 08:58:52,278] INFO [Broker id=1] Leader __consumer_offsets-15 with topic id Some(3d7pexomSuav55xzl5U12w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) grafana | logger=migrator t=2024-04-24T08:58:14.764292334Z level=info msg="Executing migration" id="alter alert_rule table data column to mediumtext in mysql" policy-db-migrator | > upgrade 0130-jpatoscapolicytype_triggers.sql policy-pap | [2024-04-24T08:58:55.622+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-c2598a93-7b5f-4e4e-b23a-b864ffd9a18a-3, groupId=c2598a93-7b5f-4e4e-b23a-b864ffd9a18a] Successfully synced group in generation Generation{generationId=1, memberId='consumer-c2598a93-7b5f-4e4e-b23a-b864ffd9a18a-3-2e3abf31-158b-4904-8a97-f271619f738d', protocol='range'} kafka | [2024-04-24 08:58:52,286] INFO [LogLoader partition=__consumer_offsets-30, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) grafana | logger=migrator t=2024-04-24T08:58:14.764386515Z level=info msg="Migration successfully executed" id="alter alert_rule table data column to mediumtext in mysql" duration=94.611µs policy-db-migrator | -------------- policy-pap | [2024-04-24T08:58:55.622+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-c2598a93-7b5f-4e4e-b23a-b864ffd9a18a-3, groupId=c2598a93-7b5f-4e4e-b23a-b864ffd9a18a] Notifying assignor about the new Assignment(partitions=[policy-pdp-pap-0]) kafka | [2024-04-24 08:58:52,287] INFO Created log for partition __consumer_offsets-30 in /var/lib/kafka/data/__consumer_offsets-30 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) grafana | logger=migrator t=2024-04-24T08:58:14.767658358Z level=info msg="Executing migration" id="add column for to alert_rule" policy-db-migrator | ALTER TABLE jpatoscapolicytype_triggers MODIFY COLUMN triggers LONGBLOB policy-pap | 
[2024-04-24T08:58:55.622+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Notifying assignor about the new Assignment(partitions=[policy-pdp-pap-0]) kafka | [2024-04-24 08:58:52,287] INFO [Partition __consumer_offsets-30 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-30 (kafka.cluster.Partition) grafana | logger=migrator t=2024-04-24T08:58:14.77382165Z level=info msg="Migration successfully executed" id="add column for to alert_rule" duration=6.161852ms policy-db-migrator | -------------- policy-pap | [2024-04-24T08:58:55.625+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-c2598a93-7b5f-4e4e-b23a-b864ffd9a18a-3, groupId=c2598a93-7b5f-4e4e-b23a-b864ffd9a18a] Adding newly assigned partitions: policy-pdp-pap-0 kafka | [2024-04-24 08:58:52,287] INFO [Partition __consumer_offsets-30 broker=1] Log loaded for partition __consumer_offsets-30 with initial high watermark 0 (kafka.cluster.Partition) grafana | logger=migrator t=2024-04-24T08:58:14.777883907Z level=info msg="Executing migration" id="add column annotations to alert_rule" policy-db-migrator | policy-pap | [2024-04-24T08:58:55.626+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Adding newly assigned partitions: policy-pdp-pap-0 kafka | [2024-04-24 08:58:52,287] INFO [Broker id=1] Leader __consumer_offsets-30 with topic id Some(3d7pexomSuav55xzl5U12w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) grafana | logger=migrator t=2024-04-24T08:58:14.78297086Z level=info msg="Migration successfully executed" id="add column annotations to alert_rule" duration=5.086923ms policy-db-migrator | policy-pap | [2024-04-24T08:58:55.661+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-c2598a93-7b5f-4e4e-b23a-b864ffd9a18a-3, groupId=c2598a93-7b5f-4e4e-b23a-b864ffd9a18a] Found no committed offset for partition policy-pdp-pap-0 kafka | [2024-04-24 08:58:52,295] INFO [LogLoader partition=__consumer_offsets-0, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) grafana | logger=migrator t=2024-04-24T08:58:14.787417834Z level=info msg="Executing migration" id="add column labels to alert_rule" policy-db-migrator | > upgrade 0140-toscaparameter.sql policy-pap | [2024-04-24T08:58:55.662+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Found no committed offset for partition policy-pdp-pap-0 kafka | [2024-04-24 08:58:52,295] INFO Created log for partition __consumer_offsets-0 in /var/lib/kafka/data/__consumer_offsets-0 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) grafana | logger=migrator t=2024-04-24T08:58:14.791600003Z level=info msg="Migration successfully executed" id="add column labels to alert_rule" duration=4.181539ms policy-db-migrator | -------------- policy-pap | [2024-04-24T08:58:55.674+00:00|INFO|SubscriptionState|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-c2598a93-7b5f-4e4e-b23a-b864ffd9a18a-3, groupId=c2598a93-7b5f-4e4e-b23a-b864ffd9a18a] Resetting offset for partition policy-pdp-pap-0 to position 
FetchPosition{offset=1, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[kafka:9092 (id: 1 rack: null)], epoch=0}}. kafka | [2024-04-24 08:58:52,295] INFO [Partition __consumer_offsets-0 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-0 (kafka.cluster.Partition) grafana | logger=migrator t=2024-04-24T08:58:14.79508835Z level=info msg="Executing migration" id="remove unique index from alert_rule on org_id, title columns" policy-db-migrator | DROP TABLE IF EXISTS toscaparameter policy-pap | [2024-04-24T08:58:55.675+00:00|INFO|SubscriptionState|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Resetting offset for partition policy-pdp-pap-0 to position FetchPosition{offset=1, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[kafka:9092 (id: 1 rack: null)], epoch=0}}. kafka | [2024-04-24 08:58:52,295] INFO [Partition __consumer_offsets-0 broker=1] Log loaded for partition __consumer_offsets-0 with initial high watermark 0 (kafka.cluster.Partition) grafana | logger=migrator t=2024-04-24T08:58:14.795771842Z level=info msg="Migration successfully executed" id="remove unique index from alert_rule on org_id, title columns" duration=681.572µs policy-db-migrator | -------------- policy-pap | [2024-04-24T08:58:59.736+00:00|INFO|[/policy/pap/v1]|http-nio-6969-exec-3] Initializing Spring DispatcherServlet 'dispatcherServlet' kafka | [2024-04-24 08:58:52,296] INFO [Broker id=1] Leader __consumer_offsets-0 with topic id Some(3d7pexomSuav55xzl5U12w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) grafana | logger=migrator t=2024-04-24T08:58:14.798886432Z level=info msg="Executing migration" id="add index in alert_rule on org_id, namespase_uid and title columns" policy-db-migrator | policy-pap | [2024-04-24T08:58:59.737+00:00|INFO|DispatcherServlet|http-nio-6969-exec-3] Initializing Servlet 'dispatcherServlet' kafka | [2024-04-24 08:58:52,306] INFO [LogLoader partition=__consumer_offsets-35, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) grafana | logger=migrator t=2024-04-24T08:58:14.799753776Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, namespase_uid and title columns" duration=866.804µs policy-db-migrator | policy-pap | [2024-04-24T08:58:59.738+00:00|INFO|DispatcherServlet|http-nio-6969-exec-3] Completed initialization in 1 ms kafka | [2024-04-24 08:58:52,307] INFO Created log for partition __consumer_offsets-35 in /var/lib/kafka/data/__consumer_offsets-35 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) grafana | logger=migrator t=2024-04-24T08:58:14.804587686Z level=info msg="Executing migration" id="add dashboard_uid column to alert_rule" policy-db-migrator | > upgrade 0150-toscaproperty.sql policy-pap | [2024-04-24T08:59:12.418+00:00|INFO|OrderedServiceImpl|KAFKA-source-policy-pdp-pap] ***** OrderedServiceImpl implementers: kafka | [2024-04-24 08:58:52,307] INFO [Partition __consumer_offsets-35 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-35 (kafka.cluster.Partition) grafana | logger=migrator t=2024-04-24T08:58:14.808818136Z level=info msg="Migration successfully executed" id="add dashboard_uid column 
to alert_rule" duration=4.23165ms policy-db-migrator | -------------- policy-pap | [] kafka | [2024-04-24 08:58:52,307] INFO [Partition __consumer_offsets-35 broker=1] Log loaded for partition __consumer_offsets-35 with initial high watermark 0 (kafka.cluster.Partition) grafana | logger=migrator t=2024-04-24T08:58:14.812038589Z level=info msg="Executing migration" id="add panel_id column to alert_rule" policy-pap | [2024-04-24T08:59:12.419+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] kafka | [2024-04-24 08:58:52,307] INFO [Broker id=1] Leader __consumer_offsets-35 with topic id Some(3d7pexomSuav55xzl5U12w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) grafana | logger=migrator t=2024-04-24T08:58:14.816521402Z level=info msg="Migration successfully executed" id="add panel_id column to alert_rule" duration=4.482153ms grafana | logger=migrator t=2024-04-24T08:58:14.82063321Z level=info msg="Executing migration" id="add index in alert_rule on org_id, dashboard_uid and panel_id columns" policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"4e120101-1cee-4165-9b1d-d46c107a0c1e","timestampMs":1713949152385,"name":"apex-f41e458d-42a5-4aba-9a0c-ce74df53e6f4","pdpGroup":"defaultGroup"} policy-db-migrator | DROP TABLE IF EXISTS jpatoscaproperty_constraints kafka | [2024-04-24 08:58:52,317] INFO [LogLoader partition=__consumer_offsets-5, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) grafana | logger=migrator t=2024-04-24T08:58:14.821633526Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, dashboard_uid and panel_id columns" duration=1.000036ms grafana | logger=migrator t=2024-04-24T08:58:14.826044209Z level=info msg="Executing migration" id="add rule_group_idx column to alert_rule" policy-pap | [2024-04-24T08:59:12.419+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-db-migrator | -------------- kafka | [2024-04-24 08:58:52,318] INFO Created log for partition __consumer_offsets-5 in /var/lib/kafka/data/__consumer_offsets-5 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) grafana | logger=migrator t=2024-04-24T08:58:14.833635864Z level=info msg="Migration successfully executed" id="add rule_group_idx column to alert_rule" duration=7.590875ms policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"4e120101-1cee-4165-9b1d-d46c107a0c1e","timestampMs":1713949152385,"name":"apex-f41e458d-42a5-4aba-9a0c-ce74df53e6f4","pdpGroup":"defaultGroup"} policy-db-migrator | kafka | [2024-04-24 08:58:52,318] INFO [Partition __consumer_offsets-5 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-5 (kafka.cluster.Partition) grafana | logger=migrator t=2024-04-24T08:58:14.837139432Z level=info msg="Executing migration" id="add is_paused column to alert_rule table" policy-pap | [2024-04-24T08:59:12.426+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-pdp-pap] no listeners for autonomous message of type PdpStatus policy-db-migrator | -------------- kafka | [2024-04-24 08:58:52,318] INFO [Partition __consumer_offsets-5 
broker=1] Log loaded for partition __consumer_offsets-5 with initial high watermark 0 (kafka.cluster.Partition) grafana | logger=migrator t=2024-04-24T08:58:14.843029558Z level=info msg="Migration successfully executed" id="add is_paused column to alert_rule table" duration=5.893076ms policy-pap | [2024-04-24T08:59:12.497+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-f41e458d-42a5-4aba-9a0c-ce74df53e6f4 PdpUpdate starting policy-db-migrator | DROP TABLE IF EXISTS jpatoscaproperty_metadata kafka | [2024-04-24 08:58:52,318] INFO [Broker id=1] Leader __consumer_offsets-5 with topic id Some(3d7pexomSuav55xzl5U12w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) grafana | logger=migrator t=2024-04-24T08:58:14.945620476Z level=info msg="Executing migration" id="fix is_paused column for alert_rule table" policy-pap | [2024-04-24T08:59:12.497+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-f41e458d-42a5-4aba-9a0c-ce74df53e6f4 PdpUpdate starting listener policy-db-migrator | -------------- kafka | [2024-04-24 08:58:52,327] INFO [LogLoader partition=__consumer_offsets-20, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) grafana | logger=migrator t=2024-04-24T08:58:14.945791039Z level=info msg="Migration successfully executed" id="fix is_paused column for alert_rule table" duration=175.183µs policy-pap | [2024-04-24T08:59:12.497+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-f41e458d-42a5-4aba-9a0c-ce74df53e6f4 PdpUpdate starting timer policy-db-migrator | kafka | [2024-04-24 08:58:52,328] INFO Created log for partition __consumer_offsets-20 in /var/lib/kafka/data/__consumer_offsets-20 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) grafana | logger=migrator t=2024-04-24T08:58:14.951017354Z level=info msg="Executing migration" id="create alert_rule_version table" policy-pap | [2024-04-24T08:59:12.498+00:00|INFO|TimerManager|KAFKA-source-policy-heartbeat] update timer registered Timer [name=77293ae2-da7e-415d-9361-5e79c680736b, expireMs=1713949182498] policy-db-migrator | -------------- kafka | [2024-04-24 08:58:52,329] INFO [Partition __consumer_offsets-20 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-20 (kafka.cluster.Partition) grafana | logger=migrator t=2024-04-24T08:58:14.952910036Z level=info msg="Migration successfully executed" id="create alert_rule_version table" duration=1.892832ms policy-pap | [2024-04-24T08:59:12.499+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-f41e458d-42a5-4aba-9a0c-ce74df53e6f4 PdpUpdate starting enqueue policy-db-migrator | DROP TABLE IF EXISTS toscaproperty kafka | [2024-04-24 08:58:52,329] INFO [Partition __consumer_offsets-20 broker=1] Log loaded for partition __consumer_offsets-20 with initial high watermark 0 (kafka.cluster.Partition) grafana | logger=migrator t=2024-04-24T08:58:14.958715181Z level=info msg="Executing migration" id="add index in alert_rule_version table on rule_org_id, rule_uid and version columns" policy-pap | [2024-04-24T08:59:12.499+00:00|INFO|TimerManager|Thread-9] update timer waiting 29999ms Timer [name=77293ae2-da7e-415d-9361-5e79c680736b, expireMs=1713949182498] kafka | [2024-04-24 08:58:52,329] INFO [Broker id=1] Leader __consumer_offsets-20 
with topic id Some(3d7pexomSuav55xzl5U12w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) grafana | logger=migrator t=2024-04-24T08:58:14.959780839Z level=info msg="Migration successfully executed" id="add index in alert_rule_version table on rule_org_id, rule_uid and version columns" duration=1.065468ms policy-db-migrator | -------------- policy-pap | [2024-04-24T08:59:12.501+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap] kafka | [2024-04-24 08:58:52,342] INFO [LogLoader partition=__consumer_offsets-27, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) policy-db-migrator | grafana | logger=migrator t=2024-04-24T08:58:14.963222516Z level=info msg="Executing migration" id="add index in alert_rule_version table on rule_org_id, rule_namespace_uid and rule_group columns" policy-pap | {"source":"pap-43e719fa-ff69-4964-bc31-d2528becc332","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"messageName":"PDP_UPDATE","requestId":"77293ae2-da7e-415d-9361-5e79c680736b","timestampMs":1713949152480,"name":"apex-f41e458d-42a5-4aba-9a0c-ce74df53e6f4","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} kafka | [2024-04-24 08:58:52,344] INFO Created log for partition __consumer_offsets-27 in /var/lib/kafka/data/__consumer_offsets-27 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) policy-db-migrator | grafana | logger=migrator t=2024-04-24T08:58:14.964308243Z level=info msg="Migration successfully executed" id="add index in alert_rule_version table on rule_org_id, rule_namespace_uid and rule_group columns" duration=1.085317ms policy-pap | [2024-04-24T08:59:12.501+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-f41e458d-42a5-4aba-9a0c-ce74df53e6f4 PdpUpdate started kafka | [2024-04-24 08:58:52,344] INFO [Partition __consumer_offsets-27 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-27 (kafka.cluster.Partition) grafana | logger=migrator t=2024-04-24T08:58:14.968731977Z level=info msg="Executing migration" id="alter alert_rule_version table data column to mediumtext in mysql" policy-pap | [2024-04-24T08:59:12.534+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-db-migrator | > upgrade 0160-jpapolicyaudit_pk.sql grafana | logger=migrator t=2024-04-24T08:58:14.968814728Z level=info msg="Migration successfully executed" id="alter alert_rule_version table data column to mediumtext in mysql" duration=83.271µs policy-pap | {"source":"pap-43e719fa-ff69-4964-bc31-d2528becc332","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"messageName":"PDP_UPDATE","requestId":"77293ae2-da7e-415d-9361-5e79c680736b","timestampMs":1713949152480,"name":"apex-f41e458d-42a5-4aba-9a0c-ce74df53e6f4","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} kafka | [2024-04-24 08:58:52,345] INFO [Partition __consumer_offsets-27 broker=1] Log loaded for partition __consumer_offsets-27 with initial high watermark 0 (kafka.cluster.Partition) grafana | logger=migrator t=2024-04-24T08:58:14.972045Z level=info msg="Executing migration" id="add column for to alert_rule_version" kafka | [2024-04-24 08:58:52,345] INFO [Broker id=1] Leader __consumer_offsets-27 with topic id Some(3d7pexomSuav55xzl5U12w) starts at leader epoch 0 from offset 0 with partition epoch 
0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) policy-db-migrator | -------------- policy-pap | [2024-04-24T08:59:12.535+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_UPDATE grafana | logger=migrator t=2024-04-24T08:58:14.981893132Z level=info msg="Migration successfully executed" id="add column for to alert_rule_version" duration=9.848222ms kafka | [2024-04-24 08:58:52,351] INFO [LogLoader partition=__consumer_offsets-42, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) policy-db-migrator | ALTER TABLE jpapolicyaudit DROP PRIMARY KEY policy-pap | [2024-04-24T08:59:12.535+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] grafana | logger=migrator t=2024-04-24T08:58:15.038606751Z level=info msg="Executing migration" id="add column annotations to alert_rule_version" kafka | [2024-04-24 08:58:52,353] INFO Created log for partition __consumer_offsets-42 in /var/lib/kafka/data/__consumer_offsets-42 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-24T08:58:15.044785012Z level=info msg="Migration successfully executed" id="add column annotations to alert_rule_version" duration=6.181501ms policy-pap | {"source":"pap-43e719fa-ff69-4964-bc31-d2528becc332","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"messageName":"PDP_UPDATE","requestId":"77293ae2-da7e-415d-9361-5e79c680736b","timestampMs":1713949152480,"name":"apex-f41e458d-42a5-4aba-9a0c-ce74df53e6f4","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} kafka | [2024-04-24 08:58:52,353] INFO [Partition __consumer_offsets-42 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-42 (kafka.cluster.Partition) policy-db-migrator | grafana | logger=migrator t=2024-04-24T08:58:15.052052232Z level=info msg="Executing migration" id="add column labels to alert_rule_version" policy-pap | [2024-04-24T08:59:12.535+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_UPDATE kafka | [2024-04-24 08:58:52,353] INFO [Partition __consumer_offsets-42 broker=1] Log loaded for partition __consumer_offsets-42 with initial high watermark 0 (kafka.cluster.Partition) policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-24T08:58:15.060487059Z level=info msg="Migration successfully executed" id="add column labels to alert_rule_version" duration=8.424728ms policy-pap | [2024-04-24T08:59:12.553+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] kafka | [2024-04-24 08:58:52,353] INFO [Broker id=1] Leader __consumer_offsets-42 with topic id Some(3d7pexomSuav55xzl5U12w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
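The "Found no committed offset for partition policy-pdp-pap-0" lines further up, followed by the reset to FetchPosition{offset=1, ...}, are the auto.offset.reset path: neither consumer group has committed anything yet, so both start from the current end of the partition. The sketch below shows the equivalent check done by hand with the standard KafkaConsumer API; it assumes a consumer configured as in the earlier sketch and is an illustration only, not the policy-pap implementation.

    import java.util.Collections;
    import java.util.Map;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.clients.consumer.OffsetAndMetadata;
    import org.apache.kafka.common.TopicPartition;

    public class OffsetResetSketch {
        static void positionLikeTheLog(KafkaConsumer<String, String> consumer) {
            TopicPartition tp = new TopicPartition("policy-pdp-pap", 0);
            consumer.assign(Collections.singletonList(tp));
            Map<TopicPartition, OffsetAndMetadata> committed =
                    consumer.committed(Collections.singleton(tp));
            if (committed.get(tp) == null) {
                // No committed offset: start from the log end, which is what produces the
                // "Resetting offset ... to position FetchPosition{offset=1, ...}" lines above.
                consumer.seekToEnd(Collections.singletonList(tp));
            } else {
                consumer.seek(tp, committed.get(tp).offset());
            }
            System.out.println("starting position = " + consumer.position(tp));
        }
    }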
(state.change.logger) policy-db-migrator | ALTER TABLE jpapolicyaudit ADD CONSTRAINT PK_JPAPOLICYAUDIT PRIMARY KEY (ID) grafana | logger=migrator t=2024-04-24T08:58:15.065356989Z level=info msg="Executing migration" id="add rule_group_idx column to alert_rule_version" policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"a213e342-fe55-4aeb-87b1-3b23ade78ea0","timestampMs":1713949152545,"name":"apex-f41e458d-42a5-4aba-9a0c-ce74df53e6f4","pdpGroup":"defaultGroup"} kafka | [2024-04-24 08:58:52,362] INFO [LogLoader partition=__consumer_offsets-12, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-24T08:58:15.071642501Z level=info msg="Migration successfully executed" id="add rule_group_idx column to alert_rule_version" duration=6.285672ms policy-pap | [2024-04-24T08:59:12.553+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-pdp-pap] no listeners for autonomous message of type PdpStatus kafka | [2024-04-24 08:58:52,364] INFO Created log for partition __consumer_offsets-12 in /var/lib/kafka/data/__consumer_offsets-12 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) policy-db-migrator | grafana | logger=migrator t=2024-04-24T08:58:15.10036519Z level=info msg="Executing migration" id="add is_paused column to alert_rule_versions table" policy-pap | [2024-04-24T08:59:12.556+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] kafka | [2024-04-24 08:58:52,364] INFO [Partition __consumer_offsets-12 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-12 (kafka.cluster.Partition) policy-db-migrator | grafana | logger=migrator t=2024-04-24T08:58:15.109233055Z level=info msg="Migration successfully executed" id="add is_paused column to alert_rule_versions table" duration=8.869455ms policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"77293ae2-da7e-415d-9361-5e79c680736b","responseStatus":"SUCCESS","responseMessage":"Pdp update successful."},"messageName":"PDP_STATUS","requestId":"7432afad-c26c-427a-97c6-ce2c56947811","timestampMs":1713949152547,"name":"apex-f41e458d-42a5-4aba-9a0c-ce74df53e6f4","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} kafka | [2024-04-24 08:58:52,365] INFO [Partition __consumer_offsets-12 broker=1] Log loaded for partition __consumer_offsets-12 with initial high watermark 0 (kafka.cluster.Partition) policy-db-migrator | > upgrade 0170-pdpstatistics_pk.sql grafana | logger=migrator t=2024-04-24T08:58:15.113885081Z level=info msg="Executing migration" id="fix is_paused column for alert_rule_version table" policy-pap | [2024-04-24T08:59:12.557+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-f41e458d-42a5-4aba-9a0c-ce74df53e6f4 PdpUpdate stopping kafka | [2024-04-24 08:58:52,365] INFO [Broker id=1] Leader __consumer_offsets-12 with topic id Some(3d7pexomSuav55xzl5U12w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
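The policy-db-migrator fragments interleaved through this log are plain SQL upgrade scripts (0210-sequence.sql, 0220-sequence.sql, 0160-jpapolicyaudit_pk.sql, and so on) executed in order against the policy database. A minimal JDBC sketch that replays a few of the statements quoted above is shown below; the JDBC URL and credentials are placeholders, and only the SQL text itself is taken verbatim from the migrator output.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    public class DbMigratorSketch {
        public static void main(String[] args) throws Exception {
            // Placeholder connection details; the CSIT compose setup wires the real ones.
            try (Connection conn = DriverManager.getConnection(
                    "jdbc:mariadb://mariadb:3306/policyadmin", "policy_user", "policy_user");
                 Statement st = conn.createStatement()) {
                // 0210/0220-sequence.sql: create the JPA sequence table and seed SEQ_GEN
                // from the highest existing pdpstatistics id.
                st.execute("CREATE TABLE IF NOT EXISTS sequence (SEQ_NAME VARCHAR(50) NOT NULL, "
                         + "SEQ_COUNT DECIMAL(38) DEFAULT NULL, PRIMARY KEY PK_SEQUENCE (SEQ_NAME))");
                st.execute("INSERT INTO sequence(SEQ_NAME, SEQ_COUNT) "
                         + "VALUES('SEQ_GEN', (SELECT IFNULL(max(id),0) FROM pdpstatistics))");
                // 0160-jpapolicyaudit_pk.sql: rebuild the jpapolicyaudit primary key on ID.
                st.execute("ALTER TABLE jpapolicyaudit DROP PRIMARY KEY");
                st.execute("ALTER TABLE jpapolicyaudit ADD CONSTRAINT PK_JPAPOLICYAUDIT PRIMARY KEY (ID)");
            }
        }
    }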
(state.change.logger) policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-24T08:58:15.114045783Z level=info msg="Migration successfully executed" id="fix is_paused column for alert_rule_version table" duration=160.302µs policy-pap | [2024-04-24T08:59:12.557+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-f41e458d-42a5-4aba-9a0c-ce74df53e6f4 PdpUpdate stopping enqueue kafka | [2024-04-24 08:58:52,379] INFO [LogLoader partition=__consumer_offsets-21, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) policy-db-migrator | ALTER TABLE pdpstatistics DROP PRIMARY KEY grafana | logger=migrator t=2024-04-24T08:58:15.117572521Z level=info msg="Executing migration" id=create_alert_configuration_table policy-pap | [2024-04-24T08:59:12.557+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-f41e458d-42a5-4aba-9a0c-ce74df53e6f4 PdpUpdate stopping timer kafka | [2024-04-24 08:58:52,380] INFO Created log for partition __consumer_offsets-21 in /var/lib/kafka/data/__consumer_offsets-21 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) grafana | logger=migrator t=2024-04-24T08:58:15.118477375Z level=info msg="Migration successfully executed" id=create_alert_configuration_table duration=904.274µs kafka | [2024-04-24 08:58:52,380] INFO [Partition __consumer_offsets-21 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-21 (kafka.cluster.Partition) policy-db-migrator | -------------- policy-pap | [2024-04-24T08:59:12.557+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] grafana | logger=migrator t=2024-04-24T08:58:15.132666938Z level=info msg="Executing migration" id="Add column default in alert_configuration" kafka | [2024-04-24 08:58:52,381] INFO [Partition __consumer_offsets-21 broker=1] Log loaded for partition __consumer_offsets-21 with initial high watermark 0 (kafka.cluster.Partition) policy-db-migrator | policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"a213e342-fe55-4aeb-87b1-3b23ade78ea0","timestampMs":1713949152545,"name":"apex-f41e458d-42a5-4aba-9a0c-ce74df53e6f4","pdpGroup":"defaultGroup"} grafana | logger=migrator t=2024-04-24T08:58:15.142773022Z level=info msg="Migration successfully executed" id="Add column default in alert_configuration" duration=10.105674ms kafka | [2024-04-24 08:58:52,381] INFO [Broker id=1] Leader __consumer_offsets-21 with topic id Some(3d7pexomSuav55xzl5U12w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) policy-db-migrator | -------------- policy-pap | [2024-04-24T08:59:12.558+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer cancelled Timer [name=77293ae2-da7e-415d-9361-5e79c680736b, expireMs=1713949182498] grafana | logger=migrator t=2024-04-24T08:58:15.148607628Z level=info msg="Executing migration" id="alert alert_configuration alertmanager_configuration column from TEXT to MEDIUMTEXT if mysql" kafka | [2024-04-24 08:58:52,392] INFO [LogLoader partition=__consumer_offsets-36, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) policy-db-migrator | ALTER TABLE pdpstatistics ADD CONSTRAINT PK_PDPSTATISTICS PRIMARY KEY (ID) policy-pap | [2024-04-24T08:59:12.558+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-f41e458d-42a5-4aba-9a0c-ce74df53e6f4 PdpUpdate stopping listener grafana | logger=migrator t=2024-04-24T08:58:15.14869397Z level=info msg="Migration successfully executed" id="alert alert_configuration alertmanager_configuration column from TEXT to MEDIUMTEXT if mysql" duration=86.162µs kafka | [2024-04-24 08:58:52,393] INFO Created log for partition __consumer_offsets-36 in /var/lib/kafka/data/__consumer_offsets-36 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) policy-db-migrator | -------------- policy-pap | [2024-04-24T08:59:12.558+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-f41e458d-42a5-4aba-9a0c-ce74df53e6f4 PdpUpdate stopped grafana | logger=migrator t=2024-04-24T08:58:15.1524392Z level=info msg="Executing migration" id="add column org_id in alert_configuration" kafka | [2024-04-24 08:58:52,393] INFO [Partition __consumer_offsets-36 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-36 (kafka.cluster.Partition) policy-db-migrator | grafana | logger=migrator t=2024-04-24T08:58:15.157056516Z level=info msg="Migration successfully executed" id="add column org_id in alert_configuration" duration=4.616526ms policy-db-migrator | policy-pap | [2024-04-24T08:59:12.562+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] apex-f41e458d-42a5-4aba-9a0c-ce74df53e6f4 PdpUpdate successful kafka | [2024-04-24 08:58:52,394] INFO [Partition __consumer_offsets-36 broker=1] Log loaded for partition __consumer_offsets-36 with initial high watermark 0 (kafka.cluster.Partition) grafana | logger=migrator t=2024-04-24T08:58:15.160108776Z level=info msg="Executing migration" id="add index in alert_configuration table on org_id column" policy-db-migrator | > upgrade 0180-jpatoscanodetemplate_metadata.sql policy-pap | [2024-04-24T08:59:12.563+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] apex-f41e458d-42a5-4aba-9a0c-ce74df53e6f4 start publishing next request kafka | [2024-04-24 08:58:52,394] INFO [Broker id=1] Leader __consumer_offsets-36 with topic id Some(3d7pexomSuav55xzl5U12w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) grafana | logger=migrator t=2024-04-24T08:58:15.161191683Z level=info msg="Migration successfully executed" id="add index in alert_configuration table on org_id column" duration=1.082807ms policy-db-migrator | -------------- policy-pap | [2024-04-24T08:59:12.563+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-f41e458d-42a5-4aba-9a0c-ce74df53e6f4 PdpStateChange starting kafka | [2024-04-24 08:58:52,404] INFO [LogLoader partition=__consumer_offsets-6, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) grafana | logger=migrator t=2024-04-24T08:58:15.16468685Z level=info msg="Executing migration" id="add configuration_hash column to alert_configuration" policy-db-migrator | ALTER TABLE jpatoscanodetemplate_metadata MODIFY COLUMN METADATA LONGTEXT policy-pap | [2024-04-24T08:59:12.563+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-f41e458d-42a5-4aba-9a0c-ce74df53e6f4 PdpStateChange starting listener kafka | [2024-04-24 08:58:52,406] INFO Created log for partition __consumer_offsets-6 in /var/lib/kafka/data/__consumer_offsets-6 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) grafana | logger=migrator t=2024-04-24T08:58:15.17142434Z level=info msg="Migration successfully executed" id="add configuration_hash column to alert_configuration" duration=6.73687ms policy-db-migrator | -------------- policy-pap | [2024-04-24T08:59:12.563+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-f41e458d-42a5-4aba-9a0c-ce74df53e6f4 PdpStateChange starting timer kafka | [2024-04-24 08:58:52,406] INFO [Partition __consumer_offsets-6 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-6 (kafka.cluster.Partition) grafana | logger=migrator t=2024-04-24T08:58:15.175914844Z level=info msg="Executing migration" id=create_ngalert_configuration_table policy-db-migrator | policy-pap | [2024-04-24T08:59:12.563+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] state-change timer registered Timer [name=c5968f1a-b7af-452f-bf63-1bacb67aef0f, expireMs=1713949182563] kafka | [2024-04-24 08:58:52,407] INFO [Partition __consumer_offsets-6 broker=1] Log loaded for partition __consumer_offsets-6 with initial high watermark 0 (kafka.cluster.Partition) grafana | logger=migrator t=2024-04-24T08:58:15.176566784Z level=info msg="Migration successfully executed" id=create_ngalert_configuration_table duration=650.45µs policy-db-migrator | policy-pap | [2024-04-24T08:59:12.563+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-f41e458d-42a5-4aba-9a0c-ce74df53e6f4 PdpStateChange starting enqueue kafka | [2024-04-24 08:58:52,407] INFO [Broker id=1] Leader __consumer_offsets-6 with topic id Some(3d7pexomSuav55xzl5U12w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) grafana | logger=migrator t=2024-04-24T08:58:15.181633247Z level=info msg="Executing migration" id="add index in ngalert_configuration on org_id column" policy-db-migrator | > upgrade 0100-upgrade.sql policy-pap | [2024-04-24T08:59:12.563+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-f41e458d-42a5-4aba-9a0c-ce74df53e6f4 PdpStateChange started kafka | [2024-04-24 08:58:52,418] INFO [LogLoader partition=__consumer_offsets-43, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) grafana | logger=migrator t=2024-04-24T08:58:15.183302644Z level=info msg="Migration successfully executed" id="add index in ngalert_configuration on org_id column" duration=1.667487ms policy-db-migrator | -------------- policy-pap | [2024-04-24T08:59:12.563+00:00|INFO|TimerManager|Thread-10] state-change timer waiting 30000ms Timer [name=c5968f1a-b7af-452f-bf63-1bacb67aef0f, expireMs=1713949182563] kafka | [2024-04-24 08:58:52,420] INFO Created log for partition __consumer_offsets-43 in /var/lib/kafka/data/__consumer_offsets-43 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) grafana | logger=migrator t=2024-04-24T08:58:15.188325256Z level=info msg="Executing migration" id="add column send_alerts_to in ngalert_configuration" policy-db-migrator | select 'upgrade to 1100 completed' as msg policy-pap | [2024-04-24T08:59:12.563+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap] kafka | [2024-04-24 08:58:52,420] INFO [Partition __consumer_offsets-43 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-43 (kafka.cluster.Partition) grafana | logger=migrator t=2024-04-24T08:58:15.195389221Z level=info msg="Migration successfully executed" id="add column send_alerts_to in ngalert_configuration" duration=7.065035ms policy-db-migrator | -------------- policy-pap | {"source":"pap-43e719fa-ff69-4964-bc31-d2528becc332","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"c5968f1a-b7af-452f-bf63-1bacb67aef0f","timestampMs":1713949152481,"name":"apex-f41e458d-42a5-4aba-9a0c-ce74df53e6f4","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} kafka | [2024-04-24 08:58:52,425] INFO [Partition __consumer_offsets-43 broker=1] Log loaded for partition __consumer_offsets-43 with initial high watermark 0 (kafka.cluster.Partition) grafana | logger=migrator t=2024-04-24T08:58:15.199102943Z level=info msg="Executing migration" id="create provenance_type table" policy-db-migrator | policy-pap | [2024-04-24T08:59:12.605+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] grafana | logger=migrator t=2024-04-24T08:58:15.199681221Z level=info msg="Migration successfully executed" id="create provenance_type table" duration=577.888µs kafka | [2024-04-24 08:58:52,425] INFO [Broker id=1] Leader __consumer_offsets-43 with topic id Some(3d7pexomSuav55xzl5U12w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) policy-db-migrator | msg policy-pap | {"source":"pap-43e719fa-ff69-4964-bc31-d2528becc332","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"c5968f1a-b7af-452f-bf63-1bacb67aef0f","timestampMs":1713949152481,"name":"apex-f41e458d-42a5-4aba-9a0c-ce74df53e6f4","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} kafka | [2024-04-24 08:58:52,435] INFO [LogLoader partition=__consumer_offsets-13, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) grafana | logger=migrator t=2024-04-24T08:58:15.207188634Z level=info msg="Executing migration" id="add index to uniquify (record_key, record_type, org_id) columns" policy-db-migrator | upgrade to 1100 completed policy-pap | [2024-04-24T08:59:12.605+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATE_CHANGE kafka | [2024-04-24 08:58:52,436] INFO Created log for partition __consumer_offsets-13 in /var/lib/kafka/data/__consumer_offsets-13 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) grafana | logger=migrator t=2024-04-24T08:58:15.208549066Z level=info msg="Migration successfully executed" id="add index to uniquify (record_key, record_type, org_id) columns" duration=1.363712ms policy-db-migrator | policy-pap | [2024-04-24T08:59:12.611+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] kafka | [2024-04-24 08:58:52,436] INFO [Partition __consumer_offsets-13 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-13 (kafka.cluster.Partition) grafana | logger=migrator t=2024-04-24T08:58:15.21369334Z level=info msg="Executing migration" id="create alert_image table" policy-db-migrator | > upgrade 0100-jpapolicyaudit_renameuser.sql policy-pap | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpStateChange","policies":[],"response":{"responseTo":"c5968f1a-b7af-452f-bf63-1bacb67aef0f","responseStatus":"SUCCESS","responseMessage":"State changed to active. No policies found."},"messageName":"PDP_STATUS","requestId":"fe7846fa-1b6b-47d0-a2a9-3907eb9b0f7a","timestampMs":1713949152576,"name":"apex-f41e458d-42a5-4aba-9a0c-ce74df53e6f4","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} kafka | [2024-04-24 08:58:52,436] INFO [Partition __consumer_offsets-13 broker=1] Log loaded for partition __consumer_offsets-13 with initial high watermark 0 (kafka.cluster.Partition) grafana | logger=migrator t=2024-04-24T08:58:15.214355451Z level=info msg="Migration successfully executed" id="create alert_image table" duration=663.111µs policy-db-migrator | -------------- policy-pap | [2024-04-24T08:59:12.658+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-f41e458d-42a5-4aba-9a0c-ce74df53e6f4 PdpStateChange stopping kafka | [2024-04-24 08:58:52,436] INFO [Broker id=1] Leader __consumer_offsets-13 with topic id Some(3d7pexomSuav55xzl5U12w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) policy-db-migrator | ALTER TABLE jpapolicyaudit RENAME COLUMN USER TO USERNAME policy-pap | [2024-04-24T08:59:12.658+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-f41e458d-42a5-4aba-9a0c-ce74df53e6f4 PdpStateChange stopping enqueue kafka | [2024-04-24 08:58:52,443] INFO [LogLoader partition=__consumer_offsets-28, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) grafana | logger=migrator t=2024-04-24T08:58:15.218481298Z level=info msg="Executing migration" id="add unique index on token to alert_image table" policy-db-migrator | -------------- policy-pap | [2024-04-24T08:59:12.658+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-f41e458d-42a5-4aba-9a0c-ce74df53e6f4 PdpStateChange stopping timer kafka | [2024-04-24 08:58:52,444] INFO Created log for partition __consumer_offsets-28 in /var/lib/kafka/data/__consumer_offsets-28 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) grafana | logger=migrator t=2024-04-24T08:58:15.21916781Z level=info msg="Migration successfully executed" id="add unique index on token to alert_image table" duration=686.482µs policy-db-migrator | policy-pap | [2024-04-24T08:59:12.658+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] state-change timer cancelled Timer [name=c5968f1a-b7af-452f-bf63-1bacb67aef0f, expireMs=1713949182563] kafka | [2024-04-24 08:58:52,444] INFO [Partition __consumer_offsets-28 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-28 (kafka.cluster.Partition) grafana | logger=migrator t=2024-04-24T08:58:15.222263251Z level=info msg="Executing migration" id="support longer URLs in alert_image table" policy-db-migrator | policy-pap | [2024-04-24T08:59:12.658+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-f41e458d-42a5-4aba-9a0c-ce74df53e6f4 PdpStateChange stopping listener kafka | [2024-04-24 08:58:52,444] INFO [Partition __consumer_offsets-28 broker=1] Log loaded for partition __consumer_offsets-28 with initial high watermark 0 (kafka.cluster.Partition) grafana | logger=migrator t=2024-04-24T08:58:15.222313112Z level=info msg="Migration successfully executed" id="support longer URLs in alert_image table" duration=47.761µs policy-db-migrator | > upgrade 0110-idx_tsidx1.sql policy-pap | [2024-04-24T08:59:12.658+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-f41e458d-42a5-4aba-9a0c-ce74df53e6f4 PdpStateChange stopped kafka | [2024-04-24 08:58:52,444] INFO [Broker id=1] Leader __consumer_offsets-28 with topic id Some(3d7pexomSuav55xzl5U12w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) grafana | logger=migrator t=2024-04-24T08:58:15.22653042Z level=info msg="Executing migration" id=create_alert_configuration_history_table policy-db-migrator | -------------- policy-pap | [2024-04-24T08:59:12.658+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] apex-f41e458d-42a5-4aba-9a0c-ce74df53e6f4 PdpStateChange successful kafka | [2024-04-24 08:58:52,450] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-3 (state.change.logger) grafana | logger=migrator t=2024-04-24T08:58:15.227196621Z level=info msg="Migration successfully executed" id=create_alert_configuration_history_table duration=666.101µs policy-db-migrator | DROP INDEX IDX_TSIDX1 ON pdpstatistics policy-pap | [2024-04-24T08:59:12.658+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] apex-f41e458d-42a5-4aba-9a0c-ce74df53e6f4 start publishing next request kafka | [2024-04-24 08:58:52,450] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-18 (state.change.logger) grafana | logger=migrator t=2024-04-24T08:58:15.233612796Z level=info msg="Executing migration" id="drop non-unique orgID index on alert_configuration" policy-db-migrator | -------------- policy-pap | [2024-04-24T08:59:12.658+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-f41e458d-42a5-4aba-9a0c-ce74df53e6f4 PdpUpdate starting kafka | [2024-04-24 08:58:52,450] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-41 (state.change.logger) grafana | logger=migrator t=2024-04-24T08:58:15.235335764Z level=info msg="Migration successfully executed" id="drop non-unique orgID index on alert_configuration" duration=1.726878ms policy-db-migrator | policy-pap | [2024-04-24T08:59:12.658+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-f41e458d-42a5-4aba-9a0c-ce74df53e6f4 PdpUpdate starting listener kafka | [2024-04-24 08:58:52,450] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-10 (state.change.logger) grafana | logger=migrator t=2024-04-24T08:58:15.240228344Z level=info msg="Executing migration" id="drop unique orgID index on alert_configuration if exists" policy-db-migrator | -------------- policy-pap | [2024-04-24T08:59:12.659+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-f41e458d-42a5-4aba-9a0c-ce74df53e6f4 PdpUpdate starting timer kafka | [2024-04-24 08:58:52,450] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-33 (state.change.logger) grafana | logger=migrator t=2024-04-24T08:58:15.2406198Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop unique orgID index on alert_configuration if exists" policy-db-migrator | CREATE INDEX IDXTSIDX1 ON pdpstatistics(timeStamp, name, version) policy-pap | [2024-04-24T08:59:12.659+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer registered Timer [name=e1bfc2a1-b68d-4b0d-960e-f7897689b4f6, expireMs=1713949182659] kafka | [2024-04-24 08:58:52,450] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for 
partition __consumer_offsets-48 (state.change.logger) grafana | logger=migrator t=2024-04-24T08:58:15.244497973Z level=info msg="Executing migration" id="extract alertmanager configuration history to separate table" policy-db-migrator | -------------- policy-pap | [2024-04-24T08:59:12.659+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-f41e458d-42a5-4aba-9a0c-ce74df53e6f4 PdpUpdate starting enqueue kafka | [2024-04-24 08:58:52,450] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-19 (state.change.logger) grafana | logger=migrator t=2024-04-24T08:58:15.244931361Z level=info msg="Migration successfully executed" id="extract alertmanager configuration history to separate table" duration=433.448µs policy-db-migrator | policy-pap | [2024-04-24T08:59:12.659+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-f41e458d-42a5-4aba-9a0c-ce74df53e6f4 PdpUpdate started kafka | [2024-04-24 08:58:52,450] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-34 (state.change.logger) grafana | logger=migrator t=2024-04-24T08:58:15.248101403Z level=info msg="Executing migration" id="add unique index on orgID to alert_configuration" policy-db-migrator | kafka | [2024-04-24 08:58:52,450] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-4 (state.change.logger) policy-pap | [2024-04-24T08:59:12.659+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap] grafana | logger=migrator t=2024-04-24T08:58:15.249142449Z level=info msg="Migration successfully executed" id="add unique index on orgID to alert_configuration" duration=1.040376ms kafka | [2024-04-24 08:58:52,450] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-11 (state.change.logger) policy-pap | {"source":"pap-43e719fa-ff69-4964-bc31-d2528becc332","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"e1bfc2a1-b68d-4b0d-960e-f7897689b4f6","timestampMs":1713949152597,"name":"apex-f41e458d-42a5-4aba-9a0c-ce74df53e6f4","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} grafana | logger=migrator t=2024-04-24T08:58:15.252201529Z level=info msg="Executing migration" id="add last_applied column to alert_configuration_history" policy-db-migrator | > upgrade 0120-audit_sequence.sql kafka | [2024-04-24 08:58:52,450] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-26 (state.change.logger) policy-pap | [2024-04-24T08:59:12.663+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] grafana | logger=migrator t=2024-04-24T08:58:15.259356177Z level=info msg="Migration successfully executed" id="add last_applied column to alert_configuration_history" duration=7.154168ms policy-db-migrator | -------------- kafka | [2024-04-24 08:58:52,450] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-49 (state.change.logger) policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp status response message 
for PdpUpdate","policies":[],"response":{"responseTo":"77293ae2-da7e-415d-9361-5e79c680736b","responseStatus":"SUCCESS","responseMessage":"Pdp update successful."},"messageName":"PDP_STATUS","requestId":"7432afad-c26c-427a-97c6-ce2c56947811","timestampMs":1713949152547,"name":"apex-f41e458d-42a5-4aba-9a0c-ce74df53e6f4","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-db-migrator | CREATE TABLE IF NOT EXISTS audit_sequence (SEQ_NAME VARCHAR(50) NOT NULL, SEQ_COUNT DECIMAL(38) DEFAULT NULL, PRIMARY KEY PK_SEQUENCE (SEQ_NAME)) kafka | [2024-04-24 08:58:52,450] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-39 (state.change.logger) grafana | logger=migrator t=2024-04-24T08:58:15.263964381Z level=info msg="Executing migration" id="create library_element table v1" policy-pap | [2024-04-24T08:59:12.663+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id 77293ae2-da7e-415d-9361-5e79c680736b policy-db-migrator | -------------- kafka | [2024-04-24 08:58:52,450] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-9 (state.change.logger) grafana | logger=migrator t=2024-04-24T08:58:15.26570803Z level=info msg="Migration successfully executed" id="create library_element table v1" duration=1.743829ms policy-pap | [2024-04-24T08:59:12.668+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] policy-db-migrator | kafka | [2024-04-24 08:58:52,450] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-24 (state.change.logger) grafana | logger=migrator t=2024-04-24T08:58:15.2730945Z level=info msg="Executing migration" id="add index library_element org_id-folder_id-name-kind" policy-pap | {"source":"pap-43e719fa-ff69-4964-bc31-d2528becc332","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"c5968f1a-b7af-452f-bf63-1bacb67aef0f","timestampMs":1713949152481,"name":"apex-f41e458d-42a5-4aba-9a0c-ce74df53e6f4","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-db-migrator | -------------- kafka | [2024-04-24 08:58:52,450] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-31 (state.change.logger) grafana | logger=migrator t=2024-04-24T08:58:15.274147768Z level=info msg="Migration successfully executed" id="add index library_element org_id-folder_id-name-kind" duration=1.062778ms policy-pap | [2024-04-24T08:59:12.668+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_STATE_CHANGE policy-db-migrator | INSERT INTO audit_sequence(SEQ_NAME, SEQ_COUNT) VALUES('SEQ_GEN', (SELECT IFNULL(max(id),0) FROM jpapolicyaudit)) kafka | [2024-04-24 08:58:52,450] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-46 (state.change.logger) grafana | logger=migrator t=2024-04-24T08:58:15.27857237Z level=info msg="Executing migration" id="create library_element_connection table v1" policy-pap | [2024-04-24T08:59:12.668+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] policy-db-migrator | -------------- kafka | [2024-04-24 08:58:52,452] TRACE [Broker id=1] 
Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-1 (state.change.logger) grafana | logger=migrator t=2024-04-24T08:58:15.279283691Z level=info msg="Migration successfully executed" id="create library_element_connection table v1" duration=710.751µs policy-pap | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpStateChange","policies":[],"response":{"responseTo":"c5968f1a-b7af-452f-bf63-1bacb67aef0f","responseStatus":"SUCCESS","responseMessage":"State changed to active. No policies found."},"messageName":"PDP_STATUS","requestId":"fe7846fa-1b6b-47d0-a2a9-3907eb9b0f7a","timestampMs":1713949152576,"name":"apex-f41e458d-42a5-4aba-9a0c-ce74df53e6f4","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} kafka | [2024-04-24 08:58:52,452] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-16 (state.change.logger) grafana | logger=migrator t=2024-04-24T08:58:15.28589238Z level=info msg="Executing migration" id="add index library_element_connection element_id-kind-connection_id" policy-db-migrator | policy-pap | [2024-04-24T08:59:12.669+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id c5968f1a-b7af-452f-bf63-1bacb67aef0f kafka | [2024-04-24 08:58:52,452] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-2 (state.change.logger) grafana | logger=migrator t=2024-04-24T08:58:15.28717277Z level=info msg="Migration successfully executed" id="add index library_element_connection element_id-kind-connection_id" duration=1.28084ms policy-db-migrator | policy-pap | [2024-04-24T08:59:12.671+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] kafka | [2024-04-24 08:58:52,452] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-25 (state.change.logger) grafana | logger=migrator t=2024-04-24T08:58:15.290719589Z level=info msg="Executing migration" id="add unique index library_element org_id_uid" policy-db-migrator | > upgrade 0130-statistics_sequence.sql policy-pap | {"source":"pap-43e719fa-ff69-4964-bc31-d2528becc332","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"e1bfc2a1-b68d-4b0d-960e-f7897689b4f6","timestampMs":1713949152597,"name":"apex-f41e458d-42a5-4aba-9a0c-ce74df53e6f4","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} kafka | [2024-04-24 08:58:52,452] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-40 (state.change.logger) grafana | logger=migrator t=2024-04-24T08:58:15.291850407Z level=info msg="Migration successfully executed" id="add unique index library_element org_id_uid" duration=1.131838ms policy-db-migrator | -------------- policy-pap | [2024-04-24T08:59:12.671+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_UPDATE kafka | [2024-04-24 08:58:52,452] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-47 (state.change.logger) grafana | 
logger=migrator t=2024-04-24T08:58:15.329264778Z level=info msg="Executing migration" id="increase max description length to 2048" policy-db-migrator | CREATE TABLE IF NOT EXISTS statistics_sequence (SEQ_NAME VARCHAR(50) NOT NULL, SEQ_COUNT DECIMAL(38) DEFAULT NULL, PRIMARY KEY PK_SEQUENCE (SEQ_NAME)) policy-pap | [2024-04-24T08:59:12.678+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] kafka | [2024-04-24 08:58:52,452] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-17 (state.change.logger) grafana | logger=migrator t=2024-04-24T08:58:15.329339549Z level=info msg="Migration successfully executed" id="increase max description length to 2048" duration=81.221µs policy-db-migrator | -------------- policy-pap | {"source":"pap-43e719fa-ff69-4964-bc31-d2528becc332","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"e1bfc2a1-b68d-4b0d-960e-f7897689b4f6","timestampMs":1713949152597,"name":"apex-f41e458d-42a5-4aba-9a0c-ce74df53e6f4","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} kafka | [2024-04-24 08:58:52,452] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-32 (state.change.logger) grafana | logger=migrator t=2024-04-24T08:58:15.334557364Z level=info msg="Executing migration" id="alter library_element model to mediumtext" policy-db-migrator | policy-pap | [2024-04-24T08:59:12.678+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_UPDATE grafana | logger=migrator t=2024-04-24T08:58:15.334666146Z level=info msg="Migration successfully executed" id="alter library_element model to mediumtext" duration=98.352µs kafka | [2024-04-24 08:58:52,452] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-37 (state.change.logger) policy-db-migrator | -------------- policy-pap | [2024-04-24T08:59:12.683+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] kafka | [2024-04-24 08:58:52,452] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-7 (state.change.logger) policy-pap | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"e1bfc2a1-b68d-4b0d-960e-f7897689b4f6","responseStatus":"SUCCESS","responseMessage":"Pdp already updated"},"messageName":"PDP_STATUS","requestId":"cd1194fc-c9f5-401f-9ec0-7e330c6971e2","timestampMs":1713949152670,"name":"apex-f41e458d-42a5-4aba-9a0c-ce74df53e6f4","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} grafana | logger=migrator t=2024-04-24T08:58:15.338054331Z level=info msg="Executing migration" id="clone move dashboard alerts to unified alerting" grafana | logger=migrator t=2024-04-24T08:58:15.338373207Z level=info msg="Migration successfully executed" id="clone move dashboard alerts to unified alerting" duration=319.916µs policy-db-migrator | INSERT INTO statistics_sequence(SEQ_NAME, SEQ_COUNT) VALUES('SEQ_GEN', (SELECT IFNULL(max(id),0) FROM pdpstatistics)) kafka | [2024-04-24 08:58:52,452] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the 
become-leader transition for partition __consumer_offsets-22 (state.change.logger) policy-pap | [2024-04-24T08:59:12.684+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id e1bfc2a1-b68d-4b0d-960e-f7897689b4f6 grafana | logger=migrator t=2024-04-24T08:58:15.341943465Z level=info msg="Executing migration" id="create data_keys table" kafka | [2024-04-24 08:58:52,452] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-29 (state.change.logger) policy-pap | [2024-04-24T08:59:12.684+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-db-migrator | -------------- kafka | [2024-04-24 08:58:52,452] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-44 (state.change.logger) policy-pap | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"e1bfc2a1-b68d-4b0d-960e-f7897689b4f6","responseStatus":"SUCCESS","responseMessage":"Pdp already updated"},"messageName":"PDP_STATUS","requestId":"cd1194fc-c9f5-401f-9ec0-7e330c6971e2","timestampMs":1713949152670,"name":"apex-f41e458d-42a5-4aba-9a0c-ce74df53e6f4","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-db-migrator | grafana | logger=migrator t=2024-04-24T08:58:15.34290186Z level=info msg="Migration successfully executed" id="create data_keys table" duration=958.665µs kafka | [2024-04-24 08:58:52,452] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-14 (state.change.logger) policy-pap | [2024-04-24T08:59:12.685+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-f41e458d-42a5-4aba-9a0c-ce74df53e6f4 PdpUpdate stopping policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-24T08:58:15.346967297Z level=info msg="Executing migration" id="create secrets table" kafka | [2024-04-24 08:58:52,452] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-23 (state.change.logger) policy-pap | [2024-04-24T08:59:12.685+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-f41e458d-42a5-4aba-9a0c-ce74df53e6f4 PdpUpdate stopping enqueue policy-db-migrator | TRUNCATE TABLE sequence grafana | logger=migrator t=2024-04-24T08:58:15.348084445Z level=info msg="Migration successfully executed" id="create secrets table" duration=1.123618ms kafka | [2024-04-24 08:58:52,452] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-38 (state.change.logger) policy-pap | [2024-04-24T08:59:12.685+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-f41e458d-42a5-4aba-9a0c-ce74df53e6f4 PdpUpdate stopping timer policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-24T08:58:15.353374341Z level=info msg="Executing migration" id="rename data_keys name column to id" kafka | [2024-04-24 08:58:52,452] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-8 (state.change.logger) policy-pap | 
[2024-04-24T08:59:12.685+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer cancelled Timer [name=e1bfc2a1-b68d-4b0d-960e-f7897689b4f6, expireMs=1713949182659] policy-db-migrator | grafana | logger=migrator t=2024-04-24T08:58:15.388039128Z level=info msg="Migration successfully executed" id="rename data_keys name column to id" duration=34.658417ms kafka | [2024-04-24 08:58:52,452] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition policy-pdp-pap-0 (state.change.logger) policy-pap | [2024-04-24T08:59:12.685+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-f41e458d-42a5-4aba-9a0c-ce74df53e6f4 PdpUpdate stopping listener policy-db-migrator | grafana | logger=migrator t=2024-04-24T08:58:15.391100838Z level=info msg="Executing migration" id="add name column into data_keys" kafka | [2024-04-24 08:58:52,453] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-45 (state.change.logger) policy-pap | [2024-04-24T08:59:12.685+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-f41e458d-42a5-4aba-9a0c-ce74df53e6f4 PdpUpdate stopped policy-db-migrator | > upgrade 0100-pdpstatistics.sql grafana | logger=migrator t=2024-04-24T08:58:15.39922428Z level=info msg="Migration successfully executed" id="add name column into data_keys" duration=8.122992ms kafka | [2024-04-24 08:58:52,453] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-15 (state.change.logger) policy-pap | [2024-04-24T08:59:12.689+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] apex-f41e458d-42a5-4aba-9a0c-ce74df53e6f4 PdpUpdate successful policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-24T08:58:15.406286726Z level=info msg="Executing migration" id="copy data_keys id column values into name" kafka | [2024-04-24 08:58:52,453] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-30 (state.change.logger) policy-pap | [2024-04-24T08:59:12.689+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] apex-f41e458d-42a5-4aba-9a0c-ce74df53e6f4 has no more requests policy-db-migrator | DROP INDEX IDXTSIDX1 ON pdpstatistics grafana | logger=migrator t=2024-04-24T08:58:15.406394528Z level=info msg="Migration successfully executed" id="copy data_keys id column values into name" duration=108.442µs kafka | [2024-04-24 08:58:52,453] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-0 (state.change.logger) policy-pap | [2024-04-24T08:59:20.226+00:00|WARN|NonInjectionManager|pool-2-thread-1] Falling back to injection-less client. 
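The PDP_UPDATE / PDP_STATUS exchange logged above is easier to follow if you keep the correlation rule in mind: PAP registers a 30-second timer when it publishes a request (the update timer above is registered with expireMs=1713949182659) and it matches the PDP's reply to the outstanding request through response.responseTo, not through the reply's own top-level requestId. Below is a minimal sketch, stdlib Python only, that re-checks that correlation using the two JSON payloads copied verbatim from the log entries above; the script is illustrative and is not part of the CSIT job or the policy codebase.

import json

# Request published by PAP on policy-pdp-pap (payload copied from the log above).
pdp_update = json.loads("""
{"source":"pap-43e719fa-ff69-4964-bc31-d2528becc332","pdpHeartbeatIntervalMs":120000,
 "policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE",
 "requestId":"e1bfc2a1-b68d-4b0d-960e-f7897689b4f6","timestampMs":1713949152597,
 "name":"apex-f41e458d-42a5-4aba-9a0c-ce74df53e6f4","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
""")

# Reply received from the PDP on the same topic (also copied from the log above).
pdp_status = json.loads("""
{"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY",
 "description":"Pdp status response message for PdpUpdate","policies":[],
 "response":{"responseTo":"e1bfc2a1-b68d-4b0d-960e-f7897689b4f6",
             "responseStatus":"SUCCESS","responseMessage":"Pdp already updated"},
 "messageName":"PDP_STATUS","requestId":"cd1194fc-c9f5-401f-9ec0-7e330c6971e2",
 "timestampMs":1713949152670,"name":"apex-f41e458d-42a5-4aba-9a0c-ce74df53e6f4",
 "pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
""")

UPDATE_TIMER_EXPIRE_MS = 1713949182659  # expireMs logged when the update timer was registered
TIMER_WINDOW_MS = 30000                 # "waiting 30000ms", as logged for the state-change timer

# The reply is matched to the outstanding request via response.responseTo,
# not via the reply's own requestId.
assert pdp_status["response"]["responseTo"] == pdp_update["requestId"]
assert pdp_status["response"]["responseStatus"] == "SUCCESS"

# The reply's timestamp sits well inside the timer window, which is why the log shows
# "update timer cancelled" here rather than "update timer discarded (expired)".
remaining_ms = UPDATE_TIMER_EXPIRE_MS - pdp_status["timestampMs"]
print(f"correlated; {remaining_ms} ms to spare before the update timer would have fired")
assert 0 < remaining_ms <= TIMER_WINDOW_MS

The "update timer discarded (expired)" and "state-change timer discarded (expired)" entries at 08:59:42 further down are the other branch of the same mechanism: timers that reached their expireMs without being cancelled.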
policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-24T08:58:15.411245337Z level=info msg="Executing migration" id="rename data_keys name column to label" kafka | [2024-04-24 08:58:52,453] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-35 (state.change.logger) policy-pap | [2024-04-24T08:59:20.300+00:00|INFO|GsonMessageBodyHandler|pool-2-thread-1] Using GSON for REST calls policy-db-migrator | grafana | logger=migrator t=2024-04-24T08:58:15.443198398Z level=info msg="Migration successfully executed" id="rename data_keys name column to label" duration=31.948061ms kafka | [2024-04-24 08:58:52,453] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-5 (state.change.logger) policy-pap | [2024-04-24T08:59:20.308+00:00|INFO|GsonMessageBodyHandler|pool-2-thread-1] Using GSON for REST calls policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-24T08:58:15.467943593Z level=info msg="Executing migration" id="rename data_keys id column back to name" kafka | [2024-04-24 08:58:52,453] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-20 (state.change.logger) policy-pap | [2024-04-24T08:59:20.313+00:00|INFO|GsonMessageBodyHandler|pool-2-thread-1] Using GSON for REST calls policy-db-migrator | DROP TABLE pdpstatistics grafana | logger=migrator t=2024-04-24T08:58:15.497063398Z level=info msg="Migration successfully executed" id="rename data_keys id column back to name" duration=29.123095ms kafka | [2024-04-24 08:58:52,453] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-27 (state.change.logger) policy-pap | [2024-04-24T08:59:20.741+00:00|INFO|SessionData|http-nio-6969-exec-7] unknown group testGroup policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-24T08:58:15.501925767Z level=info msg="Executing migration" id="create kv_store table v1" kafka | [2024-04-24 08:58:52,453] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-42 (state.change.logger) policy-pap | [2024-04-24T08:59:21.283+00:00|INFO|SessionData|http-nio-6969-exec-7] create cached group testGroup policy-db-migrator | grafana | logger=migrator t=2024-04-24T08:58:15.502806891Z level=info msg="Migration successfully executed" id="create kv_store table v1" duration=881.084µs kafka | [2024-04-24 08:58:52,453] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-12 (state.change.logger) policy-pap | [2024-04-24T08:59:21.284+00:00|INFO|SessionData|http-nio-6969-exec-7] creating DB group testGroup policy-db-migrator | grafana | logger=migrator t=2024-04-24T08:58:15.50577306Z level=info msg="Executing migration" id="add index kv_store.org_id-namespace-key" kafka | [2024-04-24 08:58:52,453] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-21 (state.change.logger) policy-pap | [2024-04-24T08:59:21.892+00:00|INFO|SessionData|http-nio-6969-exec-9] 
cache group testGroup policy-db-migrator | > upgrade 0110-jpapdpstatistics_enginestats.sql grafana | logger=migrator t=2024-04-24T08:58:15.506849877Z level=info msg="Migration successfully executed" id="add index kv_store.org_id-namespace-key" duration=1.076517ms kafka | [2024-04-24 08:58:52,453] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-36 (state.change.logger) policy-pap | [2024-04-24T08:59:22.117+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-9] Registering a deploy for policy onap.restart.tca 1.0.0 policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-24T08:58:15.514760007Z level=info msg="Executing migration" id="update dashboard_uid and panel_id from existing annotations" kafka | [2024-04-24 08:58:52,453] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-6 (state.change.logger) policy-pap | [2024-04-24T08:59:22.207+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-9] Registering a deploy for policy operational.apex.decisionMaker 1.0.0 policy-db-migrator | DROP TABLE jpapdpstatistics_enginestats grafana | logger=migrator t=2024-04-24T08:58:15.515008241Z level=info msg="Migration successfully executed" id="update dashboard_uid and panel_id from existing annotations" duration=248.634µs kafka | [2024-04-24 08:58:52,453] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-43 (state.change.logger) policy-pap | [2024-04-24T08:59:22.207+00:00|INFO|SessionData|http-nio-6969-exec-9] update cached group testGroup policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-24T08:58:15.519192799Z level=info msg="Executing migration" id="create permission table" kafka | [2024-04-24 08:58:52,453] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-13 (state.change.logger) policy-pap | [2024-04-24T08:59:22.208+00:00|INFO|SessionData|http-nio-6969-exec-9] updating DB group testGroup policy-db-migrator | grafana | logger=migrator t=2024-04-24T08:58:15.520052063Z level=info msg="Migration successfully executed" id="create permission table" duration=860.474µs kafka | [2024-04-24 08:58:52,453] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-28 (state.change.logger) policy-pap | [2024-04-24T08:59:22.223+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-9] sending audit records to database: [PolicyAudit(auditId=null, pdpGroup=testGroup, pdpType=pdpTypeA, policy=onap.restart.tca 1.0.0, action=DEPLOYMENT, timestamp=2024-04-24T08:59:22Z, user=policyadmin), PolicyAudit(auditId=null, pdpGroup=testGroup, pdpType=pdpTypeC, policy=operational.apex.decisionMaker 1.0.0, action=DEPLOYMENT, timestamp=2024-04-24T08:59:22Z, user=policyadmin)] policy-db-migrator | grafana | logger=migrator t=2024-04-24T08:58:15.533434152Z level=info msg="Executing migration" id="add unique index permission.role_id" kafka | [2024-04-24 08:58:52,462] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 3 in epoch 0 (kafka.coordinator.group.GroupCoordinator) policy-pap | [2024-04-24T08:59:22.972+00:00|INFO|SessionData|http-nio-6969-exec-4] cache 
group testGroup policy-db-migrator | > upgrade 0120-statistics_sequence.sql grafana | logger=migrator t=2024-04-24T08:58:15.534733063Z level=info msg="Migration successfully executed" id="add unique index permission.role_id" duration=1.303702ms kafka | [2024-04-24 08:58:52,464] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-3 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) policy-pap | [2024-04-24T08:59:22.974+00:00|INFO|PdpGroupDeleteProvider|http-nio-6969-exec-4] remove policy onap.restart.tca 1.0.0 from subgroup testGroup pdpTypeA count=0 policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-24T08:58:15.537868115Z level=info msg="Executing migration" id="add unique index role_id_action_scope" kafka | [2024-04-24 08:58:52,465] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 18 in epoch 0 (kafka.coordinator.group.GroupCoordinator) policy-pap | [2024-04-24T08:59:22.974+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-4] Registering an undeploy for policy onap.restart.tca 1.0.0 policy-db-migrator | DROP TABLE statistics_sequence grafana | logger=migrator t=2024-04-24T08:58:15.539081404Z level=info msg="Migration successfully executed" id="add unique index role_id_action_scope" duration=1.213059ms kafka | [2024-04-24 08:58:52,465] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-18 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) policy-pap | [2024-04-24T08:59:22.974+00:00|INFO|SessionData|http-nio-6969-exec-4] update cached group testGroup policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-24T08:58:15.544493612Z level=info msg="Executing migration" id="create role table" kafka | [2024-04-24 08:58:52,465] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 41 in epoch 0 (kafka.coordinator.group.GroupCoordinator) policy-pap | [2024-04-24T08:59:22.975+00:00|INFO|SessionData|http-nio-6969-exec-4] updating DB group testGroup policy-db-migrator | grafana | logger=migrator t=2024-04-24T08:58:15.545461458Z level=info msg="Migration successfully executed" id="create role table" duration=968.416µs kafka | [2024-04-24 08:58:52,465] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-41 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) policy-pap | [2024-04-24T08:59:22.988+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-4] sending audit records to database: [PolicyAudit(auditId=null, pdpGroup=testGroup, pdpType=pdpTypeA, policy=onap.restart.tca 1.0.0, action=UNDEPLOYMENT, timestamp=2024-04-24T08:59:22Z, user=policyadmin)] policy-db-migrator | policyadmin: OK: upgrade (1300) grafana | logger=migrator t=2024-04-24T08:58:15.55110483Z level=info msg="Executing migration" id="add column display_name" kafka | [2024-04-24 08:58:52,465] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 10 in epoch 0 (kafka.coordinator.group.GroupCoordinator) policy-pap | [2024-04-24T08:59:23.341+00:00|INFO|SessionData|http-nio-6969-exec-5] cache group defaultGroup policy-db-migrator | name version grafana | logger=migrator t=2024-04-24T08:58:15.558509281Z level=info msg="Migration successfully executed" id="add column display_name" duration=7.404201ms kafka | [2024-04-24 08:58:52,465] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-10 for 
epoch 0 (kafka.coordinator.group.GroupMetadataManager) policy-pap | [2024-04-24T08:59:23.341+00:00|INFO|SessionData|http-nio-6969-exec-5] cache group testGroup policy-db-migrator | policyadmin 1300 grafana | logger=migrator t=2024-04-24T08:58:15.561467399Z level=info msg="Executing migration" id="add column group_name" kafka | [2024-04-24 08:58:52,465] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 33 in epoch 0 (kafka.coordinator.group.GroupCoordinator) policy-pap | [2024-04-24T08:59:23.341+00:00|INFO|PdpGroupDeleteProvider|http-nio-6969-exec-5] remove policy operational.apex.decisionMaker 1.0.0 from subgroup testGroup pdpTypeC count=0 policy-db-migrator | ID script operation from_version to_version tag success atTime grafana | logger=migrator t=2024-04-24T08:58:15.568897961Z level=info msg="Migration successfully executed" id="add column group_name" duration=7.429902ms kafka | [2024-04-24 08:58:52,465] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-33 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) policy-pap | [2024-04-24T08:59:23.341+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-5] Registering an undeploy for policy operational.apex.decisionMaker 1.0.0 policy-db-migrator | 1 0100-jpapdpgroup_properties.sql upgrade 0 0800 2404240858200800u 1 2024-04-24 08:58:20 grafana | logger=migrator t=2024-04-24T08:58:15.572337087Z level=info msg="Executing migration" id="add index role.org_id" kafka | [2024-04-24 08:58:52,465] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 48 in epoch 0 (kafka.coordinator.group.GroupCoordinator) policy-pap | [2024-04-24T08:59:23.341+00:00|INFO|SessionData|http-nio-6969-exec-5] update cached group testGroup policy-db-migrator | 2 0110-jpapdpstatistics_enginestats.sql upgrade 0 0800 2404240858200800u 1 2024-04-24 08:58:20 grafana | logger=migrator t=2024-04-24T08:58:15.573402094Z level=info msg="Migration successfully executed" id="add index role.org_id" duration=1.064097ms kafka | [2024-04-24 08:58:52,465] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-48 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) policy-pap | [2024-04-24T08:59:23.341+00:00|INFO|SessionData|http-nio-6969-exec-5] updating DB group testGroup policy-db-migrator | 3 0120-jpapdpsubgroup_policies.sql upgrade 0 0800 2404240858200800u 1 2024-04-24 08:58:20 grafana | logger=migrator t=2024-04-24T08:58:15.578396336Z level=info msg="Executing migration" id="add unique index role_org_id_name" kafka | [2024-04-24 08:58:52,465] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 19 in epoch 0 (kafka.coordinator.group.GroupCoordinator) policy-pap | [2024-04-24T08:59:23.351+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-5] sending audit records to database: [PolicyAudit(auditId=null, pdpGroup=testGroup, pdpType=pdpTypeC, policy=operational.apex.decisionMaker 1.0.0, action=UNDEPLOYMENT, timestamp=2024-04-24T08:59:23Z, user=policyadmin)] policy-db-migrator | 4 0130-jpapdpsubgroup_properties.sql upgrade 0 0800 2404240858200800u 1 2024-04-24 08:58:20 grafana | logger=migrator t=2024-04-24T08:58:15.579464233Z level=info msg="Migration successfully executed" id="add unique index role_org_id_name" duration=1.066757ms policy-pap | [2024-04-24T08:59:42.499+00:00|INFO|TimerManager|Thread-9] update timer discarded (expired) Timer [name=77293ae2-da7e-415d-9361-5e79c680736b, 
expireMs=1713949182498] kafka | [2024-04-24 08:58:52,465] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-19 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) policy-db-migrator | 5 0140-jpapdpsubgroup_supportedpolicytypes.sql upgrade 0 0800 2404240858200800u 1 2024-04-24 08:58:20 grafana | logger=migrator t=2024-04-24T08:58:15.583659472Z level=info msg="Executing migration" id="add index role_org_id_uid" policy-pap | [2024-04-24T08:59:42.563+00:00|INFO|TimerManager|Thread-10] state-change timer discarded (expired) Timer [name=c5968f1a-b7af-452f-bf63-1bacb67aef0f, expireMs=1713949182563] kafka | [2024-04-24 08:58:52,466] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 34 in epoch 0 (kafka.coordinator.group.GroupCoordinator) policy-db-migrator | 6 0150-jpatoscacapabilityassignment_attributes.sql upgrade 0 0800 2404240858200800u 1 2024-04-24 08:58:20 grafana | logger=migrator t=2024-04-24T08:58:15.585299268Z level=info msg="Migration successfully executed" id="add index role_org_id_uid" duration=1.640376ms policy-pap | [2024-04-24T08:59:43.926+00:00|INFO|SessionData|http-nio-6969-exec-1] cache group testGroup kafka | [2024-04-24 08:58:52,466] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-34 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) policy-db-migrator | 7 0160-jpatoscacapabilityassignment_metadata.sql upgrade 0 0800 2404240858200800u 1 2024-04-24 08:58:20 grafana | logger=migrator t=2024-04-24T08:58:15.590277Z level=info msg="Executing migration" id="create team role table" policy-pap | [2024-04-24T08:59:43.928+00:00|INFO|SessionData|http-nio-6969-exec-1] deleting DB group testGroup kafka | [2024-04-24 08:58:52,466] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 4 in epoch 0 (kafka.coordinator.group.GroupCoordinator) policy-db-migrator | 8 0170-jpatoscacapabilityassignment_occurrences.sql upgrade 0 0800 2404240858200800u 1 2024-04-24 08:58:20 grafana | logger=migrator t=2024-04-24T08:58:15.591113414Z level=info msg="Migration successfully executed" id="create team role table" duration=841.345µs kafka | [2024-04-24 08:58:52,466] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-4 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) policy-db-migrator | 9 0180-jpatoscacapabilityassignment_properties.sql upgrade 0 0800 2404240858200800u 1 2024-04-24 08:58:20 grafana | logger=migrator t=2024-04-24T08:58:15.594603931Z level=info msg="Executing migration" id="add index team_role.org_id" kafka | [2024-04-24 08:58:52,466] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 11 in epoch 0 (kafka.coordinator.group.GroupCoordinator) policy-db-migrator | 10 0190-jpatoscacapabilitytype_metadata.sql upgrade 0 0800 2404240858200800u 1 2024-04-24 08:58:20 grafana | logger=migrator t=2024-04-24T08:58:15.595729229Z level=info msg="Migration successfully executed" id="add index team_role.org_id" duration=1.124988ms kafka | [2024-04-24 08:58:52,466] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-11 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) policy-db-migrator | 11 0200-jpatoscacapabilitytype_properties.sql upgrade 0 0800 2404240858200800u 1 2024-04-24 08:58:20 grafana | logger=migrator t=2024-04-24T08:58:15.599049323Z level=info 
msg="Executing migration" id="add unique index team_role_org_id_team_id_role_id" kafka | [2024-04-24 08:58:52,466] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 26 in epoch 0 (kafka.coordinator.group.GroupCoordinator) policy-db-migrator | 12 0210-jpatoscadatatype_constraints.sql upgrade 0 0800 2404240858200800u 1 2024-04-24 08:58:20 grafana | logger=migrator t=2024-04-24T08:58:15.600172552Z level=info msg="Migration successfully executed" id="add unique index team_role_org_id_team_id_role_id" duration=1.122799ms kafka | [2024-04-24 08:58:52,466] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-26 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) policy-db-migrator | 13 0220-jpatoscadatatype_metadata.sql upgrade 0 0800 2404240858200800u 1 2024-04-24 08:58:20 grafana | logger=migrator t=2024-04-24T08:58:15.604427751Z level=info msg="Executing migration" id="add index team_role.team_id" kafka | [2024-04-24 08:58:52,466] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 49 in epoch 0 (kafka.coordinator.group.GroupCoordinator) policy-db-migrator | 14 0230-jpatoscadatatype_properties.sql upgrade 0 0800 2404240858200800u 1 2024-04-24 08:58:20 grafana | logger=migrator t=2024-04-24T08:58:15.605482978Z level=info msg="Migration successfully executed" id="add index team_role.team_id" duration=1.055027ms kafka | [2024-04-24 08:58:52,466] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-49 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) policy-db-migrator | 15 0240-jpatoscanodetemplate_metadata.sql upgrade 0 0800 2404240858200800u 1 2024-04-24 08:58:20 grafana | logger=migrator t=2024-04-24T08:58:15.61290944Z level=info msg="Executing migration" id="create user role table" kafka | [2024-04-24 08:58:52,466] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 39 in epoch 0 (kafka.coordinator.group.GroupCoordinator) policy-db-migrator | 16 0250-jpatoscanodetemplate_properties.sql upgrade 0 0800 2404240858200800u 1 2024-04-24 08:58:20 grafana | logger=migrator t=2024-04-24T08:58:15.613754453Z level=info msg="Migration successfully executed" id="create user role table" duration=847.253µs kafka | [2024-04-24 08:58:52,466] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-39 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) policy-db-migrator | 17 0260-jpatoscanodetype_metadata.sql upgrade 0 0800 2404240858200800u 1 2024-04-24 08:58:20 grafana | logger=migrator t=2024-04-24T08:58:15.618086114Z level=info msg="Executing migration" id="add index user_role.org_id" policy-db-migrator | 18 0270-jpatoscanodetype_properties.sql upgrade 0 0800 2404240858200800u 1 2024-04-24 08:58:20 grafana | logger=migrator t=2024-04-24T08:58:15.619124401Z level=info msg="Migration successfully executed" id="add index user_role.org_id" duration=1.038337ms kafka | [2024-04-24 08:58:52,466] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 9 in epoch 0 (kafka.coordinator.group.GroupCoordinator) policy-db-migrator | 19 0280-jpatoscapolicy_metadata.sql upgrade 0 0800 2404240858200800u 1 2024-04-24 08:58:20 grafana | logger=migrator t=2024-04-24T08:58:15.623944989Z level=info msg="Executing migration" id="add unique index user_role_org_id_user_id_role_id" kafka | [2024-04-24 08:58:52,466] INFO [GroupMetadataManager brokerId=1] 
Scheduling loading of offsets and group metadata from __consumer_offsets-9 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) policy-db-migrator | 20 0290-jpatoscapolicy_properties.sql upgrade 0 0800 2404240858200800u 1 2024-04-24 08:58:20 grafana | logger=migrator t=2024-04-24T08:58:15.625040998Z level=info msg="Migration successfully executed" id="add unique index user_role_org_id_user_id_role_id" duration=1.096099ms kafka | [2024-04-24 08:58:52,466] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 24 in epoch 0 (kafka.coordinator.group.GroupCoordinator) policy-db-migrator | 21 0300-jpatoscapolicy_targets.sql upgrade 0 0800 2404240858200800u 1 2024-04-24 08:58:20 grafana | logger=migrator t=2024-04-24T08:58:15.632179904Z level=info msg="Executing migration" id="add index user_role.user_id" kafka | [2024-04-24 08:58:52,466] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-24 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) policy-db-migrator | 22 0310-jpatoscapolicytype_metadata.sql upgrade 0 0800 2404240858200800u 1 2024-04-24 08:58:20 grafana | logger=migrator t=2024-04-24T08:58:15.633999604Z level=info msg="Migration successfully executed" id="add index user_role.user_id" duration=1.81881ms kafka | [2024-04-24 08:58:52,466] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 31 in epoch 0 (kafka.coordinator.group.GroupCoordinator) policy-db-migrator | 23 0320-jpatoscapolicytype_properties.sql upgrade 0 0800 2404240858200800u 1 2024-04-24 08:58:20 grafana | logger=migrator t=2024-04-24T08:58:15.637906317Z level=info msg="Executing migration" id="create builtin role table" kafka | [2024-04-24 08:58:52,466] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-31 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) policy-db-migrator | 24 0330-jpatoscapolicytype_targets.sql upgrade 0 0800 2404240858200800u 1 2024-04-24 08:58:20 grafana | logger=migrator t=2024-04-24T08:58:15.639411983Z level=info msg="Migration successfully executed" id="create builtin role table" duration=1.505196ms kafka | [2024-04-24 08:58:52,466] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 46 in epoch 0 (kafka.coordinator.group.GroupCoordinator) policy-db-migrator | 25 0340-jpatoscapolicytype_triggers.sql upgrade 0 0800 2404240858200800u 1 2024-04-24 08:58:21 grafana | logger=migrator t=2024-04-24T08:58:15.643939516Z level=info msg="Executing migration" id="add index builtin_role.role_id" kafka | [2024-04-24 08:58:52,466] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-46 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) policy-db-migrator | 26 0350-jpatoscaproperty_constraints.sql upgrade 0 0800 2404240858200800u 1 2024-04-24 08:58:21 grafana | logger=migrator t=2024-04-24T08:58:15.645074335Z level=info msg="Migration successfully executed" id="add index builtin_role.role_id" duration=1.134759ms kafka | [2024-04-24 08:58:52,466] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 1 in epoch 0 (kafka.coordinator.group.GroupCoordinator) policy-db-migrator | 27 0360-jpatoscaproperty_metadata.sql upgrade 0 0800 2404240858200800u 1 2024-04-24 08:58:21 grafana | logger=migrator t=2024-04-24T08:58:15.648414399Z level=info msg="Executing migration" id="add index builtin_role.name" kafka | [2024-04-24 08:58:52,466] 
INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-1 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) grafana | logger=migrator t=2024-04-24T08:58:15.649906464Z level=info msg="Migration successfully executed" id="add index builtin_role.name" duration=1.490475ms policy-db-migrator | 28 0370-jpatoscarelationshiptype_metadata.sql upgrade 0 0800 2404240858200800u 1 2024-04-24 08:58:21 kafka | [2024-04-24 08:58:52,466] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 16 in epoch 0 (kafka.coordinator.group.GroupCoordinator) grafana | logger=migrator t=2024-04-24T08:58:15.653579274Z level=info msg="Executing migration" id="Add column org_id to builtin_role table" policy-db-migrator | 29 0380-jpatoscarelationshiptype_properties.sql upgrade 0 0800 2404240858200800u 1 2024-04-24 08:58:21 kafka | [2024-04-24 08:58:52,466] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-16 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) grafana | logger=migrator t=2024-04-24T08:58:15.661077247Z level=info msg="Migration successfully executed" id="Add column org_id to builtin_role table" duration=7.499413ms policy-db-migrator | 30 0390-jpatoscarequirement_metadata.sql upgrade 0 0800 2404240858200800u 1 2024-04-24 08:58:21 kafka | [2024-04-24 08:58:52,466] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 2 in epoch 0 (kafka.coordinator.group.GroupCoordinator) grafana | logger=migrator t=2024-04-24T08:58:15.666139059Z level=info msg="Executing migration" id="add index builtin_role.org_id" policy-db-migrator | 31 0400-jpatoscarequirement_occurrences.sql upgrade 0 0800 2404240858200800u 1 2024-04-24 08:58:21 kafka | [2024-04-24 08:58:52,466] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-2 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) grafana | logger=migrator t=2024-04-24T08:58:15.667220306Z level=info msg="Migration successfully executed" id="add index builtin_role.org_id" duration=1.080537ms policy-db-migrator | 32 0410-jpatoscarequirement_properties.sql upgrade 0 0800 2404240858200800u 1 2024-04-24 08:58:21 kafka | [2024-04-24 08:58:52,466] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 25 in epoch 0 (kafka.coordinator.group.GroupCoordinator) grafana | logger=migrator t=2024-04-24T08:58:15.670852155Z level=info msg="Executing migration" id="add unique index builtin_role_org_id_role_id_role" policy-db-migrator | 33 0420-jpatoscaservicetemplate_metadata.sql upgrade 0 0800 2404240858200800u 1 2024-04-24 08:58:21 kafka | [2024-04-24 08:58:52,466] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-25 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) grafana | logger=migrator t=2024-04-24T08:58:15.67173645Z level=info msg="Migration successfully executed" id="add unique index builtin_role_org_id_role_id_role" duration=884.435µs policy-db-migrator | 34 0430-jpatoscatopologytemplate_inputs.sql upgrade 0 0800 2404240858200800u 1 2024-04-24 08:58:21 kafka | [2024-04-24 08:58:52,466] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 40 in epoch 0 (kafka.coordinator.group.GroupCoordinator) grafana | logger=migrator t=2024-04-24T08:58:15.67479703Z level=info msg="Executing migration" id="Remove unique index role_org_id_uid" policy-db-migrator 
| 35 0440-pdpgroup_pdpsubgroup.sql upgrade 0 0800 2404240858200800u 1 2024-04-24 08:58:21 kafka | [2024-04-24 08:58:52,466] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-40 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) policy-db-migrator | 36 0450-pdpgroup.sql upgrade 0 0800 2404240858200800u 1 2024-04-24 08:58:21 grafana | logger=migrator t=2024-04-24T08:58:15.675640914Z level=info msg="Migration successfully executed" id="Remove unique index role_org_id_uid" duration=844.214µs kafka | [2024-04-24 08:58:52,466] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 47 in epoch 0 (kafka.coordinator.group.GroupCoordinator) policy-db-migrator | 37 0460-pdppolicystatus.sql upgrade 0 0800 2404240858200800u 1 2024-04-24 08:58:21 grafana | logger=migrator t=2024-04-24T08:58:15.679892513Z level=info msg="Executing migration" id="add unique index role.uid" kafka | [2024-04-24 08:58:52,466] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-47 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) policy-db-migrator | 38 0470-pdp.sql upgrade 0 0800 2404240858200800u 1 2024-04-24 08:58:21 grafana | logger=migrator t=2024-04-24T08:58:15.681015792Z level=info msg="Migration successfully executed" id="add unique index role.uid" duration=1.125909ms kafka | [2024-04-24 08:58:52,467] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 17 in epoch 0 (kafka.coordinator.group.GroupCoordinator) policy-db-migrator | 39 0480-pdpstatistics.sql upgrade 0 0800 2404240858200800u 1 2024-04-24 08:58:21 grafana | logger=migrator t=2024-04-24T08:58:15.745126679Z level=info msg="Executing migration" id="create seed assignment table" kafka | [2024-04-24 08:58:52,467] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-17 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) policy-db-migrator | 40 0490-pdpsubgroup_pdp.sql upgrade 0 0800 2404240858200800u 1 2024-04-24 08:58:21 grafana | logger=migrator t=2024-04-24T08:58:15.746397029Z level=info msg="Migration successfully executed" id="create seed assignment table" duration=1.26978ms kafka | [2024-04-24 08:58:52,467] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 32 in epoch 0 (kafka.coordinator.group.GroupCoordinator) policy-db-migrator | 41 0500-pdpsubgroup.sql upgrade 0 0800 2404240858200800u 1 2024-04-24 08:58:21 grafana | logger=migrator t=2024-04-24T08:58:15.751392621Z level=info msg="Executing migration" id="add unique index builtin_role_role_name" kafka | [2024-04-24 08:58:52,467] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-32 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) policy-db-migrator | 42 0510-toscacapabilityassignment.sql upgrade 0 0800 2404240858200800u 1 2024-04-24 08:58:21 grafana | logger=migrator t=2024-04-24T08:58:15.752621391Z level=info msg="Migration successfully executed" id="add unique index builtin_role_role_name" duration=1.22892ms kafka | [2024-04-24 08:58:52,467] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 37 in epoch 0 (kafka.coordinator.group.GroupCoordinator) policy-db-migrator | 43 0520-toscacapabilityassignments.sql upgrade 0 0800 2404240858200800u 1 2024-04-24 08:58:21 grafana | logger=migrator t=2024-04-24T08:58:15.758241502Z level=info msg="Executing 
migration" id="add column hidden to role table" kafka | [2024-04-24 08:58:52,467] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-37 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) policy-db-migrator | 44 0530-toscacapabilityassignments_toscacapabilityassignment.sql upgrade 0 0800 2404240858200800u 1 2024-04-24 08:58:21 grafana | logger=migrator t=2024-04-24T08:58:15.766461847Z level=info msg="Migration successfully executed" id="add column hidden to role table" duration=8.217455ms kafka | [2024-04-24 08:58:52,467] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 7 in epoch 0 (kafka.coordinator.group.GroupCoordinator) policy-db-migrator | 45 0540-toscacapabilitytype.sql upgrade 0 0800 2404240858200800u 1 2024-04-24 08:58:21 grafana | logger=migrator t=2024-04-24T08:58:15.771380597Z level=info msg="Executing migration" id="permission kind migration" kafka | [2024-04-24 08:58:52,467] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-7 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) policy-db-migrator | 46 0550-toscacapabilitytypes.sql upgrade 0 0800 2404240858200800u 1 2024-04-24 08:58:21 grafana | logger=migrator t=2024-04-24T08:58:15.777031349Z level=info msg="Migration successfully executed" id="permission kind migration" duration=5.650952ms kafka | [2024-04-24 08:58:52,467] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 22 in epoch 0 (kafka.coordinator.group.GroupCoordinator) policy-db-migrator | 47 0560-toscacapabilitytypes_toscacapabilitytype.sql upgrade 0 0800 2404240858200800u 1 2024-04-24 08:58:22 grafana | logger=migrator t=2024-04-24T08:58:15.780369114Z level=info msg="Executing migration" id="permission attribute migration" kafka | [2024-04-24 08:58:52,467] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-22 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-04-24 08:58:52,467] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 29 in epoch 0 (kafka.coordinator.group.GroupCoordinator) policy-db-migrator | 48 0570-toscadatatype.sql upgrade 0 0800 2404240858200800u 1 2024-04-24 08:58:22 grafana | logger=migrator t=2024-04-24T08:58:15.786341071Z level=info msg="Migration successfully executed" id="permission attribute migration" duration=5.970817ms kafka | [2024-04-24 08:58:52,467] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-29 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) policy-db-migrator | 49 0580-toscadatatypes.sql upgrade 0 0800 2404240858200800u 1 2024-04-24 08:58:22 grafana | logger=migrator t=2024-04-24T08:58:15.790800025Z level=info msg="Executing migration" id="permission identifier migration" kafka | [2024-04-24 08:58:52,467] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 44 in epoch 0 (kafka.coordinator.group.GroupCoordinator) policy-db-migrator | 50 0590-toscadatatypes_toscadatatype.sql upgrade 0 0800 2404240858200800u 1 2024-04-24 08:58:22 grafana | logger=migrator t=2024-04-24T08:58:15.798590842Z level=info msg="Migration successfully executed" id="permission identifier migration" duration=7.788827ms kafka | [2024-04-24 08:58:52,467] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-44 for epoch 0 
(kafka.coordinator.group.GroupMetadataManager) policy-db-migrator | 51 0600-toscanodetemplate.sql upgrade 0 0800 2404240858200800u 1 2024-04-24 08:58:22 grafana | logger=migrator t=2024-04-24T08:58:15.801957787Z level=info msg="Executing migration" id="add permission identifier index" kafka | [2024-04-24 08:58:52,467] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 14 in epoch 0 (kafka.coordinator.group.GroupCoordinator) policy-db-migrator | 52 0610-toscanodetemplates.sql upgrade 0 0800 2404240858200800u 1 2024-04-24 08:58:22 grafana | logger=migrator t=2024-04-24T08:58:15.803166026Z level=info msg="Migration successfully executed" id="add permission identifier index" duration=1.207869ms kafka | [2024-04-24 08:58:52,467] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-14 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) policy-db-migrator | 53 0620-toscanodetemplates_toscanodetemplate.sql upgrade 0 0800 2404240858200800u 1 2024-04-24 08:58:22 grafana | logger=migrator t=2024-04-24T08:58:15.806670023Z level=info msg="Executing migration" id="add permission action scope role_id index" kafka | [2024-04-24 08:58:52,467] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 23 in epoch 0 (kafka.coordinator.group.GroupCoordinator) policy-db-migrator | 54 0630-toscanodetype.sql upgrade 0 0800 2404240858200800u 1 2024-04-24 08:58:22 grafana | logger=migrator t=2024-04-24T08:58:15.807934044Z level=info msg="Migration successfully executed" id="add permission action scope role_id index" duration=1.263601ms kafka | [2024-04-24 08:58:52,467] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-23 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) policy-db-migrator | 55 0640-toscanodetypes.sql upgrade 0 0800 2404240858200800u 1 2024-04-24 08:58:22 grafana | logger=migrator t=2024-04-24T08:58:15.812249114Z level=info msg="Executing migration" id="remove permission role_id action scope index" kafka | [2024-04-24 08:58:52,468] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 38 in epoch 0 (kafka.coordinator.group.GroupCoordinator) policy-db-migrator | 56 0650-toscanodetypes_toscanodetype.sql upgrade 0 0800 2404240858200800u 1 2024-04-24 08:58:22 grafana | logger=migrator t=2024-04-24T08:58:15.813379153Z level=info msg="Migration successfully executed" id="remove permission role_id action scope index" duration=1.130139ms kafka | [2024-04-24 08:58:52,468] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-38 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) policy-db-migrator | 57 0660-toscaparameter.sql upgrade 0 0800 2404240858200800u 1 2024-04-24 08:58:22 grafana | logger=migrator t=2024-04-24T08:58:15.816680027Z level=info msg="Executing migration" id="create query_history table v1" kafka | [2024-04-24 08:58:52,468] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 8 in epoch 0 (kafka.coordinator.group.GroupCoordinator) policy-db-migrator | 58 0670-toscapolicies.sql upgrade 0 0800 2404240858200800u 1 2024-04-24 08:58:22 grafana | logger=migrator t=2024-04-24T08:58:15.817705863Z level=info msg="Migration successfully executed" id="create query_history table v1" duration=1.025266ms kafka | [2024-04-24 08:58:52,468] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata 
from __consumer_offsets-8 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) policy-db-migrator | 59 0680-toscapolicies_toscapolicy.sql upgrade 0 0800 2404240858200800u 1 2024-04-24 08:58:22 grafana | logger=migrator t=2024-04-24T08:58:15.858983848Z level=info msg="Executing migration" id="add index query_history.org_id-created_by-datasource_uid" kafka | [2024-04-24 08:58:52,468] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 45 in epoch 0 (kafka.coordinator.group.GroupCoordinator) policy-db-migrator | 60 0690-toscapolicy.sql upgrade 0 0800 2404240858200800u 1 2024-04-24 08:58:22 grafana | logger=migrator t=2024-04-24T08:58:15.860029625Z level=info msg="Migration successfully executed" id="add index query_history.org_id-created_by-datasource_uid" duration=1.049738ms kafka | [2024-04-24 08:58:52,468] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-45 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) policy-db-migrator | 61 0700-toscapolicytype.sql upgrade 0 0800 2404240858200800u 1 2024-04-24 08:58:22 grafana | logger=migrator t=2024-04-24T08:58:15.864414566Z level=info msg="Executing migration" id="alter table query_history alter column created_by type to bigint" kafka | [2024-04-24 08:58:52,468] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 15 in epoch 0 (kafka.coordinator.group.GroupCoordinator) policy-db-migrator | 62 0710-toscapolicytypes.sql upgrade 0 0800 2404240858200800u 1 2024-04-24 08:58:22 grafana | logger=migrator t=2024-04-24T08:58:15.864464337Z level=info msg="Migration successfully executed" id="alter table query_history alter column created_by type to bigint" duration=52.551µs kafka | [2024-04-24 08:58:52,468] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-15 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) grafana | logger=migrator t=2024-04-24T08:58:15.868000805Z level=info msg="Executing migration" id="rbac disabled migrator" kafka | [2024-04-24 08:58:52,468] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 30 in epoch 0 (kafka.coordinator.group.GroupCoordinator) policy-db-migrator | 63 0720-toscapolicytypes_toscapolicytype.sql upgrade 0 0800 2404240858200800u 1 2024-04-24 08:58:22 grafana | logger=migrator t=2024-04-24T08:58:15.868068146Z level=info msg="Migration successfully executed" id="rbac disabled migrator" duration=67.621µs kafka | [2024-04-24 08:58:52,468] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-30 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) policy-db-migrator | 64 0730-toscaproperty.sql upgrade 0 0800 2404240858200800u 1 2024-04-24 08:58:22 grafana | logger=migrator t=2024-04-24T08:58:15.890391531Z level=info msg="Executing migration" id="teams permissions migration" kafka | [2024-04-24 08:58:52,468] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 0 in epoch 0 (kafka.coordinator.group.GroupCoordinator) policy-db-migrator | 65 0740-toscarelationshiptype.sql upgrade 0 0800 2404240858200800u 1 2024-04-24 08:58:22 grafana | logger=migrator t=2024-04-24T08:58:15.890727687Z level=info msg="Migration successfully executed" id="teams permissions migration" duration=336.386µs kafka | [2024-04-24 08:58:52,468] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-0 for 
epoch 0 (kafka.coordinator.group.GroupMetadataManager) policy-db-migrator | 66 0750-toscarelationshiptypes.sql upgrade 0 0800 2404240858200800u 1 2024-04-24 08:58:22 grafana | logger=migrator t=2024-04-24T08:58:15.894940205Z level=info msg="Executing migration" id="dashboard permissions" kafka | [2024-04-24 08:58:52,468] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 35 in epoch 0 (kafka.coordinator.group.GroupCoordinator) policy-db-migrator | 67 0760-toscarelationshiptypes_toscarelationshiptype.sql upgrade 0 0800 2404240858200800u 1 2024-04-24 08:58:22 grafana | logger=migrator t=2024-04-24T08:58:15.895332431Z level=info msg="Migration successfully executed" id="dashboard permissions" duration=392.546µs kafka | [2024-04-24 08:58:52,468] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-35 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) policy-db-migrator | 68 0770-toscarequirement.sql upgrade 0 0800 2404240858200800u 1 2024-04-24 08:58:22 grafana | logger=migrator t=2024-04-24T08:58:15.898113047Z level=info msg="Executing migration" id="dashboard permissions uid scopes" kafka | [2024-04-24 08:58:52,469] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 5 in epoch 0 (kafka.coordinator.group.GroupCoordinator) policy-db-migrator | 69 0780-toscarequirements.sql upgrade 0 0800 2404240858200800u 1 2024-04-24 08:58:23 grafana | logger=migrator t=2024-04-24T08:58:15.898597565Z level=info msg="Migration successfully executed" id="dashboard permissions uid scopes" duration=484.568µs kafka | [2024-04-24 08:58:52,470] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-5 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) policy-db-migrator | 70 0790-toscarequirements_toscarequirement.sql upgrade 0 0800 2404240858200800u 1 2024-04-24 08:58:23 grafana | logger=migrator t=2024-04-24T08:58:15.90137063Z level=info msg="Executing migration" id="drop managed folder create actions" kafka | [2024-04-24 08:58:52,470] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 20 in epoch 0 (kafka.coordinator.group.GroupCoordinator) policy-db-migrator | 71 0800-toscaservicetemplate.sql upgrade 0 0800 2404240858200800u 1 2024-04-24 08:58:23 grafana | logger=migrator t=2024-04-24T08:58:15.901508982Z level=info msg="Migration successfully executed" id="drop managed folder create actions" duration=138.272µs kafka | [2024-04-24 08:58:52,470] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-20 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) policy-db-migrator | 72 0810-toscatopologytemplate.sql upgrade 0 0800 2404240858200800u 1 2024-04-24 08:58:23 grafana | logger=migrator t=2024-04-24T08:58:15.908422835Z level=info msg="Executing migration" id="alerting notification permissions" kafka | [2024-04-24 08:58:52,470] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 27 in epoch 0 (kafka.coordinator.group.GroupCoordinator) policy-db-migrator | 73 0820-toscatrigger.sql upgrade 0 0800 2404240858200800u 1 2024-04-24 08:58:23 grafana | logger=migrator t=2024-04-24T08:58:15.908782161Z level=info msg="Migration successfully executed" id="alerting notification permissions" duration=361.156µs kafka | [2024-04-24 08:58:52,470] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from 
__consumer_offsets-27 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) policy-db-migrator | 74 0830-FK_ToscaNodeTemplate_capabilitiesName.sql upgrade 0 0800 2404240858200800u 1 2024-04-24 08:58:23 grafana | logger=migrator t=2024-04-24T08:58:15.911937423Z level=info msg="Executing migration" id="create query_history_star table v1" kafka | [2024-04-24 08:58:52,470] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 42 in epoch 0 (kafka.coordinator.group.GroupCoordinator) policy-db-migrator | 75 0840-FK_ToscaNodeTemplate_requirementsName.sql upgrade 0 0800 2404240858200800u 1 2024-04-24 08:58:23 grafana | logger=migrator t=2024-04-24T08:58:15.912501052Z level=info msg="Migration successfully executed" id="create query_history_star table v1" duration=564.569µs kafka | [2024-04-24 08:58:52,470] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-42 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) policy-db-migrator | 76 0850-FK_ToscaNodeType_requirementsName.sql upgrade 0 0800 2404240858200800u 1 2024-04-24 08:58:23 grafana | logger=migrator t=2024-04-24T08:58:15.915575192Z level=info msg="Executing migration" id="add index query_history.user_id-query_uid" kafka | [2024-04-24 08:58:52,470] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 12 in epoch 0 (kafka.coordinator.group.GroupCoordinator) policy-db-migrator | 77 0860-FK_ToscaServiceTemplate_capabilityTypesName.sql upgrade 0 0800 2404240858200800u 1 2024-04-24 08:58:23 grafana | logger=migrator t=2024-04-24T08:58:15.916322264Z level=info msg="Migration successfully executed" id="add index query_history.user_id-query_uid" duration=746.782µs kafka | [2024-04-24 08:58:52,470] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-12 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) policy-db-migrator | 78 0870-FK_ToscaServiceTemplate_dataTypesName.sql upgrade 0 0800 2404240858200800u 1 2024-04-24 08:58:23 grafana | logger=migrator t=2024-04-24T08:58:15.919996074Z level=info msg="Executing migration" id="add column org_id in query_history_star" kafka | [2024-04-24 08:58:52,470] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 21 in epoch 0 (kafka.coordinator.group.GroupCoordinator) policy-db-migrator | 79 0880-FK_ToscaServiceTemplate_nodeTypesName.sql upgrade 0 0800 2404240858200800u 1 2024-04-24 08:58:23 grafana | logger=migrator t=2024-04-24T08:58:15.925948011Z level=info msg="Migration successfully executed" id="add column org_id in query_history_star" duration=5.950777ms kafka | [2024-04-24 08:58:52,470] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-21 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) policy-db-migrator | 80 0890-FK_ToscaServiceTemplate_policyTypesName.sql upgrade 0 0800 2404240858200800u 1 2024-04-24 08:58:23 grafana | logger=migrator t=2024-04-24T08:58:15.930858682Z level=info msg="Executing migration" id="alter table query_history_star_mig column user_id type to bigint" kafka | [2024-04-24 08:58:52,470] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 36 in epoch 0 (kafka.coordinator.group.GroupCoordinator) policy-db-migrator | 81 0900-FK_ToscaServiceTemplate_relationshipTypesName.sql upgrade 0 0800 2404240858200800u 1 2024-04-24 08:58:23 grafana | logger=migrator t=2024-04-24T08:58:15.930948833Z level=info 
msg="Migration successfully executed" id="alter table query_history_star_mig column user_id type to bigint" duration=64.831µs kafka | [2024-04-24 08:58:52,470] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-36 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) policy-db-migrator | 82 0910-FK_ToscaTopologyTemplate_nodeTemplatesName.sql upgrade 0 0800 2404240858200800u 1 2024-04-24 08:58:23 grafana | logger=migrator t=2024-04-24T08:58:15.933590746Z level=info msg="Executing migration" id="create correlation table v1" kafka | [2024-04-24 08:58:52,471] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 6 in epoch 0 (kafka.coordinator.group.GroupCoordinator) policy-db-migrator | 83 0920-FK_ToscaTopologyTemplate_policyName.sql upgrade 0 0800 2404240858200800u 1 2024-04-24 08:58:23 grafana | logger=migrator t=2024-04-24T08:58:15.934348879Z level=info msg="Migration successfully executed" id="create correlation table v1" duration=757.813µs kafka | [2024-04-24 08:58:52,471] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-6 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) policy-db-migrator | 84 0940-PdpPolicyStatus_PdpGroup.sql upgrade 0 0800 2404240858200800u 1 2024-04-24 08:58:23 grafana | logger=migrator t=2024-04-24T08:58:15.941851201Z level=info msg="Executing migration" id="add index correlations.uid" kafka | [2024-04-24 08:58:52,471] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 43 in epoch 0 (kafka.coordinator.group.GroupCoordinator) policy-db-migrator | 85 0950-TscaServiceTemplatetopologyTemplateParentLocalName.sql upgrade 0 0800 2404240858200800u 1 2024-04-24 08:58:24 grafana | logger=migrator t=2024-04-24T08:58:15.942638484Z level=info msg="Migration successfully executed" id="add index correlations.uid" duration=787.333µs kafka | [2024-04-24 08:58:52,471] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-43 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) policy-db-migrator | 86 0960-FK_ToscaNodeTemplate_capabilitiesName.sql upgrade 0 0800 2404240858200800u 1 2024-04-24 08:58:24 grafana | logger=migrator t=2024-04-24T08:58:15.947483573Z level=info msg="Executing migration" id="add index correlations.source_uid" kafka | [2024-04-24 08:58:52,471] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 13 in epoch 0 (kafka.coordinator.group.GroupCoordinator) policy-db-migrator | 87 0970-FK_ToscaNodeTemplate_requirementsName.sql upgrade 0 0800 2404240858200800u 1 2024-04-24 08:58:24 grafana | logger=migrator t=2024-04-24T08:58:15.948274526Z level=info msg="Migration successfully executed" id="add index correlations.source_uid" duration=790.673µs kafka | [2024-04-24 08:58:52,471] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-13 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) policy-db-migrator | 88 0980-FK_ToscaNodeType_requirementsName.sql upgrade 0 0800 2404240858200800u 1 2024-04-24 08:58:24 policy-db-migrator | 89 0990-FK_ToscaServiceTemplate_capabilityTypesName.sql upgrade 0 0800 2404240858200800u 1 2024-04-24 08:58:24 grafana | logger=migrator t=2024-04-24T08:58:15.951178873Z level=info msg="Executing migration" id="add correlation config column" kafka | [2024-04-24 08:58:52,471] INFO [GroupCoordinator 1]: Elected as the group 
coordinator for partition 28 in epoch 0 (kafka.coordinator.group.GroupCoordinator) policy-db-migrator | 90 1000-FK_ToscaServiceTemplate_dataTypesName.sql upgrade 0 0800 2404240858200800u 1 2024-04-24 08:58:24 grafana | logger=migrator t=2024-04-24T08:58:15.96264243Z level=info msg="Migration successfully executed" id="add correlation config column" duration=11.464947ms kafka | [2024-04-24 08:58:52,471] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-28 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) policy-db-migrator | 91 1010-FK_ToscaServiceTemplate_nodeTypesName.sql upgrade 0 0800 2404240858200800u 1 2024-04-24 08:58:24 grafana | logger=migrator t=2024-04-24T08:58:15.965521197Z level=info msg="Executing migration" id="drop index IDX_correlation_uid - v1" kafka | [2024-04-24 08:58:52,471] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-3 in 6 milliseconds for epoch 0, of which 3 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) policy-db-migrator | 92 1020-FK_ToscaServiceTemplate_policyTypesName.sql upgrade 0 0800 2404240858200800u 1 2024-04-24 08:58:24 grafana | logger=migrator t=2024-04-24T08:58:15.96629083Z level=info msg="Migration successfully executed" id="drop index IDX_correlation_uid - v1" duration=769.633µs kafka | [2024-04-24 08:58:52,473] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-18 in 8 milliseconds for epoch 0, of which 7 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) policy-db-migrator | 93 1030-FK_ToscaServiceTemplate_relationshipTypesName.sql upgrade 0 0800 2404240858200800u 1 2024-04-24 08:58:24 grafana | logger=migrator t=2024-04-24T08:58:15.971946583Z level=info msg="Executing migration" id="drop index IDX_correlation_source_uid - v1" kafka | [2024-04-24 08:58:52,473] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-41 in 8 milliseconds for epoch 0, of which 8 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) policy-db-migrator | 94 1040-FK_ToscaTopologyTemplate_nodeTemplatesName.sql upgrade 0 0800 2404240858200800u 1 2024-04-24 08:58:24 grafana | logger=migrator t=2024-04-24T08:58:15.97306557Z level=info msg="Migration successfully executed" id="drop index IDX_correlation_source_uid - v1" duration=1.118377ms kafka | [2024-04-24 08:58:52,473] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-10 in 8 milliseconds for epoch 0, of which 8 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) policy-db-migrator | 95 1050-FK_ToscaTopologyTemplate_policyName.sql upgrade 0 0800 2404240858200800u 1 2024-04-24 08:58:24 grafana | logger=migrator t=2024-04-24T08:58:15.979152Z level=info msg="Executing migration" id="Rename table correlation to correlation_tmp_qwerty - v1" kafka | [2024-04-24 08:58:52,473] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-33 in 8 milliseconds for epoch 0, of which 8 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) policy-db-migrator | 96 1060-TscaServiceTemplatetopologyTemplateParentLocalName.sql upgrade 0 0800 2404240858200800u 1 2024-04-24 08:58:24 grafana | logger=migrator t=2024-04-24T08:58:16.002209727Z level=info msg="Migration successfully executed" id="Rename table correlation to correlation_tmp_qwerty - v1" duration=23.059497ms kafka | [2024-04-24 08:58:52,473] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-48 in 8 milliseconds for epoch 0, of which 8 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) policy-db-migrator | 97 0100-pdp.sql upgrade 0800 0900 2404240858200900u 1 2024-04-24 08:58:24 grafana | logger=migrator t=2024-04-24T08:58:16.007832139Z level=info msg="Executing migration" id="create correlation v2" kafka | [2024-04-24 08:58:52,473] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-19 in 8 milliseconds for epoch 0, of which 8 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) policy-db-migrator | 98 0110-idx_tsidx1.sql upgrade 0800 0900 2404240858200900u 1 2024-04-24 08:58:24 grafana | logger=migrator t=2024-04-24T08:58:16.008622081Z level=info msg="Migration successfully executed" id="create correlation v2" duration=789.842µs kafka | [2024-04-24 08:58:52,473] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-34 in 7 milliseconds for epoch 0, of which 7 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) policy-db-migrator | 99 0120-pk_pdpstatistics.sql upgrade 0800 0900 2404240858200900u 1 2024-04-24 08:58:24 grafana | logger=migrator t=2024-04-24T08:58:16.011849314Z level=info msg="Executing migration" id="create index IDX_correlation_uid - v2" kafka | [2024-04-24 08:58:52,473] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-4 in 7 milliseconds for epoch 0, of which 7 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) policy-db-migrator | 100 0130-pdpstatistics.sql upgrade 0800 0900 2404240858200900u 1 2024-04-24 08:58:24 grafana | logger=migrator t=2024-04-24T08:58:16.013006093Z level=info msg="Migration successfully executed" id="create index IDX_correlation_uid - v2" duration=1.156079ms kafka | [2024-04-24 08:58:52,473] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-11 in 7 milliseconds for epoch 0, of which 7 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) policy-db-migrator | 101 0140-pk_pdpstatistics.sql upgrade 0800 0900 2404240858200900u 1 2024-04-24 08:58:24 grafana | logger=migrator t=2024-04-24T08:58:16.015977401Z level=info msg="Executing migration" id="create index IDX_correlation_source_uid - v2" kafka | [2024-04-24 08:58:52,473] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-26 in 7 milliseconds for epoch 0, of which 7 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) policy-db-migrator | 102 0150-pdpstatistics.sql upgrade 0800 0900 2404240858200900u 1 2024-04-24 08:58:25 grafana | logger=migrator t=2024-04-24T08:58:16.017222271Z level=info msg="Migration successfully executed" id="create index IDX_correlation_source_uid - v2" duration=1.24425ms kafka | [2024-04-24 08:58:52,473] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-49 in 7 milliseconds for epoch 0, of which 7 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) policy-db-migrator | 103 0160-jpapdpstatistics_enginestats.sql upgrade 0800 0900 2404240858200900u 1 2024-04-24 08:58:25 grafana | logger=migrator t=2024-04-24T08:58:16.022549139Z level=info msg="Executing migration" id="create index IDX_correlation_org_id - v2" kafka | [2024-04-24 08:58:52,474] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-39 in 8 milliseconds for epoch 0, of which 8 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) policy-db-migrator | 104 0170-jpapdpstatistics_enginestats.sql upgrade 0800 0900 2404240858200900u 1 2024-04-24 08:58:25 grafana | logger=migrator t=2024-04-24T08:58:16.023738689Z level=info msg="Migration successfully executed" id="create index IDX_correlation_org_id - v2" duration=1.18932ms kafka | [2024-04-24 08:58:52,474] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-9 in 8 milliseconds for epoch 0, of which 8 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) policy-db-migrator | 105 0180-jpapdpstatistics_enginestats.sql upgrade 0800 0900 2404240858200900u 1 2024-04-24 08:58:25 grafana | logger=migrator t=2024-04-24T08:58:16.027804704Z level=info msg="Executing migration" id="copy correlation v1 to v2" kafka | [2024-04-24 08:58:52,474] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-24 in 8 milliseconds for epoch 0, of which 8 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) policy-db-migrator | 106 0190-jpapolicyaudit.sql upgrade 0800 0900 2404240858200900u 1 2024-04-24 08:58:25 kafka | [2024-04-24 08:58:52,474] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-31 in 8 milliseconds for epoch 0, of which 8 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) grafana | logger=migrator t=2024-04-24T08:58:16.028201531Z level=info msg="Migration successfully executed" id="copy correlation v1 to v2" duration=397.267µs policy-db-migrator | 107 0200-JpaPolicyAuditIndex_timestamp.sql upgrade 0800 0900 2404240858200900u 1 2024-04-24 08:58:25 kafka | [2024-04-24 08:58:52,474] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-46 in 8 milliseconds for epoch 0, of which 8 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) grafana | logger=migrator t=2024-04-24T08:58:16.031467545Z level=info msg="Executing migration" id="drop correlation_tmp_qwerty" policy-db-migrator | 108 0210-sequence.sql upgrade 0800 0900 2404240858200900u 1 2024-04-24 08:58:25 kafka | [2024-04-24 08:58:52,474] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-1 in 8 milliseconds for epoch 0, of which 8 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) grafana | logger=migrator t=2024-04-24T08:58:16.032618823Z level=info msg="Migration successfully executed" id="drop correlation_tmp_qwerty" duration=1.151318ms policy-db-migrator | 109 0220-sequence.sql upgrade 0800 0900 2404240858200900u 1 2024-04-24 08:58:25 kafka | [2024-04-24 08:58:52,474] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-16 in 8 milliseconds for epoch 0, of which 8 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) grafana | logger=migrator t=2024-04-24T08:58:16.037715047Z level=info msg="Executing migration" id="add provisioning column" policy-db-migrator | 110 0100-jpatoscapolicy_targets.sql upgrade 0900 1000 2404240858201000u 1 2024-04-24 08:58:25 kafka | [2024-04-24 08:58:52,474] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-2 in 8 milliseconds for epoch 0, of which 8 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) grafana | logger=migrator t=2024-04-24T08:58:16.046865106Z level=info msg="Migration successfully executed" id="add provisioning column" duration=9.147809ms policy-db-migrator | 111 0110-jpatoscapolicytype_targets.sql upgrade 0900 1000 2404240858201000u 1 2024-04-24 08:58:25 kafka | [2024-04-24 08:58:52,474] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-25 in 8 milliseconds for epoch 0, of which 8 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) grafana | logger=migrator t=2024-04-24T08:58:16.049886755Z level=info msg="Executing migration" id="create entity_events table" policy-db-migrator | 112 0120-toscatrigger.sql upgrade 0900 1000 2404240858201000u 1 2024-04-24 08:58:25 kafka | [2024-04-24 08:58:52,474] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-40 in 8 milliseconds for epoch 0, of which 8 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) grafana | logger=migrator t=2024-04-24T08:58:16.052058311Z level=info msg="Migration successfully executed" id="create entity_events table" duration=2.170686ms policy-db-migrator | 113 0130-jpatoscapolicytype_triggers.sql upgrade 0900 1000 2404240858201000u 1 2024-04-24 08:58:25 kafka | [2024-04-24 08:58:52,474] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-47 in 7 milliseconds for epoch 0, of which 7 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) grafana | logger=migrator t=2024-04-24T08:58:16.05513609Z level=info msg="Executing migration" id="create dashboard public config v1" policy-db-migrator | 114 0140-toscaparameter.sql upgrade 0900 1000 2404240858201000u 1 2024-04-24 08:58:25 kafka | [2024-04-24 08:58:52,475] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-17 in 8 milliseconds for epoch 0, of which 8 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) grafana | logger=migrator t=2024-04-24T08:58:16.056798398Z level=info msg="Migration successfully executed" id="create dashboard public config v1" duration=1.661838ms policy-db-migrator | 115 0150-toscaproperty.sql upgrade 0900 1000 2404240858201000u 1 2024-04-24 08:58:25 kafka | [2024-04-24 08:58:52,475] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-32 in 8 milliseconds for epoch 0, of which 8 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) grafana | logger=migrator t=2024-04-24T08:58:16.061140059Z level=info msg="Executing migration" id="drop index UQE_dashboard_public_config_uid - v1" policy-db-migrator | 116 0160-jpapolicyaudit_pk.sql upgrade 0900 1000 2404240858201000u 1 2024-04-24 08:58:25 kafka | [2024-04-24 08:58:52,475] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-37 in 8 milliseconds for epoch 0, of which 8 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) grafana | logger=migrator t=2024-04-24T08:58:16.061880171Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop index UQE_dashboard_public_config_uid - v1" policy-db-migrator | 117 0170-pdpstatistics_pk.sql upgrade 0900 1000 2404240858201000u 1 2024-04-24 08:58:25 kafka | [2024-04-24 08:58:52,475] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-7 in 8 milliseconds for epoch 0, of which 8 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) grafana | logger=migrator t=2024-04-24T08:58:16.065608151Z level=info msg="Executing migration" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v1" policy-db-migrator | 118 0180-jpatoscanodetemplate_metadata.sql upgrade 0900 1000 2404240858201000u 1 2024-04-24 08:58:25 kafka | [2024-04-24 08:58:52,475] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-22 in 8 milliseconds for epoch 0, of which 8 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) grafana | logger=migrator t=2024-04-24T08:58:16.06611045Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v1" policy-db-migrator | 119 0100-upgrade.sql upgrade 1000 1100 2404240858201100u 1 2024-04-24 08:58:25 grafana | logger=migrator t=2024-04-24T08:58:16.069668668Z level=info msg="Executing migration" id="Drop old dashboard public config table" kafka | [2024-04-24 08:58:52,475] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-29 in 8 milliseconds for epoch 0, of which 8 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) policy-db-migrator | 120 0100-jpapolicyaudit_renameuser.sql upgrade 1100 1200 2404240858201200u 1 2024-04-24 08:58:25 grafana | logger=migrator t=2024-04-24T08:58:16.070487501Z level=info msg="Migration successfully executed" id="Drop old dashboard public config table" duration=818.353µs kafka | [2024-04-24 08:58:52,475] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-44 in 8 milliseconds for epoch 0, of which 8 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) policy-db-migrator | 121 0110-idx_tsidx1.sql upgrade 1100 1200 2404240858201200u 1 2024-04-24 08:58:25 grafana | logger=migrator t=2024-04-24T08:58:16.075042636Z level=info msg="Executing migration" id="recreate dashboard public config v1" kafka | [2024-04-24 08:58:52,475] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-14 in 8 milliseconds for epoch 0, of which 8 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) policy-db-migrator | 122 0120-audit_sequence.sql upgrade 1100 1200 2404240858201200u 1 2024-04-24 08:58:25 grafana | logger=migrator t=2024-04-24T08:58:16.076595051Z level=info msg="Migration successfully executed" id="recreate dashboard public config v1" duration=1.549125ms kafka | [2024-04-24 08:58:52,475] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-23 in 8 milliseconds for epoch 0, of which 8 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) policy-db-migrator | 123 0130-statistics_sequence.sql upgrade 1100 1200 2404240858201200u 1 2024-04-24 08:58:26 grafana | logger=migrator t=2024-04-24T08:58:16.08023191Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_uid - v1" kafka | [2024-04-24 08:58:52,475] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-38 in 7 milliseconds for epoch 0, of which 7 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) policy-db-migrator | 124 0100-pdpstatistics.sql upgrade 1200 1300 2404240858201300u 1 2024-04-24 08:58:26 grafana | logger=migrator t=2024-04-24T08:58:16.08205122Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_uid - v1" duration=1.81885ms kafka | [2024-04-24 08:58:52,475] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-8 in 7 milliseconds for epoch 0, of which 7 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) policy-db-migrator | 125 0110-jpapdpstatistics_enginestats.sql upgrade 1200 1300 2404240858201300u 1 2024-04-24 08:58:26 grafana | logger=migrator t=2024-04-24T08:58:16.085348115Z level=info msg="Executing migration" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v1" kafka | [2024-04-24 08:58:52,476] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-45 in 8 milliseconds for epoch 0, of which 8 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) policy-db-migrator | 126 0120-statistics_sequence.sql upgrade 1200 1300 2404240858201300u 1 2024-04-24 08:58:26 grafana | logger=migrator t=2024-04-24T08:58:16.087274425Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v1" duration=1.92583ms kafka | [2024-04-24 08:58:52,476] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-15 in 8 milliseconds for epoch 0, of which 8 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) policy-db-migrator | policyadmin: OK @ 1300 grafana | logger=migrator t=2024-04-24T08:58:16.091402923Z level=info msg="Executing migration" id="drop index UQE_dashboard_public_config_uid - v2" kafka | [2024-04-24 08:58:52,476] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-30 in 8 milliseconds for epoch 0, of which 8 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) grafana | logger=migrator t=2024-04-24T08:58:16.092491621Z level=info msg="Migration successfully executed" id="drop index UQE_dashboard_public_config_uid - v2" duration=1.088598ms kafka | [2024-04-24 08:58:52,476] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-0 in 8 milliseconds for epoch 0, of which 8 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) grafana | logger=migrator t=2024-04-24T08:58:16.130686934Z level=info msg="Executing migration" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v2" kafka | [2024-04-24 08:58:52,476] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-35 in 8 milliseconds for epoch 0, of which 8 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) grafana | logger=migrator t=2024-04-24T08:58:16.132330342Z level=info msg="Migration successfully executed" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v2" duration=1.643918ms kafka | [2024-04-24 08:58:52,476] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-5 in 6 milliseconds for epoch 0, of which 6 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) grafana | logger=migrator t=2024-04-24T08:58:16.136654242Z level=info msg="Executing migration" id="Drop public config table" kafka | [2024-04-24 08:58:52,476] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-20 in 6 milliseconds for epoch 0, of which 6 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) grafana | logger=migrator t=2024-04-24T08:58:16.13781966Z level=info msg="Migration successfully executed" id="Drop public config table" duration=1.165018ms kafka | [2024-04-24 08:58:52,476] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-27 in 6 milliseconds for epoch 0, of which 6 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) grafana | logger=migrator t=2024-04-24T08:58:16.14138117Z level=info msg="Executing migration" id="Recreate dashboard public config v2" kafka | [2024-04-24 08:58:52,476] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-42 in 6 milliseconds for epoch 0, of which 6 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) grafana | logger=migrator t=2024-04-24T08:58:16.142564638Z level=info msg="Migration successfully executed" id="Recreate dashboard public config v2" duration=1.183178ms kafka | [2024-04-24 08:58:52,476] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-12 in 6 milliseconds for epoch 0, of which 6 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) grafana | logger=migrator t=2024-04-24T08:58:16.145764921Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_uid - v2" kafka | [2024-04-24 08:58:52,476] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-21 in 6 milliseconds for epoch 0, of which 6 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) grafana | logger=migrator t=2024-04-24T08:58:16.14693534Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_uid - v2" duration=1.170039ms kafka | [2024-04-24 08:58:52,477] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-36 in 7 milliseconds for epoch 0, of which 6 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) grafana | logger=migrator t=2024-04-24T08:58:16.150900844Z level=info msg="Executing migration" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v2" kafka | [2024-04-24 08:58:52,477] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-6 in 6 milliseconds for epoch 0, of which 6 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) grafana | logger=migrator t=2024-04-24T08:58:16.153136901Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v2" duration=2.235737ms kafka | [2024-04-24 08:58:52,477] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-43 in 6 milliseconds for epoch 0, of which 6 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-04-24 08:58:52,477] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-13 in 6 milliseconds for epoch 0, of which 6 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) grafana | logger=migrator t=2024-04-24T08:58:16.156765861Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_access_token - v2" kafka | [2024-04-24 08:58:52,477] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-28 in 6 milliseconds for epoch 0, of which 6 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) grafana | logger=migrator t=2024-04-24T08:58:16.158538109Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_access_token - v2" duration=1.772478ms kafka | [2024-04-24 08:58:52,478] INFO [Broker id=1] Finished LeaderAndIsr request in 1124ms correlationId 1 from controller 1 for 51 partitions (state.change.logger) grafana | logger=migrator t=2024-04-24T08:58:16.162658407Z level=info msg="Executing migration" id="Rename table dashboard_public_config to dashboard_public - v2" grafana | logger=migrator t=2024-04-24T08:58:16.187421761Z level=info msg="Migration successfully executed" id="Rename table dashboard_public_config to dashboard_public - v2" duration=24.763614ms kafka | [2024-04-24 08:58:52,481] TRACE [Controller id=1 epoch=1] Received response LeaderAndIsrResponseData(errorCode=0, partitionErrors=[], topics=[LeaderAndIsrTopicError(topicId=3d7pexomSuav55xzl5U12w, partitionErrors=[LeaderAndIsrPartitionError(topicName='', partitionIndex=13, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=46, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=9, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=42, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=21, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=17, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=30, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=26, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=5, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=38, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=1, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=34, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=16, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=45, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=12, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=41, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=24, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=20, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=49, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=0, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=29, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=25, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=8, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=37, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=4, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=33, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=15, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=48, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=11, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=44, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=23, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=19, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=32, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=28, errorCode=0), LeaderAndIsrPartitionError(topicName='', 
partitionIndex=7, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=40, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=3, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=36, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=47, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=14, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=43, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=10, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=22, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=18, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=31, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=27, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=39, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=6, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=35, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=2, errorCode=0)]), LeaderAndIsrTopicError(topicId=UfYjnzzkRPeYang4gRgPIg, partitionErrors=[LeaderAndIsrPartitionError(topicName='', partitionIndex=0, errorCode=0)])]) for request LEADER_AND_ISR with correlation id 1 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger) grafana | logger=migrator t=2024-04-24T08:58:16.198669784Z level=info msg="Executing migration" id="add annotations_enabled column" kafka | [2024-04-24 08:58:52,487] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='policy-pdp-pap', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition policy-pdp-pap-0 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) grafana | logger=migrator t=2024-04-24T08:58:16.210508608Z level=info msg="Migration successfully executed" id="add annotations_enabled column" duration=11.838554ms kafka | [2024-04-24 08:58:52,487] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=13, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-13 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) grafana | logger=migrator t=2024-04-24T08:58:16.213978784Z level=info msg="Executing migration" id="add time_selection_enabled column" kafka | [2024-04-24 08:58:52,487] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=46, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-46 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) grafana | logger=migrator t=2024-04-24T08:58:16.220027763Z level=info msg="Migration successfully executed" id="add time_selection_enabled column" duration=6.048729ms kafka | [2024-04-24 08:58:52,487] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=9, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-9 in response to UpdateMetadata request sent by controller 1 epoch 1 with 
correlation id 2 (state.change.logger) grafana | logger=migrator t=2024-04-24T08:58:16.223088453Z level=info msg="Executing migration" id="delete orphaned public dashboards" kafka | [2024-04-24 08:58:52,487] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=42, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-42 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) grafana | logger=migrator t=2024-04-24T08:58:16.223340248Z level=info msg="Migration successfully executed" id="delete orphaned public dashboards" duration=251.815µs kafka | [2024-04-24 08:58:52,487] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=21, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-21 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) grafana | logger=migrator t=2024-04-24T08:58:16.227741549Z level=info msg="Executing migration" id="add share column" kafka | [2024-04-24 08:58:52,488] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=17, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-17 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) grafana | logger=migrator t=2024-04-24T08:58:16.240865554Z level=info msg="Migration successfully executed" id="add share column" duration=13.121715ms kafka | [2024-04-24 08:58:52,488] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=30, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-30 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) grafana | logger=migrator t=2024-04-24T08:58:16.24430429Z level=info msg="Executing migration" id="backfill empty share column fields with default of public" kafka | [2024-04-24 08:58:52,488] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=26, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-26 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) grafana | logger=migrator t=2024-04-24T08:58:16.244471433Z level=info msg="Migration successfully executed" id="backfill empty share column fields with default of public" duration=165.813µs kafka | [2024-04-24 08:58:52,488] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=5, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-5 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) grafana | logger=migrator t=2024-04-24T08:58:16.247784767Z level=info msg="Executing migration" id="create file table" kafka | [2024-04-24 08:58:52,488] TRACE [Broker id=1] Cached leader info 
UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=38, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-38 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) grafana | logger=migrator t=2024-04-24T08:58:16.248486449Z level=info msg="Migration successfully executed" id="create file table" duration=702.411µs kafka | [2024-04-24 08:58:52,488] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=1, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-1 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) grafana | logger=migrator t=2024-04-24T08:58:16.253885676Z level=info msg="Executing migration" id="file table idx: path natural pk" kafka | [2024-04-24 08:58:52,488] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=34, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-34 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) grafana | logger=migrator t=2024-04-24T08:58:16.255717566Z level=info msg="Migration successfully executed" id="file table idx: path natural pk" duration=1.83034ms kafka | [2024-04-24 08:58:52,488] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=16, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-16 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) grafana | logger=migrator t=2024-04-24T08:58:16.259336505Z level=info msg="Executing migration" id="file table idx: parent_folder_path_hash fast folder retrieval" grafana | logger=migrator t=2024-04-24T08:58:16.260510265Z level=info msg="Migration successfully executed" id="file table idx: parent_folder_path_hash fast folder retrieval" duration=1.17335ms kafka | [2024-04-24 08:58:52,488] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=45, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-45 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) grafana | logger=migrator t=2024-04-24T08:58:16.26390722Z level=info msg="Executing migration" id="create file_meta table" kafka | [2024-04-24 08:58:52,488] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=12, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-12 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) grafana | logger=migrator t=2024-04-24T08:58:16.264740074Z level=info msg="Migration successfully executed" id="create file_meta table" duration=832.514µs kafka | [2024-04-24 08:58:52,488] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', 
partitionIndex=41, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-41 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) grafana | logger=migrator t=2024-04-24T08:58:16.268657437Z level=info msg="Executing migration" id="file table idx: path key" kafka | [2024-04-24 08:58:52,488] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=24, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-24 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) grafana | logger=migrator t=2024-04-24T08:58:16.269901678Z level=info msg="Migration successfully executed" id="file table idx: path key" duration=1.244521ms kafka | [2024-04-24 08:58:52,488] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=20, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-20 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) grafana | logger=migrator t=2024-04-24T08:58:16.275662431Z level=info msg="Executing migration" id="set path collation in file table" kafka | [2024-04-24 08:58:52,488] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=49, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-49 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) grafana | logger=migrator t=2024-04-24T08:58:16.275727022Z level=info msg="Migration successfully executed" id="set path collation in file table" duration=67.011µs kafka | [2024-04-24 08:58:52,488] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-0 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) grafana | logger=migrator t=2024-04-24T08:58:16.279508235Z level=info msg="Executing migration" id="migrate contents column to mediumblob for MySQL" kafka | [2024-04-24 08:58:52,488] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=29, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-29 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) grafana | logger=migrator t=2024-04-24T08:58:16.279645517Z level=info msg="Migration successfully executed" id="migrate contents column to mediumblob for MySQL" duration=137.752µs kafka | [2024-04-24 08:58:52,488] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=25, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-25 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 
(state.change.logger) grafana | logger=migrator t=2024-04-24T08:58:16.282580715Z level=info msg="Executing migration" id="managed permissions migration" kafka | [2024-04-24 08:58:52,488] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=8, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-8 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) grafana | logger=migrator t=2024-04-24T08:58:16.283432938Z level=info msg="Migration successfully executed" id="managed permissions migration" duration=852.133µs kafka | [2024-04-24 08:58:52,488] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=37, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-37 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) grafana | logger=migrator t=2024-04-24T08:58:16.287907162Z level=info msg="Executing migration" id="managed folder permissions alert actions migration" kafka | [2024-04-24 08:58:52,489] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=4, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-4 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) grafana | logger=migrator t=2024-04-24T08:58:16.288168066Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions migration" duration=258.864µs kafka | [2024-04-24 08:58:52,489] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=33, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-33 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) grafana | logger=migrator t=2024-04-24T08:58:16.291566481Z level=info msg="Executing migration" id="RBAC action name migrator" kafka | [2024-04-24 08:58:52,489] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=15, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-15 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) grafana | logger=migrator t=2024-04-24T08:58:16.293024755Z level=info msg="Migration successfully executed" id="RBAC action name migrator" duration=1.458134ms kafka | [2024-04-24 08:58:52,489] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=48, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-48 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) grafana | logger=migrator t=2024-04-24T08:58:16.296733985Z level=info msg="Executing migration" id="Add UID column to playlist" kafka | [2024-04-24 08:58:52,489] TRACE [Broker id=1] Cached leader info 
UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=11, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-11 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) grafana | logger=migrator t=2024-04-24T08:58:16.306366553Z level=info msg="Migration successfully executed" id="Add UID column to playlist" duration=9.631148ms kafka | [2024-04-24 08:58:52,489] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=44, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-44 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) grafana | logger=migrator t=2024-04-24T08:58:16.309772189Z level=info msg="Executing migration" id="Update uid column values in playlist" kafka | [2024-04-24 08:58:52,489] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=23, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-23 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) grafana | logger=migrator t=2024-04-24T08:58:16.309972152Z level=info msg="Migration successfully executed" id="Update uid column values in playlist" duration=199.603µs kafka | [2024-04-24 08:58:52,489] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=19, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-19 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) grafana | logger=migrator t=2024-04-24T08:58:16.314694049Z level=info msg="Executing migration" id="Add index for uid in playlist" kafka | [2024-04-24 08:58:52,489] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=32, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-32 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) grafana | logger=migrator t=2024-04-24T08:58:16.31594296Z level=info msg="Migration successfully executed" id="Add index for uid in playlist" duration=1.248761ms kafka | [2024-04-24 08:58:52,489] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=28, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-28 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) grafana | logger=migrator t=2024-04-24T08:58:16.319709541Z level=info msg="Executing migration" id="update group index for alert rules" kafka | [2024-04-24 08:58:52,489] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=7, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-7 in response to UpdateMetadata request sent 
by controller 1 epoch 1 with correlation id 2 (state.change.logger) grafana | logger=migrator t=2024-04-24T08:58:16.320150818Z level=info msg="Migration successfully executed" id="update group index for alert rules" duration=440.267µs kafka | [2024-04-24 08:58:52,489] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=40, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-40 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) grafana | logger=migrator t=2024-04-24T08:58:16.323643605Z level=info msg="Executing migration" id="managed folder permissions alert actions repeated migration" kafka | [2024-04-24 08:58:52,489] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=3, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-3 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) grafana | logger=migrator t=2024-04-24T08:58:16.324030041Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions repeated migration" duration=387.016µs kafka | [2024-04-24 08:58:52,489] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=36, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-36 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) grafana | logger=migrator t=2024-04-24T08:58:16.328302972Z level=info msg="Executing migration" id="admin only folder/dashboard permission" kafka | [2024-04-24 08:58:52,489] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=47, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-47 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) grafana | logger=migrator t=2024-04-24T08:58:16.329122045Z level=info msg="Migration successfully executed" id="admin only folder/dashboard permission" duration=817.943µs kafka | [2024-04-24 08:58:52,489] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=14, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-14 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) grafana | logger=migrator t=2024-04-24T08:58:16.332588842Z level=info msg="Executing migration" id="add action column to seed_assignment" kafka | [2024-04-24 08:58:52,489] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=43, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-43 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) grafana | logger=migrator t=2024-04-24T08:58:16.342455623Z level=info msg="Migration successfully executed" id="add 
action column to seed_assignment" duration=9.865611ms kafka | [2024-04-24 08:58:52,490] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=10, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-10 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) grafana | logger=migrator t=2024-04-24T08:58:16.34598816Z level=info msg="Executing migration" id="add scope column to seed_assignment" kafka | [2024-04-24 08:58:52,490] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=22, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-22 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) grafana | logger=migrator t=2024-04-24T08:58:16.356140336Z level=info msg="Migration successfully executed" id="add scope column to seed_assignment" duration=10.149146ms kafka | [2024-04-24 08:58:52,490] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=18, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-18 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) grafana | logger=migrator t=2024-04-24T08:58:16.359830487Z level=info msg="Executing migration" id="remove unique index builtin_role_role_name before nullable update" kafka | [2024-04-24 08:58:52,490] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=31, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-31 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-04-24 08:58:52,490] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=27, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-27 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-04-24 08:58:52,490] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=39, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-39 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) grafana | logger=migrator t=2024-04-24T08:58:16.36065918Z level=info msg="Migration successfully executed" id="remove unique index builtin_role_role_name before nullable update" duration=825.873µs kafka | [2024-04-24 08:58:52,490] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=6, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-6 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) grafana | logger=migrator 
t=2024-04-24T08:58:16.365034831Z level=info msg="Executing migration" id="update seed_assignment role_name column to nullable" kafka | [2024-04-24 08:58:52,490] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=35, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-35 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) grafana | logger=migrator t=2024-04-24T08:58:16.435061445Z level=info msg="Migration successfully executed" id="update seed_assignment role_name column to nullable" duration=70.023504ms kafka | [2024-04-24 08:58:52,490] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=2, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-2 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) grafana | logger=migrator t=2024-04-24T08:58:16.597497397Z level=info msg="Executing migration" id="add unique index builtin_role_name back" kafka | [2024-04-24 08:58:52,490] INFO [Broker id=1] Add 51 partitions and deleted 0 partitions from metadata cache in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) grafana | logger=migrator t=2024-04-24T08:58:16.599538731Z level=info msg="Migration successfully executed" id="add unique index builtin_role_name back" duration=2.044623ms kafka | [2024-04-24 08:58:52,491] TRACE [Controller id=1 epoch=1] Received response UpdateMetadataResponseData(errorCode=0) for request UPDATE_METADATA with correlation id 2 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger) grafana | logger=migrator t=2024-04-24T08:58:16.605720712Z level=info msg="Executing migration" id="add unique index builtin_role_action_scope" kafka | [2024-04-24 08:58:52,558] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group policy-pap in Empty state. Created a new member id consumer-policy-pap-4-b2dc9f1d-b06d-4078-927e-cc7dc2d2688c and request the member to rejoin with this id. (kafka.coordinator.group.GroupCoordinator) grafana | logger=migrator t=2024-04-24T08:58:16.607183175Z level=info msg="Migration successfully executed" id="add unique index builtin_role_action_scope" duration=1.462113ms kafka | [2024-04-24 08:58:52,577] INFO [GroupCoordinator 1]: Preparing to rebalance group policy-pap in state PreparingRebalance with old generation 0 (__consumer_offsets-24) (reason: Adding new member consumer-policy-pap-4-b2dc9f1d-b06d-4078-927e-cc7dc2d2688c with group instance id None; client reason: rebalance failed due to MemberIdRequiredException) (kafka.coordinator.group.GroupCoordinator) grafana | logger=migrator t=2024-04-24T08:58:16.611840382Z level=info msg="Executing migration" id="add primary key to seed_assigment" grafana | logger=migrator t=2024-04-24T08:58:16.637112304Z level=info msg="Migration successfully executed" id="add primary key to seed_assigment" duration=25.272412ms kafka | [2024-04-24 08:58:52,597] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group c2598a93-7b5f-4e4e-b23a-b864ffd9a18a in Empty state. Created a new member id consumer-c2598a93-7b5f-4e4e-b23a-b864ffd9a18a-3-2e3abf31-158b-4904-8a97-f271619f738d and request the member to rejoin with this id. 
(kafka.coordinator.group.GroupCoordinator) grafana | logger=migrator t=2024-04-24T08:58:16.641392414Z level=info msg="Executing migration" id="add origin column to seed_assignment" kafka | [2024-04-24 08:58:52,602] INFO [GroupCoordinator 1]: Preparing to rebalance group c2598a93-7b5f-4e4e-b23a-b864ffd9a18a in state PreparingRebalance with old generation 0 (__consumer_offsets-10) (reason: Adding new member consumer-c2598a93-7b5f-4e4e-b23a-b864ffd9a18a-3-2e3abf31-158b-4904-8a97-f271619f738d with group instance id None; client reason: rebalance failed due to MemberIdRequiredException) (kafka.coordinator.group.GroupCoordinator) grafana | logger=migrator t=2024-04-24T08:58:16.650751387Z level=info msg="Migration successfully executed" id="add origin column to seed_assignment" duration=9.358653ms kafka | [2024-04-24 08:58:52,830] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group 6c14929a-34c8-48a0-adf2-d542a07b4ce8 in Empty state. Created a new member id consumer-6c14929a-34c8-48a0-adf2-d542a07b4ce8-2-0953ac9a-4503-441d-8d7e-d642725f8ea2 and request the member to rejoin with this id. (kafka.coordinator.group.GroupCoordinator) grafana | logger=migrator t=2024-04-24T08:58:16.656310907Z level=info msg="Executing migration" id="add origin to plugin seed_assignment" kafka | [2024-04-24 08:58:52,834] INFO [GroupCoordinator 1]: Preparing to rebalance group 6c14929a-34c8-48a0-adf2-d542a07b4ce8 in state PreparingRebalance with old generation 0 (__consumer_offsets-10) (reason: Adding new member consumer-6c14929a-34c8-48a0-adf2-d542a07b4ce8-2-0953ac9a-4503-441d-8d7e-d642725f8ea2 with group instance id None; client reason: rebalance failed due to MemberIdRequiredException) (kafka.coordinator.group.GroupCoordinator) grafana | logger=migrator t=2024-04-24T08:58:16.656555001Z level=info msg="Migration successfully executed" id="add origin to plugin seed_assignment" duration=244.714µs kafka | [2024-04-24 08:58:55,587] INFO [GroupCoordinator 1]: Stabilized group policy-pap generation 1 (__consumer_offsets-24) with 1 members (kafka.coordinator.group.GroupCoordinator) grafana | logger=migrator t=2024-04-24T08:58:16.659610542Z level=info msg="Executing migration" id="prevent seeding OnCall access" kafka | [2024-04-24 08:58:55,603] INFO [GroupCoordinator 1]: Stabilized group c2598a93-7b5f-4e4e-b23a-b864ffd9a18a generation 1 (__consumer_offsets-10) with 1 members (kafka.coordinator.group.GroupCoordinator) grafana | logger=migrator t=2024-04-24T08:58:16.659755384Z level=info msg="Migration successfully executed" id="prevent seeding OnCall access" duration=146.272µs kafka | [2024-04-24 08:58:55,609] INFO [GroupCoordinator 1]: Assignment received from leader consumer-policy-pap-4-b2dc9f1d-b06d-4078-927e-cc7dc2d2688c for group policy-pap for generation 1. The group has 1 members, 0 of which are static. (kafka.coordinator.group.GroupCoordinator) grafana | logger=migrator t=2024-04-24T08:58:16.662503999Z level=info msg="Executing migration" id="managed folder permissions alert actions repeated fixed migration" kafka | [2024-04-24 08:58:55,610] INFO [GroupCoordinator 1]: Assignment received from leader consumer-c2598a93-7b5f-4e4e-b23a-b864ffd9a18a-3-2e3abf31-158b-4904-8a97-f271619f738d for group c2598a93-7b5f-4e4e-b23a-b864ffd9a18a for generation 1. The group has 1 members, 0 of which are static. 
(kafka.coordinator.group.GroupCoordinator) grafana | logger=migrator t=2024-04-24T08:58:16.662873934Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions repeated fixed migration" duration=370.625µs kafka | [2024-04-24 08:58:55,835] INFO [GroupCoordinator 1]: Stabilized group 6c14929a-34c8-48a0-adf2-d542a07b4ce8 generation 1 (__consumer_offsets-10) with 1 members (kafka.coordinator.group.GroupCoordinator) grafana | logger=migrator t=2024-04-24T08:58:16.666455604Z level=info msg="Executing migration" id="managed folder permissions library panel actions migration" kafka | [2024-04-24 08:58:55,851] INFO [GroupCoordinator 1]: Assignment received from leader consumer-6c14929a-34c8-48a0-adf2-d542a07b4ce8-2-0953ac9a-4503-441d-8d7e-d642725f8ea2 for group 6c14929a-34c8-48a0-adf2-d542a07b4ce8 for generation 1. The group has 1 members, 0 of which are static. (kafka.coordinator.group.GroupCoordinator) grafana | logger=migrator t=2024-04-24T08:58:16.666818669Z level=info msg="Migration successfully executed" id="managed folder permissions library panel actions migration" duration=363.505µs grafana | logger=migrator t=2024-04-24T08:58:16.67117534Z level=info msg="Executing migration" id="migrate external alertmanagers to datsourcse" grafana | logger=migrator t=2024-04-24T08:58:16.671430185Z level=info msg="Migration successfully executed" id="migrate external alertmanagers to datsourcse" duration=254.595µs grafana | logger=migrator t=2024-04-24T08:58:16.674077087Z level=info msg="Executing migration" id="create folder table" grafana | logger=migrator t=2024-04-24T08:58:16.675032383Z level=info msg="Migration successfully executed" id="create folder table" duration=955.186µs grafana | logger=migrator t=2024-04-24T08:58:16.678566661Z level=info msg="Executing migration" id="Add index for parent_uid" grafana | logger=migrator t=2024-04-24T08:58:16.680461862Z level=info msg="Migration successfully executed" id="Add index for parent_uid" duration=1.894901ms grafana | logger=migrator t=2024-04-24T08:58:16.685091457Z level=info msg="Executing migration" id="Add unique index for folder.uid and folder.org_id" grafana | logger=migrator t=2024-04-24T08:58:16.686368008Z level=info msg="Migration successfully executed" id="Add unique index for folder.uid and folder.org_id" duration=1.276161ms grafana | logger=migrator t=2024-04-24T08:58:16.68951716Z level=info msg="Executing migration" id="Update folder title length" grafana | logger=migrator t=2024-04-24T08:58:16.689543251Z level=info msg="Migration successfully executed" id="Update folder title length" duration=26.781µs grafana | logger=migrator t=2024-04-24T08:58:16.692807343Z level=info msg="Executing migration" id="Add unique index for folder.title and folder.parent_uid" grafana | logger=migrator t=2024-04-24T08:58:16.694067064Z level=info msg="Migration successfully executed" id="Add unique index for folder.title and folder.parent_uid" duration=1.259281ms grafana | logger=migrator t=2024-04-24T08:58:16.698265353Z level=info msg="Executing migration" id="Remove unique index for folder.title and folder.parent_uid" grafana | logger=migrator t=2024-04-24T08:58:16.699486093Z level=info msg="Migration successfully executed" id="Remove unique index for folder.title and folder.parent_uid" duration=1.22121ms grafana | logger=migrator t=2024-04-24T08:58:16.703042011Z level=info msg="Executing migration" id="Add unique index for title, parent_uid, and org_id" grafana | logger=migrator t=2024-04-24T08:58:16.704631396Z level=info 
msg="Migration successfully executed" id="Add unique index for title, parent_uid, and org_id" duration=1.588165ms grafana | logger=migrator t=2024-04-24T08:58:16.708351847Z level=info msg="Executing migration" id="Sync dashboard and folder table" grafana | logger=migrator t=2024-04-24T08:58:16.709059078Z level=info msg="Migration successfully executed" id="Sync dashboard and folder table" duration=707.721µs grafana | logger=migrator t=2024-04-24T08:58:16.713483951Z level=info msg="Executing migration" id="Remove ghost folders from the folder table" grafana | logger=migrator t=2024-04-24T08:58:16.713778556Z level=info msg="Migration successfully executed" id="Remove ghost folders from the folder table" duration=292.825µs grafana | logger=migrator t=2024-04-24T08:58:16.718943791Z level=info msg="Executing migration" id="Remove unique index UQE_folder_uid_org_id" grafana | logger=migrator t=2024-04-24T08:58:16.720081019Z level=info msg="Migration successfully executed" id="Remove unique index UQE_folder_uid_org_id" duration=1.141258ms grafana | logger=migrator t=2024-04-24T08:58:16.723936932Z level=info msg="Executing migration" id="Add unique index UQE_folder_org_id_uid" grafana | logger=migrator t=2024-04-24T08:58:16.724890117Z level=info msg="Migration successfully executed" id="Add unique index UQE_folder_org_id_uid" duration=953.305µs grafana | logger=migrator t=2024-04-24T08:58:16.730348257Z level=info msg="Executing migration" id="Remove unique index UQE_folder_title_parent_uid_org_id" grafana | logger=migrator t=2024-04-24T08:58:16.73120214Z level=info msg="Migration successfully executed" id="Remove unique index UQE_folder_title_parent_uid_org_id" duration=853.853µs grafana | logger=migrator t=2024-04-24T08:58:16.821090228Z level=info msg="Executing migration" id="Add unique index UQE_folder_org_id_parent_uid_title" grafana | logger=migrator t=2024-04-24T08:58:16.822075505Z level=info msg="Migration successfully executed" id="Add unique index UQE_folder_org_id_parent_uid_title" duration=985.167µs grafana | logger=migrator t=2024-04-24T08:58:16.82549455Z level=info msg="Executing migration" id="Remove index IDX_folder_parent_uid_org_id" grafana | logger=migrator t=2024-04-24T08:58:16.826373215Z level=info msg="Migration successfully executed" id="Remove index IDX_folder_parent_uid_org_id" duration=878.845µs grafana | logger=migrator t=2024-04-24T08:58:16.831355926Z level=info msg="Executing migration" id="create anon_device table" grafana | logger=migrator t=2024-04-24T08:58:16.832129378Z level=info msg="Migration successfully executed" id="create anon_device table" duration=773.112µs grafana | logger=migrator t=2024-04-24T08:58:16.835504604Z level=info msg="Executing migration" id="add unique index anon_device.device_id" grafana | logger=migrator t=2024-04-24T08:58:16.836492659Z level=info msg="Migration successfully executed" id="add unique index anon_device.device_id" duration=988.035µs grafana | logger=migrator t=2024-04-24T08:58:16.840542606Z level=info msg="Executing migration" id="add index anon_device.updated_at" grafana | logger=migrator t=2024-04-24T08:58:16.841513592Z level=info msg="Migration successfully executed" id="add index anon_device.updated_at" duration=970.826µs grafana | logger=migrator t=2024-04-24T08:58:16.844785235Z level=info msg="Executing migration" id="create signing_key table" grafana | logger=migrator t=2024-04-24T08:58:16.845545497Z level=info msg="Migration successfully executed" id="create signing_key table" duration=760.182µs grafana | 
logger=migrator t=2024-04-24T08:58:16.849675845Z level=info msg="Executing migration" id="add unique index signing_key.key_id" grafana | logger=migrator t=2024-04-24T08:58:16.850613341Z level=info msg="Migration successfully executed" id="add unique index signing_key.key_id" duration=937.526µs grafana | logger=migrator t=2024-04-24T08:58:16.859540766Z level=info msg="Executing migration" id="set legacy alert migration status in kvstore" grafana | logger=migrator t=2024-04-24T08:58:16.861516508Z level=info msg="Migration successfully executed" id="set legacy alert migration status in kvstore" duration=1.975552ms grafana | logger=migrator t=2024-04-24T08:58:16.865928061Z level=info msg="Executing migration" id="migrate record of created folders during legacy migration to kvstore" grafana | logger=migrator t=2024-04-24T08:58:16.866395448Z level=info msg="Migration successfully executed" id="migrate record of created folders during legacy migration to kvstore" duration=468.077µs grafana | logger=migrator t=2024-04-24T08:58:16.924124111Z level=info msg="Executing migration" id="Add folder_uid for dashboard" grafana | logger=migrator t=2024-04-24T08:58:16.952121488Z level=info msg="Migration successfully executed" id="Add folder_uid for dashboard" duration=27.998037ms grafana | logger=migrator t=2024-04-24T08:58:16.983587792Z level=info msg="Executing migration" id="Populate dashboard folder_uid column" grafana | logger=migrator t=2024-04-24T08:58:16.984885053Z level=info msg="Migration successfully executed" id="Populate dashboard folder_uid column" duration=1.298771ms grafana | logger=migrator t=2024-04-24T08:58:16.990509124Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_folder_uid_title" grafana | logger=migrator t=2024-04-24T08:58:16.992525697Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_folder_uid_title" duration=2.016373ms grafana | logger=migrator t=2024-04-24T08:58:16.997476198Z level=info msg="Executing migration" id="Delete unique index for dashboard_org_id_folder_id_title" grafana | logger=migrator t=2024-04-24T08:58:17.00061538Z level=info msg="Migration successfully executed" id="Delete unique index for dashboard_org_id_folder_id_title" duration=3.138622ms grafana | logger=migrator t=2024-04-24T08:58:17.005456279Z level=info msg="Executing migration" id="Delete unique index for dashboard_org_id_folder_uid_title" grafana | logger=migrator t=2024-04-24T08:58:17.00674798Z level=info msg="Migration successfully executed" id="Delete unique index for dashboard_org_id_folder_uid_title" duration=1.292391ms grafana | logger=migrator t=2024-04-24T08:58:17.009707728Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_folder_uid_title_is_folder" grafana | logger=migrator t=2024-04-24T08:58:17.011074901Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_folder_uid_title_is_folder" duration=1.366753ms grafana | logger=migrator t=2024-04-24T08:58:17.014477406Z level=info msg="Executing migration" id="Restore index for dashboard_org_id_folder_id_title" grafana | logger=migrator t=2024-04-24T08:58:17.015655905Z level=info msg="Migration successfully executed" id="Restore index for dashboard_org_id_folder_id_title" duration=1.178119ms grafana | logger=migrator t=2024-04-24T08:58:17.019865064Z level=info msg="Executing migration" id="create sso_setting table" grafana | logger=migrator t=2024-04-24T08:58:17.021008113Z level=info msg="Migration successfully 
executed" id="create sso_setting table" duration=1.142469ms grafana | logger=migrator t=2024-04-24T08:58:17.028324312Z level=info msg="Executing migration" id="copy kvstore migration status to each org" grafana | logger=migrator t=2024-04-24T08:58:17.029678534Z level=info msg="Migration successfully executed" id="copy kvstore migration status to each org" duration=1.356482ms grafana | logger=migrator t=2024-04-24T08:58:17.037018384Z level=info msg="Executing migration" id="add back entry for orgid=0 migrated status" grafana | logger=migrator t=2024-04-24T08:58:17.037559123Z level=info msg="Migration successfully executed" id="add back entry for orgid=0 migrated status" duration=541.959µs grafana | logger=migrator t=2024-04-24T08:58:17.043320447Z level=info msg="Executing migration" id="alter kv_store.value to longtext" grafana | logger=migrator t=2024-04-24T08:58:17.043419169Z level=info msg="Migration successfully executed" id="alter kv_store.value to longtext" duration=99.792µs grafana | logger=migrator t=2024-04-24T08:58:17.049126102Z level=info msg="Executing migration" id="add notification_settings column to alert_rule table" grafana | logger=migrator t=2024-04-24T08:58:17.058272311Z level=info msg="Migration successfully executed" id="add notification_settings column to alert_rule table" duration=9.144399ms grafana | logger=migrator t=2024-04-24T08:58:17.0625113Z level=info msg="Executing migration" id="add notification_settings column to alert_rule_version table" grafana | logger=migrator t=2024-04-24T08:58:17.071589209Z level=info msg="Migration successfully executed" id="add notification_settings column to alert_rule_version table" duration=9.077548ms grafana | logger=migrator t=2024-04-24T08:58:17.077141509Z level=info msg="Executing migration" id="removing scope from alert.instances:read action migration" grafana | logger=migrator t=2024-04-24T08:58:17.077460285Z level=info msg="Migration successfully executed" id="removing scope from alert.instances:read action migration" duration=318.386µs grafana | logger=migrator t=2024-04-24T08:58:17.081759375Z level=info msg="migrations completed" performed=548 skipped=0 duration=5.448562772s grafana | logger=sqlstore t=2024-04-24T08:58:17.094591464Z level=info msg="Created default admin" user=admin grafana | logger=sqlstore t=2024-04-24T08:58:17.094851169Z level=info msg="Created default organization" grafana | logger=secrets t=2024-04-24T08:58:17.099828Z level=info msg="Envelope encryption state" enabled=true currentprovider=secretKey.v1 grafana | logger=plugin.store t=2024-04-24T08:58:17.137222701Z level=info msg="Loading plugins..." 
grafana | logger=local.finder t=2024-04-24T08:58:17.182728464Z level=warn msg="Skipping finding plugins as directory does not exist" path=/usr/share/grafana/plugins-bundled grafana | logger=plugin.store t=2024-04-24T08:58:17.182766324Z level=info msg="Plugins loaded" count=55 duration=45.544134ms grafana | logger=query_data t=2024-04-24T08:58:17.185865125Z level=info msg="Query Service initialization" grafana | logger=live.push_http t=2024-04-24T08:58:17.189593076Z level=info msg="Live Push Gateway initialization" grafana | logger=ngalert.migration t=2024-04-24T08:58:17.193653292Z level=info msg=Starting grafana | logger=ngalert.migration t=2024-04-24T08:58:17.193986137Z level=info msg="Applying transition" currentType=Legacy desiredType=UnifiedAlerting cleanOnDowngrade=false cleanOnUpgrade=false grafana | logger=ngalert.migration orgID=1 t=2024-04-24T08:58:17.194286903Z level=info msg="Migrating alerts for organisation" grafana | logger=ngalert.migration orgID=1 t=2024-04-24T08:58:17.19474461Z level=info msg="Alerts found to migrate" alerts=0 grafana | logger=ngalert.migration t=2024-04-24T08:58:17.19600324Z level=info msg="Completed alerting migration" grafana | logger=ngalert.state.manager t=2024-04-24T08:58:17.228622953Z level=info msg="Running in alternative execution of Error/NoData mode" grafana | logger=infra.usagestats.collector t=2024-04-24T08:58:17.231422809Z level=info msg="registering usage stat providers" usageStatsProvidersLen=2 grafana | logger=provisioning.datasources t=2024-04-24T08:58:17.234314686Z level=info msg="inserting datasource from configuration" name=PolicyPrometheus uid=dkSf71fnz grafana | logger=provisioning.alerting t=2024-04-24T08:58:17.264772023Z level=info msg="starting to provision alerting" grafana | logger=provisioning.alerting t=2024-04-24T08:58:17.264800874Z level=info msg="finished to provision alerting" grafana | logger=grafanaStorageLogger t=2024-04-24T08:58:17.264997327Z level=info msg="Storage starting" grafana | logger=ngalert.state.manager t=2024-04-24T08:58:17.266002303Z level=info msg="Warming state cache for startup" grafana | logger=ngalert.multiorg.alertmanager t=2024-04-24T08:58:17.266441801Z level=info msg="Starting MultiOrg Alertmanager" grafana | logger=http.server t=2024-04-24T08:58:17.269357878Z level=info msg="HTTP Server Listen" address=[::]:3000 protocol=http subUrl= socket= grafana | logger=sqlstore.transactions t=2024-04-24T08:58:17.276670477Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=0 code="database is locked" grafana | logger=grafana.update.checker t=2024-04-24T08:58:17.494665388Z level=info msg="Update check succeeded" duration=225.504734ms grafana | logger=plugins.update.checker t=2024-04-24T08:58:17.495990968Z level=info msg="Update check succeeded" duration=226.62562ms grafana | logger=ngalert.state.manager t=2024-04-24T08:58:17.546103437Z level=info msg="State cache has been initialized" states=0 duration=280.096974ms grafana | logger=ngalert.scheduler t=2024-04-24T08:58:17.546133078Z level=info msg="Starting scheduler" tickInterval=10s maxAttempts=1 grafana | logger=ticker t=2024-04-24T08:58:17.546188658Z level=info msg=starting first_tick=2024-04-24T08:58:20Z grafana | logger=grafana-apiserver t=2024-04-24T08:58:17.550853295Z level=info msg="Adding GroupVersion playlist.grafana.app v0alpha1 to ResourceManager" grafana | logger=grafana-apiserver t=2024-04-24T08:58:17.551400084Z level=info msg="Adding GroupVersion featuretoggle.grafana.app v0alpha1 to ResourceManager" 
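Grafana is only usable by the test suite once the HTTP listener above is up and its embedded SQLite store has settled (note the "database is locked" retry). A minimal readiness-gate sketch, assuming the container's port 3000 is published on localhost and using Grafana's standard /api/health endpoint:

  # poll Grafana until the health endpoint reports its database as ok
  for _ in $(seq 1 30); do
    if curl -sf http://localhost:3000/api/health | grep -q '"database": *"ok"'; then
      echo "grafana is ready"
      break
    fi
    sleep 2
  done

Polling like this avoids racing the provisioning steps that are still running when the listener first comes up.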
grafana | logger=provisioning.dashboard t=2024-04-24T08:58:17.657977424Z level=info msg="starting to provision dashboards"
grafana | logger=sqlstore.transactions t=2024-04-24T08:58:17.77161602Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=0 code="database is locked"
grafana | logger=provisioning.dashboard t=2024-04-24T08:58:17.95901796Z level=info msg="finished to provision dashboards"
grafana | logger=infra.usagestats t=2024-04-24T08:59:43.27663479Z level=info msg="Usage stats are ready to report"
++ echo 'Tearing down containers...'
Tearing down containers...
++ docker-compose down -v --remove-orphans
Stopping policy-apex-pdp ...
Stopping policy-pap ...
Stopping kafka ...
Stopping policy-api ...
Stopping grafana ...
Stopping simulator ...
Stopping mariadb ...
Stopping prometheus ...
Stopping zookeeper ...
Stopping grafana ... done
Stopping prometheus ... done
Stopping policy-apex-pdp ... done
Stopping simulator ... done
Stopping policy-pap ... done
Stopping mariadb ... done
Stopping kafka ... done
Stopping zookeeper ... done
Stopping policy-api ... done
Removing policy-apex-pdp ...
Removing policy-pap ...
Removing kafka ...
Removing policy-api ...
Removing policy-db-migrator ...
Removing grafana ...
Removing simulator ...
Removing mariadb ...
Removing prometheus ...
Removing zookeeper ...
Removing policy-apex-pdp ... done
Removing policy-api ... done
Removing simulator ... done
Removing policy-db-migrator ... done
Removing mariadb ... done
Removing grafana ... done
Removing zookeeper ... done
Removing prometheus ... done
Removing policy-pap ... done
Removing kafka ... done
Removing network compose_default
++ cd /w/workspace/policy-pap-master-project-csit-pap
+ load_set
+ _setopts=hxB
++ echo braceexpand:hashall:interactive-comments:xtrace
++ tr : ' '
+ for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
+ set +o braceexpand
+ for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
+ set +o hashall
+ for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
+ set +o interactive-comments
+ for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
+ set +o xtrace
++ echo hxB
++ sed 's/./& /g'
+ for i in $(echo "$_setopts" | sed 's/./& /g')
+ set +h
+ for i in $(echo "$_setopts" | sed 's/./& /g')
+ set +x
+ rsync /w/workspace/policy-pap-master-project-csit-pap/compose/docker_compose.log /w/workspace/policy-pap-master-project-csit-pap/csit/archives/pap
+ [[ -n /tmp/tmp.9nztubu5q5 ]]
+ rsync -av /tmp/tmp.9nztubu5q5/ /w/workspace/policy-pap-master-project-csit-pap/csit/archives/pap
sending incremental file list
./
log.html
output.xml
report.html
testplan.txt
sent 918,617 bytes received 95 bytes 1,837,424.00 bytes/sec
total size is 918,075 speedup is 1.00
+ rm -rf /w/workspace/policy-pap-master-project-csit-pap/models
+ exit 1
Build step 'Execute shell' marked build as failure
$ ssh-agent -k
unset SSH_AUTH_SOCK;
unset SSH_AGENT_PID;
echo Agent pid 2087 killed;
[ssh-agent] Stopped.
Robot results publisher started...
INFO: Checking test criticality is deprecated and will be dropped in a future release!
-Parsing output xml: Done!
WARNING! Could not find file: **/log.html
WARNING! Could not find file: **/report.html
-Copying log files to build dir: Done!
-Assigning results to build: Done!
-Checking thresholds: Done!
Done publishing Robot results.
[PostBuildScript] - [INFO] Executing post build scripts.
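The teardown and archiving performed above follows a fixed pattern: stop and remove the compose stack, copy the compose log and the Robot artifacts into csit/archives/pap, then propagate the test result (exit 1 here) so Jenkins marks the shell step as failed. A condensed sketch of that pattern, not the actual CSIT script, with WORKSPACE, ROBOT_TMP and RC as illustrative stand-ins for the concrete values visible in the log:

  teardown_and_archive() {
    docker-compose down -v --remove-orphans                            # stop and remove the whole test stack
    rsync "${WORKSPACE}/compose/docker_compose.log" "${WORKSPACE}/csit/archives/pap"
    [ -n "${ROBOT_TMP}" ] && rsync -av "${ROBOT_TMP}/" "${WORKSPACE}/csit/archives/pap"
    exit "${RC}"                                                       # 1 on this run, which fails the build step
  }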
[policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins16316828185803153699.sh ---> sysstat.sh [policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins10321740824680298229.sh ---> package-listing.sh ++ facter osfamily ++ tr '[:upper:]' '[:lower:]' + OS_FAMILY=debian + workspace=/w/workspace/policy-pap-master-project-csit-pap + START_PACKAGES=/tmp/packages_start.txt + END_PACKAGES=/tmp/packages_end.txt + DIFF_PACKAGES=/tmp/packages_diff.txt + PACKAGES=/tmp/packages_start.txt + '[' /w/workspace/policy-pap-master-project-csit-pap ']' + PACKAGES=/tmp/packages_end.txt + case "${OS_FAMILY}" in + dpkg -l + grep '^ii' + '[' -f /tmp/packages_start.txt ']' + '[' -f /tmp/packages_end.txt ']' + diff /tmp/packages_start.txt /tmp/packages_end.txt + '[' /w/workspace/policy-pap-master-project-csit-pap ']' + mkdir -p /w/workspace/policy-pap-master-project-csit-pap/archives/ + cp -f /tmp/packages_diff.txt /tmp/packages_end.txt /tmp/packages_start.txt /w/workspace/policy-pap-master-project-csit-pap/archives/ [policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins3435512097399306784.sh ---> capture-instance-metadata.sh Setup pyenv: system 3.8.13 3.9.13 * 3.10.6 (set by /w/workspace/policy-pap-master-project-csit-pap/.python-version) lf-activate-venv(): INFO: Reuse venv:/tmp/venv-oy3i from file:/tmp/.os_lf_venv lf-activate-venv(): INFO: Installing: lftools lf-activate-venv(): INFO: Adding /tmp/venv-oy3i/bin to PATH INFO: Running in OpenStack, capturing instance metadata [policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins4672491028009825115.sh provisioning config files... copy managed file [jenkins-log-archives-settings] to file:/w/workspace/policy-pap-master-project-csit-pap@tmp/config4495636280831550086tmp Regular expression run condition: Expression=[^.*logs-s3.*], Label=[] Run condition [Regular expression match] preventing perform for step [Provide Configuration files] [EnvInject] - Injecting environment variables from a build step. [EnvInject] - Injecting as environment variables the properties content SERVER_ID=logs [EnvInject] - Variables injected successfully. [policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins1678436478455647875.sh ---> create-netrc.sh [policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins4837189172700282969.sh ---> python-tools-install.sh Setup pyenv: system 3.8.13 3.9.13 * 3.10.6 (set by /w/workspace/policy-pap-master-project-csit-pap/.python-version) lf-activate-venv(): INFO: Reuse venv:/tmp/venv-oy3i from file:/tmp/.os_lf_venv lf-activate-venv(): INFO: Installing: lftools lf-activate-venv(): INFO: Adding /tmp/venv-oy3i/bin to PATH [policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins2886048813112688206.sh ---> sudo-logs.sh Archiving 'sudo' log.. [policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins111671421306962699.sh ---> job-cost.sh Setup pyenv: system 3.8.13 3.9.13 * 3.10.6 (set by /w/workspace/policy-pap-master-project-csit-pap/.python-version) lf-activate-venv(): INFO: Reuse venv:/tmp/venv-oy3i from file:/tmp/.os_lf_venv lf-activate-venv(): INFO: Installing: zipp==1.1.0 python-openstackclient urllib3~=1.26.15 lf-activate-venv(): INFO: Adding /tmp/venv-oy3i/bin to PATH INFO: No Stack... 
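The package-listing.sh step above works by diffing two dpkg snapshots, one taken at job start and one at job end, and archiving the delta next to them. A minimal sketch of that idea, reusing the file names from the log but simplifying the actual script (WORKSPACE is an illustrative stand-in for the workspace path shown above); the `|| true` covers the fact that diff exits non-zero whenever the lists differ:

  dpkg -l | grep '^ii' > /tmp/packages_end.txt                 # snapshot of installed packages at job end
  if [ -f /tmp/packages_start.txt ]; then
    diff /tmp/packages_start.txt /tmp/packages_end.txt > /tmp/packages_diff.txt || true
  fi
  mkdir -p "${WORKSPACE}/archives/"
  cp -f /tmp/packages_diff.txt /tmp/packages_end.txt /tmp/packages_start.txt "${WORKSPACE}/archives/"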
INFO: Retrieving Pricing Info for: v3-standard-8
INFO: Archiving Costs
[policy-pap-master-project-csit-pap] $ /bin/bash -l /tmp/jenkins4994298349377037758.sh
---> logs-deploy.sh
Setup pyenv:
  system
  3.8.13
  3.9.13
* 3.10.6 (set by /w/workspace/policy-pap-master-project-csit-pap/.python-version)
lf-activate-venv(): INFO: Reuse venv:/tmp/venv-oy3i from file:/tmp/.os_lf_venv
lf-activate-venv(): INFO: Installing: lftools
lf-activate-venv(): INFO: Adding /tmp/venv-oy3i/bin to PATH
INFO: Nexus URL https://nexus.onap.org path production/vex-yul-ecomp-jenkins-1/policy-pap-master-project-csit-pap/1657
INFO: archiving workspace using pattern(s): -p **/target/surefire-reports/*-output.txt
Archives upload complete.
INFO: archiving logs to Nexus

---> uname -a:
Linux prd-ubuntu1804-docker-8c-8g-25485 4.15.0-192-generic #203-Ubuntu SMP Wed Aug 10 17:40:03 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux

---> lscpu:
Architecture:        x86_64
CPU op-mode(s):      32-bit, 64-bit
Byte Order:          Little Endian
CPU(s):              8
On-line CPU(s) list: 0-7
Thread(s) per core:  1
Core(s) per socket:  1
Socket(s):           8
NUMA node(s):        1
Vendor ID:           AuthenticAMD
CPU family:          23
Model:               49
Model name:          AMD EPYC-Rome Processor
Stepping:            0
CPU MHz:             2799.996
BogoMIPS:            5599.99
Virtualization:      AMD-V
Hypervisor vendor:   KVM
Virtualization type: full
L1d cache:           32K
L1i cache:           32K
L2 cache:            512K
L3 cache:            16384K
NUMA node0 CPU(s):   0-7
Flags:               fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm rep_good nopl xtopology cpuid extd_apicid tsc_known_freq pni pclmulqdq ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm cmp_legacy svm cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw topoext perfctr_core ssbd ibrs ibpb stibp vmmcall fsgsbase tsc_adjust bmi1 avx2 smep bmi2 rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves clzero xsaveerptr arat npt nrip_save umip rdpid arch_capabilities

---> nproc:
8

---> df -h:
Filesystem      Size  Used Avail Use% Mounted on
udev             16G     0   16G   0% /dev
tmpfs           3.2G  708K  3.2G   1% /run
/dev/vda1       155G   14G  142G   9% /
tmpfs            16G     0   16G   0% /dev/shm
tmpfs           5.0M     0  5.0M   0% /run/lock
tmpfs            16G     0   16G   0% /sys/fs/cgroup
/dev/vda15      105M  4.4M  100M   5% /boot/efi
tmpfs           3.2G     0  3.2G   0% /run/user/1001

---> free -m:
              total        used        free      shared  buff/cache   available
Mem:          32167         841       25173           0        6152       30869
Swap:          1023           0        1023

---> ip addr:
1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: ens3: mtu 1458 qdisc mq state UP group default qlen 1000
    link/ether fa:16:3e:a2:4a:6c brd ff:ff:ff:ff:ff:ff
    inet 10.30.107.191/23 brd 10.30.107.255 scope global dynamic ens3
       valid_lft 85921sec preferred_lft 85921sec
    inet6 fe80::f816:3eff:fea2:4a6c/64 scope link
       valid_lft forever preferred_lft forever
3: docker0: mtu 1500 qdisc noqueue state DOWN group default
    link/ether 02:42:93:31:9d:bd brd ff:ff:ff:ff:ff:ff
    inet 10.250.0.254/24 brd 10.250.0.255 scope global docker0
       valid_lft forever preferred_lft forever

---> sar -b -r -n DEV:
Linux 4.15.0-192-generic (prd-ubuntu1804-docker-8c-8g-25485)  04/24/24  _x86_64_  (8 CPU)

08:54:13     LINUX RESTART      (8 CPU)

08:55:01          tps      rtps      wtps   bread/s   bwrtn/s
08:56:01        97.80     17.76     80.04   1024.23  27122.28
08:57:01       133.56     23.11    110.45   2777.00  33050.22
08:58:01       234.36      0.15    234.21     17.73 125697.72
08:59:01       336.53     12.18    324.35    790.60  49443.64
09:00:01        19.53      0.00     19.53      0.00  21116.93
09:01:01        22.23      0.08     22.14      9.60  19888.95
09:02:01        77.59      1.93     75.65    111.98  22762.37
Average:       131.65      7.89    123.76    675.86  42725.47

08:55:01    kbmemfree   kbavail kbmemused  %memused kbbuffers  kbcached  kbcommit   %commit  kbactive   kbinact   kbdirty
08:56:01     30123272  31706320   2815940      8.55     70636   1822760   1437564      4.23    864608   1658092    154256
08:57:01     28531480  31678444   4407732     13.38    108048   3294364   1397652      4.11    975648   3032952   1282392
08:58:01     25846808  31670976   7092404     21.53    140732   5811800   1489428      4.38   1015992   5548512    574836
08:59:01     23575860  29563080   9363352     28.43    156796   5939368   8873372     26.11   3299312   5454908      1700
09:00:01     23637836  29626120   9301376     28.24    156984   5939912   8835300     26.00   3239440   5453704       188
09:01:01     23682012  29696660   9257200     28.10    157380   5967628   8083732     23.78   3186620   5467668       380
09:02:01     25785340  31617284   7153872     21.72    159308   5800004   1511600      4.45   1298908   5311896      2484
Average:     25883230  30794126   7055982     21.42    135698   4939405   4518378     13.29   1982933   4561105    288034

08:55:01           IFACE   rxpck/s   txpck/s    rxkB/s    txkB/s   rxcmp/s   txcmp/s  rxmcst/s   %ifutil
08:56:01         docker0      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
08:56:01              lo      1.67      1.67      0.19      0.19      0.00      0.00      0.00      0.00
08:56:01            ens3     54.41     36.31    838.46      8.03      0.00      0.00      0.00      0.00
08:57:01 br-3de9c8a2e03c      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
08:57:01         docker0      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
08:57:01              lo      7.13      7.13      0.67      0.67      0.00      0.00      0.00      0.00
08:57:01            ens3    329.10    208.78   6522.27     19.13      0.00      0.00      0.00      0.00
08:58:01 br-3de9c8a2e03c      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
08:58:01         docker0      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
08:58:01              lo      6.33      6.33      0.65      0.65      0.00      0.00      0.00      0.00
08:58:01            ens3    867.96    475.70  24860.68     33.58      0.00      0.00      0.00      0.00
08:59:01     veth9ef653e      5.80      7.22      0.89      1.00      0.00      0.00      0.00      0.00
08:59:01     veth3781b3a     45.68     39.71     17.14     39.84      0.00      0.00      0.00      0.00
08:59:01     veth211ae86      0.55      0.93      0.06      0.31      0.00      0.00      0.00      0.00
08:59:01 br-3de9c8a2e03c      1.53      1.50      0.90      1.81      0.00      0.00      0.00      0.00
09:00:01     veth9ef653e      0.17      0.35      0.01      0.02      0.00      0.00      0.00      0.00
09:00:01     veth3781b3a      0.50      0.50      0.63      0.08      0.00      0.00      0.00      0.00
09:00:01     veth211ae86      0.23      0.18      0.02      0.01      0.00      0.00      0.00      0.00
09:00:01 br-3de9c8a2e03c      1.57      1.80      0.99      0.24      0.00      0.00      0.00      0.00
09:01:01     veth9ef653e      0.17      0.37      0.01      0.03      0.00      0.00      0.00      0.00
09:01:01     veth3781b3a      0.35      0.42      0.58      0.03      0.00      0.00      0.00      0.00
09:01:01 br-3de9c8a2e03c      1.15      1.45      0.10      0.14      0.00      0.00      0.00      0.00
09:01:01     veth54c7651      0.00      0.45      0.00      0.03      0.00      0.00      0.00      0.00
09:02:01         docker0      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
09:02:01              lo     35.44     35.44      6.27      6.27      0.00      0.00      0.00      0.00
09:02:01            ens3   1666.36   1014.00  33076.55    154.60      0.00      0.00      0.00      0.00
Average:         docker0      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
Average:              lo      4.50      4.50      0.85      0.85      0.00      0.00      0.00      0.00
Average:            ens3    189.69    111.19   4614.84     14.00      0.00      0.00      0.00      0.00

---> sar -P ALL:
Linux 4.15.0-192-generic (prd-ubuntu1804-docker-8c-8g-25485)  04/24/24  _x86_64_  (8 CPU)

08:54:13     LINUX RESTART      (8 CPU)

08:55:01        CPU     %user     %nice   %system   %iowait    %steal     %idle
08:56:01        all      9.85      0.00      0.69      3.23      0.03     86.20
08:56:01          0      0.77      0.00      0.40     14.65      0.03     84.15
08:56:01          1      6.74      0.00      0.43      0.08      0.02     92.73
08:56:01          2      3.86      0.00      0.33      0.38      0.02     95.42
08:56:01          3     10.06      0.00      0.58      0.53      0.02     88.81
08:56:01          4     25.61      0.00      1.18      1.22      0.07     71.92
08:56:01          5      9.03      0.00      0.72      0.25      0.02     89.99
08:56:01          6      5.63      0.00      0.57      0.15      0.00     93.65
08:56:01          7     17.20      0.00      1.32      8.63      0.07     72.77
08:57:01        all     11.50      0.00      2.42      2.73      0.04     83.30
08:57:01          0      4.46      0.00      2.72     11.94      0.03     80.85
08:57:01          1     16.78      0.00      2.59      1.45      0.05     79.13
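The host snapshot above is produced by running a fixed set of diagnostic commands at the end of the job and appending each block under a "---> command:" header. A minimal sketch of that pattern follows, assuming sysstat is installed on the node; the report path and the section() helper are illustrative, not part of the actual LF global-jjb scripts.

#!/bin/bash
# Sketch of the end-of-job host snapshot shown above (illustrative only).
REPORT=/tmp/host-snapshot.txt

section() {
    # Print a "---> command:" header, run the command, and append its output to the report.
    echo "---> $*:" >> "${REPORT}"
    "$@" >> "${REPORT}" 2>&1
    echo >> "${REPORT}"
}

section uname -a
section lscpu
section nproc
section df -h
section free -m
section ip addr
# sar reads the sysstat activity file collected during the build:
# -b = I/O and transfer rates, -r = memory usage, -n DEV = per-interface network stats.
section sar -b -r -n DEV
# Per-CPU utilization for every processor, plus the "all" aggregate.
section sar -P ALL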
08:57:01          2     11.10      0.00      2.32      0.25      0.05     86.28
08:57:01          3     14.45      0.00      2.72      3.13      0.07     79.63
08:57:01          4     12.20      0.00      1.95      0.15      0.03     85.67
08:57:01          5      4.61      0.00      1.56      0.67      0.02     93.15
08:57:01          6     17.05      0.00      2.50      2.28      0.05     78.12
08:57:01          7     11.35      0.00      3.00      1.98      0.03     83.64
08:58:01        all      9.27      0.00      4.06     10.15      0.07     76.45
08:58:01          0      7.90      0.00      3.16      0.86      0.05     88.03
08:58:01          1      9.61      0.00      4.54      6.14      0.07     79.64
08:58:01          2     11.42      0.00      4.12      0.22      0.07     84.18
08:58:01          3      7.12      0.00      4.99     11.83      0.03     76.03
08:58:01          4      8.12      0.00      3.61     41.78      0.07     46.42
08:58:01          5     10.77      0.00      4.07      0.91      0.07     84.18
08:58:01          6      9.52      0.00      4.53     15.34      0.14     70.47
08:58:01          7      9.74      0.00      3.44      4.13      0.07     82.62
08:59:01        all     27.23      0.00      3.79      5.12      0.13     63.74
08:59:01          0     30.07      0.00      4.43      3.61      0.10     61.79
08:59:01          1     22.15      0.00      3.28      2.22      0.12     72.23
08:59:01          2     29.34      0.00      3.68      6.34      0.12     60.52
08:59:01          3     29.01      0.00      4.14      2.99      0.12     63.74
08:59:01          4     28.70      0.00      4.02      7.20      0.27     59.81
08:59:01          5     26.39      0.00      3.67      3.76      0.10     66.08
08:59:01          6     21.94      0.00      3.49     13.34      0.10     61.14
08:59:01          7     30.22      0.00      3.56      1.52      0.10     64.60
09:00:01        all      3.90      0.00      0.39      1.18      0.06     94.46
09:00:01          0      5.13      0.00      0.43      9.04      0.08     85.32
09:00:01          1      3.90      0.00      0.33      0.00      0.07     95.70
09:00:01          2      3.37      0.00      0.32      0.10      0.05     96.16
09:00:01          3      3.99      0.00      0.40      0.02      0.03     95.56
09:00:01          4      4.04      0.00      0.45      0.10      0.05     95.36
09:00:01          5      2.97      0.00      0.28      0.05      0.05     96.65
09:00:01          6      4.74      0.00      0.50      0.15      0.05     94.55
09:00:01          7      3.09      0.00      0.45      0.00      0.08     96.37
09:01:01        all      1.28      0.00      0.32      2.66      0.05     95.69
09:01:01          0      0.90      0.00      0.28     19.34      0.03     79.44
09:01:01          1      1.02      0.00      0.22      0.58      0.08     98.10
09:01:01          2      1.30      0.00      0.35      0.27      0.02     98.07
09:01:01          3      3.17      0.00      0.38      0.48      0.10     95.86
09:01:01          4      1.03      0.00      0.25      0.02      0.05     98.65
09:01:01          5      0.92      0.00      0.28      0.07      0.03     98.70
09:01:01          6      0.80      0.00      0.40      0.00      0.05     98.75
09:01:01          7      1.10      0.00      0.37      0.50      0.05     97.98
09:02:01        all      6.77      0.00      0.67      2.84      0.04     89.67
09:02:01          0      2.62      0.00      0.57      0.20      0.03     96.58
09:02:01          1      6.84      0.00      0.72      2.44      0.03     89.97
09:02:01          2      6.59      0.00      0.62      0.82      0.03     91.94
09:02:01          3      2.22      0.00      0.53     12.60      0.03     84.61
09:02:01          4      5.10      0.00      0.50      0.23      0.03     94.13
09:02:01          5      9.48      0.00      0.69      1.61      0.03     88.19
09:02:01          6     14.27      0.00      0.84      3.60      0.07     81.23
09:02:01          7      7.08      0.00      0.87      1.24      0.07     90.75
Average:        all      9.95      0.00      1.75      3.98      0.06     84.25
Average:          0      7.39      0.00      1.71      8.53      0.05     82.31
Average:          1      9.56      0.00      1.72      1.83      0.06     86.82
Average:          2      9.53      0.00      1.67      1.19      0.05     87.57
Average:          3      9.99      0.00      1.95      4.50      0.06     83.50
Average:          4     12.11      0.00      1.70      7.19      0.08     78.92
Average:          5      9.14      0.00      1.60      1.04      0.05     88.16
Average:          6     10.54      0.00      1.82      4.95      0.06     82.62
Average:          7     11.38      0.00      1.85      2.57      0.07     84.13
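The "Average:" rows at the end of the sar -P ALL block summarize per-CPU utilization over the whole run. If that block has been saved to a file, a small awk filter can pull out the per-CPU busy/idle split; the command below is illustrative only (the file name sar_cpu.txt is assumed, and column 8 is %idle in the layout shown above: time, CPU, %user, %nice, %system, %iowait, %steal, %idle).

# Hedged example: report busy vs. idle per CPU from a saved "sar -P ALL" block.
awk '$1 == "Average:" && $2 != "CPU" {
    printf "CPU %-3s  busy %5.2f%%  idle %5.2f%%\n", $2, 100 - $8, $8
}' sar_cpu.txt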